DravidianCodeMix: Sentiment Analysis and Offensive Language Identification Dataset for Dravidian Languages in Code-Mixed Text
17 Jun 2021
Bharathi Raja Chakravarthi ([email protected]), Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway, Galway, Ireland
Ruba Priyadharshini ([email protected]), ULTRA Arts and Science College, Madurai, Tamil Nadu, India
Vigneshwaran Muralidaran ([email protected]), School of Computer Science and Informatics, Cardiff University, Cardiff, United Kingdom
Navya Jose, Indian Institute of Information Technology and Management-Kerala, Kerala, India
Shardul Suryawanshi ([email protected]), Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway, Galway, Ireland
Elizabeth Sherly ([email protected]), Indian Institute of Information Technology and Management-Kerala, Kerala, India
John P. McCrae ([email protected]), Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway, Galway, Ireland
Language Resources and Evaluation (journal manuscript). DOI: 10.1007/s10579-022-09583-7. arXiv: 2106.09460.
Keywords: Dravidian languages ⋅ Sentiment Analysis ⋅ Offensive Language Identification ⋅ Tamil ⋅ Kannada ⋅ Malayalam ⋅ Code-Mixed ⋅ Corpora
Abstract: This paper describes the development of a multilingual, manually annotated dataset for three under-resourced Dravidian languages generated from social media comments. The dataset was annotated for sentiment analysis and offensive language identification for a total of more than 60,000 YouTube comments. The dataset consists of around 44,000 comments in Tamil-English, around 7,000 comments in Kannada-English, and around 20,000 comments in Malayalam-English. The data was manually annotated by volunteer annotators and has a high inter-annotator agreement in Krippendorff's alpha. The dataset contains all types of code-mixing phenomena since it comprises user-generated content from a multilingual country. We also present baseline experiments to establish benchmarks on the dataset using machine learning methods. The dataset is available on GitHub and Zenodo.
Introduction
Sentiment analysis is the classification task of mining sentiments from natural language, which finds use in numerous applications such as reputation management, customer support, and moderating content in social media (Agarwal et al., 2011; Mahesan, 2019, 2020a). Sentiment analysis has helped industry to compile a summary of human perspectives and interests derived from feedback, or even just the polarity of comments (Pang and Lee, 2004; Thavareesan and Mahesan, 2020b). Offensive language identification is another classification task in natural language processing (NLP), where the aim is to moderate and minimise offensive content in social media. In recent years, sentiment analysis and offensive language identification have gained significant interest in the field of NLP.
Social media websites and product review forums provide opportunities for users to create content in informal settings. Moreover, to improve user experience, these platforms ensure that users can communicate their opinions in whatever way they feel comfortable, either using their native language or switching between one or more languages in the same conversation. However, most NLP systems are trained on languages in formal settings with proper grammar, which creates issues in the analysis of "user generated" comments (Chanda et al., 2016; Pratapa et al., 2018). Further, most developments in sentiment analysis and offensive language identification systems target monolingual data for high-resource languages, while user-generated content in under-resourced settings is often mixed with English or other high-resource languages (Winata et al., 2019; Jose et al., 2020).
Code-mixing or code-switching is the alternation between two or more languages at the level of the document, paragraph, comment, sentence, phrase, word or morpheme. It is a distinctive aspect of conversation or dialogue in bilingual and multilingual societies (Barman et al., 2014), and it is motivated by structural, discourse, pragmatic and socio-linguistic reasons (Sridhar, 1978). Most social media comments are code-mixed, while the resources created for sentiment analysis and offensive language identification are primarily available for monolingual texts. Code-mixing occurs in daily life, such as in normal conversation or social media conversation, in both audio and text format, and refers to the way a bilingual or multilingual speaker changes his or her utterance into another language. The vast majority of language pairs are under-resourced with regard to code-mixing tasks (Jose et al., 2020).
In this paper, we describe the creation of a corpus for Dravidian languages in the context of sentiment analysis and offensive language detection tasks. Dravidian languages are spoken mainly in the south of India (Chakravarthi et al., 2020c). The four major literary languages belonging to the language family are Tamil (ISO 639-3: tam), Telugu (ISO 639-3: tel), Malayalam (ISO 639-3: mal), and Kannada (ISO 639-3: kan). Tamil, Malayalam and Kannada fall under the South Dravidian subgroup while Telugu belongs to the South Central Dravidian subgroup (Vikram and Urs, 2007). Each of the four languages has official status as one of the 22 scheduled languages recognised by the Government of India. Tamil also has official status in Sri Lanka and Singapore (Thamburaj and Rengganathan, 2015). Although the languages are widely spoken by millions of people, the tools and resources available for building robust NLP applications are under-developed for these languages.
Dravidian languages are highly agglutinating languages and each language uses its own script (Krishnamurti, 2003; Mahesan, 2016, 2017). The writing system is a phonemic abugida written from left to right. The Tamil language was written using the Tamili, Vattezhuthu, Chola, Pallava and Chola-Pallava scripts at different points in history. The modern Tamil script descended from the Chola-Pallava script that was conceived around the 4th century CE (Sakuntharaj and Mahesan, 2018a,b). The Malayalam script is based on the Vatteluttu script, developed from old Vatteluttu with additional letters from the Grantha script to write loan words (Thottingal, 2019). Similarly, the Kannada and Telugu scripts evolved from Bhattiprolu Brahmi. Nevertheless, social media users often use the Latin script for typing in these languages due to its ease of use and accessibility on handheld devices and computers.
Monolingual datasets are available for Indian languages for various research aims (Agrawal et al., 2018;Thenmozhi and Aravindan, 2018;. However, there have been few attempts to make datasets for Tamil, Kannada and Malayalam code-mixed text (Chakravarthi et al., 2020b,c;Chakravarthi, 2020;Chakravarthi and Muralidaran, 2021). We believe it is essential to come up with approaches to tackle this resource bottleneck so that these languages can be equipped with NLP support in social media in a way that is both cost-effective and rapid. To create resources for a Tamil-English, Kannada-English and Malayalam-English code-mixed scenario, we collected comments on various Tamil, Kannada and Malayalam movie trailers from YouTube.
The contributions of this paper are:
1. We present the dataset for three Dravidian languages, namely Tamil, Kannada, and Malayalam, for sentiment analysis and offensive language identification tasks.
2. The dataset contains all types of code-mixing. This is the first Dravidian language dataset to contain all types of code-mixing, including mixtures of these scripts and the Latin script. The dataset consists of around 44,000 comments in Tamil-English, around 7,000 comments in Kannada-English, and around 20,000 comments in Malayalam-English.
3. We provide an experimental analysis of logistic regression, naive Bayes, decision tree, random forest, and SVM on our code-mixed data for classification tasks so as to create a benchmark for further research.
Related Work
Sentiment analysis helps to understand the polarity (positive, negative or neutral) of the audience towards a piece of content (comment, tweet, image, video) or an event (Brexit, presidential elections). This data on polarity can help in understanding public opinion. Furthermore, the inclusion of sentiment analysis can improve the performance of tasks such as recommendation systems (Krishna et al., 2013; Musto et al., 2017) and hate speech detection (Gitari et al., 2015). Over the last 20 years, social media networks have become a rich data source for sentiment analysis (Clarke and Grieve, 2017; Tian et al., 2017). Extensive research has been done on sentiment analysis of monolingual corpora such as English (Hu and Liu, 2004; Wiebe et al., 2005; Jiang et al., 2019), Russian (Rogers et al., 2018), German (Cieliebak et al., 2017), Norwegian (Maehlum et al., 2019) and Indian languages (Agrawal et al., 2018; Rani et al., 2020). In early research works, n-gram features were widely used for the classification of sentiments (Kouloumpis et al., 2011). More recently, owing to the data readily available on social media, these traditional techniques have been replaced by deep neural network techniques. Patwa et al. (2020) conducted sentiment analysis on code-mixed social media text for Hindi-English and Spanish-English. However, sentiment analysis in Dravidian languages is under-studied.

The anonymous and consequence-free nature of social media posts has proliferated the use of aggressive, hateful or offensive language online. This downturn has encouraged the development of automatic moderation systems. These systems, if trained on proper data, can help detect aggressive speech and thus moderate spiteful content on a public platform. Collection of such data has become a crucial part of social media analysis. To facilitate researchers working on these problems, shared tasks on aggression identification in social media and offensive language identification (Zampieri et al., 2019) have been conducted, providing the necessary datasets. As English is a commonly used language on social media, a significant amount of research goes into the identification of offensive English text. However, many internet users prefer to use their native languages. This has given rise to the development of offensive language identification datasets in Arabic, Danish, Greek, and Turkish (Zampieri et al., 2020). Inspired by this, we developed resources for offensive language identification for Dravidian languages.
In the past few years, cheaper internet and increased use of smartphones have significantly increased social media interaction in code-mixed native languages. Dravidian language speakers (who are often bilingual with English, as it is an official language in India), with a population base of 215 million, contribute a large portion of such interactions. Hence, there is an ever-increasing need for the analysis of code-mixed text in Dravidian languages. However, freely available code-mixed datasets (Ranjan et al., 2016; Jose et al., 2020) are still limited in number, size, and availability. Towards building language identification (LID) systems for code-mixed languages, Sowmya Lakshmi and Shambhavi (2017) developed a Kannada-English dataset containing English and Kannada text with word-level code-mixing. They also employed a stance detection system to detect stance in Kannada-English code-mixed text (on social media) using sentence embeddings. Shalini et al. (2018) used distributed representations for sentiment analysis of Kannada-English code-mixed texts through neural networks, with three tags: Positive, Negative and Neutral. However, the dataset for Kannada was not readily available for research purposes. To motivate further research, we conducted shared tasks (Chakravarthi et al., 2020a,d; Mandl et al., 2020) that provided Tamil-English, Kannada-English, and Malayalam-English code-mixed datasets, using which participants trained models to identify the sentiments (task A) and offensive classes (task B) in these language pairs.
Most of the recent studies on sentiment analysis and offensive language identification have been conducted on high-resource languages from social media platforms. Models trained on such highly resourced monolingual data have succeeded in predicting sentiment and offensiveness. However, with the increased social media usage of bilingual users, a system trained on under-resourced code-mixed data is needed. In spite of this need, no large datasets for Tamil-English, Kannada-English and Malayalam-English are available. Hence, inspired by Severyn et al. (2014), we collected and created a code-mixed dataset from YouTube. In this work, we describe the process of corpus creation for under-resourced Dravidian languages from YouTube comments. This is an extension of two workshop papers (Chakravarthi et al., 2020b,c) and shared tasks (Chakravarthi et al., 2020d). We present the DravidianCodeMix corpora for Tamil-English (40,000+ comments), Kannada-English (7,000+ comments) and Malayalam-English (nearly 20,000 comments) with manually annotated labels for sentiment analysis and offensive language identification. We used Krippendorff's alpha to calculate agreement amongst annotators. We made sure that each comment was annotated by at least three annotators and made the labelled corpora freely available for research purposes. For benchmarking, we provide baseline experiments and results on the DravidianCodeMix corpora using machine learning models.
Raw Data
Online media, for example Twitter, Facebook or YouTube, contain quickly changing data produced by millions of users that can drastically alter the reputation of an individual or an association. This raises the significance of automatic extraction of sentiments and offensive language used in online social media. YouTube is one of the social media platforms gaining popularity in the Indian subcontinent because of the wide range of content available on the platform, such as songs, tutorials, product reviews and trailers. YouTube allows users to create content and other users to comment on it, and it carries a large amount of user-generated content in under-resourced languages. Hence, we chose YouTube to extract comments to create our dataset. We chose movie trailers as the topic for data collection because movies are quite popular among the Tamil, Malayalam, and Kannada speaking populace, which increases the chance of getting varied views on one topic. Figure 1 shows an overview of the steps involved in creating our dataset.
We compiled the comments from different film trailers of Tamil, Kannada, and Malayalam languages from YouTube in the year 2019. The comments were gathered using the YouTube Comment Scraper tool. We utilized these comments to make the datasets for sentiment analysis and offensive language identification with manual annotations. We intended to collect comments that contain code-mixing at various levels of the text, with enough representation for each sentiment and offensive language class in all three languages. It was a challenging task to extract the necessary text that suited our intent from the comment section, which was further complicated by the presence of remarks in other non-target languages. As part of the preprocessing steps to clean the data, we utilized the langdetect library to tell different languages apart and eliminate the unintended languages. Examples of code-mixing in the Tamil, Kannada and Malayalam corpora are shown in Figure 2, Figure 3, and Figure 4 along with their translations in English. Keeping data privacy in mind, we made sure that all user-related information was removed from the corpora. As part of the text preprocessing, we removed redundant information such as URLs.
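The language-based filtering step described above can be approximated in a few lines of Python. The sketch below is not the authors' exact pipeline: the allowed-language set, file handling and example comments are hypothetical, and romanised Dravidian text may not be detected reliably by langdetect.

```python
# Minimal sketch of filtering comments by detected language with langdetect.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0      # langdetect is non-deterministic by default; fix the seed

ALLOWED = {"ta", "en"}        # hypothetical: keep Tamil and English for the Tamil-English corpus

def keep_comment(text: str) -> bool:
    """Return True if the detected language is one of the intended languages."""
    try:
        return detect(text) in ALLOWED
    except Exception:         # very short or emoji-only comments can fail detection
        return False

comments = ["Very good movie-making skills in your language..", "¡Hola a todos!"]
filtered = [c for c in comments if keep_comment(c)]
```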
Since we collected the corpora from social media, they contain different types of real-world code-mixed data. Inter-sentential switching is characterised by a change of language between sentences, where each sentence is written or spoken in one language. Intra-sentential switching occurs within a single sentence, say one clause is in one language and the other clause is in a second language. Our corpora contain all forms of code-mixing, ranging from purely monolingual texts in the native languages to mixing of scripts, words, morphology, and inter-sentential and intra-sentential switches. We retained all the instances of code-mixing to faithfully preserve real-world usage.
Methodology of Annotation
We create our corpora for two tasks, namely sentiment analysis and offensive language identification. We anonymized the data gathered from YouTube in order to protect user privacy.
Annotation Process
In order to find volunteers for the annotation process, we contacted students at the Indian Institute of Information Technology and Management-Kerala for Malayalam, and at the Indian Institute of Information Technology-Tiruchirapalli and Madurai Kamaraj University for Tamil. For Kannada, we contacted students at Visvesvaraya College of Engineering, Bangalore University. The student volunteer annotators received the link to a Google Form and did the annotations on their personal computers. The authors' family members also volunteered to annotate the data. We created Google Forms to gather annotations from the annotators. Information on gender, educational background and medium of schooling was collected to gauge the diversity of the annotators. The annotators were cautioned that the user remarks might contain hostile language. They were given the option to discontinue the annotation process in case the content was too upsetting to deal with. They were asked not to be partial to a specific individual, circumstance or occasion during the annotation process. Each Google Form was set to contain up to 100 comments and each page was limited to 10 comments. The annotators had to confirm that they understood the scheme before they were allowed to proceed further. The annotation setup involved three stages. To begin with, each sentence was annotated by two individuals. In the second step, the data was included in the collection if both annotations agreed. In the event of contention, a third individual was asked to annotate the sentence. In the third step, in the uncommon case that all three of them disagreed, two additional annotators were brought in to label the sentences. Each form was annotated by at least three annotators.
Sentiment Analysis
For sentiment analysis, we followed the methodology taken by Chakravarthi et al. (2020c), and involved at least three annotators to label each sentence. The following annotation schema was given to the annotators in English and Dravidian languages.
-Positive state: Comment contains an explicit or implicit clue in the content recommending that the speaker is in a positive state.
-Negative state: Comment contains an explicit or implicit clue in the content recommending that the speaker is in a negative state.
-Mixed feelings: Comment contains an explicit or implicit clue in both positive and negative feeling.
-Neutral state: Comment does not contain an explicit or implicit indicator of the speaker's emotional state.
-Not in intended language: If the comment is not in the intended language. For example, for Tamil, if the sentence does not contain Tamil written in Tamil script or Latin script, then it is not Tamil.

Figures 5 and 6 show the sample Google Forms for general instructions and sentiment analysis respectively.
Offensive Language Identification
We constructed the offensive language identification dataset for Dravidian languages at different levels of complexity, following the work of Zampieri et al. (2019). We expand this to a three-level hierarchical annotation schema. We added a new category, Not in intended language, to account for comments written in a language other than the intended language; examples are comments written in other Dravidian languages using the Roman script. To simplify the annotation decisions, we split the offensive language categories into six labels.
-Not Offensive: Comment does not contain offence or profanity.
-Offensive Untargeted: Comment contains offence or profanity not directed towards any target. These are the comments which contain unacceptable language without targeting anyone.
-Offensive Targeted Individual: Comment contains offence or profanity which targets an individual.
-Offensive Targeted Group: Comment contains offence or profanity which targets a group or a community.
-Offensive Targeted Other: Comment contains offence or profanity which does not belong to any of the previous two categories (e.g. a situation, an issue, an organisation or an event).
-Not in intended language: If the comment is not in the intended language. For example, in the Tamil task, if the sentence does not contain Tamil written in Tamil script or Latin script, then it is not Tamil.
Examples of the Google Forms in English and the native languages for the offensive language identification task are given in Figure 7, Figure 8, and Figure 9. Once the Google Form was ready, we sent it out to an equal number of males and females to enquire about their willingness to annotate. We got varied responses from them, so the distribution of male and female annotators involved in the task is uneven. From Table 1, we can see that only two female annotators volunteered to contribute for Tamil, while there were more female annotators for Malayalam and Kannada. For offensive language identification, Table 2 shows a balance in gender. The majority of the annotators have received postgraduate-level education. We were not able to find volunteers of non-binary gender to annotate our dataset. All the annotators who volunteered to annotate the Tamil-English, Kannada-English and Malayalam-English datasets had bilingual proficiency in the respective code-mixed pairs and were prepared to take the task seriously. From Tables 1 and 2, we can observe that the majority of the annotators' medium of schooling is English even though their mother tongue is Tamil, Kannada or Malayalam. For Kannada and Malayalam, only one annotator from each language received their education through the medium of their native language. Although the medium of education of the participants was skewed towards English, we made sure this would not affect the annotation task by ensuring that all of them are fully proficient in their native language.
A sample form (first assignment) was annotated by experts and a gold standard was created. We manually compared the gold standard annotations with the volunteer submissions. To control the quality of annotation, we eliminated the annotators whose label assignments in the first form were not good. For instance, if an annotator showed an unreasonable delay in responding, labelled all sentences with the same label, or got more than fifty annotations in a form wrong, we eliminated that contribution. A total of 22 volunteers for sentiment analysis and 23 volunteers for offensive language identification were involved in the process. Once they filled in the Google Form, 100 sentences were sent to them. If an annotator offered to volunteer more, the next Google Form was sent to them with another set of 100 sentences, and in this way each volunteer chose to annotate as many sentences from the corpus as they wanted.

Table 3 Inter-annotator agreement in Krippendorff's alpha
Inter-annotator agreement
Inter-annotator agreement is a measure of the extent to which the annotators agree in their ratings. This is necessary to ensure that the annotation scheme is consistent and that different raters are able to assign the same sentiment label to a given comment. There are two questions related to inter-annotator agreement: How do the annotators agree or disagree in their annotation? How much of the observed agreement or disagreement among the annotators might be due to chance? While the percentage of agreement is fairly straightforward, answering the second question involves defining and modelling what chance is and how to measure the agreement due to chance. There are different inter-annotator agreement measures that are intended to answer this in order to measure the reliability of the annotation. We utilized Krippendorff's alpha (α) (Krippendorff, 1970) to gauge the agreement between annotators because of the nature of our annotation setup. Krippendorff's alpha is a rigorous statistical measure that accounts for incomplete data and, consequently, does not require every annotator to annotate every sentence. It is also a measure that considers the level of disagreement between the anticipated classes, which is critical in our annotation scheme. For example, if the annotators differ between the Positive and Negative classes, this difference is more genuine than when they differ between Mixed feelings and Neutral state. α is sensitive to such disagreements. α is characterized by:
\alpha = 1 - \frac{D_o}{D_e} \quad (1)
$D_o$ is the observed disagreement between the sentiment labels assigned by the annotators and $D_e$ is the disagreement expected when the coding of sentiments can be attributed to chance rather than to the inherent property of the sentiment itself.
D_o = \frac{1}{n} \sum_{c} \sum_{k} o_{ck} \, \delta^2_{ck} \quad (2)

D_e = \frac{1}{n(n-1)} \sum_{c} \sum_{k} n_c \cdot n_k \, \delta^2_{ck} \quad (3)
Here $o_{ck}$, $n_c$ and $n_k$ refer to the frequencies of values in the coincidence matrices, and $\delta^2_{ck}$ refers to the difference function for the chosen metric or level of measurement, such as nominal, ordinal, interval, ratio and others. Krippendorff's alpha applies to all these metrics. We used the nominal and ordinal metrics to calculate inter-annotator agreement. The range of α is between 0 and 1, i.e. 1 ≥ α ≥ 0. When α is 1 there is perfect agreement between the annotators, and when it is 0 the agreement is entirely due to chance. Care should be taken in interpreting the reliability of the results shown by Krippendorff's alpha, because reliability essentially measures the amount of noise in the data; however, the location of the noise and the strength of the relationship measured will interfere with the reliability of the estimate. It is customary to require α ≥ 0.800. A reasonable rule of thumb that allows for tentative conclusions to be drawn requires 0.67 ≤ α ≤ 0.8, while α ≥ 0.653 is the lowest conceivable limit. We used nltk for calculating Krippendorff's alpha (α). The results of the inter-annotator agreement between our annotators for the different languages on both the sentiment analysis and offensive language identification tasks are shown in Table 3.
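As a rough illustration of how such agreement figures can be computed with nltk, the sketch below uses AnnotationTask with its default nominal (binary) distance; the annotator IDs, item IDs and labels are toy values, not the actual annotation records.

```python
from nltk.metrics.agreement import AnnotationTask

# Each record is (annotator_id, item_id, label); toy values for illustration only.
records = [
    ("a1", "c1", "Positive"), ("a2", "c1", "Positive"), ("a3", "c1", "Neutral state"),
    ("a1", "c2", "Negative"), ("a2", "c2", "Negative"), ("a3", "c2", "Negative"),
    ("a1", "c3", "Positive"), ("a2", "c3", "Mixed feelings"), ("a3", "c3", "Positive"),
]
task = AnnotationTask(data=records)   # nominal (binary) distance by default
print(round(task.alpha(), 3))         # Krippendorff's alpha over the toy annotations
```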
Corpus Statistics

Table 4 and Table 5 show the text statistics (number of words, vocabulary size, number of comments, number of sentences, and average number of words per sentence) for sentiment analysis and offensive language identification for Tamil, Malayalam and Kannada. The Tamil dataset had the highest number of samples while Kannada had the least on both tasks. On average, each comment contained only one sentence. Table 6 and Table 7 show the class distribution across Tamil, Malayalam and Kannada for the sentiment analysis and offensive language identification tasks. Furthermore, the tree-maps in Figure 10 and Figure 11 give a comparative view of the distribution of sentiment and offensive classes across languages. Figure 10 illustrates that there are more samples labelled "Positive" than any other class in all the languages. While the disparity between "Positive" and the other classes is large in Tamil, this is not the case for Malayalam and Kannada. In Malayalam, "Neutral state" is the second-largest class in terms of distribution; the 6,502 comments labelled "Neutral state" could mean that most of the comments in Malayalam are vague remarks whose sentiment is unknown. On the other hand, Kannada has the least number of "Neutral state" comments. Figure 11 shows that the not-offensive class is in the majority in all languages. In the case of Tamil, 71% of the total comments are not offensive, while Malayalam has 85% non-offensive comments. But there is no consistent trend observable among the offensive classes across the languages, as shown in Figure 12. In the case of Tamil, 60% of the offensive comments are targeted (group or individual). Similar trends are seen for Malayalam (66%) and Kannada (79%). The absence (Malayalam) or small number (Tamil, Kannada) of targeted-other comments points to the fact that most of the offensive comments are targeted towards either an individual or a group. In the case of Kannada, it is interesting to see that 24% of the total comments are in a language other than Kannada. This could mean that a Kannada movie gets a significant amount of audience who are not native Kannada speakers, or that Kannada speakers tend to use languages other than English to generate code-mixed content online. Our datasets are stored in tab-separated files. The first column of the tsv file contains the comment from YouTube and the second column has the final annotation.
Difficult Examples
The social media comments that form our dataset are code-mixed, showing a mixture of Dravidian languages and English. This poses a few major difficulties when annotating the sentiment and offensive language categories in our dataset. Dravidian languages are under-resourced, and the mixing of scripts makes the annotation task difficult since the annotators must know both scripts, be familiar with how English words are adapted to native phonology, and understand how certain English words take on a different meaning in the given local language. Reading and understanding code-mixed text, often with non-standardised spelling, is difficult. Moreover, we created the annotation labels with the help of volunteer annotators for three languages (not just one language). It is challenging and time-consuming to collect this amount of data from bilingual volunteer annotators from three different language groups.
While annotating, it was found that some of the comments were ambiguous in conveying the right sentiment of the viewers, which made the annotation for sentiment analysis and offensive language identification difficult. The problems include comparisons of the movie with movies of the same or other industries, and the expression of opinions on different aspects of the movie in the same sentence. Below are a few examples of such comments along with details of how we resolved those issues. In this section, we discuss some examples from the Tamil language that were difficult to annotate.
-Enakku iru mugan trailer gnabagam than varuthu -All it reminds me of is the trailer of the movie Irumugan. Not sure whether the speaker enjoyed the Irumugan trailer, disliked it, or simply observed the similarities between the two trailers. The annotators found it difficult to identify the sentiment behind the comment consistently.
-Rajini ah vida akshay mass ah irukane -Akshay looks more amazing than Rajini. Difficult to decide if it is a disappointment that the villain looks better than the hero or a positive appreciation for the villain actor. Some annotators interpreted a negative sentiment while others took it as positive.
-Ada dei nama sambatha da dei -I wonder, is this our Sampath? Hey! Conflict between neutral and positive.
-Lokesh kanagaraj movie naalae.... English Rap....Song vandurum -If it is a movie of Lokesh Kanagaraj, it always has an English rap song. Ambiguous sentiment.
-Ayayo bigil aprm release panratha idea iruka lokesh gaaru -Oh dear! Are you even considering releasing the movie Bigil, Mr. Lokesh? This comment has a single word, 'garu', a non-Tamil, non-English word borrowed from the Telugu language, where it is a politeness marker. However, in this context the speaker uses the word sarcastically to insult the director because of the undue delay in releasing the movie. The annotators were inconsistent in interpreting this as offensive or not-Tamil.
-No of dislikes la theriyudhu, idha yaru dislike panni irrupanga nu -It is obvious from the number of dislikes as to who would have disliked this (trailer). This is a comment below the trailer of a movie which talks about caste issues in contemporary Tamil society. Based on the content of the trailer, the speaker offensively implies that scheduled caste people are the ones who would have disliked the movie, and not other people. Recognising the offensive undercurrent in a seemingly normal comment is difficult, and hence such examples complicate the annotation process.
According to the instructions, questions about music director, movie release date and comments containing speaker's remarks about the date and time of watching the video should be treated as belonging to neutral class. However the above examples show that some comments about the actors and movies can be ambiguously interpreted as neutral or positive or negative. We found annotator disagreements in such sentences. Below, we give similar examples from Malayalam.
-Realistic bhoothanghalil ninnu oru vimochanam pratheekshikkunnu -Hoping for a deliverance from realistic demons. No category of audience can be pleased simultaneously. The widespread opinion is that the Malayalam film industry is advancing with more realistic movies. Therefore a group of the audience who are more fond of action or non-realistic movies are not satisfied with this culture of realistic movies. In this comment, the viewer is not insulting this growing culture, but expecting that the upcoming film is of his favourite genre. Hence we labelled it non-offensive.
-Ithilum valiya jhimikki kammal vannatha -There was an even bigger 'pendant earring'. 'Jhimikki kammal' was a trending song from a movie of the same actor mentioned here. The movie received huge publicity even before its release because of the song, but it turned out to be a disappointment after its release. Thus the annotators were confused about whether the comment was meant as an insult or not. But we concluded that the viewer is not offending the present trailer but marks his opinion as a warning for the audience not to judge the book by its cover.
-Ithu kandittu nalla tholinja comedyaayi thonniyathu enikku mathram aano? -Am I the only person here who felt this was a stupid comedy? The meaning of the Malayalam word corresponding to 'stupid' varies between regions of Kerala. Hence the disparity in opinion between annotators who speak different dialects of Malayalam was evident. Though in a few regions it is offensive, it is generally considered a byword for 'bad'.
-aa cinemayude peru kollam. Ithu Dileep ne udheshichanu, ayale mathram udheshichanu -The name of that movie is good. It is named after Dileep and intended only for him. It is quite obvious that there is a chance of imagining several different movie names based on the subjective predisposition of the annotator. As long as the movie name is unknown here, apparently no insult can be proved, and there is no profane language used in the sentence either.
-Kanditt Amala Paul Aadai Tamil mattoru version aanu ennu thonnunu -It looks like another version of Amala Paul's Tamil movie Aadai. Here the viewer suspects that the Malayalam movie 'Helen' is similar to the Tamil movie 'Aadai'. Though the movie 'Aadai' was positively received by viewers and critics, we cannot generalise and assume that this comment is also positive only because of this comparison. Hence we added it to the category of 'mixed feelings'.
-Evideo oru Hollywood story varunnilleee. Oru DBT. -Somewhere there is a Hollywood storyline... one doubt. This is also a comparison comment on that same movie 'Helen' mentioned above. Nevertheless, here the difference is that the movie is compared with the Hollywood standard, which is well known worldwide and is generally considered positive. Hence it is marked as a positive comment.
-Trailer pole nalla story undayal mathiyarinu. -It would be good enough if the story were as good as the trailer. Here the viewer mentioned two aspects of the movie, viz. the 'trailer' and the 'story'. He appreciates the trailer but doubts the quality of the story at the same time. We considered this comment positive because it is clear that he enjoyed the trailer and conveys strong optimism for the movie.
Benchmark Systems
In this section, we report the results obtained for the three languages on both tasks in the corpora introduced above. Like many earlier studies, we approach the tasks as text classification. In order to provide a simple baseline, we applied several traditional machine learning algorithms, namely Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), K-Nearest Neighbours (KNN), Decision Trees (DT) and Random Forests (RF), separately for both sentiment analysis and offensive language detection on the code-mixed datasets.
Experiments Setup
We used a randomly sampled 90%-5%-5% data split for the training, development and test sets in all experiments. All duplicate entries were removed from the dataset before the split to make the test and development data truly unseen. All experiments were tuned on the development set and tested on the test set.
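A minimal sketch of such a split, assuming pandas and scikit-learn; the file name and column layout are hypothetical (they follow the description of the released TSVs), and the exact sampling code used by the authors is not specified.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file name; first column is the comment, second the final label.
df = pd.read_csv("tamil_sentiment.tsv", sep="\t", names=["text", "label"])
df = df.drop_duplicates(subset="text")            # remove duplicates before splitting

train, rest = train_test_split(df, test_size=0.10, random_state=42)   # 90% train
dev, test = train_test_split(rest, test_size=0.50, random_state=42)   # 5% dev, 5% test
```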
Logistic Regression (LR):
LR is one of the baseline machine learning algorithms; it is a probabilistic classifier used for classification tasks (Genkin et al., 2007). It is essentially a transformed version of linear regression using the logistic function (Park, 2013). It takes real-valued features as input, each of which is multiplied by a weight, and the weighted sum is fed to the sigmoid function σ(x), also called the logistic function, to obtain the class probability (Shah et al., 2020). The decision is made based on a threshold value. The sigmoid function is given below:
\sigma(x) = \frac{1}{1 + e^{-x}} \quad (4)
Logistic regression has a close relationship with neural networks, as the latter can be viewed as a stack of several LR classifiers (de Gispert et al., 2015). Unlike Naïve Bayes, which is a generative classifier, LR is a discriminative classifier (Ng and Jordan, 2002). While Naïve Bayes holds strict conditional independence assumptions, LR is evidently more robust to correlated features (Jin and Pedersen, 2018). This means that when there is more than one feature, say F1, F2 and F3, which are absolutely correlated, LR will divide the weight W among the features as W1, W2 and W3 respectively. We evaluated the Logistic Regression model with L2 regularization to reduce overfitting. The input features are the Term Frequency-Inverse Document Frequency (TF-IDF) values of n-grams of up to length 3. With this approach the model is trained only on this dataset, without using any pre-trained embeddings.
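A sketch of this baseline with scikit-learn, reusing the hypothetical train/dev frames from the split sketch above; apart from the L2 penalty and the 1-3 word n-gram TF-IDF features, the hyperparameters are assumptions rather than the authors' exact settings.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lr_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),            # TF-IDF of 1- to 3-grams
    ("clf", LogisticRegression(penalty="l2", max_iter=1000)),  # L2-regularised LR
])
lr_model.fit(train["text"], train["label"])
print("dev accuracy:", lr_model.score(dev["text"], dev["label"]))
```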
Support Vector Machine (SVM):
Support Vector Machines are a powerful supervised machine learning algorithm used mainly for classification tasks, and for regression as well. The goal of an SVM is to find the hyperplane in an N-dimensional space which distinctly separates the data points of different classes (Ekbal and Bandyopadhyay, 2008). That is, the algorithm draws the decision boundary between the data points that belong to a particular category and the ones that do not fall into that category. This is applicable to any kind of data that is encoded as a vector; therefore, if we can produce appropriate vector representations of the data at hand, we can use an SVM to obtain the desired results (Ekbal and Bandyopadhyay, 2008). Here the input features are the same as for LR, that is, the Term Frequency-Inverse Document Frequency (TF-IDF) values of n-grams of up to length 3. We evaluate the SVM model with L2 regularization.
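The SVM baseline can be sketched in the same way; LinearSVC (which uses L2 regularisation by default) is an assumption here, reusing the pipeline components and data frames from the previous sketches.

```python
from sklearn.svm import LinearSVC

svm_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),   # same TF-IDF features as for LR
    ("clf", LinearSVC()),                             # hinge loss, L2 penalty by default
])
svm_model.fit(train["text"], train["label"])
```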
Multinomial Naive Bayes (MNB):
This is a Bayesian classifier that works on the naive assumption of conditional independence of features. This means that each input is assumed to be independent of the others, which is unrealistic for real data; nevertheless, the assumption simplifies several complex tasks and works well enough in practice to justify its use.

We evaluate a Naive Bayes classifier for multinomially distributed data, which is derived from Bayes' theorem, giving the probability of a future event based on an observed event. MNB is a specialized version of Naive Bayes designed for text documents. Whereas simple Naive Bayes would model a document as the presence and absence of particular words, MNB explicitly models the word counts and adjusts the underlying calculations to deal with them. The input text is therefore treated as a bag of words, with only the counts of word occurrences (frequencies) considered and the positions of words ignored.

Laplace smoothing is performed using α = 1 to solve the problem of zero probability, and we then evaluate the MNB model with TF-IDF vectors.
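A corresponding sketch, again reusing the TF-IDF pipeline from the earlier examples; alpha=1.0 reflects the Laplace smoothing mentioned above, and everything else is an assumption.

```python
from sklearn.naive_bayes import MultinomialNB

mnb_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
    ("clf", MultinomialNB(alpha=1.0)),   # alpha=1.0 corresponds to Laplace (add-one) smoothing
])
mnb_model.fit(train["text"], train["label"])
```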
K-Nearest Neighbour (KNN):
KNN is used for classification and regression problems, but is mostly used for classification. The KNN algorithm stores all available data and classifies a new data point on the basis of similarity. This implies that, as new data emerges, it can be conveniently assigned to a well-suited group using the KNN algorithm. The KNN algorithm assumes that new incoming data is related to the available cases and places the new case into the category that is most similar to the categories available. KNN is a non-parametric algorithm as it does not make any assumption about the underlying data (Nongmeikapam et al., 2017). It is often referred to as a lazy learner algorithm because it does not learn from the training set immediately; instead it stores the dataset and performs an operation on it at classification time. At training time the KNN algorithm only stores the dataset, and it classifies new data into a group close to the stored data as it encounters it.

We use KNN for classification with 3, 4, 5, and 9 neighbours, applying uniform weights.
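A sketch of the KNN baseline over the neighbour counts listed above, reusing the hypothetical pipeline and data frames from the earlier examples; all other settings are scikit-learn defaults, not the authors' exact configuration.

```python
from sklearn.neighbors import KNeighborsClassifier

for k in (3, 4, 5, 9):                                  # neighbour counts used above
    knn_model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
        ("clf", KNeighborsClassifier(n_neighbors=k, weights="uniform")),
    ])
    knn_model.fit(train["text"], train["label"])
    print(k, knn_model.score(dev["text"], dev["label"]))
```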
Decision Tree (DT):
The decision tree develops classification or regression models in the form of a tree structure. A dataset is broken down into smaller and smaller subsets while an associated decision tree is gradually built at the same time. The final product is a tree with decision nodes and leaf nodes. A decision tree classifier therefore works by generating a tree structure where each node corresponds to a feature name and the branches correspond to feature values. The leaves of the tree represent the classification labels. After sequentially choosing alternative decisions, each node is recursively split again, and finally the classifier defines rules to predict the result. Decision trees can accommodate high-dimensional data and perform classification without needing much computation, and in general a decision tree classifier has reasonable accuracy. On the downside, they are vulnerable to mistakes in classification problems with many classes and a comparatively limited number of training examples. Moreover, training is computationally costly: growing a decision tree is expensive because the candidate splits must be sorted at each node before the best split can be found, some algorithms use combinations of fields and must search for optimal combination weights, and pruning can also be costly since multiple candidate sub-trees must be formed and compared. Here, the maximum depth was 800 and the minimum sample split was 5 for DT. The criteria were Gini and entropy.
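A sketch using the depth, split and criterion settings stated above, again reusing the hypothetical pipeline components and data frames from the earlier examples.

```python
from sklearn.tree import DecisionTreeClassifier

for criterion in ("gini", "entropy"):                  # the two criteria reported above
    dt_model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
        ("clf", DecisionTreeClassifier(max_depth=800, min_samples_split=5,
                                       criterion=criterion)),
    ])
    dt_model.fit(train["text"], train["label"])
```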
Random Forest (RF):
Random forest is an ensemble classifier that makes its prediction based on the combination of different decision trees trained on datasets of the same size as the training set, called bootstraps, created by random resampling on the training set itself (Breiman, 2001). Once a tree is constructed, the set of bootstrap samples that do not include any particular record from the original dataset [out-of-bag (OOB) samples] is used as a test set. The error rate of the classification over all the test sets is the OOB estimate of the generalization error. RF shows important advantages over other methodologies regarding the ability to handle highly non-linearly correlated data, robustness to noise, tuning simplicity, and the opportunity for efficient parallel processing. Moreover, RF presents another important characteristic: an intrinsic feature selection step, applied prior to the classification task, to reduce the variable space by giving an importance value to each feature. RF follows specific rules for tree growing, tree combination, self-testing and post-processing; it is robust to overfitting and is considered more stable in the presence of outliers and in very high dimensional parameter spaces than other machine learning algorithms (Caruana and Niculescu-Mizil, 2006).
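A sketch of this baseline; the number of trees is an assumption (the paper does not state it), and oob_score=True simply exposes the OOB estimate of generalization accuracy described above. It reuses the hypothetical pipeline and data frames from the earlier sketches.

```python
from sklearn.ensemble import RandomForestClassifier

rf_model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
    ("clf", RandomForestClassifier(n_estimators=100, oob_score=True, random_state=42)),
])
rf_model.fit(train["text"], train["label"])
print("OOB accuracy estimate:", rf_model.named_steps["clf"].oob_score_)
```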
Results and Discussion
The results of the experiments with the classifiers described above, for both sentiment analysis and offensive language detection, are shown in terms of precision, recall, F1-score and support in Table 10, Table 11, Table 12, Table 13, Table 14, and Table 15. We used sklearn to develop the models. A macro-average computes the metrics (precision, recall, F1-score) independently for each class and averages them; this metric therefore treats all classes equally and does not take class imbalance into account. A weighted average takes the metrics from each class just like a macro-average, but the contribution of each class to the average is weighted by the number of examples available for it. The number of comments belonging to the different classes in both tasks is listed as the support values in the respective tables.
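The per-class metrics and the macro and weighted averages described above can be produced directly from scikit-learn; the sketch below reuses the hypothetical lr_model and test frame from the earlier examples.

```python
from sklearn.metrics import classification_report

predictions = lr_model.predict(test["text"])
# Prints per-class precision/recall/F1/support plus the macro and weighted averages.
print(classification_report(test["label"], predictions, digits=4))
```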
For sentiment analysis, the performance of the various classification algorithms ranges from inadequate to average on the code-mixed dataset. Logistic regression, random forest classifiers and decision trees were the ones that fared comparatively better across all sentiment classes. To our surprise, we see that SVM performs poorly, with worse heterogeneity across classes than the other methods. The precision, recall and F1-score are higher for the "Positive" class, followed by the "Negative" class. All the other classes performed very poorly. One of the reasons is the nature of the dataset, as the classes "Mixed feelings" and "Neutral state" are challenging for the annotators to label, owing to the problematic examples described before.
For offensive language detection, all the classification algorithms perform equally poorly. We see that logistic regression and random forest are the ones that performed relatively better than the others. The precision, recall and F1-score are higher for the "Not Offensive" class, followed by the "Offensive Targeted Individual" and "OL" classes. The reasons for the poor performance on the other classes are the same as for sentiment analysis. From the tables, we see that the classification algorithms performed better on the task of sentiment analysis than on offensive language detection. One of the main reasons could be the differences in the distributions of the classes between the two tasks.
When it comes to the sentiment analysis dataset in Kannada, out of the total of 7,671 sentences, 46% and 19% belong to the "Positive" and "Negative" classes respectively, while the other classes share 9%, 11% and 15%. This distribution is better than that of the Kannada dataset for the offensive language detection task, where 56% belong to "Not Offensive" while the other classes share low proportions of 4%, 8%, 6%, 2% and 24%. Although the distribution of offensive and non-offensive classes is skewed in all the languages, we observed that an overwhelmingly higher percentage of comments belonged to the non-offensive class in the Tamil and Malayalam datasets than in Kannada: 72.4% of comments in Tamil and 88.44% of comments in Malayalam were non-offensive, while in Kannada only 55.79% of the total comments were non-offensive. This explains why the precision, recall and F-score values for identifying the non-offensive class are consistently higher for the Tamil and Malayalam data than for Kannada. Next to the non-offensive class, the number of comments that belonged to the "Not in intended language" class was higher than the number of comments belonging to any one of the offensive classes in the Kannada and Malayalam datasets. In other words, it is easier to recognise the "Not offensive" and "Not in intended language" classes because more comments belong to these two classes than to the other offensive classes. This trend is shown in Tables 13, 14, and 15.
Since we collected the posts from movie trailers, we got more positive sentiment than others as the people who watch trailers are more likely to be interested in movies and this skews the overall distribution. However, as the code-mixing phenomenon is not incorporated in the earlier models, this resource could be taken as a starting point for further research. There is significant room for improvement in code-mixed research with our dataset. In our experiments, we only utilized the machine learning methods, but more information such as linguistic information or hierarchical meta-embedding can be utilized.
Conclusion
This work introduced code-mixed datasets of the under-resourced Dravidian languages, comprising more than 60,000 comments annotated for sentiment analysis and offensive language identification. To improve research in the under-resourced Dravidian languages, we created an annotation scheme and achieved a high inter-annotator agreement in terms of Krippendorff's alpha from voluntary annotators on contributions collected using Google Forms. We created baselines with the gold-standard annotated data and presented our results for each class in terms of precision, recall, and F-score. We expect this resource will enable researchers to address new and exciting problems in code-mixed research. In future work, we intend to investigate whether we can apply these corpora to build corpora for other under-resourced Dravidian languages.
Fig. 1 Data collection process.
Fig. 3 Examples of code mixing in the Kannada dataset.
Fig. 4 Examples of code mixing in the Malayalam dataset.
Fig. 5 Example Google Form with annotation instructions for sentiment analysis.
Fig. 6 Examples from the first page of the Google Form for sentiment analysis.
Fig. 7 Example Google Form with annotation instructions for offensive language identification.
Fig. 8 Example Google Form with annotation instructions for offensive language identification.
Fig. 9 Examples from the first page of the Google Form for offensive language identification.
Fig. 10 Treemap comparing sentiment classes across Tamil, Malayalam and Kannada.
Fig. 11 Treemap comparing offensive classes across Tamil, Malayalam and Kannada.
Fig. 12 Treemap comparing offensive classes (excluding the Not Offensive class) across Tamil, Malayalam and Kannada.
Figures 2, 3 and 4 pair each code-switching type with an example comment and its English translation. For Tamil (Fig. 2) the types illustrated are: no code-mixing (Tamil written in Tamil script); an inter-sentential mix of English and Tamil (Tamil written only in Tamil script); only Tamil written in Latin script; code-switching at the morphological level (written in both Tamil and Latin script); an intra-sentential mix of English and Tamil written in Latin script only; and a combined inter- and intra-sentential mix (Tamil written in both Tamil and Latin script). For Kannada (Fig. 3) the types are: only English; only Kannada written in Kannada script; a mix of English and Kannada with Kannada in Kannada script; only Kannada written in Latin script; only Kannada written in both Kannada and Latin script; a mix of English and Kannada written in Latin script only; and a mix of English and Kannada with Kannada in both scripts. For Malayalam (Fig. 4) the types are: only Malayalam written in Malayalam script (no code-mixing); an inter-sentential mix of English and Malayalam (Malayalam written in Malayalam script); only Malayalam written in Latin script; code-switching at the morphological level (written in both Malayalam and Latin script); an intra-sentential mix of English and Malayalam written in Latin script only; and a combined inter- and intra-sentential mix (Malayalam written in both Latin and Malayalam script).
Table 1 Annotators Statistics for Sentiment Analysis

Language                 Tamil  Malayalam  Kannada
Gender
  Male                   9      2          2
  Female                 2      4          3
  Non-binary             0      0          0
Higher Education
  Undergraduate          2      0          1
  Graduate               2      0          2
  Postgraduate           7      6          2
Medium of Schooling
  English                6      5          4
  Native language        5      1          1
Total                    11     6          5

Table 2 Annotators Statistics for Offensive Language Identification
Table 4 Corpus statistics for Sentiment Analysis

Table 5 Corpus statistics for Offensive Language Identification

                                         Tamil    Malayalam  Kannada
Number of words                          511,734  202,134    65,702
Vocabulary size                          94,772   40,729     20,796
Number of comments                       43,919   20,010     7,772
Number of sentences                      52,617   23,652     8,586
Average number of words per sentence     11       10         8
Average number of sentences per comment  1        1          1

Table 6 Sentiment Analysis Dataset Distribution

Class                     Tamil             Malayalam         Kannada
Negative                  5,228 (11.87 %)   2,600 (13.25 %)   1,484 (19.34 %)
Not in intended language  2,087 (4.74 %)    1,445 (7.36 %)    1,136 (14.80 %)
Neutral state             6,904 (15.68 %)   6,502 (33.14 %)   842 (10.97 %)
Mixed feelings            4,928 (11.19 %)   1,162 (5.92 %)    691 (9.00 %)
Positive                  24,873 (56.50 %)  7,907 (40.30 %)   3,518 (45.86 %)
Total                     44,020            19,616            7,671

Table 7 Offensive Language Identification Dataset Distribution. O: Offensive, O-Untargeted: Offensive Untargeted.

Class                     Tamil             Malayalam         Kannada
Not Offensive             31,808 (72.42 %)  17,697 (88.44 %)  4,336 (55.79 %)
O-Untargeted              3,630 (8.26 %)    240 (1.19 %)      278 (3.57 %)
O-Targeted Individual     2,965 (6.75 %)    290 (1.44 %)      628 (8.08 %)
O-Targeted Group          3,140 (7.14 %)    176 (0.87 %)      418 (5.37 %)
O-Targeted Others         590 (1.34 %)      -                 153 (1.96 %)
Not in intended language  1,786 (4.06 %)    1,607 (8.03 %)    1,898 (24.42 %)
Total                     43,919            20,010            7,772
Table 8 Train-Development-Test Data Distribution with 90%-5%-5% train-dev-test split for Sentiment Analysis

Table 9 Train-Development-Test Data Distribution with 90%-5%-5% train-dev-test split for Offensive Language Identification

             Tamil   Malayalam  Kannada
Training     35,139  16,010     6,217
Development  4,388   1,999      777
Test         4,392   2,001      778
Total        43,919  20,010     7,772
We evaluate the RF model with the same features as DT.

Table 10 Precision, Recall, and F-score for Tamil Sentiment Analysis

Classifier  Positive  Negative  Mixed feelings  Neutral state  Other language  Macro Avg  Weighted Avg
Support     2503      547       510             631            211             4402       4402
Precision
SVM         0.57      0.00      0.00            0.00           0.00            0.11       0.32
MNB         0.59      0.79      0.46            0.50           0.50            0.64       0.59
KNN         0.58      0.19      0.13            0.18           0.62            0.34       0.43
DT          0.65      0.32      0.23            0.36           0.52            0.42       0.51
LR          0.76      0.36      0.24            0.39           0.42            0.36       0.58
RF          0.62      0.59      0.71            0.56           0.80            0.66       0.63
Recall
SVM         1.00      0.00      0.00            0.00           0.00            0.20       0.57
MNB         1.00      0.06      0.00            0.04           0.04            0.28       0.59
KNN         0.70      0.04      0.06            0.29           0.07            0.23       0.46
DT          0.80      0.23      0.14            0.27           0.37            0.36       0.55
LR          0.64      0.43      0.28            0.44           0.64            0.40       0.54
RF          0.97      0.17      0.02            0.19           0.43            0.35       0.62
F-Score
SVM         0.72      0.00      0.00            0.00           0.00            0.14       0.41
MNB         0.74      0.11      0.01            0.08           0.08            0.28       0.47
KNN         0.63      0.07      0.08            0.23           0.13            0.23       0.42
DT          0.72      0.27      0.17            0.31           0.44            0.38       0.53
LR          0.69      0.39      0.26            0.41           0.51            0.38       0.56
RF          0.76      0.26      0.05            0.28           0.56            0.38       0.53

Table 11 Precision, Recall, and F-score for Malayalam Sentiment Analysis

Classifier  Positive  Negative  Mixed feelings  Neutral state  Other language  Macro Avg  Weighted Avg
Support     755       285       131             645            146             1962       1962
Precision
SVM         0.38      0.00      0.00            0.00           0.00            0.08       0.15
MNB         0.49      0.88      0.00            0.60           0.88            0.57       0.58
KNN         0.43      0.32      0.41            0.37           0.59            0.42       0.41
DT          0.51      0.54      0.35            0.61           0.51            0.50       0.54
LR          0.73      0.57      0.34            0.52           0.50            0.53       0.59
RF          0.62      0.74      0.56            0.51           0.76            0.64       0.61
Recall
SVM         1.00      0.00      0.00            0.00           0.00            0.20       0.38
MNB         0.92      0.13      0.00            0.45           0.10            0.32       0.53
KNN         0.67      0.12      0.12            0.34           0.21            0.29       0.41
DT          0.79      0.32      0.21            0.40           0.42            0.43       0.53
LR          0.51      0.45      0.32            0.72           0.66            0.53       0.57
RF          0.63      0.31      0.14            0.77           0.41            0.45       0.58
F-Score
SVM         0.56      0.00      0.00            0.00           0.00            0.11       0.21
MNB         0.64      0.23      0.00            0.52           0.17            0.31       0.46
KNN         0.53      0.17      0.19            0.36           0.30            0.31       0.38
DT          0.62      0.40      0.26            0.49           0.46            0.44       0.51
LR          0.60      0.50      0.33            0.60           0.57            0.52       0.57
RF          0.62      0.44      0.22            0.62           0.53            0.49       0.56

Table 12 Precision, Recall, and F-score for Kannada Sentiment Analysis

Classifier  Positive  Negative  Mixed feelings  Neutral state  Other language  Macro Avg  Weighted Avg
Support     363       162       57              83             103             768        768
Precision
RF          0.59      0.70      0.45            0.48           0.53            0.55       0.58
SVM         0.47      0.00      0.00            0.00           0.00            0.09       0.22
MNB         0.54      0.82      1.00            0.75           0.74            0.77       0.68
KNN         0.51      0.67      0.44            0.50           0.55            0.53       0.54
DT          0.59      0.61      0.21            0.39           0.45            0.45       0.53
LR          0.70      0.60      0.24            0.38           0.45            0.47       0.58
Recall
RF          0.87      0.48      0.06            0.18           0.50            0.42       0.59
SVM         1.00      0.00      0.00            0.00           0.00            0.20       0.47
MNB         0.99      0.36      0.02            0.04           0.14            0.31       0.57
KNN         0.91      0.10      0.07            0.05           0.41            0.31       0.52
DT          0.73      0.48      0.19            0.14           0.47            0.40       0.54
LR          0.69      0.51      0.26            0.36           0.55            0.48       0.57
F-Score
RF          0.70      0.57      0.11            0.27           0.52            0.43       0.55
SVM         0.64      0.00      0.00            0.00           0.00            0.13       0.30
MNB         0.70      0.50      0.03            0.07           0.23            0.31       0.48
KNN         0.65      0.17      0.12            0.09           0.47            0.30       0.43
DT          0.66      0.54      0.20            0.21           0.46            0.41       0.52
LR          0.70      0.55      0.25            0.37           0.50            0.47       0.57

Table 13 Precision, Recall, and F-score for Tamil Offensive Language Identification. O-Offensive, T-Targeted, G-Group.

Classifier  Not-O  O-untargeted  OTI   OTG   OT-Other  Other language  Macro Avg  Weighted Avg
Support     3190   368           315   288   71        160             4392       4392
Precision
RF          0.77   0.48          0.65  0.43  1.00      0.88            0.70       0.72
SVM         0.73   0.67          0.25  0.12  0.00      0.91            0.45       0.65
MNB         0.74   0.79          1.00  1.00  0.00      0.96            0.75       0.78
KNN         0.73   0.67          0.25  0.12  0.00      0.91            0.45       0.65
DT          0.80   0.29          0.28  0.20  0.11      0.70            0.40       0.67
LR          0.87   0.29          0.27  0.14  0.03      0.68            0.38       0.71
Recall
RF          0.99   0.16          0.06  0.03  0.01      0.57            0.31       0.76
SVM         0.99   0.02          0.01  0.02  0.00      0.13            0.19       0.73
MNB         1.00   0.03          0.01  0.00  0.00      0.44            0.25       0.74
KNN         0.99   0.02          0.01  0.02  0.00      0.13            0.19       0.73
DT          0.92   0.20          0.15  0.12  0.03      0.56            0.33       0.72
LR          0.66   0.28          0.30  0.48  0.04      0.72            0.41       0.58
F-Score
RF          0.86   0.24          0.12  0.06  0.03      0.69            0.33       0.69
SVM         0.84   0.03          0.01  0.03  0.00      0.23            0.19       0.63
MNB         0.85   0.06          0.02  0.01  0.00      0.60            0.26       0.65
KNN         0.84   0.03          0.01  0.03  0.00      0.23            0.19       0.63
DT          0.85   0.24          0.20  0.15  0.04      0.62            0.35       0.69
LR          0.75   0.29          0.28  0.22  0.04      0.70            0.38       0.63

Table 14 Precision, Recall, and F-score for Malayalam Offensive Language Identification. O-Offensive, T-Targeted, G-Group.

Classifier  Not-O  O-untargeted  OTI   OTG   OT-Other  Other language  Macro Avg  Weighted Avg
Support     1765   29            27    23    -         157             2001       2001
Precision
RF          0.95   1.00          1.00  1.00  -         0.95            0.98       0.95
SVM         0.88   0.00          0.00  0.00  -         0.00            0.18       0.78
MNB         0.89   0.00          0.00  0.00  -         0.90            0.36       0.86
KNN         0.95   1.00          1.00  1.00  -         0.90            0.97       0.95
DT          0.95   0.67          0.79  0.65  -         0.82            0.78       0.93
LR          0.97   0.50          0.33  0.30  -         0.52            0.52       0.91
Recall
RF          1.00   0.45          0.37  0.39  -         0.69            0.58       0.95
SVM         1.00   0.00          0.00  0.00  -         0.00            0.20       0.88
MNB         1.00   0.00          0.00  0.00  -         0.11            0.22       0.89
KNN         0.99   0.48          0.44  0.43  -         0.68            0.61       0.95
DT          0.98   0.55          0.41  0.48  -         0.69            0.62       0.94
LR          0.89   0.72          0.56  0.52  -         0.85            0.71       0.88
F-Score
RF          0.97   0.62          0.54  0.56  -         0.80            0.70       0.94
SVM         0.94   0.00          0.00  0.00  -         0.00            0.19       0.83
MNB         0.94   0.00          0.00  0.00  -         0.20            0.23       0.85
KNN         0.97   0.65          0.62  0.61  -         0.78            0.72       0.94
DT          0.97   0.60          0.54  0.55  -         0.75            0.68       0.94
LR          0.93   0.59          0.42  0.38  -         0.64            0.59       0.89

Table 15 Precision, Recall, and F-score for Kannada Offensive Language Identification. O-Offensive, T-Targeted, G-Group.

Classifier  Not-O  O-untargeted  OTI   OTG   OT-Other  Other language  Macro Avg  Weighted Avg
Support     427    33            75    44    14        185             778        778
Precision
RF          0.65   0.00          0.71  0.43  1.00      0.67            0.58       0.63
SVM         0.55   0.00          0.00  0.00  0.00      0.00            0.09       0.30
MNB         0.60   0.00          0.86  0.00  0.00      0.78            0.37       0.60
KNN         0.61   0.00          0.78  0.67  0.00      0.66            0.45       0.60
DT          0.64   0.21          0.57  0.29  0.25      0.56            0.42       0.57
LR          0.77   0.04          0.63  0.25  0.22      0.64            0.43       0.66
Recall
RF          0.89   0.00          0.35  0.08  0.06      0.54            0.32       0.66
SVM         1.00   0.00          0.00  0.00  0.00      0.00            0.17       0.55
MNB         0.98   0.00          0.33  0.00  0.00      0.22            0.26       0.62
KNN         0.93   0.00          0.19  0.09  0.00      0.34            0.26       0.61
DT          0.78   0.09          0.51  0.18  0.07      0.45            0.35       0.60
LR          0.76   0.03          0.59  0.23  0.29      0.71            0.43       0.66
F-Score
RF          0.75   0.00          0.47  0.14  0.11      0.60            0.34       0.61
SVM         0.71   0.00          0.00  0.00  0.00      0.00            0.12       0.39
MNB         0.74   0.00          0.48  0.00  0.00      0.34            0.26       0.54
KNN         0.73   0.00          0.30  0.16  0.00      0.45            0.27       0.55
DT          0.70   0.13          0.54  0.22  0.11      0.50            0.37       0.58
LR          0.77   0.04          0.61  0.24  0.25      0.68            0.43       0.66
Footnotes
https://github.com/bharathichezhiyan/DravidianCodeMix-Dataset
https://zenodo.org/record/4750858#.YJtw0SYo_0M
Different types of code-mixing are shown in Figure 2.
https://www.britannica.com/topic/Dravidian-languages
https://github.com/philbot9/youtube-comment-scraper
https://pypi.org/project/langdetect/
https://www.nltk.org/
Telugu word for Mr.
https://scikit-learn.org/stable/
| [
"https://github.com/bharathichezhiyan/DravidianCodeMix-Dataset",
"https://github.com/philbot9/youtube-remarkscraper"
]
|
[
"Confronting gravitational-wave observations with modern nuclear physics constraints",
"Confronting gravitational-wave observations with modern nuclear physics constraints"
]
| [
"I Tews \nTheoretical Division\nLos Alamos National Laboratory\n87545Los AlamosNMUSA\n",
"J Margueron \nInstitut de Physique Nucléaire de Lyon\nCNRS/IN2P3\nUniversité de Lyon\nUniversité\nClaude Bernard Lyon 1F-69622Villeurbanne CedexFrance\n",
"S Reddy \nInstitute for Nuclear Theory\nUniversity of Washington\n98195-1550SeattleWAUSA\n\nJINA-CEE\nMichigan State University\n48823East LansingMIUSA\n"
]
| [
"Theoretical Division\nLos Alamos National Laboratory\n87545Los AlamosNMUSA",
"Institut de Physique Nucléaire de Lyon\nCNRS/IN2P3\nUniversité de Lyon\nUniversité\nClaude Bernard Lyon 1F-69622Villeurbanne CedexFrance",
"Institute for Nuclear Theory\nUniversity of Washington\n98195-1550SeattleWAUSA",
"JINA-CEE\nMichigan State University\n48823East LansingMIUSA"
]
| []
| Multi-messenger observations of neutron star (NS) mergers have the potential to revolutionize nuclear astrophysics. They will improve our understanding of nucleosynthesis, provide insights about the equation of state (EOS) of strongly-interacting matter at high densities, and enable tests of the theory of gravity and of dark matter. Here, we focus on the EOS, where both gravitational waves (GWs) from neutron-star mergers and X-ray observations from space-based detectors such as NICER will provide more stringent constraints on the structure of neutron stars. Furthermore, recent advances in nuclear theory have enabled reliable calculations of the EOS at low densities using effective field theory based Hamiltonians and advanced techniques to solve the quantum many-body problem. In this paper, we address how the first observation of GWs from GW170817 can be combined with modern calculations of the EOS to extract useful insights about the EOS of matter encountered inside neutron stars. We analyze the impact of various uncertainties, the role of phase transitions in the NS core, and discuss how future observations will improve our understanding of dense matter.PACS. 26.60.Kp Equations of state of neutron-star matter -26.60.-c Nuclear matter aspects of neutron stars arXiv:1901.09874v1 [nucl-th] | 10.1140/epja/i2019-12774-6 | [
"https://arxiv.org/pdf/1901.09874v1.pdf"
]
| 119,029,705 | 1901.09874 | a20e32c97e9218c13f8704494d6a1620ff7f474b |
Confronting gravitational-wave observations with modern nuclear physics constraints
I Tews
Theoretical Division
Los Alamos National Laboratory
87545Los AlamosNMUSA
J Margueron
Institut de Physique Nucléaire de Lyon
CNRS/IN2P3
Université de Lyon
Université
Claude Bernard Lyon 1F-69622Villeurbanne CedexFrance
S Reddy
Institute for Nuclear Theory
University of Washington
98195-1550SeattleWAUSA
JINA-CEE
Michigan State University
48823East LansingMIUSA
Confronting gravitational-wave observations with modern nuclear physics constraints
Received: date / Revised version: dateEPJ manuscript No. (will be inserted by the editor)
Multi-messenger observations of neutron star (NS) mergers have the potential to revolutionize nuclear astrophysics. They will improve our understanding of nucleosynthesis, provide insights about the equation of state (EOS) of strongly-interacting matter at high densities, and enable tests of the theory of gravity and of dark matter. Here, we focus on the EOS, where both gravitational waves (GWs) from neutron-star mergers and X-ray observations from space-based detectors such as NICER will provide more stringent constraints on the structure of neutron stars. Furthermore, recent advances in nuclear theory have enabled reliable calculations of the EOS at low densities using effective field theory based Hamiltonians and advanced techniques to solve the quantum many-body problem. In this paper, we address how the first observation of GWs from GW170817 can be combined with modern calculations of the EOS to extract useful insights about the EOS of matter encountered inside neutron stars. We analyze the impact of various uncertainties, the role of phase transitions in the NS core, and discuss how future observations will improve our understanding of dense matter.PACS. 26.60.Kp Equations of state of neutron-star matter -26.60.-c Nuclear matter aspects of neutron stars arXiv:1901.09874v1 [nucl-th]
Introduction
Multimessenger observations of neutron-star (NS) mergers have the potential to revolutionize nuclear astrophysics much in the same way as observations of the cosmic microwave background (CMB) radiation revolutionized particle astrophysics. Neutron-star merger events simultaneously emit gravitational waves (GWs) and electromagnetic (EM) signals, from gamma-rays, X-rays, optical, infrared, to radio waves, and neutrinos. The first observation of a NS merger, GW170817 in the GW spectrum, GRB 170817A in the gamma-ray spectrum, and AT 2017gfo in the electromagnetic (EM) spectrum, was made on August 17, 2017, and in the weeks thereafter [1,2,3,4]. Triggered by the Fermi and Integral telescopes [3,5], this observation provided detailed spectral and temporal features both in GWs and EM radiation. Theoretical efforts to interpret this data has provided insights into the production of heavy r-process elements in NS mergers [6], and constraints on the EOS of dense matter [7,8,9,10]. NS mergers have the potential to provide detailed information on the properties of the merging compact stars, such as their masses and radii [11], as well as on the properties of the densest baryonic matter to be observed in the universe.
Future detections of NS mergers, anticipated during the next observing run of the Advanced LIGO and VIRGO detectors, could provide even stronger constraints on the EOS of strongly-interacting matter and the r-process.
We are pleased to contribute to this topical issue on "The first neutron star merger observation -Implications for nuclear physics", which contains several articles devoted to the theory and computing needed to improve the description of dense matter and to model neutronstar mergers -efforts that will play a key role in extracting insights from GW170817 and future detections. Here, we elaborate on earlier work in Ref. [10], where we analyzed GW170817 constraints on the dense matter EOS, to provide additional details, discussions, and new results.
Our contribution is structured as follows. In Sec. 2 we describe the NS equation-of-state models employed in our analyzis. In particular, we use two models: the minimal model or meta-model (MM), see Sec. 2.3 and the maximal or speed-of-sound model (CSM), see Sec. 2.4. Both models are constrained at low densities by state-of-the-art calculations of neutron-rich matter from chiral effective field theory (EFT). We discuss these models in the context of GW170817 in great detail in Sec. 3 and analyze the impact of phase transitions or future GW detections. Finally, we summarize our results and provide an outlook in Sec. 4.
Models
In this section, we discuss the dense-matter models we use in our analysis. Calculations of the EOS of neutron matter based on Hamiltonians derived from chiral EFT provide a reliable method to estimate the uncertainties associated with poorly constrained aspects of two-and many-body nuclear forces at short-distance [12,13]. Chiral EFT is a systematic expansion for nuclear forces in powers of momenta, and provides an efficient way to estimate theoretical uncertainties. It is however limited to momenta up to the so-called breakdown scale, Λ b , which signals the breakdown of the effective theory due to additional high-momentum physics, e.g. the onset of new degrees of freedom. Since Λ b is expected to be of the order of 500 − 600 MeV [14], chiral EFT is not applicable at all densities encountered in neutron stars and chiral EFT interactions have typically been used to describe neutron matter only up to saturation density, n sat . Here, using insights obtained in Ref. [10], we will analyze to which extent chiral EFT predictions up to 2n sat with conservative error estimates provide useful constrains for the nuclear equation of state, even though uncertainties grow fast with density.
To describe the EOS at higher densities, we will consider two extrapolation schemes rooted in low-density microscopic predictions and widely covering our present uncertainties at higher density. These two schemes are the minimal model or meta-model (MM), based on a smooth extrapolation of chiral EFT results, and the maximal model or speed-of-sound model (CSM), which explores the widest possible domain for the EOS and also contains more drastic behavior with density; see Ref. [10] for the first analysis of GWs with these models. These two models show some overlap for properties of dense neutron-star matter, as suggested by the masquerade phenomenon [15], but also highlight differences: the confrontation of these models with each other and with observations sheds light on the impact of the presence of strong phase transitions at high density, as is detailed hereafter.
Pure neutron matter from chiral EFT
Neutron stars are ideal laboratories to test theories of the strong interaction at finite chemical potential: the structure of neutron stars is governed by the knowledge of the EOS of neutron-star matter, relating energy density, pressure, and temperature. Additional uncertainties may come from rotation and the magnetic-field distribution in the star, but the dense-matter EOS is the key input. Since neutron stars explore densities from a few grams per cubic centimeter up to 10 times the nuclear saturation density, n_sat = 0.16 fm−3 = 2.7·10^14 g cm−3, the knowledge of the EOS is required for densities covering several orders of magnitude. Though young proto-neutron stars or neutron-star remnants also explore the EOS at high temperatures up to several tens of MeV, older neutron stars can typically be considered as cold objects at T = 0. This is especially true for the two neutron stars in a binary during the inspiral phase of a merger, whose properties can be analyzed from the premerger GW signal.
While the EOS of the neutron-star crust, reaching up to n_sat/2, is rather well constrained, the uncertainty of the EOS increases fast with density and the composition of the inner core of NS is still unknown. Nevertheless, in the density range from n_sat/2 up to about 2n_sat, the neutron-star EOS can be constrained by state-of-the-art nuclear-theory models. The starting point for these constraints are calculations of pure neutron matter (PNM). PNM is an idealized, infinite system consisting solely of neutrons, but it is much easier to compute than systems that also contain protons. The reason is that certain parts of the nuclear interaction, e.g., tensor interactions, are weaker or do not contribute at all among neutrons. In contrast to symmetric nuclear matter, PNM is also not unstable with respect to density fluctuations below n_sat, and uniform matter remains the true ground state of PNM at all densities, simplifying its calculation.
To reliably describe neutron matter, one needs precise and accurate quantum many-body methods in combination with a reliable model for the nuclear interaction. Neutron matter has been extensively studied in the last decade, using a multitude of nuclear interactions and advanced ab initio many-body methods. Among these are, e.g., many-body perturbation theory [17,18,19], the coupled-cluster method [20], quantum Monte Carlo methods [21], or the self-consistent Green's function method [22]. A comparison of these different studies, see e.g., Refs. [23,24], shows that neutron matter is rather well constrained by these multiple ab initio approaches using diverse nuclear Hamiltonians. In this paper, we will use calculations of neutron matter obtained with the auxiliary-field diffusion Monte Carlo (AFDMC) method [25] together with modern nuclear Hamiltonians from chiral EFT.
Quantum Monte Carlo methods are among the most precise many-body methods for strongly interacting systems [25]. They provide the ground state of a many-body system, governed by a non-relativistic nuclear Hamiltonian defining the Schrödinger equation, by evolving a trial wave function Ψ T in imaginary time,
Ψ_GS = lim_{τ→∞} e^{−Hτ} Ψ_T ,   (1)
where Ψ_T is constructed so that it has a non-vanishing overlap with the ground state Ψ_GS. Expanding Ψ_T in eigenfunctions of the Hamiltonian, one can easily see that contributions of excited states decay with imaginary time, and only the ground-state component of the trial wave function remains. Quantum Monte Carlo methods have been used to successfully describe nuclei up to 16O [25,26,27] and neutron matter [21,12]. At very low densities, where neutron matter is close to the unitary limit and interactions are dominated by large scattering-length physics, these methods [28] have been successfully confronted with experimental measurements of cold atomic gases [29,30,31]. Given their success in studying strongly interacting matter and nuclei [21,32,12,33,27], we employ the AFDMC method in this work to determine PNM properties. For more details on Quantum Monte Carlo methods we refer the reader to Ref. [25].
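To make the projection property of Eq. (1) concrete, the following minimal Python sketch applies e^{−Hτ} to a random trial state for a small model Hamiltonian; the random symmetric matrix merely stands in for the true nuclear Hamiltonian, and all numbers are purely illustrative rather than an AFDMC calculation.

```python
import numpy as np

# Toy Hamiltonian: a small random symmetric matrix stands in for H.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = (A + A.T) / 2.0

# Exact ground state for comparison.
evals, evecs = np.linalg.eigh(H)
psi_gs = evecs[:, 0]

# Random trial state with non-vanishing overlap with the ground state.
psi_t = rng.normal(size=50)
psi_t /= np.linalg.norm(psi_t)

# Imaginary-time projection: psi(tau) = exp(-H tau) psi_T, renormalized.
for tau in [0.0, 0.5, 1.0, 2.0, 5.0]:
    # Propagate in the eigenbasis to keep the example numerically simple.
    coeffs = evecs.T @ psi_t
    psi_tau = evecs @ (np.exp(-evals * tau) * coeffs)
    psi_tau /= np.linalg.norm(psi_tau)
    overlap = abs(psi_tau @ psi_gs)
    print(f"tau = {tau:4.1f}   |<psi(tau)|psi_GS>| = {overlap:.6f}")
```

As the imaginary time grows, the overlap with the true ground state approaches one, which is the filtering property exploited by the QMC methods discussed above.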
On the interaction side, chiral EFT [34,35] is a modern theory for nuclear forces that is consistent with the symmetries of Quantum Chromodynamics and systematically describes the nucleon-nucleon interaction in terms of explicitly resolved longer-range pion exchanges as well as short-range nucleon contact interactions. Chiral EFT is based on a momentum expansion in terms of p/Λ_b, where p is the typical momentum of the nuclear system at hand, and Λ_b is the breakdown scale already discussed. The short-range interaction terms parametrize all unresolved and unknown high-energy physics beyond the breakdown scale, and depend on a set of low-energy couplings (LECs), which are typically fitted to nucleon-nucleon (NN) scattering data and properties of light nuclei. Chiral EFT not only describes NN interactions but also predicts consistent three-body (3N) and higher many-body forces. It has been successfully applied to calculate properties of ground and excited states of nuclei, nuclear matter, as well as electroweak processes; see, e.g., Ref. [24] for a review. Most importantly, the systematic chiral EFT expansion enables the estimation of theoretical uncertainties for these physical systems.
In our analysis in this work, we use local chiral EFT interactions that have been constructed specifically for use in QMC methods in Refs. [12,36,37,38]. These interactions have been successfully tested in light- to medium-mass nuclei and in n-α scattering [12,27], and agree with our current knowledge of the empirical parameters of nuclear matter [16,39]. In Ref. [13], these interactions have been used to study neutron matter up to 2n_sat with theoretical uncertainty estimates using the AFDMC method. For more details on QMC calculations with local chiral interactions we refer the reader to Ref. [40].
In particular, in this work we use local chiral interactions at a cutoff scale R_0 = 1.0 fm with systematic uncertainty estimates. In Fig. 1 we show the results for the energy per particle and pressure of neutron matter at leading order (LO), next-to-leading order (NLO), and next-to-next-to-leading order (N2LO) with their uncertainty bands, in a density range from 0.04 fm−3 up to 2n_sat. We find that the uncertainty bands increase fast with density and are quite sizable at 2n_sat. In addition to the results for chiral interactions, we also show in Fig. 1 AFDMC results employing the phenomenological AV8' NN and AV8' NN plus UIX 3N interactions for comparison. It is interesting to note that the AV8' and NLO NN interactions agree very well with each other, which highlights the fact that many-body forces are a considerable source of uncertainty. Finally, we also compare all calculations with the unitary-gas limit of Ref. [16].
Discussion of uncertainties
The uncertainty bands shown in Fig. 1 include the following sources of uncertainty: i) the truncation of the nuclear Hamiltonian within the chiral expansion, ii) the regularization scheme and scale, which are needed to implement nuclear Hamiltonians in many-body methods, iii) the uncertainties in the determination of low-energy couplings from data, and iv) the many-body uncertainty that originates in approximations made when solving the Schrödinger equation for the nuclear many-body system. The first three sources, which originate in the nuclear Hamiltonian, dominate over the many-body uncertainty from QMC methods. Among these three, the truncation uncertainty is the dominant source of uncertainty and we will discuss it in the following.
The truncation uncertainty can be expressed in the following way. Introducing the dimensionless expansion parameter Q = p/Λ_b, which, as stated before, governs the low-momentum expansion in the chiral EFT approach, and following Ref. [41], under the prerequisite that chiral EFT is a converging theory, one can write the order-by-order contributions to an observable X as the infinite summation
X = X_0 Σ_{i=0}^{∞} c_i Q^i .   (2)
Here, X_0 sets the natural scale expected for the observable X, e.g., the leading-order result, X_0 = X_LO (c_0 = 1), and the c_{i≥1} denote the expansion coefficients. In calculations of nuclear systems, this sum has to be truncated for practical reasons, inducing the so-called truncation uncertainty. This uncertainty is intrinsic to all nuclear Hamiltonians but can be specified for chiral EFT Hamiltonians by
ΔX = X − X_0 Σ_{i=0}^{n} c_i Q^i .   (3)
It has been shown in Ref. [41] that for practical purposes an estimate of the magnitude of the first truncated term in Eq. (2), given by i = n + 1, is a sufficient uncertainty estimate. To obtain this estimate, both the size of the unknown expansion coefficient c n+1 and of the expansion parameter Q are required. A conservative choice for the coefficient c n+1 is the maximum of all previously found coefficients,
c_{n+1} = max_{i=0,…,n} c_i ,   (4)
while Q has to be estimated from the typical momentum scale for the system at hand. This uncertainty prescription is similar to the one presented by Epelbaum, Krebs, and Meißner (EKM) [42], and the truncation uncertainty, e.g., at N 2 LO, can be obtained from an order-by-order calculation as
ΔX^{N2LO} = max( Q⁴ |X^{LO} − X^{free}|, Q² |X^{NLO} − X^{LO}|, Q |X^{N2LO} − X^{NLO}| ) = Q⁴ X_0 max_{i=0,…,n} c_i .   (5)
We have used this prescription to compute the truncation uncertainty, using Q = √(3/5) k_F/Λ_b, with the Fermi momentum k_F and Λ_b = 500 MeV. The total uncertainty bands in Fig. 1 additionally include the other three sources of uncertainty. The regularization-scheme dependence has been explored by explicitly including regulator artifacts for local regulators. Specifically, in Fig. 1, the neutron-matter uncertainty bands include three different local chiral Hamiltonians which explore short-range 3N regulator artifacts; see Ref. [12] for details on the Hamiltonians and Ref. [43] for details on the regulator artifacts. These two sources of uncertainty dominate the total uncertainty band, while the many-body uncertainty is negligible.
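As a concrete illustration of Eqs. (2)-(5), the short Python sketch below evaluates the N2LO truncation uncertainty from order-by-order results for an observable. The order-by-order energies and the Fermi momentum used in the example are invented numbers for demonstration only, and the choice Q = √(3/5) k_F/Λ_b is an assumption of this sketch.

```python
import numpy as np

HBAR_C = 197.327   # MeV fm
LAMBDA_B = 500.0   # breakdown scale in MeV

def truncation_uncertainty_n2lo(x_free, x_lo, x_nlo, x_n2lo, kf):
    """EKM-style estimate of Eq. (5): Delta X at N2LO.

    kf is the Fermi momentum in fm^-1; Q = sqrt(3/5) kf / Lambda_b
    (average momentum of a free Fermi gas, an assumption of this sketch).
    """
    q = np.sqrt(3.0 / 5.0) * kf * HBAR_C / LAMBDA_B
    return max(q**4 * abs(x_lo - x_free),
               q**2 * abs(x_nlo - x_lo),
               q**1 * abs(x_n2lo - x_nlo))

# Invented order-by-order energies per particle (MeV) at some density:
x_free, x_lo, x_nlo, x_n2lo = 10.0, 16.0, 14.0, 14.5
kf = 1.7  # fm^-1, roughly the Fermi momentum of neutron matter near n_sat
delta = truncation_uncertainty_n2lo(x_free, x_lo, x_nlo, x_n2lo, kf)
print(f"E/N = {x_n2lo} +/- {delta:.2f} MeV")
```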
To estimate the convergence of the chiral expansion at different densities, the series of expansion coefficients of Eq. (2) can provide insights. In Ref. [13], we have studied the convergence of the chiral series in pure neutron matter and found it to be reasonable up to a density of 2n sat . Beyond that, we expect the chiral expansion to break down even though the expansion parameter only increases by approximately 25% from n sat to 2n sat . Therefore, we restrict the chiral EFT input to densities up to 2n sat . In addition, we exclude one chiral Hamiltonian from further consideration because its regulator artifacts lead to a spurious and unphysical attractive 3N contribution in neutron matter, as discussed in Ref. [13]. This Hamiltonian represents the lower, soft part of the uncertainty band and is also in conflict with the unitary-gas bound of Ref. [16], shown in Fig. 1 as a blue dashed line. Excluding this Hamiltonian changes the lower bound of the uncertainty band to the red-dotted line in Fig. 1, in good agreement with the unitary-gas constraint.
In the following, we use this chiral EFT band up to a density n_tr to constrain two different models for the high-density equation of state. By varying n_tr from n_sat to 2n_sat, we will show that, despite the rapid increase of the uncertainty of the neutron-matter EOS with density, chiral EFT constraints remain extremely useful up to 2n_sat.
The minimal model
The first model that we consider in this analysis, the minimal model or meta-model (MM), assumes the EOS to be smooth enough to be describable in terms of a density expansion about n sat . Here, we briefly describe the MM, but see also Refs. [39,44] for more details.
The MM is described in terms of the empirical parameters of nuclear matter, which are defined as the Taylor coefficients of the density expansion of the energy per particle of symmetric nuclear matter e_sat(n) and the symmetry energy s_sym(n),
e_sat(n) = E_sat + (1/2) K_sat u² + (1/6) Q_sat u³ + (1/24) Z_sat u⁴ + … ,   (6)
s_sym(n) = E_sym + L_sym u + (1/2) K_sym u² + (1/6) Q_sym u³ + (1/24) Z_sym u⁴ + … ,   (7)
where the expansion parameter u is defined as u = (n − n_sat)/(3n_sat), n = n_n + n_p is the baryon density, and n_{n/p} are the neutron and proton densities. A good representation of the energy per particle around n_sat and for small isospin asymmetries δ = (n_n − n_p)/n can be obtained from the following quadratic approximation,
e(n, δ) = e_sat(n) + s_sym(n) δ² .   (8)
The lowest-order empirical parameters can be extracted from nuclear experiments [39], but typically carry uncertainties. The symmetry-energy parameters in particular are of great interest to the nuclear-physics community, and considerable effort is invested in a better estimation of their size. The MM constructs the energy per nucleon as
e^N(n, δ) = t^{FG*}(n, δ) + v^N(n, δ) ,   (9)
where the kinetic energy is expressed as
t^{FG*}(n, δ) = (t^{FG}_sat/2) (n/n_sat)^{2/3} [ (1 + κ_sat n/n_sat) f_1(δ) + κ_sym (n/n_sat) f_2(δ) ] ,   (10)
and the functions f 1 and f 2 are defined as
f_1(δ) = (1 + δ)^{5/3} + (1 − δ)^{5/3} ,   (11)
f_2(δ) = δ [ (1 + δ)^{5/3} − (1 − δ)^{5/3} ] .   (12)
The parameters κ_sat and κ_sym control the density and asymmetry dependence of the Landau effective mass (q = n or p),
m/m*_q(n, δ) = 1 + (κ_sat + τ_3 κ_sym δ) n/n_sat ,   (13)
where τ 3 = 1 for neutrons and -1 for protons. Taking the limit κ sat = κ sym = 0, Eq. (10) provides the free Fermi gas energy.
The potential energy in Eq. (9) is expressed as a series expansion in the parameter x and is quadratic in the asymmetry parameter δ,
v^N(n, δ) = Σ_{α≥0}^{N} (1/α!) (v^sat_α + v^sym_α δ²) x^α u^N_α(x) ,   (14)
where the function u^N_α(x) = 1 − (−3x)^{N+1−α} exp(−b n/n_sat) ensures the limit e^N(n = 0, δ) = 0. The parameter b is taken large enough for the function u^N_α to fall off sufficiently fast with density and not to contribute at densities above n_sat. A typical value is b = 10 ln 2 ≈ 6.93, such that the exponential function is 1/2 for n = n_sat/10. The MM parameters v^sat_α and v^sym_α are simply expressed in terms of the empirical parameters. The MM as expressed in Eqs. (9), (10), and (14) coincides with the meta-model ELFc described in Ref. [39], where detailed relations can be found. To obtain the neutron-star EOS, we extend our models to β-equilibrium and include a crust as described in Ref. [44]. By varying the empirical parameters within their known or estimated uncertainties, it was shown that the MM can reproduce many existing neutron-star EOS that are based on the assumption that a nuclear description is valid at all densities probed in neutron stars. Therefore, this model is a reliable representation of EOSs without exotic phases of matter separated from the nucleonic phase through strong phase transitions.
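A minimal numerical sketch of the density expansion, Eqs. (6)-(8): the functions below evaluate e_sat(n), s_sym(n), and the quadratic approximation for e(n, δ). The default empirical parameters are rough, round numbers in the vicinity of commonly quoted values, not the ranges of Table 1.

```python
import numpy as np

N_SAT = 0.16  # saturation density in fm^-3

def e_sat(n, E_sat=-16.0, K_sat=230.0, Q_sat=300.0, Z_sat=-500.0):
    """Energy per particle of symmetric matter, Eq. (6), in MeV."""
    u = (n - N_SAT) / (3.0 * N_SAT)
    return E_sat + K_sat * u**2 / 2 + Q_sat * u**3 / 6 + Z_sat * u**4 / 24

def s_sym(n, E_sym=32.0, L_sym=50.0, K_sym=-100.0, Q_sym=0.0, Z_sym=0.0):
    """Symmetry energy, Eq. (7), in MeV."""
    u = (n - N_SAT) / (3.0 * N_SAT)
    return (E_sym + L_sym * u + K_sym * u**2 / 2
            + Q_sym * u**3 / 6 + Z_sym * u**4 / 24)

def e(n, delta):
    """Quadratic approximation of Eq. (8): e(n, delta) in MeV."""
    return e_sat(n) + s_sym(n) * delta**2

# Pure neutron matter (delta = 1) around saturation density:
for n in (0.08, 0.16, 0.32):
    print(f"n = {n:.2f} fm^-3   e_PNM = {e(n, 1.0):6.2f} MeV")
```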
In the following, the parameter space of the MM will be explored within a Markov-chain Monte Carlo algorithm, where the MM parameters are allowed to evolve freely inside the boundaries given in Table 1. The resulting models satisfy the chiral EFT predictions in neutron matter for the energy per particle and the pressure up to n_tr, causality, stability, positivity of the symmetry energy (s_sym(n) > 0), and also reach the maximum observed neutron-star mass M^obs_max; see the discussion in Sec. 2.5. The maximum density associated with each EOS within the MM is given either by the breakdown of causality, stability, or positivity of the symmetry energy, or by the end point of the stable neutron-star branch.
The maximal model
The second model that we consider in this analysis, the maximal model (CSM), is based on an extension of the speed of sound in neutron-star matter. Starting from the pure neutron matter calculations, we construct the neutron-star EOS up to n_tr by constructing a crust as described in Ref. [45] and extending the neutron-matter results to β equilibrium above the crust-core transition. Having constructed the EOS up to n_tr, we compute the speed of sound,
c_S² = ∂p(ε)/∂ε ,   (15)
where p is the pressure and ε is the energy density. Above n_tr, we parametrize the speed of sound in a very general way: we randomly sample a set of points c_S²(n), where the values for c_S have to be positive and are limited by the speed of light (stability and causality), and interpolate between the different sampling points using linear segments. The individual points are randomly distributed in the interval n_tr − 12n_sat. From the resulting speed-of-sound curve, we reconstruct the EOS step by step starting at n_tr, where ε(n_tr), p(n_tr), and c_S²(n_tr) are known:
n_{i+1} = n_i + Δn ,   (16)
ε_{i+1} = ε_i + Δε = ε_i + Δn · (ε_i + p_i)/n_i ,   (17)
p_{i+1} = p_i + c_S²(n_i) · Δε ,   (18)
where i = 0 defines the transition density n_tr. In the second line we have used the thermodynamic relation p = n ∂ε/∂n − ε, which is valid at zero temperature.
In that way, we iteratively obtain the high-density EOS. We have explored extensions for a varying number of c_S²(n) points, i.e., for 5-10 points, and found that the differences between these extensions are marginal. We therefore choose 6 sampling points. For each sampled EOS, we generate a second version which includes a strong first-order phase transition with a random onset density and width, to explicitly explore such extreme density behavior.
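The reconstruction of Eqs. (16)-(18) lends itself to a few lines of code. The Python sketch below samples a handful of speed-of-sound points above n_tr, interpolates them linearly, and integrates the EOS on a fine density grid; the matching values of ε and p at n_tr are invented placeholders rather than results of the chiral EFT calculation.

```python
import numpy as np

N_SAT = 0.16          # fm^-3
N_TR = 2 * N_SAT      # transition density where chiral EFT input ends
N_MAX = 12 * N_SAT    # upper end of the extension

rng = np.random.default_rng(42)

# Randomly sampled speed-of-sound points (in units of c^2), causal and stable.
n_nodes = np.sort(rng.uniform(N_TR, N_MAX, size=6))
cs2_nodes = rng.uniform(0.0, 1.0, size=6)

def cs2(n, cs2_tr=0.1):
    """Linear interpolation of c_S^2(n) between the sampled nodes."""
    nodes = np.concatenate(([N_TR], n_nodes))
    values = np.concatenate(([cs2_tr], cs2_nodes))
    return np.interp(n, nodes, values)

# Placeholder matching point at n_tr (energy density and pressure in MeV fm^-3).
eps, p = 320.0, 20.0
dn = 0.001  # density step in fm^-3

eos = []
for n in np.arange(N_TR, N_MAX, dn):
    eos.append((n, eps, p))
    deps = dn * (eps + p) / n      # Eq. (17): d(eps) = dn (eps + p)/n
    p += cs2(n) * deps             # Eq. (18): dp = c_S^2 d(eps)
    eps += deps

print(f"EOS reconstructed on {len(eos)} grid points up to {N_MAX:.2f} fm^-3")
```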
The CSM for neutron-star applications was introduced in Ref. [13] and represents an extension of the model of Ref. [46]. A similar model was used in Ref. [47]. However, in contrast to Ref. [13], we have extended this model to explore the complete allowed parameter space for the speed of sound, by abandoning the specific functional form of Ref. [13] in favor of an extension using linear segments. This more conservative choice leads to slightly larger uncertainty bands, but allows us to make more definitive statements about neutron-star properties. The resulting EOS parameterizations represent possible neutron-star EOSs and may include drastic density dependences, e.g., strong phase transitions which lead to intervals with a drastic softening or stiffening of the EOS. This represents a stark contrast to the MM, which does not include such behavior, and might give insights into the constituents of neutron-star matter at high densities. The predictions of the CSM represent the widest possible domain for the respective neutron-star observables consistent with the low-density input from chiral EFT. If observations outside of this domain were to be made, this would imply a breakdown of nuclear EFTs at densities below the corresponding n_tr.
Since the CSM represents very general EOSs governed only by the density dependence of the speed of sound, it does not allow any statements about possible degrees of freedom. In this sense, it is very similar to extensions using piecewise polytropes, which were introduced in Ref. [48] and have been used extensively to determine neutron-star properties; see, e.g., Refs. [49,50,7]. However, in contrast to polytropic extensions, in the CSM the speed of sound is continuous except when first-order phase transitions are explicitly accounted for. This is important for the study of tidal polarizabilities, where c_S^{−1} enters.
Comparison of MM and CSM
For both the MM and CSM we generate thousands of EOSs that are consistent with low-density constraints from chiral EFT. In addition, the observations of heavy two-solar-mass pulsars in recent years [51,52,53] place important additional constraints on these EOSs, which we enforce by requiring M_max > M^obs_max for all our EOSs. To be conservative, as the limit for M^obs_max we choose the centroid of the maximum observed mass minus twice the error bar on the observation. For the two heaviest neutron stars observed up to now [51,52,53], this gives M^obs_max ≈ 1.9 M_⊙. We now compare the predictions of both the MM (black bands with solid contour) and CSM (red bands with dotted contour) for the EOS of neutron-star matter, see Fig. 2, and the mass-radius (MR) relation, see Fig. 3. In the respective figures, we show the EOS and MR envelopes for n_tr = n_sat [panels (a)] and for n_tr = 2n_sat [panels (c)]. In all cases, the MM is a subset of the CSM, as expected. Also, the two models, which treat the neutron-star crust with different prescriptions, show excellent agreement at low densities. For n_tr = n_sat, the MM and CSM EOSs agree very well up to n_tr, while for n_tr = 2n_sat the MM only samples a subset of the chiral EFT input, because the M^obs_max constraint forces the EOS to be sufficiently stiff, which excludes the softest low-density neutron-matter EOSs. This is a consequence of the smooth density expansion around n_sat in the MM. In the CSM, instead, a nonsmooth stiffening of these softest EOSs at higher densities can help stabilize heavy neutron stars, which is why the complete low-density band from chiral EFT is sampled. We also find that going from n_tr = n_sat to n_tr = 2n_sat allows us to considerably reduce the EOS uncertainty for the CSM. The MM uncertainty is also slightly reduced and the MM band gets narrower. These results show that even though the theoretical uncertainties in the neutron-matter EOS increase fast in the density range 1−2n_sat, the additional information provided allows us to substantially reduce uncertainties in the CSM EOS: essentially, the chiral EFT constraint excludes the possibility of phase transitions in the region going from 1 to 2n_sat. The impact of phase transitions above 2n_sat on the EOS is very much reduced compared to the case where they are allowed to appear at lower densities, because we impose the M^obs_max constraint. A large domain of soft CSM EOSs is, thus, excluded. The stiff MM and CSM EOSs are very close up to 2n_sat, as expected.
These observations are also reflected in the MR predictions of both models. In the latter case (n_tr = 2n_sat), the radius uncertainty for a typical neutron star is only about 1 km in the MM, compatible with the expected uncertainty of the NICER mission [54]. This allows for a possible exclusion of the MM if the NICER prediction disagrees with the MM result. If such an observation should be made in the near future, we will be able to better constrain dense-matter phase transitions. In contrast, the CSM, which includes EOSs with sudden softening or stiffening at higher densities, dramatically extends the allowed envelopes for the EOS and the MR relation as compared with the MM. These differences in the predictions of the MM and CSM can be used to identify regions for the neutron-star observables for which statements about the constituents of matter might be possible. For example, the observation of a typical neutron star with a radius of 10 km would imply the existence of a softening phase transition, which would hint at new phases of matter appearing in the core of neutron stars. Instead, in regions where both the MM and CSM agree, the masquerade problem does not allow statements about the constituents of neutron-star matter at high densities [15].
Finally, due to the rather soft density dependence of chiral EFT constraints in the density range 1 − 2n sat , n tr = 2n sat together with the constraint M max > M obs max seems to strongly disfavor EOS that lead to the appearance of disconnected compact-star branches, as suggested in Ref. [55]. Such EOS need very strong first-order phase transitions, which would soften the EOS so much that heavy two-solar-mass neutron stars cannot be supported, in accordance with the findings in Ref. [56]. Instead, chiral EFT calculations up to n tr = 2n sat imply that EOSs with first-order phase transitions lead to neutron stars of the classification "A" or "C" of Ref. [46].
Results for GW170817
In this section, we confront the recent neutron-star merger observation GW170817 by the LIGO-Virgo (LV) collaboration with our two classes of EOS models.
Posterior of the LIGO-Virgo analysis
The LV collaboration observed the GW signal of GW170817 for about 100 s (several thousand revolutions, starting from 25 Hz) and performed detailed analyses of the waveform [4]. Because the chirp mass M_chirp, defined as
M_chirp = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5} ,   (19)
can be extracted from the entire signal, this observation allowed tight constraints to be placed on it. For GW170817, the LV collaboration precisely determined M_chirp = 1.186 ± 0.001 M_⊙. The extraction of higher-order GW parameters from the waveform is complicated for several reasons. First, higher-order parameters are sensitive to the GW signal at later times and, thus, only a smaller part of the signal is suitable for their extraction. Second, there exist ambiguities between different higher-order parameters, e.g., between the individual neutron-star spins and the tidal polarizability. Because of this, the LV collaboration provided results for both a low-spin and a high-spin scenario. In this work, we only investigate the low-spin scenario, for two reasons. First, large spins are not expected from the observed galactic binary NS population. Second, because neutron stars spin down over time, low spins are also expected from the extremely long merger time of GW170817 of the order of gigayears. Therefore, the low-spin scenario is expected to be the more realistic scenario for binary neutron-star mergers such as GW170817.
The above-mentioned problems in the extraction of higher-order parameters lead to weaker constraints on the individual masses of the two component neutron stars in GW170817. With m_1 being the mass of the heavier and m_2 being the mass of the lighter neutron star in the binary, the mass distribution of the individual stars is typically described in terms of the parameter q = m_2/m_1. The observed mass distributions for m_1 and m_2 are presented as histograms in the upper panel of Fig. 4. To use this information in our calculations, we describe the posterior of the LV collaboration for M_chirp and q by the analytical probability distribution [58]

p(q, M_chirp) = p(q) p(M_chirp) ,   (20)
where
p(M_chirp) ∝ exp[ −(M_chirp − M̄_chirp)² / (2σ_M²) ] ,   (21)
with M̄_chirp = 1.186 M_⊙ and σ_M = 10^{−3} M_⊙ [4]. For the mass asymmetry q, we have fitted the function
p(q) = exp[ −(1/2) v(q)² − c² v(q)⁴ ] ,   (22)
to the LV posterior for the component masses. We find c = 1.83 and v(q) = (q − 0.89)/0.20, and compare the resulting normalized analytic distributions with the observed data in the upper panel of Fig. 4. Since in this work we confront the gravitational-wave observations of the LV collaboration with nuclear-physics constraints, i.e., use our set of EOSs together with the source properties of GW170817 to postdict the distribution of Λ̃, we do not make use of the observed probability distribution for Λ̃. However, for reasons of completeness, we have fitted functions consisting of two and three Gaussians of the form
p(Λ̃) = Σ_{i=1}^{N} a_i exp[ −(1/2) ((Λ̃ − Λ̃_i)/σ_i)² ]   (23)
to the observed LV posterior for Λ̃. The resulting parameters a_i, Λ̃_i, and σ_i are reported in Table 2, and the resulting functions as well as the LV result are plotted in the lower panel of Fig. 4, where the horizontal black line represents the 90% LV confidence level for Λ̃. We also show the posteriors for the reanalysis of Ref. [57] for the two extreme cases [uniform mass prior (u) and mass prior informed by double neutron stars (d)]. The main difference between the two analyses lies in the appearance of a second peak in the posterior probability distribution around Λ̃ ∼ 600 for the LV result. The origin of this second peak is not well understood: the peak may be washed out when considering a wider domain of frequencies, starting from 23 Hz as in Ref. [57]. The presence of the second peak is indeed an important issue for the prediction of Λ̃: including the second peak, the upper boundary of the 90% confidence level is 720, while it drops if the second peak is absent. Therefore, in the following, we consider a structureless flat probability distribution in Λ̃, and sample the mass distributions for m_1 and m_2 in GW170817 from the analytic function p(q, M_chirp).
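A brief sketch of how component masses can be drawn from the analytic posterior of Eqs. (20)-(22): q is obtained by rejection sampling from p(q), M_chirp from the narrow Gaussian of Eq. (21), and the individual masses follow from inverting the definitions of M_chirp and q. The sampling interval for q is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

M_CHIRP_MEAN, SIGMA_M = 1.186, 1.0e-3   # Msun, Eq. (21)
C, Q0, DQ = 1.83, 0.89, 0.20            # fit parameters of Eq. (22)

def p_q(q):
    """Unnormalized analytic posterior p(q) of Eq. (22); its maximum is 1."""
    v = (q - Q0) / DQ
    return np.exp(-0.5 * v**2 - C**2 * v**4)

def sample_q(size):
    """Rejection sampling of q in [0.4, 1.0] from p(q)."""
    samples = []
    while len(samples) < size:
        q = rng.uniform(0.4, 1.0)
        if rng.uniform() < p_q(q):   # valid since p_q <= 1 everywhere
            samples.append(q)
    return np.array(samples)

n_samples = 10000
q = sample_q(n_samples)
m_chirp = rng.normal(M_CHIRP_MEAN, SIGMA_M, size=n_samples)

# Invert M_chirp and q for the component masses (m1 >= m2).
m1 = m_chirp * (1.0 + q)**0.2 / q**0.6
m2 = q * m1
print(f"median m1 = {np.median(m1):.3f} Msun, median m2 = {np.median(m2):.3f} Msun")
```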
Areas of constant Λ
Before addressing GW170817, we focus on the tidal polarizability Λ of individual neutron stars. The tidal polarizability describes how a neutron star deforms under an external gravitational field, and depends on neutron-star properties as
Λ = (2/3) k_2 ( c² R / (G M) )⁵ .   (24)
Here, k_2 is the tidal Love number, which is computed together with the Tolman-Oppenheimer-Volkoff (TOV) equations; see, for example, Refs. [59,60,61] for more details.
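Because the Love number k_2 is obtained alongside the solution of the TOV equations, it is useful to see how compact such an integration is. The sketch below integrates the TOV equations for a toy polytropic EOS in geometric units; the EOS, the central density, and hence the resulting mass and radius are purely illustrative, and the additional ODE needed for k_2 is omitted.

```python
import numpy as np

MSUN_KM = 1.4766   # G Msun / c^2 in km

# Toy polytropic EOS p = K eps^2 in geometric units (G = c = 1, lengths in km);
# both K and the central energy density below are invented, purely illustrative.
K_POLY = 100.0     # km^2

def pressure(eps):
    return K_POLY * eps**2

def energy_density(p):
    return np.sqrt(p / K_POLY)

def solve_tov(eps_c, dr=1.0e-3):
    """Euler integration of the TOV equations outward until the pressure vanishes."""
    r = dr
    p = pressure(eps_c)
    m = 4.0 / 3.0 * np.pi * r**3 * eps_c
    while p > 1.0e-12 * pressure(eps_c):
        eps = energy_density(p)
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        p += dpdr * dr
        m += dmdr * dr
        r += dr
    return r, m / MSUN_KM   # radius in km, mass in solar masses

radius, mass = solve_tov(eps_c=1.0e-3)
print(f"Toy star: R = {radius:.1f} km, M = {mass:.2f} Msun")
```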
It is interesting to look at areas of constant Λ within the MR plane. In this case, the relation of neutron-star mass and radius is given by
M = ( (3/2) Λ/k_2 )^{−1/5} (c²/G) R ,   (25)
leading to the following scaling relation,
M/M_⊙ = 0.6243 (Λ/k_2)^{−1/5} R/(1 km) .   (26)
For constant Λ, this implies an almost linear relationship between M and R, because the Love number k_2 does not vary strongly in that case. In addition, for different values of Λ, the slopes are rather similar due to the small exponent −1/5. In Fig. 5, we plot the mass-radius relation for n_tr = n_sat for the CSM, together with areas of constant Λ. In particular, we show areas for Λ = 200, 400, 800, and 1600. While there is a tight correlation between radii and tidal polarizabilities, from Fig. 5 one can see that both quantities still provide complementary information. For example, an exact observation of the tidal polarizability of a neutron star, i.e., with vanishing uncertainty, would still leave a remaining uncertainty for the radius of a typical 1.4 M_⊙ neutron star. To be specific, for Λ = 200, the remaining radius uncertainty is still ≈ 1 km, compatible with the expected uncertainty of NICER [54]. For larger values of Λ this uncertainty decreases, and for Λ = 800 it is only ≈ 0.5 km. However, based on GW170817, values larger than 720 are ruled out for typical neutron stars. Hence, both tidal deformabilities and radii offer complementary information on neutron-star global structure.
Finally, from Eq. (26), one can infer the following fit,
M/M_⊙ = a/(b + Λ)^{1/5} · R/(1 km) ,   (27)
where we find a = 0.406435 and b = 68.5.
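The fit of Eq. (27) makes it straightforward to draw lines of constant Λ in the mass-radius plane, as in Fig. 5. The following Python sketch simply evaluates Eq. (27) for a few values of Λ, using the fit constants quoted above.

```python
import numpy as np

A_FIT, B_FIT = 0.406435, 68.5   # fit constants of Eq. (27)

def mass_of_constant_lambda(radius_km, lam):
    """M/Msun along a line of constant tidal polarizability, Eq. (27)."""
    return A_FIT / (B_FIT + lam)**0.2 * radius_km

radii = np.linspace(9.0, 14.0, 6)   # km
for lam in (200, 400, 800, 1600):
    masses = mass_of_constant_lambda(radii, lam)
    line = ", ".join(f"{m:.2f}" for m in masses)
    print(f"Lambda = {lam:4d}:  M/Msun = [{line}]  for R = 9-14 km")
```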
Tidal polarizabilities of GW170817
For neutron-star mergers, the GW signal allows the extraction of the binary tidal polarizability parameter Λ̃. This parameter is defined as a mass-weighted average of the individual tidal polarizabilities,

Λ̃ = (16/13) [ (m_1 + 12 m_2) m_1⁴ Λ_1 + (m_2 + 12 m_1) m_2⁴ Λ_2 ] / (m_1 + m_2)⁵ .   (28)

As discussed in Sec. 3.1, the extraction of the binary tidal polarizability suffers from increased uncertainties, because it matters only during the last few orbits [59,60] and because of correlations among the parameters. In the initial publication of the LV collaboration [62], the constraint Λ̃ ≤ 800 was reported with 90% confidence (corrected to Λ̃ ≤ 900 in Ref. [4]). This analysis, however, was very general and did not assume both objects in the binary system to have the same EOS. Several reanalyses have since improved this constraint. Assuming that both compact objects were neutron stars governed by the same EOS, Ref. [57] used polytropic EOS models and a Bayesian parameter estimation with additional information on the source location from EM observations to derive limits on Λ̃ for different prior choices for the component masses: for uniform priors the reported 90% confidence interval was Λ̃ = 84−642, for a component-mass prior informed by radio observations of Galactic double neutron stars the result was Λ̃ = 94−698, and for a component-mass prior informed by radio pulsars the reported result was Λ̃ = 89−681. A reanalysis by the LV collaboration found a new 90% confidence interval of 70 ≤ Λ̃ ≤ 720 [4]; see Fig. 4. Finally, the LV collaboration reported an additional result, assuming that both merging objects were neutron stars governed by the same EOS [63]. This EOS was based on the Lindblom parametrization [64] stitched to the SLy EOS for the crust, and resulted in Λ̃ = 70−580 with 90% confidence. For the different extractions, the lower limit is rather stable, but the upper limit varies from 580 to 800.
In general, the uncertainty range for all extractions is sizable. In the following, we investigate the resulting Λ̃ obtained from state-of-the-art nuclear-physics models at low densities. To obtain these results, for all our EOS models we compute the combined tidal polarizability Λ̃ for thousands of NS-NS binaries, where we sample the mass m_1 of the heavier neutron star in the range 1.0−1.9 M_⊙ and the mass of the lighter neutron star m_2 in the range 1.0 M_⊙ − m_1 (implying q ≤ 1). This allows us to explore a wide range of mass asymmetries and chirp masses ranging from 0.871 M_⊙ to 1.654 M_⊙, which naturally includes the chirp masses of several known neutron-star binaries as well as GW170817. We show the resulting envelopes for Λ̃ as a function of M_chirp in Fig. 6. We also indicate the chirp mass for GW170817, M^GW170817_chirp = 1.186 M_⊙ [4] (blue dashed vertical lines), which allows us to extract nuclear-physics constraints on Λ̃ for GW170817.
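A small sketch of the mass-weighted average defining Λ̃, Eq. (28): given some EOS-dependent function Λ(m), the binary tidal polarizability follows directly. The power-law Λ(m) used here is a crude placeholder, not a prediction of the MM or CSM.

```python
def lambda_tilde(m1, m2, lam1, lam2):
    """Binary tidal polarizability, Eq. (28)."""
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1**4 * lam1 +
                            (m2 + 12.0 * m1) * m2**4 * lam2) / (m1 + m2)**5

def lambda_of_mass(m):
    """Crude placeholder for an EOS prediction Lambda(m); not a real EOS."""
    return 400.0 * (1.4 / m)**6

m1, m2 = 1.5, 1.25   # component masses in Msun (q = 0.83)
lt = lambda_tilde(m1, m2, lambda_of_mass(m1), lambda_of_mass(m2))
print(f"Lambda_tilde = {lt:.0f} for (m1, m2) = ({m1}, {m2}) Msun")
```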
Using nuclear-physics constraints from chiral EFT up to n_sat [panel (a)] leads to the widest allowed range for Λ̃ for a given chirp mass. This is true for both the MM and the CSM, but the CSM envelope is much larger due to the wider flexibility of the EOS at higher densities. For GW170817 (M^GW170817_chirp = 1.186 M_⊙), we find Λ̃_CSM = 60−2180 and Λ̃_MM = 230−950; for the CSM, the uncertainty in Λ̃ is much larger than the LV constraint for GW170817. For this transition density, both the MM and the CSM can be constrained by the LV constraint on GW170817 and, as a result, GW170817 adds information on the mass-radius relation of neutron stars.
To explore the impact of the LV constraint of Ref. [4], we make use of p(q, M_chirp) and, using a uniform prior, select only EOS-m_{1,2} combinations with 70 ≤ Λ̃ ≤ 720. In panel (b) of Fig. 6 we show the resulting envelope for Λ̃(M_chirp) for the MM and CSM. In addition, we also show the resulting envelopes for the EOS and the MR relation in panels (b) of Figs. 2 and 3, respectively. Note that the resulting range of tidal polarizabilities for M_chirp = 1.186 M_⊙, Λ̃ = 70−1020 in Fig. 6(b), is larger than the LV constraint. The reason is that we accept all EOSs that fulfill the LV constraint for any value of q allowed according to p(q). The range in Fig. 6(b), however, is computed for many more values of q. For example, if an EOS passes the constraint Λ̃ ≤ 720 for q = 0.7, then the resulting Λ̃ for q = 1 will be larger.
Naturally, enforcing this constraint rules out a considerable part of the EOSs that lie both on the high-pressure and on the low-pressure side at high energy densities. This, again, is reflected in the mass-radius relation, where neutron stars with large radii are excluded by the LV constraint. For our analysis and the CSM, we find that the radius of a 1.4 M_⊙ neutron star, R_1.4, can be constrained to be 9.0 km < R_1.4 < 13.6 km. This was also found in Ref. [7], where a polytropic EOS expansion was used to constrain the radius of neutron stars by enforcing the constraint Λ_1.4 < 800 (the initial LV constraint of Ref. [62]). Ref. [7] found that R_1.4 < 13.6 km, and both analyses are in excellent agreement.
Finally, we assume the chiral EFT constraint to be valid up to 2n_sat [panel (c)]. Even though the uncertainties are still sizable, the predicted total range for Λ̃ reduces dramatically. For GW170817, we find Λ̃_CSM = 80−580 and Λ̃_MM = 280−480. Our constraint, which is solely guided by nuclear-EFT input, is much smaller than the observational LV constraint and in excellent agreement with the recent detailed reanalysis by the LV collaboration [63]. We emphasize, though, that our analysis is more constraining than the LV reanalysis: our 100% envelopes are compatible with the 90% contour of Ref. [63]. Therefore, the sentiment that the neutron-star merger GW170817 revolutionized our understanding of the EOS is a bit of an exaggeration. It does, however, represent a new hope for obtaining different constraints on the EOS that might also offer the possibility to investigate new phases of dense matter. In this sense, GW170817 and the expected future detections will surely contribute to answering the long-standing question of the nature of the NS core.
We explicitly stress that our results imply that current nuclear-physics knowledge in the relevant density range of 1−2n_sat, as obtained by ab initio calculations using modern nuclear Hamiltonians and state-of-the-art many-body methods, is compatible with the recent neutron-star merger observation but more constraining for neutron-star observables and the EOS. In addition, efforts in the nuclear-theory community to improve nuclear interactions might allow us to considerably reduce the theoretical uncertainty of the neutron-star-matter EOS between 1−2n_sat, which will tighten our constraints even more. In general, this very interesting density range provides an excellent laboratory to probe nuclear-theory predictions against astrophysical observations and heavy-ion collision experiments.
Impact of varying n tr and the validity of chiral EFT predictions
The present studies, as well as the one of Ref. [13], are the first to use chiral EFT calculations of the neutron-matter EOS up to twice nuclear saturation density with reliable error estimates to compute tidal polarizabilities for GW170817. Reliable uncertainty estimates are critical for understanding the impact that GW detections will have on elucidating the properties of dense matter inside neutron stars, and theoretical calculations of the dense-matter EOS without uncertainty estimates are of limited value for a meaningful analysis of GW data. Uncertainty estimates have shown that chiral EFT input remains useful up to 2n_sat, and we find, in contrast to other recent publications [7,8,9], that GW170817 does not provide new insight about the EOS that cannot be obtained from current nuclear-physics knowledge. This message tempers claims made in these recent publications which state that the upper limit on the tidal polarizability derived from GW data rules out stiff nuclear EOSs. While this inference is correct, such stiff EOSs are already ruled out based on state-of-the-art nuclear Hamiltonians. In other words, models of dense matter excluded by the upper limit on the tidal deformability from GW170817 are already incompatible with the current microscopic EOSs at densities where error estimates can still be justified.
Nevertheless, the reliability of chiral interactions at these densities has been questioned. Although the convergence of the chiral expansion cannot be strictly proven in this density range, we present arguments to show that the order-by-order convergence of the chiral expansion for the EOS up to 2n_sat is still reasonable. First, the expansion parameter increases by only about 25% over the density interval 1−2n_sat. Second, Ref. [13] analyzed the order-by-order convergence of the employed Hamiltonians at 2n_sat and showed that, even though the reliability naturally decreases with increasing density, the order-by-order convergence remains reasonable and consistent with simple power-counting arguments within the theoretical uncertainty estimates. Nevertheless, densities around 2n_sat seem to provide an upper limit to the applicability of the chiral Hamiltonians we use in this work.
To support our main statement - namely that the constraints from GW170817 are compatible with, but less restrictive than, predictions of the EOS based on realistic nuclear potentials and do not yield specific new information about nuclear Hamiltonians or about possible phase transitions at supra-nuclear density - we investigate which density range for chiral EFT input is sufficient to justify this statement. We present the total uncertainty ranges for R_1.4 (left panel) and Λ̃ for M_chirp = 1.186 M_⊙ (right panel) as functions of the density n_tr in Fig. 7. For R_1.4, we indicate the upper limit on the radii of Ref. [7], R_1.4 ≤ 13.6 km, which was obtained using n_tr = n_sat and the LV constraint (horizontal dotted line). We find that the CSM alone constrains the radii to be smaller than this bound for n_tr > 0.23 fm^−3 ≈ 1.44n_sat (an 11% increase of the expansion parameter compared to n_sat). For the tidal polarizability, we indicate the LV constraint as a horizontal blue band and find that the CSM leads to Λ̃ ≤ 720 as soon as n_tr > 0.285 fm^−3 ≈ 1.78n_sat (a 20% increase of the expansion parameter compared to n_sat). We would like to emphasize that these crucial values of n_tr for the two observables do not necessarily have to agree, as seen in Fig. 7. The reason is that the upper limit on Λ̃ depends on q while R_1.4 does not. In Fig. 6(b) we have seen that, when varying q in the range allowed by GW170817, Λ̃ can increase to values ∼ 1000 for the EOSs that pass the LV constraint from GW170817. Chiral EFT input becomes compatible with this value at n_tr ∼ 0.23 fm^−3, in agreement with the value for R_1.4. At these values of n_tr, in particular at 1.44n_sat, the arguments for the validity of chiral interactions remain even stronger, which strengthens the validity of our main statement.
Finally, the value of n_tr also affects the speed of sound inside neutron stars. The speed of sound is expected to approach the conformal limit of c_S² = 1/3 at very high densities [65]. In neutron stars, though, it is not clear whether this conformal limit remains valid. As discussed in detail in Ref. [13], the neutron-matter EOS up to n_tr = 2n_sat requires the speed of sound to pass the conformal limit in order to be sufficiently stiff to stabilize the observed two-solar-mass neutron stars. In fact, for chiral models the speed of sound has to increase beyond the conformal limit for n_tr > 0.28 fm^−3, and even for phenomenological nuclear Hamiltonians, which lead to stiffer neutron-matter EOSs, this statement remains valid for n_tr > 0.31 fm^−3. While there might be EOSs that are much stiffer below 2n_sat and, hence, stabilize the heaviest neutron stars while still obeying the conformal limit, such EOSs are ruled out by modern nuclear Hamiltonians.
Therefore, the neutron-matter EOS up to 2n_sat for state-of-the-art nuclear Hamiltonians requires the speed of sound in neutron stars to behave non-monotonically, i.e., to increase beyond c_S² = 1/3 but to decrease again at higher densities to approach this limit. For example, for chiral EFT interactions and n_tr = 2n_sat, the speed of sound has to reach values c_S² ≥ 0.4. The question remains, though, which forms of strongly-interacting matter lead to such a behavior of the speed of sound. In particular, it might be unlikely that the speed of sound reaches values close to the speed of light. If we were to enforce that the speed of sound inside neutron stars is limited by c_S² ≤ 0.5, the hatched areas in Fig. 7 are excluded: this constraint slightly reduces the upper bound on neutron-star radii, but it would mostly rule out low-radius neutron stars. The reason is that neutron stars can have very low radii only for strong first-order phase transitions with small onset densities. To simultaneously support 2 M_⊙ neutron stars, the EOS has to experience a sudden subsequent stiffening, i.e., the speed of sound has to increase dramatically. For a larger possible speed of sound, stronger phase transitions are allowed, which leads to stars with small radii. Limits on c_S², on the other hand, rule out the strongest phase transitions and increase the smallest possible radius. For c_S² ≤ 0.5, the lower limit on the radius of a 1.4 M_⊙ neutron star is of the order of 10 km, comparable to the constraint of Ref. [11].

Fig. 8. Envelopes for the correlation between Λ̃ of GW170817 and the radius of a 1.4 M_⊙ (red) and of a 1.6 M_⊙ (blue) neutron star for n_tr = 2n_sat and the CSM. The corresponding values for the MM (not shown) lie within the CSM envelopes. We also show the lower limit of the LV constraint on the tidal polarizability of GW170817 [4], the proposed constraint of Ref. [66] and its update of Ref. [67], and the radius constraint for a 1.6 M_⊙ neutron star from Ref. [11].
Impact of additional constraints
Even though the tidal polarizabilities extracted from GW170817 alone may not revolutionize our understanding of the EOS, several additional constraints based on the EM counterpart were proposed. These additional constraints were mostly based on the fact that the EM signal of GW170817 does not seem to imply a prompt collapse of the hypermassive merger remnant to a black hole. Instead, it is argued that the merger remnant survived for several hundred milliseconds before collapse. Based on this assumption, several groups independently suggested the maximum mass of neutron stars to be less than ≈ 2.2−2.3 M_⊙ [58,68,69]. While this constraint is powerful for smooth EOS models, which exhibit a strong correlation between M_max and the radii of typical neutron stars, the appearance of strong first-order phase transitions in general EOS models implies that the maximum mass is not very constraining for the structure of typical neutron stars; see also Ref. [10].
Additional constraints for radii and tidal polarizabilities were proposed based on the same assumptions. Ref. [11] suggested that the EM observation can be used to argue that R_1.6 ≥ 10.68^{+0.15}_{−0.04} km. In contrast to the M_max constraint, a radius constraint has a sizable impact on the CSM: in Figs. 2(b) and (c) as well as Figs. 3(b) and (c) we indicate by hatched areas the parts of the envelopes which are excluded by R_1.6 ≥ 10.68^{+0.15}_{−0.04} km. In addition, Ref. [66] suggested that the amount of ejecta determined from the EM observations implies Λ̃ > 400. This constraint was later updated to Λ̃ > 300 [67]. In Fig. 8, we show the correlation between Λ̃ and the radii of a 1.4 M_⊙ neutron star, R_1.4, and a 1.6 M_⊙ neutron star, R_1.6, for n_tr = 2n_sat and the CSM. While in general radii and tidal polarizabilities are correlated, the appearance of phase transitions washes this correlation out. Fig. 8 again highlights the fact that even an exact determination of Λ̃ leaves a considerable radius uncertainty. Therefore, independent observations of radii and tidal polarizabilities are crucial to pin down the high-density EOS of nuclear matter.
In Fig. 8, we also show the constraints of Refs. [11,66,67]. The radius constraint implies that Λ̃ ≥ 180, while the constraint of Ref. [66] (Ref. [67]) implies R_1.6 ∼ R_1.4 ≥ 11.5 km (10.5 km). All of these constraints are based on empirical formulas extracted from simulations for a limited set of model EOSs. Especially for the constraints of Refs. [66,67], this set contains only four nucleonic EOSs, and the resulting bound is therefore likely overestimated [10]. A similar argument may hold for the first constraint. In both cases, however, future numerical simulations with additional EOSs, including, e.g., phase transitions, can be used to refine these constraints and improve their robustness.
Beyond the inferences from GW170817, future observations might dramatically improve our understanding of the EOS. The NICER [54] and eXTP [70] missions will provide neutron-star radii with a few percent uncertainty; the NICER mission is expected to provide first results within this year. As we have seen above, these future radius observations might considerably reduce the ambiguity among the allowed EOS models. A measurement of R_1.4 with a 5% accuracy will add valuable information and might help distinguish EOSs with and without phase transitions; see also Ref. [13].
In addition, in the next years additional neutron-star merger observations by the LV collaboration are expected. While the uncertainty in the tidal polarizability associated with GW170817 is not sufficient to constrain the EOS, this might change for future observations. For example, mergers with better signal-to-noise ratios could be observed, or sufficiently many mergers could be observed so that accurate information can be extracted. In addition, third-generation GW detectors might provide tidal-polarizability measurements with 10% uncertainty. To illustrate the possibilities offered by such new GW events, we inject in Figs. 6(d) and (e) a fictitious measurement with M_chirp = 1.385 M_⊙ and Λ̃ measured in the range 200−300. Such an observation would dramatically reduce the uncertainties in the EOS: it would reduce the allowed radius range for a typical neutron star to 11.7-13.4 km for n_tr = n_sat and to only 11.7-12.5 km for n_tr = 2n_sat. Also, it is interesting to note that in this case the MM cannot reproduce both events, GW170817 and the fictitious one. There is, therefore, great potential in combining future detections as a filter for EOS models, and the accumulation of GW tidal deformabilities may offer the possibility to make statements about the existence of phase transitions in dense matter.
Impact of phase transitions on tidal polarizability
In the previous sections, we have seen that the ranges for all neutron-star observables are larger for the CSM than for the MM, because the CSM permits regions of drastic stiffening or softening of the EOS. In this section, we briefly discuss the impact that strong phase transitions have on neutron-star tidal polarizabilities.
Of special interest for the interpretation of merger observations is the behavior of the EOS for stars in the mass range of the two component masses: for GW170817 this range is around M = 1.4 M_⊙. EOSs with strong first-order phase transitions appearing in stars of this mass range might be probed by future merger observations. For instance, the CSM, which includes such phase transitions, permits small values of Λ̃ due to strong softening and subsequent stiffening of the EOS, but the MM prevents Λ̃ from dropping below ≈ 250. These observable differences between the two models allow us to identify ranges of tidal deformabilities (and neutron-star observables in general) for which a strong first-order phase transition is preferred or even necessary, providing a means to probe new states of matter inside neutron stars. In the above example, an observation of Λ̃ ≤ 250 would indicate a softening of the EOS that smooth (nucleonic) EOSs cannot provide.
We have also seen before that strong phase transitions weaken the correlation between R and Λ̃. For EOSs with phase transitions in the relevant mass range, which produce lighter stars with larger radii and heavier stars with smaller radii, a significant mass asymmetry of a merging binary keeps the EOS compatible with a constraint on Λ̃ but permits larger radii for typical neutron stars and, therefore, washes out this correlation.
In Fig. 9, we illustrate this behavior for n_tr = n_sat for two interesting cases: EOSs which pass the LV constraint for q = 0.7 but are excluded for q = 1.0, and vice versa. We show the EOSs belonging to the first class of models in Fig. 9(a) and the EOSs belonging to the second class of models in Fig. 9(b). In general, for a given EOS, heavier neutron stars have smaller tidal polarizabilities, and increasing the mass asymmetry in the binary, i.e., lowering q, results in slightly smaller values of Λ̃ for a given chirp mass. Therefore, several smooth EOSs, i.e., without phase transitions, pass the LV constraint for q = 0.7 but not for q = 1.0, which can be seen in Fig. 9(a).
The more interesting cases are EOS models with a strong phase transition occurring around 1.4 M_⊙ and leading to a kink in the MR curve. Below the kink, radii and tidal polarizabilities are larger, but they drastically decrease beyond the phase transition. Two cases can be distinguished: the phase transition appears at masses above 1.4 M_⊙ or below 1.4 M_⊙. In the first case, q = 1 for GW170817 implies that both stars have the same mass ∼ 1.4 M_⊙ and, therefore, larger radii and tidal polarizabilities, Λ̃ = Λ_1 = Λ_2. Lowering q, so that the heavier star probes the phase transition, suddenly decreases Λ̃ by a fair amount. Therefore, some EOSs will be rejected for q = 1 but accepted for lower q, e.g., q = 0.7. We show these models in Fig. 9(a). In contrast to the smooth models, though, these models permit much larger radii for typical neutron stars, which can also be seen in Fig. 3(b).
If the phase transition appears below 1.4 M_⊙, the inverted situation can appear: EOSs are ruled out for q = 0.7 but allowed for q = 1.0. We show these cases in the right panel of Fig. 9. If the phase transition happens in very low-mass stars at densities close to saturation density, then the EOS produces neutron stars with very small radii of the order of R_1.4 ∼ 9 km. In this case, Λ̃ is reduced for smaller values of q and the EOS is ruled out by the lower constraint on the tidal polarizability, 70 ≤ Λ̃. However, this is an extremely rare situation, and we find only one such EOS among tens of thousands of samples, see Fig. 9(b). If the phase transition appears in stars slightly below 1.4 M_⊙, then for q = 1 both stars in GW170817 would have been hybrid stars and Λ̃ would have been small enough for these models to pass the constraint. Increasing the mass asymmetry, Λ_1 decreases but Λ_2 increases rapidly, leading to the EOS being rejected by the upper constraint on Λ̃. We found a few such models, see Fig. 9(b).
In any case, information on possible strong first-order phase transitions might be obtained from neutron-star merger observations. The observation of two mergers with similar chirp masses but different mass asymmetries and dramatically different binary tidal polarizabilities might shed light on the location of a strong first-order phase transition. In addition, future observations accessing regions allowed by the CSM but forbidden by the MM might also provide information on such a phase transition. For these extractions, however, higher-order GW parameters need to be constrained much more precisely in future observations.
Empirical relations for Λ̃
Finally, we use our EOS models to investigate the empirical relation between the tidal polarizability and the radius of neutron stars. Such a relation was reported in Eq. (5) of Ref. [57], which related the binary tidal polarizability Λ̃ to the common radius R of a neutron-star binary:
Λ̃ = 0.0042(4) ( R c²/(G M_chirp) )⁶ = 0.000146(13) (R/km)⁶ .   (29)

Similarly, a relation between the tidal polarizability and radius of a typical 1.4 M_⊙ neutron star was reported in Ref. [7]:
Λ_1.4 = 2.88 · 10⁻⁶ (R_1.4/km)^{7.5} .   (30)
Interestingly, even though both approaches are based on a piecewise-polytropic expansion for the EOS, the resulting relations and especially the exponents are rather different (for q = 1, Λ̃ ∼ Λ_1.4 and R ∼ R_1.4). We constructed similar relations between Λ̃ and the average radius of the two binary neutron stars in GW170817 for the CSM with n_tr = n_sat and n_tr = 2n_sat. We show density plots for our data points and the resulting fit functions in Fig. 10, together with the result of Ref. [57]. For n_tr = n_sat (left panel), we find the relation
Λ̃ = 0.00057(6) ( R c²/(G M_chirp) )^{7.05} .   (31)
In this case, the exponent lies in between the other two determinations but is closer to the result of Ref. [7]. For n tr = 2n sat , we find instead
Λ̃ = 0.0047(8) ( R c²/(G M_chirp) )^{5.94} ,   (32)
in very good agreement with the relation of Ref. [57]. Comparing these findings, we see that such relations are not universal but depend on the EOS input used.
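Since Eqs. (29)-(32) are simple power laws in the dimensionless combination R c²/(G M_chirp), they are easy to evaluate and compare numerically. The sketch below does so at the chirp mass of GW170817, using the standard conversion G M_⊙/c² ≈ 1.4766 km.

```python
GM_SUN_OVER_C2_KM = 1.4766  # G Msun / c^2 in km

def x(radius_km, m_chirp):
    """Dimensionless combination R c^2 / (G M_chirp) entering Eqs. (29)-(32)."""
    return radius_km / (GM_SUN_OVER_C2_KM * m_chirp)

def lt_ref57(radius_km, m_chirp):       # Eq. (29), fit of Ref. [57]
    return 0.0042 * x(radius_km, m_chirp)**6

def lt_ntr_1nsat(radius_km, m_chirp):   # Eq. (31), CSM with n_tr = n_sat
    return 0.00057 * x(radius_km, m_chirp)**7.05

def lt_ntr_2nsat(radius_km, m_chirp):   # Eq. (32), CSM with n_tr = 2n_sat
    return 0.0047 * x(radius_km, m_chirp)**5.94

m_chirp = 1.186  # Msun, GW170817
for r in (11.0, 12.0, 13.0):
    print(f"R = {r:.1f} km:  Eq.(29) {lt_ref57(r, m_chirp):5.0f}"
          f"   Eq.(31) {lt_ntr_1nsat(r, m_chirp):5.0f}"
          f"   Eq.(32) {lt_ntr_2nsat(r, m_chirp):5.0f}")
```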
Comparisons to other recent works
There is general consensus that the model-independent upper bound on the tidal deformability, Λ_1.4 < 800, derived by the initial analysis of the LIGO-Virgo scientific collaboration in Ref. [62] implies that the radius R_1.4 ≲ 13.6 km.
Making the reasonable assumption that both compact objects were NSs, and that they are both described by the same EOS, other authors have discussed how the bound on the tidal deformability impacts our understanding of NSs and dense matter. In what follows we compare our analysis to some of these studies. In Ref. [7] the authors construct a model for the EOS based on the predictions of chiral EFT up to the baryon number density n_sat and use a set of four polytropes to describe matter at the higher densities encountered in the core. They claim that perturbative calculations of QCD (pQCD), valid at very high densities far exceeding those encountered inside the NS core, can constrain the allowed parameter space of the polytropic EOSs. This is then combined with the upper limit on the tidal deformability to constrain the relationship between mass and radius of all NSs and the EOS of matter encountered in their cores. The maximal model we employ addresses the question of how improved constraints on the EOS from theory between n_sat and 2n_sat will alter the situation. We find no evidence for the usefulness of constraints from pQCD. The pressure in NS cores is much smaller than that encountered at the densities where pQCD is valid. Our maximal model is thermodynamically consistent and has adequate freedom to satisfy constraints from pQCD, but is uninformed by them.
In Ref. [8] the authors use a model EOS for neutron-rich matter that describes matter at the sub-nuclear densities encountered inside nuclei and at the higher densities encountered inside neutron stars. They find a strong correlation between the neutron-skin thickness of neutron-rich nuclei and the neutron-star tidal deformability, similar to the correlation between the skin thickness and neutron-star radii found earlier [71]. Such a correlation is expected because the NS radius and the tidal deformability are tightly correlated in models that do not contain phase transitions. For their models they report a tight correlation given by Λ ≃ 7.76 × 10^−4 (R/km)^5.3. Using the correlation between neutron-skin thickness and NS radius they show that the experimental lower bound on the neutron-skin thickness of ^208Pb implies R_1.4 > 12.55 km. This, combined with the correlation between Λ and R, is used to deduce that Λ_1.4 > 490. As discussed earlier, both of these correlations are model dependent. It is useful to compare these inferences to the predictions of our minimal model shown in Fig. 7, which assumes a smooth EOS without phase transitions, does not violate experimental data for the neutron-skin thickness of ^208Pb, but can accommodate smaller values for R_1.4 and Λ_1.4.
In Ref. [9], the authors impose an additional constraint requiring that M_max < 2.16 M⊙ and employ EOSs with and without strong first-order phase transitions to determine limits on the neutron-star radius and deformability. In the absence of phase transitions they find that 12 km < R_1.4 < 13.45 km and require Λ_1.4 > 375. This range is deduced as the 2σ interval by exploring a large suite of hadronic models. Our analysis based on the minimal model finds that smaller radii are possible. Further, we caution against using a probabilistic interpretation of the allowed ranges for R_1.4 and Λ_1.4, because it is difficult to assign likelihoods to a specific realization of the EOS. The inclusion of strong phase transitions in [9] allows for the existence of "twin star" solutions containing two separate stable branches of NSs. In this case, smaller values for R_1.4 and Λ_1.4 are allowed and the constraints weaken to R_1.4 > 8.53 km and Λ_1.4 > 35.5. The results obtained using the maximal model (CSM) are in good agreement with these limits.
Summary
To summarize, we confronted the recent GW observation with modern nuclear-physics constraints from chiral EFT. We elaborated on our results of Ref. [10] and provided many additional results.
In particular, we have used two different classes of models to extend QMC results with chiral EFT interactions to higher densities encountered in the core of neutron stars. We have used a minimal model, that is based on a density expansion around saturation density, and a maximal model based on a very general expansion in the speed of sound, that explores all EOSs consistent with the low-density input from chiral EFT. We used these models to study the uncertainties for the EOS and neutron-star observables for chiral EFT input up to either n sat or 2n sat .
We used these models with input from nuclear physics up to nuclear saturation density and data from GW170817 to deduce that the radius of a typical neutron star is R_1.4 ≤ 13.6 km. If instead EFT predictions for the EOS are used up to twice nuclear saturation density, we find Λ < 580 and R_1.4 ≤ 12.6 km. These smaller ranges suggest that future observations need to provide much more precise constraints to enable conclusions about the EOS or provide evidence for novel phases of matter in neutron stars. We compared our results to other recent works, which arrived at the opposite conclusion, and discussed the robustness of our main statement.
We studied the impact of additional constraints on our findings. Most of these additional constraints are derived from interpretations of the EM counterpart of GW170817, and provide limits on radii, tidal polarizabilities, or the maximum mass. We showed that constraints on the maximum mass do not reduce the EOS uncertainty for typical neutron stars, in contrast to radius information, which is rather valuable. We also investigated how an upper limit on the speed of sound in neutron stars affects our findings.
We finally investigated the impact of strong first-order phase transitions on our predictions. Contrasting the predictions of the MM and the CSM may provide useful insights on how future measurements ofΛ from neutron-star mergers can help to identify new forms of matter at densities beyond nuclear saturation.
To conclude, we pose the question if and when the accuracy of gravitational-wave observations will become sufficient to provide constraints on the EOS that are tighter than the ones from nuclear theory. From our results, we estimate that the uncertainty in Λ̃ needs to be of the order of ∆Λ̃ < 300 to test the chiral EFT prediction in the density range n_sat − 2n_sat. Based on the contrast between MM and CSM, we expect that ∆Λ̃ < 100 is needed to shed light on the possible existence of phase transitions in dense matter.
Fig. 2. Comparison of the allowed EOS envelopes for the MM (black bands) and the CSM (red bands). We show three cases: a) the most general case, where n_tr = n_sat and only M_max ≥ 1.9 M⊙ is enforced, b) for n_tr = n_sat when enforcing 70 ≤ Λ̃ ≤ 720, and c) for n_tr = 2n_sat. When additionally enforcing R_1.6 ≥ 10.68 km, the hatched regions are excluded.
Fig. 3. Comparison of the allowed MR envelopes for the MM (black bands) and the CSM (red bands). We show three cases: a) the most general case, where n_tr = n_sat and only M_max ≥ 1.9 M⊙ is enforced, b) for n_tr = n_sat when enforcing 70 ≤ Λ̃ ≤ 720, and c) for n_tr = 2n_sat. When additionally enforcing R_1.6 ≥ 10.68 km, the hatched regions are excluded.
For n_tr = n_sat [panel (a)], the CSM (MM) leads to a radius range for a typical 1.4 M⊙ neutron star of 8.4 − 15.2 km (10.9 − 13.5 km). This range reduces dramatically for n_tr = 2n_sat [panel (c)], where we find 8.7 − 12.6 km (10.9 − 12.0 km) for the CSM (MM).
Fig. 4. Posteriors for the LV observation of GW170817. Upper panel: the mass distributions for m_1 and m_2 from Ref. [4] (histograms) and the distributions used in this work (solid lines), see Eq. (22). Lower panel: marginalized and normalized posterior probability for the distribution p(Λ̃) as defined in this work. We also show the corresponding distributions for the analysis of the LV collaboration (LVC), and the reanalysis of Ref. [57] for the two extreme cases [uniform mass prior (u) and mass prior informed by double neutron stars (d)].
Fig. 5. Mass-radius envelopes for n_tr = n_sat of Fig. 3(a) and areas of constant Λ for all CSM EOS parametrizations. We show areas for Λ = 200 (red), Λ = 400 (green), Λ = 800 (blue), and for Λ = 1600 (brown). For a typical 1.4 M⊙ neutron star (horizontal dashed line), a constraint on Λ is equivalent to a radius constraint. The corresponding values for the MM (not shown) always lie within the areas for the CSM.
Fig. 6. Envelopes for the CSM (red) and the MM (black) for the predicted tidal polarizability parameter Λ̃ as a function of chirp mass for neutron-star binaries with component masses in the range 1.0 − 1.9 M⊙. We show: panel (a) the results for n_tr = n_sat, panel (b) for n_tr = n_sat when additionally enforcing the LV constraint from GW170817, and panel (c) for n_tr = 2n_sat. In panels (d) and (e), we show how this band reduces under a fictitious observation of a merger of two 1.6 M⊙ neutron stars when Λ̃ would be measured to be 200 − 300. We indicate GW170817 and the fictitious measurement (blue error bars) and the corresponding chirp masses (dotted vertical lines). In panel (e), the GW observations together with nuclear-physics constraints would rule out the MM.
Fig. 7. Radius of a typical 1.4 M⊙ neutron star, R_1.4 (left), and Λ̃ for M_chirp = 1.186 M⊙ (right) as functions of n_tr. We show the envelopes for the CSM in red and for the MM in black. For the CSM, when requiring c_S^2 ≤ 0.5 instead of c_S^2 ≤ 1.0, the hatched areas are excluded.
Fig. 9. Equations of state for n_tr = n_sat which pass the LV constraint 70 ≤ Λ̃ ≤ 720 for q = 0.7 but not for q = 1.0 [panel (a)] and vice versa [panel (b)].
Fig. 10. Relation connecting the common radius R̂ and the binary tidal polarizability Λ̃ for 0.7 < q < 1.0 and for n_tr = n_sat (left panel) and n_tr = 2n_sat (right panel). As a comparison, we show the relation Eq. (5) of Ref. [57] with its uncertainty (black dotted lines) and our fits (blue dashed lines).
Fig. 1. The energy per particle E/A (in MeV) and the pressure P (in MeV fm^−3) of pure neutron matter as functions of baryon density n (in fm^−3) up to 2n_sat. We show the constraints from Ref. [13] based on AFDMC calculations with local chiral potentials at N^2LO (red bands). As a comparison, we show results at LO (black dashed lines), NLO (black dashed-dotted lines), as well as calculations using phenomenological NN interactions only (AV8′, black dotted lines) and including also phenomenological 3N forces (AV8′ + UIX, black solid lines). We also indicate the unitary-gas bound E_UG of Ref. [16] (blue dashed-dotted lines) and the part of the uncertainty band that we use for our NS modeling (red dotted lines); see text for more details.
Table 2. Fit parameters of the Gaussians of Eq. (23).

 N |  a_1  |  Λ_1  |  σ_1 |  a_2  |  Λ_2  |  σ_2  |  a_3 |  Λ_3  |  σ_3
 2 | 281.6 | 212.6 | 76.2 | 106.5 | 547.5 | 171.0 |  --  |  --   |  --
 3 | 266.6 | 212.4 | 74.2 |  85.0 | 523.6 | 219.2 | 38.6 | 560.8 | 49.5
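The following sketch evaluates these fits numerically. Equation (23) itself is not reproduced in this excerpt, so the sum-of-Gaussians form below is an assumption suggested by the column labels (amplitudes a_i, centroids Λ_i, widths σ_i); the evaluation grid is ours.

```python
# Sketch: evaluate the Gaussian-mixture fits whose parameters are listed in Table 2,
# assuming the form p(Lambda) = sum_i a_i * exp(-(Lambda - Lambda_i)^2 / (2 sigma_i^2)).
# This functional form and the evaluation grid are our assumptions for illustration.
import numpy as np

FIT_N2 = [(281.6, 212.6, 76.2), (106.5, 547.5, 171.0)]
FIT_N3 = [(266.6, 212.4, 74.2), (85.0, 523.6, 219.2), (38.6, 560.8, 49.5)]

def mixture(lam, components):
    lam = np.asarray(lam, dtype=float)
    return sum(a * np.exp(-(lam - mu) ** 2 / (2.0 * sig ** 2)) for a, mu, sig in components)

grid = np.linspace(0.0, 1200.0, 1201)
p2, p3 = mixture(grid, FIT_N2), mixture(grid, FIT_N3)
print("peak of N=2 fit at Lambda ~", grid[np.argmax(p2)])
print("peak of N=3 fit at Lambda ~", grid[np.argmax(p3)])
```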
. B Abbott, Virgo ; LIGO Scientific1710.05832Phys. Rev. Lett. 119161101B. Abbott et al. (Virgo, LIGO Scientific), Phys. Rev. Lett. 119, 161101 (2017), 1710.05832
B.P. Abbott et al. (GROND, SALT Group, OzGrav, DFN, INTEGRAL, Virgo, Insight-HXMT, MAXI Team, Fermi-LAT, J-GEM, RATIR, IceCube, CAASTRO, LWA, ePESSTO, GRAWITA, RIMAS, SKA South Africa/MeerKAT, H.E.S.S., 1M2H Team, IKI-GW Follow-up, Fermi GBM, Pi of the Sky, DWF (Deeper Wider Faster Program), Dark Energy Survey, MASTER, AstroSat Cadmium Zinc Telluride Imager Team, Swift, Pierre Auger, ASKAP, VINROUGE, JAGWAR, Chandra Team at McGill University, TTU-NRAO, GROWTH, AGILE Team, MWA, ATCA, AST3, TOROS, Pan-STARRS, NuSTAR, ATLAS Telescopes, BOOTES, CaltechNRAO, LIGO Scientific, High Time Resolution Universe Survey, Nordic Optical Telescope, Las Cumbres Observatory Group, TZAC Consortium, LOFAR, IPN, DLT40, Texas Tech University, HAWC, ANTARES, KU, Dark Energy Camera GW-EM, CALET, Euro VLBI Team, ALMA), Astrophys. J. 848, L12 (2017), 1710.05833
. B P Abbott, 1710.05834Astrophys. J. 84813B.P. Abbott et al. (Virgo, Fermi-GBM, INTEGRAL, LIGO Scientific), Astrophys. J. 848, L13 (2017), 1710.05834
. B P Abbott, LIGO Scientific1805.11579Phys. Rev. 911001B.P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. X9, 011001 (2019), 1805.11579
. V Savchenko, 1710.05449Astrophys. J. 84815V. Savchenko et al., Astrophys. J. 848, L15 (2017), 1710.05449
. M R Drout, 1710.05443Science. 3581570M.R. Drout et al., Science 358, 1570 (2017), 1710.05443
. E Annala, T Gorda, A Kurkela, A Vuorinen, 1711.02644Phys. Rev. Lett. 120172703E. Annala, T. Gorda, A. Kurkela, A. Vuorinen, Phys. Rev. Lett. 120, 172703 (2018), 1711.02644
. F J Fattoyev, J Piekarewicz, C J Horowitz, 1711.06615Phys. Rev. Lett. 120172702F.J. Fattoyev, J. Piekarewicz, C.J. Horowitz, Phys. Rev. Lett. 120, 172702 (2018), 1711.06615
. E R Most, L R Weih, L Rezzolla, J Schaffner-Bielich, 1803.00549Phys. Rev. Lett. 120261103E.R. Most, L.R. Weih, L. Rezzolla, J. Schaffner-Bielich, Phys. Rev. Lett. 120, 261103 (2018), 1803.00549
. I Tews, J Margueron, S Reddy, 1804.02783Phys. Rev. 9845804I. Tews, J. Margueron, S. Reddy, Phys. Rev. C98, 045804 (2018), 1804.02783
. A Bauswein, O Just, H T Janka, N Stergioulas, 1710.06843Astrophys. J. 85034A. Bauswein, O. Just, H.T. Janka, N. Stergioulas, Astro- phys. J. 850, L34 (2017), 1710.06843
. J E Lynn, I Tews, J Carlson, S Gandolfi, A Gezerlis, K E Schmidt, A Schwenk, Phys. Rev. Lett. 11662501J.E. Lynn, I. Tews, J. Carlson, S. Gandolfi, A. Gezerlis, K.E. Schmidt, A. Schwenk, Phys. Rev. Lett. 116, 062501 (2016)
. I Tews, J Carlson, S Gandolfi, S Reddy, 1801.01923Astrophys. J. 860I. Tews, J. Carlson, S. Gandolfi, S. Reddy, Astrophys. J. 860, 149 (2018), 1801.01923
. J A Melendez, S Wesolowski, R J Furnstahl, 1704.03308Phys. Rev. 9624003J.A. Melendez, S. Wesolowski, R.J. Furnstahl, Phys. Rev. C96, 024003 (2017), 1704.03308
. M Alford, M Braby, M W Paris, S Reddy, nucl-th/0411016Astrophys. J. 629M. Alford, M. Braby, M.W. Paris, S. Reddy, Astrophys. J. 629, 969 (2005), nucl-th/0411016
. I Tews, J M Lattimer, A Ohnishi, E E Kolomeitsev, 1611.07133Astrophys. J. 848105I. Tews, J.M. Lattimer, A. Ohnishi, E.E. Kolomeitsev, As- trophys. J. 848, 105 (2017), 1611.07133
. K Hebeler, A Schwenk, 0911.0483Phys. Rev. 8214314K. Hebeler, A. Schwenk, Phys. Rev. C82, 014314 (2010), 0911.0483
. C Drischler, A Carbone, K Hebeler, A Schwenk, 1608.05615Phys. Rev. 9454307C. Drischler, A. Carbone, K. Hebeler, A. Schwenk, Phys. Rev. C94, 054307 (2016), 1608.05615
. J W Holt, N Kaiser, 1612.04309Phys. Rev. 9534326J.W. Holt, N. Kaiser, Phys. Rev. C95, 034326 (2017), 1612.04309
. G Hagen, T Papenbrock, A Ekström, K A Wendt, G Baardsen, S Gandolfi, M Hjorth-Jensen, C J Horowitz, 1311.2925Phys. Rev. 8914319G. Hagen, T. Papenbrock, A. Ekström, K.A. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, C.J. Horowitz, Phys. Rev. C89, 014319 (2014), 1311.2925
. S Gandolfi, J Carlson, S Reddy, 1101.1921Phys. Rev. 8532801S. Gandolfi, J. Carlson, S. Reddy, Phys. Rev. C85, 032801 (2012), 1101.1921
. A Carbone, A Rios, A Polls, 1408.0717Phys. Rev. 9054322A. Carbone, A. Rios, A. Polls, Phys. Rev. C90, 054322 (2014), 1408.0717
. S Gandolfi, A Gezerlis, J Carlson, 1501.05675Ann. Rev. Nucl. Part. Sci. 65303S. Gandolfi, A. Gezerlis, J. Carlson, Ann. Rev. Nucl. Part. Sci. 65, 303 (2015), 1501.05675
. K Hebeler, J D Holt, J Menendez, A Schwenk, 1508.06893Ann. Rev. Nucl. Part. Sci. 65K. Hebeler, J.D. Holt, J. Menendez, A. Schwenk, Ann. Rev. Nucl. Part. Sci. 65, 457 (2015), 1508.06893
. J Carlson, S Gandolfi, F Pederiva, S C Pieper, R Schiavilla, 1412.3081J. Carlson, S. Gandolfi, F. Pederiva, S.C. Pieper, R. Schi- avilla et al. (2014), 1412.3081
. M Piarulli, A Baroni, L Girlanda, A Kievsky, A Lovato, E Lusk, L E Marcucci, S C Pieper, R Schiavilla, M Viviani, Phys. Rev. Lett. 12052503M. Piarulli, A. Baroni, L. Girlanda, A. Kievsky, A. Lo- vato, E. Lusk, L.E. Marcucci, S.C. Pieper, R. Schiavilla, M. Viviani et al., Phys. Rev. Lett. 120, 052503 (2017)
. D Lonardoni, J Carlson, S Gandolfi, J E Lynn, K E Schmidt, A Schwenk, X Wang, 1709.09143Phys. Rev. Lett. 120122502D. Lonardoni, J. Carlson, S. Gandolfi, J.E. Lynn, K.E. Schmidt, A. Schwenk, X. Wang, Phys. Rev. Lett. 120, 122502 (2018), 1709.09143
. J Carlson, S Reddy, Phys. Rev. Lett. 100150403J. Carlson, S. Reddy, Phys. Rev. Lett. 100, 150403 (2008)
. S Nascimbãšne, N Navon, K J Jiang, F Chevy, C Salomon, Nature. 4631057S. NascimbÚne, N. Navon, K.J. Jiang, F. Chevy, C. Sa- lomon, Nature 463, 1057 (2010)
. N Navon, S Nascimbene, F Chevy, C Salomon, Science. 328729N. Navon, S. Nascimbene, F. Chevy, C. Salomon, Science 328, 729 (2010)
M W Zwierlein, Superfluidity in ultracold atomic Fermi gases. Oxford University Press2M.W. Zwierlein, Superfluidity in ultracold atomic Fermi gases, Vol. 2 (Oxford University Press, 2014)
. D Lonardoni, A Lovato, S Gandolfi, F Pederiva, 1407.4448Phys. Rev. Lett. 11492301D. Lonardoni, A. Lovato, S. Gandolfi, F. Pederiva, Phys. Rev. Lett. 114, 092301 (2015), 1407.4448
. S Gandolfi, H W Hammer, P Klos, J E Lynn, A Schwenk, 1612.01502Phys. Rev. Lett. 118232501S. Gandolfi, H.W. Hammer, P. Klos, J.E. Lynn, A. Schwenk, Phys. Rev. Lett. 118, 232501 (2017), 1612.01502
E. Epelbaum, H.W. Hammer, U.G. Meißner, Reviews of Modern Physics 81, 1773 (2009)
. R Machleidt, D R Entem, Phys. Rept. 5031R. Machleidt, D.R. Entem, Phys. Rept. 503, 1 (2011)
. A Gezerlis, I Tews, E Epelbaum, S Gandolfi, K Hebeler, A Nogga, A Schwenk, Phys. Rev. Lett. 11132501A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga, A. Schwenk, Phys. Rev. Lett. 111, 032501 (2013)
. A Gezerlis, I Tews, E Epelbaum, M Freunek, S Gandolfi, K Hebeler, A Nogga, A Schwenk, 1406.0454Phys. Rev. 9054323A. Gezerlis, I. Tews, E. Epelbaum, M. Freunek, S. Gan- dolfi, K. Hebeler, A. Nogga, A. Schwenk, Phys. Rev. C90, 054323 (2014), 1406.0454
. I Tews, S Gandolfi, A Gezerlis, A Schwenk, 1507.05561Phys. Rev. 9324305I. Tews, S. Gandolfi, A. Gezerlis, A. Schwenk, Phys. Rev. C93, 024305 (2016), 1507.05561
. J Margueron, R Hoffmann Casali, F Gulminelli, Physical Review C 97. J. Margueron, R. Hoffmann Casali, F. Gulminelli, Physical Review C 97 (2018)
. J E Lynn, I Tews, S Gandolfi, A Lovato, 1901.04868J.E. Lynn, I. Tews, S. Gandolfi, A. Lovato (2019), 1901.04868
. R J Furnstahl, N Klco, D R Phillips, S Wesolowski, 1506.01343Phys. Rev. 9224005R.J. Furnstahl, N. Klco, D.R. Phillips, S. Wesolowski, Phys. Rev. C92, 024005 (2015), 1506.01343
. E Epelbaum, H Krebs, U G Meißner, 1412.0142Eur. Phys. J. A51. 53E. Epelbaum, H. Krebs, U.G. Meißner, Eur. Phys. J. A51, 53 (2015), 1412.0142
. L Huth, I Tews, J E Lynn, A Schwenk, 1708.03194Phys. Rev. 9654003L. Huth, I. Tews, J.E. Lynn, A. Schwenk, Phys. Rev. C96, 054003 (2017), 1708.03194
. J Margueron, R Hoffmann Casali, F Gulminelli, Physical Review C 97. J. Margueron, R. Hoffmann Casali, F. Gulminelli, Physical Review C 97 (2018)
. I Tews, 1607.06998Phys. Rev. 9515803I. Tews, Phys. Rev. C95, 015803 (2017), 1607.06998
. M G Alford, S Han, M Prakash, 1302.4732Phys. Rev. 8883013M.G. Alford, S. Han, M. Prakash, Phys. Rev. D88, 083013 (2013), 1302.4732
. S K Greif, G Raaijmakers, K Hebeler, A Schwenk, A L Watts, 1812.08188S.K. Greif, G. Raaijmakers, K. Hebeler, A. Schwenk, A.L. Watts (2018), 1812.08188
. J S Read, B D Lackey, B J Owen, J L Friedman, 0812.2163Phys. Rev. 79124032J.S. Read, B.D. Lackey, B.J. Owen, J.L. Friedman, Phys. Rev. D79, 124032 (2009), 0812.2163
. K Hebeler, J M Lattimer, C J Pethick, A Schwenk, Astrophys. J. 77311K. Hebeler, J.M. Lattimer, C.J. Pethick, A. Schwenk, As- trophys. J. 773, 11 (2013)
. C A Raithel, F Ozel, D Psaltis, Astrophys. J. 83144C.A. Raithel, F. Ozel, D. Psaltis, Astrophys. J. 831, 44 (2016)
Hessels. P Demorest, T Pennucci, S Ransom, M Roberts, J , Nature. 4671081P. Demorest, T. Pennucci, S. Ransom, M. Roberts, J. Hes- sels, Nature 467, 1081 (2010)
. J Antoniadis, P C Freire, N Wex, T M Tauris, R S Lynch, Science. 3406131J. Antoniadis, P.C. Freire, N. Wex, T.M. Tauris, R.S. Lynch et al., Science 340, 6131 (2013)
. E Fonseca, Astrophys. J. 832167E. Fonseca et al., Astrophys. J. 832, 167 (2016)
K Gendreau, Z Arzoumanian, T Okaajima, Proc. SPIE. SPIE8443844313K. Gendreau, Z. Arzoumanian, T. Okaajima, Proc. SPIE 8443, 844313 (2012)
. V Paschalidis, K Yagi, D Alvarez-Castillo, D B Blaschke, A Sedrakian, 1712.00451Phys. Rev. 9784038V. Paschalidis, K. Yagi, D. Alvarez-Castillo, D.B. Blaschke, A. Sedrakian, Phys. Rev. D97, 084038 (2018), 1712.00451
. M G Alford, G F Burgio, S Han, G Taranto, D Zappalà, 1501.07902Phys. Rev. 9283002M.G. Alford, G.F. Burgio, S. Han, G. Taranto, D. Zappalà, Phys. Rev. D92, 083002 (2015), 1501.07902
. S De, D Finstad, J M Lattimer, D A Brown, E Berger, C M Biwer, 1804.08583Phys. Rev. Lett. 12191102S. De, D. Finstad, J.M. Lattimer, D.A. Brown, E. Berger, C.M. Biwer, Phys. Rev. Lett. 121, 091102 (2018), 1804.08583
. B Margalit, B D Metzger, Astrophys. J. 85019B. Margalit, B.D. Metzger, Astrophys. J. 850, L19 (2017)
. E E Flanagan, T Hinderer, Physical Review D. 77E.E. Flanagan, T. Hinderer, Physical Review D 77 (2008)
T. Damour, A. Nagar, Physical Review D 80 (2009)
C.C. Moustakidis, T. Gaitanos, C. Margaritis, G.A. Lalazissis, Phys. Rev. C95, 045801 (2017), [Erratum: Phys. Rev. C95, no.5, 059904 (2017)], 1608.00344
. B P Abbott, Virgo, LIGO ScientificPhys. Rev. Lett. 119161101B.P. Abbott et al. (Virgo, LIGO Scientific), Phys. Rev. Lett. 119, 161101 (2017)
. B P Abbott, 1805.11581Virgo, LIGO Scientific). B.P. Abbott et al. (Virgo, LIGO Scientific) (2018), 1805.11581
. L Lindblom, 1009.0738Phys. Rev. 82103011L. Lindblom, Phys. Rev. D82, 103011 (2010), 1009.0738
. A Kurkela, P Romatschke, A Vuorinen, 0912.1856Phys. Rev. 81105021A. Kurkela, P. Romatschke, A. Vuorinen, Phys. Rev. D81, 105021 (2010), 0912.1856
. D Radice, A Perego, F Zappa, S Bernuzzi, Astrophys. J. 85229D. Radice, A. Perego, F. Zappa, S. Bernuzzi, Astrophys. J. 852, L29 (2018)
. D Radice, L Dai, 1810.12917D. Radice, L. Dai (2018), 1810.12917
. M Shibata, S Fujibayashi, K Hotokezaka, K Kiuchi, K Kyutoku, Y Sekiguchi, M Tanaka, Phys. Rev. 96123012M. Shibata, S. Fujibayashi, K. Hotokezaka, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Tanaka, Phys. Rev. D96, 123012 (2017)
. L Rezzolla, E R Most, L R Weih, Astrophys. J. 85225L. Rezzolla, E.R. Most, L.R. Weih, Astrophys. J. 852, L25 (2018)
. A L Watts, Sci. China Phys. Mech. Astron. 6229503A.L. Watts et al., Sci. China Phys. Mech. Astron. 62, 29503 (2019)
. C J Horowitz, J Piekarewicz, Phys. Rev. Lett. 865647C.J. Horowitz, J. Piekarewicz, Phys. Rev. Lett. 86, 5647 (2001)
| []
|
[
"TAILS FROM THE ORPHANAGE",
"TAILS FROM THE ORPHANAGE"
]
| [
"Carl J Grillmair \nIPAC\nMail Code 314-6, 1200 E. California Blvd91125Caltech, PasadenaCA\n"
]
| [
"IPAC\nMail Code 314-6, 1200 E. California Blvd91125Caltech, PasadenaCA"
]
| []
| Examining a portion of the northern Sloan Digital Sky Survey (SDSS) footprint, we detect at least three and possibly seven halo debris streams. One of these (PS1-D) was recently detected in the Pan-STARRS1 3π survey, and the remaining two are also evident as extensions of the SDSS detections. All of these streams are metal poor and are found at a distance of around 21 ± 5 kpc. The streams are between 65 • and 70 • in length, oriented almost north-south, and are nearly parallel and somewhat convergent with the neighboring Orphan stream. Surface densities ranging from 1.5 to 0.5 stars per square degree down to g = 21.7 correspond to surface brightnesses between 35 and 37 mag per square arcsecond. The streams each appear to be more than 300 pc across, suggesting either dwarf/ultrafaint galaxy progenitors or long-term heating of very ancient globular cluster streams. The orbits of all but one of these streams appear to be nearly radial, and the orbit normals suggest that all of the streams are part of the Vast Polar Structure, a relatively narrow plane that contains most of the known satellite galaxies, globular clusters, and stellar streams. | 10.3847/1538-4357/834/2/98 | [
"https://arxiv.org/pdf/1612.03181v1.pdf"
]
| 7,161,962 | 1612.03181 | 51589402b3f0d42196d6b3c8065c47ede7a5efe1 |
TAILS FROM THE ORPHANAGE
9 Dec 2016 Draft version October 18, 2018 Draft version October 18, 2018
Carl J Grillmair
IPAC
Mail Code 314-6, 1200 E. California Blvd91125Caltech, PasadenaCA
TAILS FROM THE ORPHANAGE
9 Dec 2016 Draft version October 18, 2018. Preprint typeset using LaTeX style emulateapj v. 5/2/11. Subject headings: Galaxy: Structure - Galaxy: Halo
Examining a portion of the northern Sloan Digital Sky Survey (SDSS) footprint, we detect at least three and possibly seven halo debris streams. One of these (PS1-D) was recently detected in the Pan-STARRS1 3π survey, and the remaining two are also evident as extensions of the SDSS detections. All of these streams are metal poor and are found at a distance of around 21 ± 5 kpc. The streams are between 65 • and 70 • in length, oriented almost north-south, and are nearly parallel and somewhat convergent with the neighboring Orphan stream. Surface densities ranging from 1.5 to 0.5 stars per square degree down to g = 21.7 correspond to surface brightnesses between 35 and 37 mag per square arcsecond. The streams each appear to be more than 300 pc across, suggesting either dwarf/ultrafaint galaxy progenitors or long-term heating of very ancient globular cluster streams. The orbits of all but one of these streams appear to be nearly radial, and the orbit normals suggest that all of the streams are part of the Vast Polar Structure, a relatively narrow plane that contains most of the known satellite galaxies, globular clusters, and stellar streams.
INTRODUCTION
Dozens of distinct, highly collimated stellar debris streams are now known to orbit in the halo of our Galaxy (see Grillmair & Carlin (2016) and Smith (2016) for reviews). Detecting and tracing such streams around the Galaxy will become increasingly important as we refine our techniques for using them as probes of the Galactic potential (Küpper et al. 2015;Bovy et al. 2016) and of the distribution of dark matter subhalos (Carlberg 2009;Yoon et al. 2011). Knowing in advance the locations and trajectories of these streams may also help us interpret the vast amount of data we expect to harvest from upcoming Gaia data releases.
While searching for and examining other streams in the Sloan Digital Sky Survey (SDSS), we have long noted a somewhat fibrous texture in the area of sky surrounding the Orphan stream (the "orphanage"). However, detailed examinations at high spatial resolution (0.1 • ) have not revealed any obvious streams at a signal-to-noise ratio sufficient for publication. In this Letter we coarsen our spatial sampling considerably, enabling the detection of at least three nearly parallel, low-metallicity stellar debris streams. We describe our detection method in Section 2. We make an initial attempt to constrain the orbits in Section 3, and discuss additional low-significance structures in Section 4. Concluding remarks are given in Section 5.
ANALYSIS
We make use of the photometric catalog from data release 10 of the SDSS. Experience has shown that we probe most deeply and are most sensitive to differences in stellar populations using just the g, r, and i measurements. We use all objects classed as stars and with g < 21.7. Photometry is dereddened using the DIRBE/IRAS dust maps of Schlegel, Finkbeiner, & Davis (1998), corrected using the prescription of Schlafly & Finkbeiner (2011). Figure 1 shows the result of applying a matched filter in the color-magnitude domain to the western half of the northern footprint of the SDSS. The filter is based on the color-magnitude distribution of stars in the old, metal-poor globular cluster NGC 5053 with [Fe/H] ≈ −2.29 (Harris 1996). Three streams become apparent, running roughly north-south and somewhat convergent with the Orphan stream, similar in appearance to the claw marks that bears commonly leave on trees. Figure 2 shows the distribution of E(B − V) over the same region of sky as in Figure 1 (Schlegel, Finkbeiner, & Davis 1998). While there are evidently some random clumps of dust emission between the streams, there are no linear features that resemble the streams in Figure 1. We conclude that the features in Figure 1 are not an artifact of reddening-induced incompleteness. Figure 3 shows the same part of the sky in a matched-filtered map of the Pan-STARRS 3π survey (Bernard et al. 2016) for a distance of 25 kpc. While there are evidently issues with spatially variable completeness at the limit of the single-epoch survey, each of the streams in Figure 1 has a counterpart in Figure 3 extending to the southern limit of the Pan-STARRS survey. That the streams are not as obvious as they are in Figure 1 is presumably at least partly due to the more metal-rich ([Fe/H] = -1.5) isochrone used by Bernard et al. (2016). For example, the Orphan stream appears considerably weaker in Figure 3 than it does in Grillmair (2006a), Belokurov et al. (2006), or Newberg et al. (2010).
The features in Figures 1 and 3 extend through the constellations Leo, Leo Minor, Hydra, Sextans, Antlia, and Pyxis. Since there are also multiple streams in the same constellations, we follow Grillmair (2014) and name the streams after rivers cited in the Iliad. Henceforth we refer to the stream next to PS1-D as Sangarius, and the stream next to Orphan as Scamander.
The trajectories of the streams north of the bright, southern Sagittarius arm in panel d of Figure 1 are obviously somewhat conjectural. While they appear plausible, the discontinuity created by the Sagittarius stream creates some ambiguity, and velocity information will be required before we can definitively conclude that the northern extensions are not unrelated structures. Including both northern ( Figure 1) and southern portions (Figure 3) of the streams, the arc lengths are 68 • , 59 • , and 66 • for PS1-D, Sangarius, and Scamander, respectively. At a distance of 21 kpc, this translates to physical lengths of 25, 22, and 24 kpc, respectively. In equatorial coordinates, the trajectory of PS1-D can be modeled to σ = 0.19 • using:
α = 141.017 + 0.208 δ − 0.02491 δ^2 + 0.000609 δ^3 − 1.20989 × 10^−6 δ^4 .   (1)
Sangarius is well modeled (σ = 0.22 • ) with:
α = 148.9492 − 0.03811 δ + 0.001505 δ^2 .   (2)
while Scamander again requires a higher-order fit (σ = 0.11 • ):
α = 155.642 − 0.1000 δ − 0.00191 δ^2 − 0.0003346 δ^3 + 1.47775 × 10^−5 δ^4 .   (3)
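A minimal sketch of evaluating the trajectory fits of Eqs. (1)-(3) follows; the sample declinations in the loop are illustrative choices of ours, not values from the paper.

```python
# Sketch: evaluate the polynomial stream trajectories alpha(delta) of Eqs. (1)-(3),
# in equatorial coordinates (degrees).  The sample declinations are illustrative only.

def alpha_ps1d(d):
    """Eq. (1): trajectory of PS1-D."""
    return (141.017 + 0.208 * d - 0.02491 * d**2
            + 0.000609 * d**3 - 1.20989e-6 * d**4)

def alpha_sangarius(d):
    """Eq. (2): trajectory of Sangarius."""
    return 148.9492 - 0.03811 * d + 0.001505 * d**2

def alpha_scamander(d):
    """Eq. (3): trajectory of Scamander."""
    return (155.642 - 0.1000 * d - 0.00191 * d**2
            - 0.0003346 * d**3 + 1.47775e-5 * d**4)

for dec in (-10.0, 0.0, 10.0, 20.0):
    print(f"delta = {dec:+.0f} deg:  alpha(PS1-D) = {alpha_ps1d(dec):.2f},  "
          f"alpha(Sangarius) = {alpha_sangarius(dec):.2f},  "
          f"alpha(Scamander) = {alpha_scamander(dec):.2f}")
```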
Grillmair (2009) quantified the significance of streams above the background using the "T" statistic, measuring the median filtered surface density in multiple segments as one sweeps across the sky perpendicular to the orientation of the stream. The region of sky in the vicinity of the orphanage is quite complex, with the Sagittarius, Orphan, EBS, and AntiCenter streams, along with the orphanage streams themselves, making it difficult to measure the noise floor in a stream-free region. In Figure 4 we show the run of T across an unsmoothed version of Figure 1, normalizing to the "field" RMS measured in an identical manner in an apparently blank region of sky to the north and east of the Sagittarius stream. We use equations 1, 2, and 3 to define the stream segments we pass over each respective stream. We detect PS1-D and Scamander at roughly the 5σ level, while we detect Sangarius at somewhere between 2 and 4σ. These levels roughly accord with the visual impression in Figure 1.
Sampling a version of Figure 1 using 0.1 • binning, we find that the FWHMs of the streams range from 0.9 • to 1.1 • . At a distance of 21 kpc, this corresponds to physical widths ranging from 330 to 400 pc, broader than most of the known globular cluster streams but narrower than the Orphan stream. This suggests that the streams could be either the remnants of diminutive dwarf or ultrafaint galaxies, or (more likely given our population estimates below) globular cluster streams that have been heated for 10 Gyr or more by dark-matter subhalos (Carlberg 2009).
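The quoted physical lengths and widths follow from simple arc-length conversions at the adopted distance; the sketch below reproduces that arithmetic (the helper function and printout format are ours).

```python
# Sketch: arc-length conversions behind the quoted stream lengths and widths,
# at the adopted distance of 21 kpc.  Arc lengths and FWHMs are the values
# quoted in the text; the helper function is ours.
import math

DIST_KPC = 21.0

def angular_to_physical_kpc(angle_deg, distance_kpc=DIST_KPC):
    """Arc length (kpc) subtended by angle_deg at distance_kpc."""
    return distance_kpc * math.radians(angle_deg)

for name, arc_deg in (("PS1-D", 68.0), ("Sangarius", 59.0), ("Scamander", 66.0)):
    print(f"{name}: {angular_to_physical_kpc(arc_deg):.0f} kpc long")   # ~25, 22, 24 kpc

for fwhm_deg in (0.9, 1.1):
    width_pc = 1000.0 * angular_to_physical_kpc(fwhm_deg)
    print(f"FWHM {fwhm_deg} deg -> {width_pc:.0f} pc")                  # ~330-400 pc
```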
Color-magnitude diagrams for the streams are shown in Figure 5. All of the streams appear to be quite metal poor ([Fe/H] ≤ −1.4), which is expected since our filter was designed to capture metal-poor substructures. To estimate the number of stars in each stream, we count only stars that lie within 2σ of the NGC 5053 colormagnitude locus and on the subgiant branch or below.
Once again, given the complexity of the field, it is difficult to measure the surface density of field stars. Our best estimate yields mean stream surface densities of 0.5 stars per square degree for the southernmost 15° (−35° < η < −20°) of PS1-D and Sangarius in Figure 1 and 1.5 stars per square degree for the same portion of Scamander. This is superposed on a field-star surface density, selected using the same photometric constraints, of about 10 stars per square degree. We are therefore sampling the streams with a total of between 30 and 110 stars down to g = 21.7. If we adopt a globular cluster-like luminosity function and integrate down the main sequence, then for a distance of 21 kpc, we find that there should be between 440 (Sangarius) and 1600 (Scamander) stars in total within these portions of the streams. A similar integration yields equivalent surface brightnesses of between 35 and 37 magnitudes per square arcsecond. If the stream segments we see are portions of streams that encircle the Galaxy, then we arrive at total populations of the progenitors of between 10^4 and 4 × 10^4 stars, e.g. comparable to the populations of modern-day globular clusters.
There are no known globular clusters or dwarf galaxies falling near great circles defined by each stream, though this is not conclusive as the uncertainties are high. On the other hand, the widths of the streams, combined with their tenuousness, may be an indication of great age and of the possibility that the progenitors dissolved long ago.
ORBITS
Though we have as yet no velocity information for these streams, and while we are mindful of the fact that streams do not precisely follow single orbits (Eyre & Binney 2011), the distances and trajectories of the streams can give us some idea of the orbit parameters. We use a model for the Galactic potential by Allen & Santillan (1991) and fit in a least-squares sense to five or six, roughly equidistant normal points along each stream in the region −35 • < η < 0 • . We assign a distance of 21 kpc to the middle and the ends of each stream. While our distance estimates are far less certain than the stream trajectories on the sky, we find that the ends of the streams disappear if we shift the selection filter by more than ≈ 0.5 mag faintward or brightward; the streams are evidently nearly perpendicular to our line of sight. We assign 0.3 • uncertainties to the positions of the individual normal points and 5 kpc uncertainties to our distance estimates. Table 1 lists the orbit parameters corresponding to the best fits. The uncertainties are estimated by setting each of the three free parameters (the radial velocity and the two components of proper motion) to their 90% limits and refitting the stream using the remaining two parameters. While the orbital planes are quite well constrained by the observed trajectories of the streams, the radial velocities and hence eccentricities are evidently nearly unconstrained.
The convex-eastwards curvature found by Bernard et al. (2016) for PS1-D is at odds with any plausible orbit around the Galaxy. This westward-bending feature at the north end of the stream is visible in panels (b) and (c) of Figure 1, though in the SDSS data it is neither an obvious continuation of the stream nor clearly preferred over the eastward-bending feature we trace in panel (d). The stream trace in panel (d) alleviates the nonphysical aspect of the orbit, though only just. This suggests that the northern part of the Bernard et al. (2016) stream may actually be some unrelated structure. Deeper photometric data and/or Gaia proper motions will be required to determine the true path of PS1-D north of the Sagittarius stream.
Both our eastward-bending PS1-D and Sangarius appear to be on nearly radial orbits, while Scamander is on a less extreme, somewhat more tightly bound orbit. We note that the Orphan stream is also on a far-flung orbit, with R apo ≃ 90 kpc (Newberg et al. 2010). This suggests the possibility that PS1-D and Sangarius may be dynamically related to Orphan in some way, perhaps through a much larger structure that gave rise to Orphan. Table 1 shows that the orbital poles for PS1-D, Sangarius, and Scamander all lie between those of the Orphan and Anticenter streams. This consequently puts them within the region defining the Vast Polar Structure (VPOS) (Pawlowski et al. 2012) that apparently contains the orbits of most of the known dwarf galaxies, globular clusters, and stellar debris streams orbiting our Galaxy.
STILL OTHER STREAMS?
Careful examination of Figure 1 shows that there may be at least four more streams, at S/Ns considerably lower than those of the streams analyzed above, at a distance of ≈ 21 kpc in the region 0 • > λ > −25 • , −5 • < η < 25 • . These ladder-like structures are almost certainly affected by discontinuities in the SDSS scan direction, but the enhancements in the north-south direction are less easy to dismiss.
If indeed these are additional streams, then their similar orientations suggest that either the data or our filtering and analysis are somehow particularly sensitive to streams oriented in a nearly north-south direction, or that the VPOS once contained many more objects (clusters or dwarf galaxies) than we see today. Concerning the former, we are certainly biased against selecting structures oriented along the SDSS east-west scan direction, but we know of no process or peculiarity that could enhance apparently linear structures in the cross-scan direction. These potential streams will need to be verified once we have access to other surveys (e.g. Pan-STARRS) and other stream detection methods (e.g. Gaia). We mention them here primarily to aid in the identification or verification of structures that may be discovered in upcoming Gaia releases.
These features appear to have trajectories qualitatively similar to those of the three streams examined above. Orbit fits yield nearly radial orbits, with orbit normals (l, b = 199 • ± 1 • , 26 • ± 4 • ) that once again lie within the patch of sky associated with the VPOS.
CONCLUSIONS
Examining a region of the northern footprint of the Sloan Digital Sky Survey, we find evidence for at least three, and possibly seven stellar debris streams. With 30 to 110 stars per stream and equivalent surface brightnesses > 35 mag arcsec −2 , these streams are likely at the very limit of what can be detected in the SDSS. Yet they are enticing as perhaps the tip a substantial iceberg, as well as an indicator of what we may discover when Gaia proper motions become available.
All of these streams appear to orbit within the VPOS, believed to contain the majority of surviving satellite galaxies. Once this has been verified with velocity information, these streams will clearly add to the significance of this structure and to the consequences it may have for our understanding of galaxy formation in ΛCDM.
Figure 1. Panel (a): filtered surface density map of the 70° × 85° western portion of the northern SDSS footprint, in SDSS coordinates. The SDSS coordinate system has the advantage that small differences in calibration and completeness from scan to scan are aligned east-west, or very nearly along horizontal lines in the Figure. The matched filter is based on the color-magnitude distribution of stars in NGC 5053, shifted to a distance of 21 kpc. The map has been binned to 0.5° × 0.5° pixels, and smoothed with a Gaussian kernel of 1.5°. The stretch is linear, with lighter areas indicating higher surface densities. Panel (b): the same filtered image as in panel (a) after subtraction of a smoothed background image constructed using an annular median window filter of radius 4.5°. Panel (c): the map in panel (a) after subtracting a version of itself that is shifted 1° to the east. Panel (d): same as panel (a), but with the streams indicated. The curves correspond to Equations 1, 2, and 3, shifted east and west by 2.5°. The blue curve shows the Orphan stream trajectory of Newberg et al. (2010). The light blue box indicates the location of four possible streams (labeled a, b, c, and d) discussed in Section 4.
Figure 2. The distribution of E(B − V), taken from Schlegel, Finkbeiner, & Davis (1998), in the same coordinate system as Figure 1. The stretch is linear, with white indicating E(B − V) > 0.09 and black corresponding to E(B − V) < 0.01.
Figure 3. A portion of Bernard et al.'s (2016) matched-filtered density map of stars in a single epoch of the Pan-STARRS 3π survey. In this case, the stars have been filtered using an isochrone with an age of 12 Gyr, a metallicity of [Fe/H] = -1.5, and a distance of 25 kpc. The blue boxes indicate the region shown in Figure 1, and the red, green, and white curves are the same as those in panel d of Figure 1. The stretch is linear, with lighter areas indicating higher surface densities. Despite the significant pattern noise due to variations in depth and completeness, Sangarius, Scamander, and PS1-D all show indications of continuing southward, as indicated by the arrows.
Figure 4. The "T" statistic of Grillmair (2009) for the southern 15° of the streams in Figure 1. T is the median value, measured from the image filtered for a distance of 21 kpc, for five, 3°-long segments in each stream. We have normalized the run of T by dividing by the RMS measured the same way in an apparently blank region of sky north of the Sagittarius streams. The plotted values thus correspond roughly to the S/N at each point. The peaks corresponding to the Anticenter Stream (Grillmair 2006b; Li et al. 2012), EBS (Grillmair 2011; Hargis et al. 2016), and Orphan streams (Grillmair 2006a; Belokurov et al. 2007; Newberg et al. 2010; Sesar et al. 2013; Grillmair 2015) show up only incidentally and are significantly lower than the S/Ns we would measure using more appropriate filters, correct distances, and suitably oriented stream segments.
Figure 5. Dereddened (g − i)_0 Hess diagrams for (a) PS1-D, (b) Sangarius, and (c) Scamander. The curves show the theoretical loci for Z = 0.0001 ([Fe/H] = -2.2) in blue and Z = 0.0007 ([Fe/H] = -1.44) in red.
Table 1. Predicted Motions and Orbit Parameters for PS1-D, Sangarius, and Scamander.
We are grateful to Edouard Bernard for making available matched-filtered maps of the Pan-STARRS 3π survey. We are also grateful to an anonymous referee for several thoughtful suggestions that improved both the content and readability of the paper. Facilities: Sloan, PS1
. C Allen, A Santillan, Rev. Mex. Astron. Astrofis. 22255Allen, C., & Santillan, A. 1991, Rev. Mex. Astron. Astrofis., 22, 255
. V Belokurov, D B Zucker, N W Evans, ApJ. 642137Belokurov, V., Zucker, D. B., Evans, N. W., et al. 2006, ApJ, 642, L137
. V Belokurov, N W Evans, M J Irwin, ApJ. 658337Belokurov, V., Evans, N. W., Irwin, M. J., et al. 2007, ApJ, 658, 337
. E J Bernard, A M N Ferguson, E F Schlafly, MNRAS. 4631759Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2016, MNRAS, 463, 1759
. J Bovy, A Bahmanyar, T K Fritz, N Kallivayalil, arXiv:1609.01298Bovy, J., Bahmanyar, A., Fritz, T. K., & Kallivayalil, N. 2016, arXiv:1609.01298
. Carlberg, R. G. 2009, ApJ, 705, 223
. A Eyre, J Binney, MNRAS. 4131852Eyre, A., & Binney, J. 2011, MNRAS, 413, 1852
. C J Grillmair, ApJ. 64537Grillmair, C. J., 2006a, ApJ, 645, L37
. C J Grillmair, ApJ. 65129Grillmair, C. J., 2006b, ApJ, 651, L29
. C J Grillmair, ApJ. 6931118Grillmair, C. J. 2009, ApJ, 693, 1118
. C J Grillmair, ApJ. 73898Grillmair, C. J. 2011, ApJ, 738, 98
. C J Grillmair, ApJ. 79010Grillmair, C. J. 2014, ApJ, 790, 10
. C J Grillmair, L Hetherington, R G Carlberg, B Willman, ApJ. 81226Grillmair, C. J., Hetherington, L., Carlberg, R. G., & Willman, B. 2015, ApJ, 812, L26
C J Grillmair, J L Carlin, Tidal Streams in the Local Group and Beyond. H. J. Newberg & J. L. Carlin eds., SpringerGrillmair, C. J., & Carlin, J. L. 2016, in Tidal Streams in the Local Group and Beyond, H. J. Newberg & J. L. Carlin eds., Springer
. J R Hargis, B Kimmig, B Willman, ApJ. 820Hargis, J. R., Kimmig, B., Willman, B., et al. 2016, ApJ, 820
. W E Harris, AJ. 1121487Harris, W. E. 1996, AJ, 112, 1487
. A H W Küpper, E Balbinot, A Bonaca, ApJ. 80380Küpper, A. H. W., Balbinot, E., Bonaca, A., et al. 2015, ApJ, 803, 80
. J Li, H J Newberg, J Carlin, ApJ. 757151Li, J., Newberg, H. J, Carlin, J. L, 2012, ApJ, 757, 151
. H J Newberg, B A Willett, B Yanny, Y Xu, ApJ. 71132Newberg, H. J., Willett, B. A., Yanny, B., & Xu, Y. 2010, ApJ, 711, 32
. M S Pawlowski, J Pflamm-Altenburg, P Kroupa, MNRAS. 4231109Pawlowski, M. S., Pflamm-Altenburg, J., & Kroupa, P. 2012, MNRAS, 423, 1109
. E F Schlafly, D P Finkbeiner, ApJ. 737103Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103
. D J Schlegel, D P Finkbeiner, M Davis, ApJ. 500525Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
. B Sesar, C J Grillmair, J G Cohen, ApJ. 77626Sesar, B., Grillmair, C. J., Cohen, J. G. et al. 2013, ApJ, 776, 26
M C Smith, Tidal Streams in the Local Group and Beyond. H. J. Newberg & J. L. Carlin eds., SpringerSmith, M. C. 2016, in Tidal Streams in the Local Group and Beyond, H. J. Newberg & J. L. Carlin eds., Springer
. J H Yoon, K V Johnston, D W Hogg, ApJ. 73158Yoon, J. H., Johnston, K. V., & Hogg, D. W. 2011, ApJ, 731, 58
| []
|
[
"Mesons with Beauty and Charm: Spectroscopy",
"Mesons with Beauty and Charm: Spectroscopy"
]
| [
"Estia J Eichten \nTheoretical Physics Department Fermi National Accelerator Laboratory\nP.O. Box 50060510BataviaIllinois\n",
"Chris Quigg \nTheoretical Physics Department Fermi National Accelerator Laboratory\nP.O. Box 50060510BataviaIllinois\n"
]
| [
"Theoretical Physics Department Fermi National Accelerator Laboratory\nP.O. Box 50060510BataviaIllinois",
"Theoretical Physics Department Fermi National Accelerator Laboratory\nP.O. Box 50060510BataviaIllinois"
]
| []
| Applying knowledge of the interaction between heavy quarks derived from the study of cc and bb bound states, we calculate the spectrum of cb mesons.We compute transition rates for the electromagnetic and hadronic cascades that lead from excited states to the 1 S 0 ground state, and briefly consider the prospects for experimental observation of the spectrum. | 10.1103/physrevd.49.5845 | [
"https://arxiv.org/pdf/hep-ph/9402210v1.pdf"
]
| 41,160,854 | hep-ph/9402210 | 06f5f194b2a4b326e9cec7cb326c059882a90ca5 |
Mesons with Beauty and Charm: Spectroscopy
2 Feb 1994 (August 1, 2018)
Estia J Eichten
Theoretical Physics Department Fermi National Accelerator Laboratory
P.O. Box 50060510BataviaIllinois
Chris Quigg
Theoretical Physics Department Fermi National Accelerator Laboratory
P.O. Box 50060510BataviaIllinois
Mesons with Beauty and Charm: Spectroscopy
2 Feb 1994 (August 1, 2018). PACS numbers: 14.40.Lb, 14.40.Nd, 13.40.Hq, 13.25.-k. Typeset using REVTeX. * Internet address: eichten@fnal.gov. † Internet address: quigg@fnal.gov
Applying knowledge of the interaction between heavy quarks derived from the study of cc and bb bound states, we calculate the spectrum of cb mesons.We compute transition rates for the electromagnetic and hadronic cascades that lead from excited states to the 1 S 0 ground state, and briefly consider the prospects for experimental observation of the spectrum.
I. INTRODUCTION
The copious production of b quarks in Z 0 decays at the Large Electron-Positron collider (LEP) and in 1.8-TeV proton-antiproton collisions at the Fermilab Tevatron opens for study the rich spectroscopy of mesons and baryons beyond B + u and B 0 d . In addition to B 0 s and Λ 0 b , which have already been widely discussed, a particularly interesting case is the spectrum of cb states and its ground state, the B + c meson [1].
Even more than their counterparts in the J/ψ and Υ families, the cb states that lie below the (BD) threshold for decay into a pair of heavy-flavored mesons are stable against strong decay, for they cannot annihilate into gluons. Their allowed decays, by E1 or M1 transitions or by hadronic cascades, lead to total widths that are less than a few hundred keV. All decay chains ultimately reach the 1 S 0 ground state B c , which decays weakly. It may be possible, in time, to map out the excitation spectrum by observing photons or light hadrons in coincidence with a prominent decay of the B c [2]. This would test our understanding of the force between heavy quarks.
The weak decays of the cb ground state will be of particular interest because the influence of the strong interaction can be estimated reliably [3]. The deep binding of the heavy quarks within the B c means that the spectator picture is misleading. Taking proper account of binding energy, we expect a rather long lifetime that implies easily observable secondary vertices. The deep binding also affects the B c branching fractions and leads us to expect that final states involving ψ will be prominent. The modes ψπ + , ψa + 1 , ψρ + , ψD + s , and ψℓ + ν ℓ will serve to identify B c mesons and determine the B c mass and lifetime.
In this Article, we present a comprehensive portrait of the spectroscopy of the B_c meson and its long-lived excited states. In Section II, we estimate the mass of the B_c in the framework of nonrelativistic quarkonium quantum mechanics and calculate the spectrum of cb states in detail. In Section III, we compute rates for the prominent radiative decays of the excited states and estimate rates and spectra of the hadronic cascades (cb)_i → ππ + (cb)_f and (cb)_i → η + (cb)_f. Using this information, we outline a strategy for partially reconstructing the cb spectrum. A brief summary appears in Section IV.
To estimate the masses of the cb ground states, we rely on the nonrelativistic potential-model description of quarkonium levels. The interquark potential is known rather accurately in the region of space important for the J/ψ and Υ families [4][5][6], which spans the distances important for cb levels. This region lies between the short-distance Coulombic and long-distance linear behavior expected in QCD.
We consider four functional forms for the potential that give reasonable accounts of the cc and bb spectra, among them the QCD-motivated potential [7] given by Buchmüller and Tye [8] and a Coulomb-plus-linear potential (the "Cornell potential") [4],
V(r) = −κ/r + r/a^2 ,   (2.6)
with
m_c = 1.84 GeV/c^2 ,  m_b = 5.18 GeV/c^2 ,   (2.7)
κ = 0.52 ,  a = 2.34 GeV^−1 .   (2.8)
We solve the Schrödinger equation for each of the potentials to determine the position of the 1S center of gravity for cc, cb, and bb. The ^3S_1–^1S_0 splitting of the ground state is given by
M(^3S_1) − M(^1S_0) = 32π α_s |Ψ(0)|^2 / (9 m_i m_j) .   (2.9)
The hyperfine splitting observed in the charmonium family [1],
M(J/ψ) − M(η c ) = 117 MeV/c 2 ,(2.10)
fixes the strong coupling constant for each potential. We neglect the variation of α_s with momentum and scale the splitting of cb and bb from the charmonium value (2.10). The resulting values of vector and pseudoscalar masses are presented in Table I. Predictions for the cb ground-state masses depend little on the potential. The B_c and B*_c masses and splitting lie within the ranges quoted by Kwong and Rosner [11] in their survey of techniques for estimating the masses of the cb ground state. They find
6.194 GeV/c^2 ≲ M_Bc ≲ 6.292 GeV/c^2 .   (2.13)
We take
M_Bc = 6.258 ± 0.020 GeV/c^2   (2.14)
as our best guess for the interval in which B_c will be found [12].
We shall adopt the Buchmüller-Tye potential [8] for the detailed calculations that follow, because it has the correct two-loop short-distance behavior in perturbative QCD.
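As an illustration of the kind of calculation involved, the sketch below solves the reduced radial Schrödinger equation numerically for the Cornell parametrization of Eqs. (2.6)-(2.8); this is not the authors' code, and the grid parameters, box size, and finite-difference scheme are our own choices.

```python
# Sketch (not the authors' code): solve the reduced radial Schroedinger equation
#   -u''/(2 mu) + [V(r) + l(l+1)/(2 mu r^2)] u = E u
# on a grid for the Coulomb-plus-linear (Cornell) potential of Eqs. (2.6)-(2.8),
# in natural units (hbar = c = 1; energies in GeV, distances in GeV^-1).
# Grid size, box radius, and the finite-difference scheme are our choices.
import numpy as np

M_C, M_B = 1.84, 5.18          # quark masses, GeV (Eq. 2.7)
KAPPA, A = 0.52, 2.34          # Cornell parameters (Eq. 2.8); a in GeV^-1
MU = M_C * M_B / (M_C + M_B)   # reduced mass of the cb system

def cornell(r):
    """Eq. (2.6): V(r) = -kappa/r + r/a^2."""
    return -KAPPA / r + r / A**2

def radial_levels(l=0, n_levels=3, r_max=30.0, n_grid=1500):
    """Lowest eigenvalues E_n and radial functions u(r) = r R(r) for orbital l."""
    h = r_max / (n_grid + 1)
    r = h * np.arange(1, n_grid + 1)
    diag = 1.0 / (MU * h**2) + cornell(r) + l * (l + 1) / (2.0 * MU * r**2)
    off = -np.ones(n_grid - 1) / (2.0 * MU * h**2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    E, U = np.linalg.eigh(H)            # eigenvalues returned in ascending order
    return r, E[:n_levels], U[:, :n_levels]

r, E_s, U_s = radial_levels(l=0)
print("spin-averaged S-state masses (GeV):", M_C + M_B + E_s)

# |R_10(0)|^2, needed for the hyperfine splitting (Eq. 2.9) and later for the
# decay constant (Eq. 2.42), can be estimated from u(r)/r at the first grid point.
h = r[1] - r[0]
u0 = U_s[:, 0] / np.sqrt(np.sum(U_s[:, 0]**2) * h)   # normalize so that int |u|^2 dr = 1
print("|R_10(0)|^2 estimate:", (u0[0] / r[0])**2, "GeV^3")
```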
B. Excited States
The interaction energies of a heavy quark-antiquark system probe the basic dynamics of the strong interaction. The gross structure of the quarkonium spectrum reflects the shape of the interquark potential. In the absence of light quarks, the static energy explicitly exhibits linear confinement at large distance. Further insight can be obtained by studying the spin-dependent forces, which distinguish the electric and magnetic parts of the interactions. Within the framework of quantum chromodynamics, the nature of the spin-dependent forces was first studied nonperturbatively by Eichten and Feinberg [13,14]. Gromes [15] subsequently added an important constraint that arises from boost-invariance of the QCD forms [16]. One-loop perturbative QCD calculations for the spin-dependent interactions in a meson composed of two different heavy quarks have also been carried out [17][18][19].
The spin-dependent contributions to the cb masses may be written as
∆ = Σ_{k=1}^{4} T_k ,   (2.15)
where the individual terms are
T_1 = [L·s_i/(2m_i^2)] T̃_1(m_i, m_j) + [L·s_j/(2m_j^2)] T̃_1(m_j, m_i)
T_2 = [L·s_i/(m_i m_j)] T̃_2(m_i, m_j) + [L·s_j/(m_i m_j)] T̃_2(m_j, m_i)   (2.16)
T_3 = [s_i·s_j/(m_i m_j)] T̃_3(m_i, m_j)
T_4 = [S_ij/(m_i m_j)] T̃_4(m_i, m_j) ,
and the tensor operator is
S_ij = 4 [3(s_i·n̂)(s_j·n̂) − s_i·s_j] .   (2.17)
In Eqs. (2.16) and (2.17), s_i and s_j are the spins of the heavy quarks, L is the orbital angular momentum of quark and antiquark in the bound state, and n̂ is an arbitrary unit vector.
The total spin is S = s i + s j .
The leading contributions to the T̃_k have no explicit dependence on the quark masses. Assuming that the magnetic interactions are short-range (∝ r^−3) and thus can be calculated in perturbation theory, we have
T̃_1(m_i, m_j) = −⟨(1/r) dV/dr⟩ + 2 T̃_2(m_i, m_j)
T̃_2(m_i, m_j) = (4α_s/3) ⟨r^−3⟩   (2.18)
T̃_3(m_i, m_j) = (32π α_s/9) |Ψ(0)|^2
T̃_4(m_i, m_j) = (α_s/3) ⟨r^−3⟩ .
The connection between T̃_1 and T̃_2 is Gromes's general relation; the other equations reflect the stated approximations.
For quarkonium systems composed of equal-mass heavy quarks, the total spin S is a good quantum number and LS coupling leads to the familiar classification of states as 2S+1 L J ,
where J = L + S [20]
. The calculated spectra are compared with experiment in Table II (for the ψ family) and Table III (for the Υ family). Overall, the agreement is satisfactory.
Typical deviations in the charmonium system are less than about 30 MeV; deviations in the upsilon system are somewhat smaller. The differences between calculated and observed spectra suggest that the excitation energies in the cb system can be predicted within a few tens of MeV.
The leptonic decay rate of a neutral (QQ̄) vector meson V^0 is related to the Schrödinger wave function through [23,24]
Γ(V^0 → e^+e^−) = (16π N_c α^2 e_Q^2/3) (|Ψ(0)|^2/M_V^2) (1 − 16α_s/3π) ,   (2.19)
where N_c = 3 is the number of quark colors, e_Q is the heavy-quark charge, and M_V is the mass of the vector meson. The resulting leptonic widths, evaluated without QCD corrections, are tabulated in Tables II and III. Within each family, the leptonic widths are predicted in proper proportions, but are larger than the observed values. The QCD correction reduces the magnitudes significantly; the amount of this reduction is somewhat uncertain, because the first term in the perturbation expansion is large [25].
For unequal-mass quarks, it is more convenient to construct the mass eigenstates by jj coupling, first coupling L + s_c = J_c and then adding the spin of the heavier quark, s_b + J_c = J.
The level shifts ∆(J) for the L = 1 states with (J_c = 3/2, J = 2) and (J_c = 1/2, J = 0) are
∆(2) = [1/(4m_c^2) + 1/(4m_b^2)] T̃_1 + [1/(m_b m_c)] T̃_2 − [2/(5 m_b m_c)] T̃_4   (2.20)
∆(0) = −[1/(2m_c^2) + 1/(2m_b^2)] T̃_1 − [2/(m_b m_c)] T̃_2 − [4/(m_b m_c)] T̃_4 .
For a given principal quantum number, the two (L = 1, J = 1) cb states with J_c = 1/2 and 3/2 are mixed in general. The elements of the mixing matrix are
∆(1)_{3/2 3/2} = [1/(4m_c^2) − 5/(12m_b^2)] T̃_1 − [1/(3m_b m_c)] T̃_2 + [2/(3m_b m_c)] T̃_4
∆(1)_{3/2 1/2} = ∆(1)_{1/2 3/2} = −[√2/(6m_b^2)] T̃_1 − [√2/(3m_b m_c)] T̃_2 + [2√2/(3m_b m_c)] T̃_4   (2.21)
∆(1)_{1/2 1/2} = [−1/(2m_c^2) + 1/(6m_b^2)] T̃_1 − [2/(3m_b m_c)] T̃_2 + [4/(3m_b m_c)] T̃_4 .
Two limiting cases are familiar.
(i) With equal quark masses m b = m c ≡ m, the level shifts become
∆(2) = [1/(2m^2)] T̃_1 + [1/m^2] T̃_2 − [2/(5m^2)] T̃_4   (2.22)
∆(0) = −[1/m^2] T̃_1 − [2/m^2] T̃_2 − [4/m^2] T̃_4 ,
while the mixing matrix becomes
∆(1) = [[1, √2], [√2, 2]] (−T̃_1 − 2T̃_2 + 4T̃_4)/(6m^2) .   (2.23)
The mass eigenstates are the familiar 1 P 1 and 3 P 1 states of the LS coupling scheme. In this basis, they may be written as
|^1P_1⟩ = −√(2/3) |J_c = 3/2⟩ + √(1/3) |J_c = 1/2⟩   (2.24)
|^3P_1⟩ = √(1/3) |J_c = 3/2⟩ + √(2/3) |J_c = 1/2⟩
with eigenvalues
( λ(^1P_1), λ(^3P_1) ) = ( 0, 3 ) × (−T̃_1 − 2T̃_2 + 4T̃_4)/(6m^2) .   (2.25)
The position of the 1 P 1 level coincides with the centroid [5∆ (2) + 3λ( 3 P 1 ) + ∆ (0) ]/9 of the 3 P J levels.
(ii) In the heavy-quark limit, m b → ∞, the level shifts of the J = 0, 2 levels become
∆(2) = [1/(4m_c^2)] T̃_1   (2.26)
∆(0) = −[1/(2m_c^2)] T̃_1 ,
while the mixing matrix becomes
∆(1) = [[1, 0], [0, −2]] T̃_1/(4m_c^2) .   (2.27)
The J c = 3 2 and J c = 1 2 states separate into degenerate pairs, as expected on the basis of heavy-quark symmetry [26].
In the cb system, we label the mass eigenstates obtained by diagonalizing the matrix (2.21) as n(1 + ) and n(1 +′ ). For the 2P 1 levels, the mixing matrix is
∆ (2P) = −1.85 −2.80 −2.80 −4.23 MeV ,(2.28)
with eigenvectors
|2(1 + ) = 0.552|J c = 3 2 + 0.833|J c = 1 2 (2.29) |2(1 +′ ) = −0.833|J c = 3 2 + 0.552|J c = 1
and eigenvalues
λ 2 = −6.09 MeV (2.30) λ ′ 2 = 0.00057 MeV .
For the 3P 1 levels, the mixing matrix is
∆ (3P) = −0.13 −2.54 −2.54 −6.91 MeV ,(2.31)
with eigenvectors
|3(1 + ) = 0.316|J c = 3 2 + 0.949|J c = 1 2 (2.32) |3(1 +′ ) = −0.949|J c = 3 2 + 0.316|J c = 1 2
and eigenvalues
\lambda_3 = -7.76\ \mathrm{MeV}   (2.33)

\lambda_3' = 0.711\ \mathrm{MeV} .
For the 4P 1 levels, the mixing matrix is
\Delta(4P) = \begin{pmatrix} 0.71 & -2.44 \\ -2.44 & -8.31 \end{pmatrix}\ \mathrm{MeV} ,   (2.34)

with eigenvectors

|4(1^+)\rangle = 0.245\,|J_c = \tfrac{3}{2}\rangle + 0.969\,|J_c = \tfrac{1}{2}\rangle   (2.35)

|4(1^{+\prime})\rangle = -0.969\,|J_c = \tfrac{3}{2}\rangle + 0.245\,|J_c = \tfrac{1}{2}\rangle
and eigenvalues
\lambda_4 = -8.93\ \mathrm{MeV}   (2.36)

\lambda_4' = 1.32\ \mathrm{MeV} .
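The eigensystems quoted in Eqs. (2.28)–(2.36) can be checked by diagonalizing the 2×2 matrices directly; a short sketch using numpy follows. Because the matrix elements are printed to only two decimal places, the nearly vanishing eigenvalue of the 2P matrix is reproduced only approximately, while the 3P and 4P values come out essentially as quoted.

```python
# Diagonalize the 2P, 3P and 4P spin-mixing matrices of Eqs. (2.28), (2.31), (2.34).
import numpy as np

mixing = {
    "2P": [[-1.85, -2.80], [-2.80, -4.23]],   # MeV, basis (|J_c = 3/2>, |J_c = 1/2>)
    "3P": [[-0.13, -2.54], [-2.54, -6.91]],
    "4P": [[ 0.71, -2.44], [-2.44, -8.31]],
}

for label, m in mixing.items():
    vals, vecs = np.linalg.eigh(np.array(m))      # ascending: (1+) first, then (1+')
    for val, name in zip(vals, ("1+ ", "1+'")):
        print(f"{label} {name}: lambda = {val:8.4f} MeV")
    print(f"{label} eigenvectors (columns, basis |3/2>, |1/2>):\n{vecs}\n")
```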
The calculated spectrum of cb states is presented in Table IV and Figure 1. Our spectrum is similar to others calculated by Eichten and Feinberg [14] in the Cornell potential [4], by
Gershteȋn et al. [27] in the power-law potential (2.2), and by Chen and Kuang [28] in their own version of a QCD-inspired potential. Levels that lie below the BD flavor threshold, i.e., with M < M D + M B = 7.1431 ± 0.0021 GeV/c 2 , will be stable against fission into heavy-light mesons.
C. Properties of cb Wave Functions at the Origin

For quarks bound in a central potential, it is convenient to separate the Schrödinger wave function into radial and angular pieces, as

\Psi_{n\ell m}(\vec{r}) = R_{n\ell}(r)\, Y_{\ell m}(\theta, \phi) ,   (2.37)
where n is the principal quantum number, ℓ and m are the orbital angular momentum and its projection, R nℓ (r) is the radial wave function, and Y ℓm (θ, φ) is a spherical harmonic [29].
The Schrödinger wave function is normalized,
\int d^3r\, |\Psi_{n\ell m}(\vec{r})|^2 = 1 ,   (2.38)

so that

\int_0^{\infty} r^2\, dr\, |R_{n\ell}(r)|^2 = 1 .   (2.39)
The value of the radial wave function, or its first nonvanishing derivative at the origin,
R^{(\ell)}_{n\ell}(0) \equiv \left.\frac{d^{\ell} R_{n\ell}(r)}{dr^{\ell}}\right|_{r=0} ,   (2.40)
is required to evaluate pseudoscalar decay constants and production rates through heavy-quark fragmentation [30]. The quantity |R^{(\ell)}_{n\ell}(0)|^2 is presented for four potentials in Table V. The stronger singularity of the Cornell potential is reflected in spatially smaller states.
The pseudoscalar decay constant f Bc , which will be required for the discussion of annihilation decays cb → W + → final state, is defined by
\langle 0 | A^{\mu}(0) | B_c(q) \rangle = i f_{B_c} V_{cb}\, q^{\mu} ,   (2.41)
where A µ is the axial-vector part of the charged weak current, V cb is an element of the Cabibbo-Kobayashi-Maskawa quark-mixing matrix, and q µ is the four-momentum of the B c .
The pseudoscalar decay constant is related to the ground-state cb wave function at the origin by the van Royen-Weisskopf formula [23] modified for color,
f_{B_c}^2 = \frac{12\, |\Psi_{100}(0)|^2}{M} = \frac{3\, |R_{10}(0)|^2}{\pi M} .   (2.42)
In the nonrelativistic potential models we have considered to estimate M Bc and M B * c , we find
f Bc =
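Equation (2.42) together with the 1S entries of Table V fixes the decay constant numerically. A minimal sketch, assuming that the mass entering (2.42) is the calculated B_c mass of Table IV (about 6.26 GeV/c²; using the 1S centroid instead changes the result only at the per-cent level):

```python
# Pseudoscalar decay constant from Eq. (2.42): f_Bc^2 = 3 |R_10(0)|^2 / (pi M).
import math

R10_SQ = {               # |R_10(0)|^2 in GeV^3, from Table V
    "Buchmuller-Tye": 1.642,
    "Power-law":      1.710,
    "Logarithmic":    1.508,
    "Cornell":        3.102,
}
M_BC = 6.26              # GeV/c^2; same mass used for all four potentials for simplicity

for potential, r_sq in R10_SQ.items():
    f_bc = math.sqrt(3.0 * r_sq / (math.pi * M_BC))
    print(f"{potential:15s}: f_Bc = {1e3 * f_bc:4.0f} MeV")
```

The first three potentials give f_Bc of roughly half a GeV, which is indeed well above f_π as the surrounding text notes; the more singular Cornell potential gives a noticeably larger value.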
III. TRANSITIONS BETWEEN cb STATES
As in atomic physics, it is the spectral lines produced in cascades from excited states to the readily observable B c ground state that will reveal the cb level scheme. As in the J/ψ and Υ quarkonium families, the transitions are mostly radiative decays. A few hadronic cascades, analogs of the 2 3 S 1 → 1 3 S 1 ππ transition first observed in charmonium, will also be observable.
A. Electromagnetic Transitions
Except for the magnetic-dipole (spin-flip) transition between the ground-state B * c and B c , only the electric dipole transitions are important for mapping the cb spectrum.
Electric Dipole Transitions
The strength of the electric-dipole transitions is governed by the size of the radiator and the charges of the constituent quarks. The E1 transition rate is given by
\Gamma_{\mathrm{E1}}(i \to f + \gamma) = \frac{4\alpha \langle e_Q \rangle^2}{27}\, k^3\, (2J_f + 1)\, |\langle f | r | i \rangle|^2\, S_{if} ,   (3.1)
where the mean charge is
\langle e_Q \rangle = \frac{m_b e_c - m_c e_b}{m_b + m_c} ,   (3.2)
k is the photon energy, and the statistical factor S if = S f i is as defined by Eichten and
Gottfried [31]. S if = 1 for 3 S 1 → 3 P J transitions and S if = 3 for allowed E1 transitions between spin-singlet states. The statistical factors for d-wave to p-wave transitions are reproduced in Table VI for convenience. The E1 transition rates and photon energies in the cb system are presented in Table VII.
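As a cross-check of Eq. (3.1) against Table VII, the snippet below recomputes the 2³P₂ → 1³S₁ + γ rate. The quark masses are the values quoted elsewhere in the paper (m_c = 1.5 GeV/c², m_b = 4.906 GeV/c², strictly the logarithmic-potential parameters), and the constituent charges are taken here as e_c = 2/3 and e_b = 1/3 (reading e_b as the +1/3 charge carried by the b̄ in the cb meson); with these inputs the result lands within a few per cent of the tabulated 112.6 keV.

```python
# Recompute Gamma(2 3P2 -> 1 3S1 + gamma) from Eqs. (3.1)-(3.2) and Table VII inputs.
import math

ALPHA = 1.0 / 137.036
m_c, m_b = 1.5, 4.906              # GeV/c^2 (values quoted in the paper)
e_c, e_b = 2.0 / 3.0, 1.0 / 3.0    # constituent charges assumed here

e_mean = (m_b * e_c - m_c * e_b) / (m_b + m_c)   # Eq. (3.2)

k    = 0.397     # photon energy in GeV      (Table VII)
r_if = 1.714     # <f|r|i> in GeV^-1         (Table VII)
J_f  = 1         # final state is 1 3S1
S_if = 1         # statistical factor for 3S1 <-> 3P_J

gamma = (4.0 * ALPHA * e_mean**2 / 27.0) * k**3 * (2 * J_f + 1) * r_if**2 * S_if
print(f"<e_Q> = {e_mean:.3f},  Gamma = {1e6 * gamma:.1f} keV  (Table VII: 112.6 keV)")
```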
Magnetic Dipole Transitions
The only decay mode for the 1 3 S 1 (B * c ) state is the magnetic dipole transition to the ground state, B c . The M1 rate for transitions between s-wave levels is given by
\Gamma_{\mathrm{M1}}(i \to f + \gamma) = \frac{16\alpha}{3}\, \mu^2\, k^3\, (2J_f + 1)\, |\langle f | j_0(kr/2) | i \rangle|^2 ,   (3.3)
where the magnetic dipole moment is
\mu = \frac{m_b e_c - m_c e_b}{4 m_c m_b}   (3.4)
and k is the photon energy. Rates for the allowed and hindered M1 transitions between spin-triplet and spin-singlet s-wave cb states are given in Table VIII. The M1 transitions contribute little to the total widths of the 2S levels. Because it cannot decay by annihilation, the 1 3 S 1 cb level, with a total width of 135 eV, is far more stable than its counterparts in the cc and bb systems, whose total widths are 68 ± 10 keV and 52.1 ± 2.1 keV, respectively [1].
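A similar order-of-magnitude check works for Eqs. (3.3)–(3.4). Assuming a 72-MeV photon (the B*_c–B_c splitting quoted later in the text) and setting the overlap ⟨f|j₀(kr/2)|i⟩ to 1 for the allowed transition, one obtains a width close to the 135 eV total width quoted above for the 1³S₁ level; the entry in Table VIII would use the computed overlap and splitting rather than these rounded inputs.

```python
# M1 rate for B_c* -> B_c + gamma from Eqs. (3.3)-(3.4); rounded, assumed inputs.
import math

ALPHA = 1.0 / 137.036
m_c, m_b = 1.5, 4.906                # GeV/c^2
e_c, e_b = 2.0 / 3.0, 1.0 / 3.0      # constituent charges assumed as in the E1 check

mu      = (m_b * e_c - m_c * e_b) / (4.0 * m_c * m_b)   # Eq. (3.4), GeV^-1
k       = 0.072                                          # photon energy, GeV (72 MeV)
J_f     = 0                                              # B_c is a pseudoscalar
overlap = 1.0                                            # <f|j0(kr/2)|i> ~ 1 (allowed M1)

gamma = (16.0 * ALPHA / 3.0) * mu**2 * k**3 * (2 * J_f + 1) * overlap**2
print(f"Gamma(M1) ~ {1e9 * gamma:.0f} eV   (quoted total 1 3S1 width: 135 eV)")
```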
B. Hadronic Transitions
A hadronic transition between quarkonium levels can be understood as a two-step process in which gluons first are emitted from the heavy quarks and then recombine into light hadrons. Perturbative QCD is not directly applicable, because the energy available to the light hadrons is small and the emitted gluons are soft. Nevertheless, the final quarkonium state is small compared to the system of light hadrons and moves nonrelativistically in the rest frame of the decaying quarkonium state. A multipole expansion of the color gauge field converges rapidly and leads to selection rules, a Wigner-Eckart theorem, and rate estimates for hadronic transitions [32]. The recombination of gluons into light hadrons involves the full strong dynamics and can only be modeled. The general structure of hadronic-cascade transitions and models for the recombination of gluons into light hadrons can be found in a series of papers by Yan and collaborators [33][34][35][36].
The hadronic transition rates for an unequal-mass QQ ′ system differ in some details from the rates for an equal-mass QQ system with the same reduced mass. The relative strengths of various terms that contribute to magnetic-multipole transitions are modified because of the unequal quark and antiquark masses. The electric-multipole transitions are only sensitive to the relative position of the quark and antiquark and will be unchanged in form.
As in the cc and bb systems, the principal hadronic transitions in the cb system involve the emission of two pions. Electric-dipole contributions dominate in these transitions, and so the equal-mass results apply directly. The initial quarkonium state is characterized by its total angular momentum J ′ with z-component M ′ , orbital angular momentum ℓ ′ , spin s ′ , and other quantum numbers collectively labelled by α ′ . The corresponding quantum numbers of the final quarkonium state are denoted by the unprimed symbols. Since the transition operator is spin-independent, the initial and final spins are the same: s ′ = s.
Because the gauge-field operators in the transition amplitude do not depend on the heavyquark variables, the transition operator is a reducible second-rank tensor, which may be decomposed into a sum of irreducible tensors with rank k = 0, 1, 2. The differential rate [33] for the E1-E1 transition from the initial quarkonium state Φ ′ to the final quarkonium state Φ and a system of n light hadrons, denoted h, is given by
\frac{d\Gamma}{dM^2}(\Phi' \to \Phi + h) = (2J + 1) \sum_{k=0}^{2} \begin{Bmatrix} k & \ell' & \ell \\ s & J & J' \end{Bmatrix}^2 A_k(\ell', \ell) ,   (3.5)
where M 2 is the invariant mass squared of the light hadron system, { } is a 6-j symbol, and A k (ℓ ′ , ℓ) is the contribution of the irreducible tensor with rank k. The Wigner-Eckart theorem (3.5) yields the relations among two-pion transition rates given in Table IX.
The magnitudes of the A k (ℓ ′ , ℓ) are model-dependent. Since the A 1 contributions are suppressed in the soft-pion limit [33], we will set A 1 (ℓ ′ , ℓ) = 0. For some of the remaining rates we can use simple scaling arguments from the measured rates in QQ systems [37]. The amplitude for an E1-E1 transition depends quadratically on the interquark separation, so the scaling law between a QQ ′ and the corresponding QQ system states is given by [32,33]:
\frac{\Gamma(QQ')}{\Gamma(QQ)} = \frac{\langle r^2(QQ') \rangle^2}{\langle r^2(QQ) \rangle^2} ,   (3.6)
up to possible differences in phase space. The measured values for the ψ ′ → ψ + ππ, Υ ′ → Υ + ππ, and ψ(3770) → ψ + ππ transition rates allow good scaling estimates for the 2S → 1S + ππ and 3D → 1S + ππ transitions in the cb system. We have estimated the remaining transition rates by scaling the bb rates calculated by Kuang and Yan [34] in their Model C, which is based on the Buchmüller-Tye potential [8]. The results are shown in Table X.
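Applied to the measured rates collected in Table X (reading the third column there as the unsquared ⟨r²⟩ ratio), the scaling law (3.6) gives the following estimates for the cb 2S → 1S + ππ rate when phase-space differences are neglected; the ψ′-derived number essentially reproduces the tabulated reduced rate, while the Υ′-derived one differs from it by roughly the size of that correction.

```python
# Scale measured 2S -> 1S + pi pi rates to the c b-bar system via Eq. (3.6).
measured = {   # Gamma(QQ) in keV and <r^2(cb)>/<r^2(QQ)> as listed in Table X
    "from bb-bar (Upsilon')": (11.7, 1.99),
    "from cc-bar (psi')":     (141.0, 0.70),
}

for label, (rate, ratio) in measured.items():
    scaled = rate * ratio**2     # Eq. (3.6); phase-space correction neglected
    print(f"{label:24s}: Gamma(cb, 2S -> 1S + pi pi) ~ {scaled:5.1f} keV")
```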
Chiral symmetry leads to a universal form for the normalized dipion spectrum [41],
\frac{1}{\Gamma}\frac{d\Gamma}{dM} = \mathrm{Constant} \times \frac{|\vec{K}|}{M^2_{\Phi'}}\, (2x^2 - 1)^2\, \sqrt{x^2 - 1} ,   (3.7)
where x = M/2m π and
|\vec{K}| = \frac{\sqrt{\left[M^2_{\Phi'} - (M + M_{\Phi})^2\right]\left[M^2_{\Phi'} - (M - M_{\Phi})^2\right]}}{2 M_{\Phi'}}   (3.8)
is the three-momentum carried by the pion pair. The normalized invariant-mass distribution for the transition 2 3 S 1 → 1 3 S 1 + ππ is shown in Figure 2 for the cc, cb, and bb families. The soft-pion expression (3.7) describes the depletion of the dipion spectrum at low invariant masses observed in the transitions ψ(2S) → ψ(1S)ππ [42] and Υ(2S) → Υ(1S)ππ [43], but fails to account for the Υ(3S) → Υ(1S)ππ and Υ(3S) → Υ(2S)ππ spectra [44]. We expect the 3S levels to lie above flavor threshold in the cb system.
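The normalized spectrum of Eq. (3.7) is easy to tabulate. In the sketch below the 1³S₁ mass is taken from Table IV and the 2³S₁ mass is reconstructed from the 562-MeV 2S–1S spacing quoted in the next paragraph; the overall constant is fixed by normalizing the integral over 2m_π ≤ M ≤ M_Φ′ − M_Φ to unity, which is the curve displayed in Figure 2.

```python
# Normalized dipion invariant-mass spectrum, Eqs. (3.7)-(3.8), for cb 2S -> 1S.
import math

M_PI   = 0.1396              # charged-pion mass, GeV
M_PHI1 = 6.337               # 1 3S1 (B_c*) mass, GeV, Table IV
M_PHI2 = M_PHI1 + 0.562      # 2 3S1 mass from the 562-MeV spacing quoted in the text

def k_momentum(m):
    """Three-momentum |K| of the pion pair, Eq. (3.8)."""
    a = M_PHI2**2 - (m + M_PHI1)**2
    b = M_PHI2**2 - (m - M_PHI1)**2
    return math.sqrt(a * b) / (2.0 * M_PHI2)

def shape(m):
    """Unnormalized dGamma/dM of Eq. (3.7), with x = M / (2 m_pi)."""
    x = m / (2.0 * M_PI)
    return k_momentum(m) * (2.0 * x**2 - 1.0)**2 * math.sqrt(x**2 - 1.0)

# Normalize with a simple trapezoidal rule over 2 m_pi <= M <= M_Phi' - M_Phi.
n_pts = 400
m_lo, m_hi = 2.0 * M_PI, M_PHI2 - M_PHI1
ms = [m_lo + (m_hi - m_lo) * i / n_pts for i in range(n_pts + 1)]
ys = [shape(m) for m in ms]
norm = sum(0.5 * (ys[i] + ys[i + 1]) * (ms[i + 1] - ms[i]) for i in range(n_pts))

for m in (0.30, 0.40, 0.50):
    print(f"M = {m:.2f} GeV :  (1/Gamma) dGamma/dM = {shape(m) / norm:.3f} GeV^-1")
```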
By the Wigner-Eckart theorem embodied in Eq. (3.5), the invariant mass spectrum in the decay B_c(2S) → B_c(1S) + ππ should have the same form (3.7) as the B*_c(2S) → B*_c(1S) + ππ transition. Braaten, Cheung, and Yuan [30] have calculated the probability for a high-energy b̄ antiquark to fragment into the cb s-waves as 3.8 × 10⁻⁴ for b̄ → B_c(1S), 5.4 × 10⁻⁴ for b̄ → B*_c(1S), 2.3 × 10⁻⁴ for b̄ → B_c(2S), and 3.2 × 10⁻⁴ for b̄ → B*_c(2S). Given the excellent experimental signatures for B_c(1S) decay and the favorable prospects for B_c(2S) production in high-energy proton-antiproton collisions, it may be possible to observe the 0 → 0 transition for the first time in the B_c family.

The 2³S₁ → 1³S₁ + η transition has been observed in charmonium. This transition proceeds via an M1-M1 or E1-M2 multipole. In the cb system the E1-M2 multipole dominates and the scaling from the cc system should be given by

\frac{\Gamma(cb)}{\Gamma(cc)} = \frac{(m_b + m_c)^2}{4 m_b^2}\, \frac{\langle r^2(cb) \rangle}{\langle r^2(cc) \rangle}\, \frac{M^3_{\psi'}}{M^3_{\Phi'}}\, \frac{\left[M^2_{\Phi'} - (M_{\Phi} + M_{\eta})^2\right]^{1/2} \left[M^2_{\Phi'} - (M_{\Phi} - M_{\eta})^2\right]^{1/2}}{\left[M^2_{\psi'} - (M_{\psi} + M_{\eta})^2\right]^{1/2} \left[M^2_{\psi'} - (M_{\psi} - M_{\eta})^2\right]^{1/2}} ,   (3.9)

where M_Φ′ and M_Φ are the masses of the 2³S₁ and 1³S₁ cb levels, respectively. Because of the small energy release in this transition, the slightly smaller level spacing in the B_c family compared to the J/ψ family (562 MeV vs. 589 MeV) strongly suppresses η-emission in the cb system. The observed rate of Γ(ψ′ → ψ + η) = 6.6 ± 2.1 keV [1] scales to Γ(B_c(2S) → B_c(1S) + η) = 0.25 keV.
C. Total Widths and Experimental Signatures
The total widths and branching fractions are given in Table XI. The most striking feature of the cb spectrum is the extreme narrowness of the states. A crucial element in unraveling the spectrum will be the efficient detection of the 72-MeV M1-photon that, in coincidence with an observed B c decay, tags the B * c . This will be essential for distinguishing the B c (2S) → B c (1S) + ππ transition from B * c (2S) → B * c (1S) + ππ, which will have a nearly identical spectrum and a comparable rate. Combining the branching fractions in Table XI with the b-quark fragmentation probabilities of Ref. [30], we expect the cross section times branching fractions to be in the proportions
σB(B c (2S) → B c (1S) + ππ) ≈ 1.2 × σB(B * c (2S) → B * c (1S) + ππ) .
IV. CONCLUDING REMARKS
A meson with beauty and charm is an exotic particle, but prospects are good that it will be discovered in the near future. As soon as B_c has been identified, the investigation of competing weak-decay mechanisms, b̄ → c̄W⁺ (represented by ψπ⁺, ψℓ⁺ν, etc.), c → sW⁺ (represented by B_s π⁺, B_s ℓ⁺ν, etc.), and cb → W⁺ (represented by ψD⁺_s, τ⁺ν_τ, etc.), can begin. The issues to be studied, and predictions for a wide variety of inclusive and exclusive decays, are presented in a companion paper [3]. Before the end of the decade, it should prove possible to map out part of the cb spectrum by observing γ- and ππ-coincidences with the ground-state B_c or its hyperfine partner B*_c. A challenging experimental goal would be to map the eight lowest-lying cb states: the 1S, 2S, and 2P levels. A first step, in addition to reconstructing the hadronic cascades we have just discussed, would be the detection of the 455-MeV photons in coincidence with B_c, and of 353-, 382-, and 397-MeV photons in coincidence with B*_c → B_c + γ(72 MeV). This would be a most impressive triumph of experimental art.
II. THE SPECTRUM OF B_c STATES

A. The Mass of B_c

Both in mass and in size, the mesons with beauty and charm are intermediate between the cc and bb states. Estimates of the B_c mass can, consequently, be tied to what is known about the charmonium and Υ families. To predict the full spectrum and properties of cb

V(r) = -0.6635\ \mathrm{GeV} + (0.733\ \mathrm{GeV}) \log(r \cdot 1\ \mathrm{GeV}) ,   (2.4)

with

m_c = 1.5\ \mathrm{GeV}/c^2 , \qquad m_b = 4.906\ \mathrm{GeV}/c^2 ;   (2.5)
Even with QCD radiative corrections of the size suggested by the comparison of computed and observed leptonic widths for J/ψ and Υ, f_Bc will be significantly larger than the pion decay constant, f_π = 131.74 ± 0.15 MeV [1]. The compact size of the cb system enhances the importance of annihilation decays.
ACKNOWLEDGMENTS
Fermilab is operated by Universities Research Association, Inc., under contract DE-AC02-76CHO3000 with the United States Department of Energy. C.Q. thanks the Cultural Section of the Vienna municipal government and members of the Institute for Theoretical Physics of the University of Vienna for their warm hospitality while part of this work was carried out.

However, these changes can be absorbed into the EFG formalism without introducing any new form factors by allowing the existing form factor to depend (logarithmically) on the heavy-quark masses. Recently, Yu-Qi Chen and Yu-Ping Kuang, "General relations of heavy quark-antiquark potentials induced by reparametrization invariance," China Center of Advanced Science and Technology (World Laboratory) preprint CCAST-93-37 (unpublished), extended the Gromes analysis [15] to show in general that no new spin-dependent structures appear to order 1/m².

[20] The expectation values of spin and orbital angular momentum operators are conveniently evaluated using ⟨s_i · s_j⟩ = S(S+1)/2 − 3/4, ⟨L · s_i⟩ = ⟨L · s_j⟩ = ⟨L · S⟩/2, and ⟨L · S⟩ = [J(J+1) − L(L+1) − S(S+1)]/2. The tensor operator can be written as S_ij = 2[3(S · n̂)(S · n̂) − S²], for which [W. Kwong and J. L. Rosner, Phys. Rev. D 38, 279 (1988)] ⟨S_ij⟩ = −[12⟨L · S⟩² + 6⟨L · S⟩ − 4S(S+1)L(L+1)]/[(2L − 1)(2L + 3)].

[21] T. A. Armstrong et al. (E-760 Collaboration), Phys. Rev. Lett. 68, 1468 (1992); Nucl. Phys. B 373, 35 (1992).

TABLES
FIGURES

FIG. 1. The spectrum of cb states.

FIG. 2. Normalized dipion mass spectrum for the transition 2³S₁ → 1³S₁ + ππ in the ψ (dashed curve), B_c (solid curve), and Υ (dotted curve) families.
TABLE I. Quarkonium ground-state masses (in GeV/c²) in four potentials.

Observable | QCD, Ref. [8] | Power-law, Ref. [9] | Logarithmic, Ref. [10] | Cornell, Ref. [4]
(cc) 1S | 3.067 | 3.067 | 3.067 | 3.067
ψ | 3.097 | 3.097 | 3.097 | 3.097
η_c | 2.980 | 2.980 | 2.980 | 2.980
ψ − η_c | 0.117 a | 0.117 b | 0.117 c | 0.117 d
(cb) 1S | 6.317 | 6.301 | 6.317 | 6.321
B*_c | 6.337 | 6.319 | 6.334 | 6.343
B_c | 6.264 | 6.248 | 6.266 | 6.254
B*_c − B_c | 0.073 | 0.071 | 0.068 | 0.089
(bb) 1S | 9.440 | 9.446 | 9.444 | 9.441
Υ | 9.464 | 9.462 | 9.460 | 9.476
η_b | 9.377 | 9.398 | 9.395 | 9.335
Υ − η_b | 0.087 | 0.064 | 0.065 | 0.141

a Input value; determines α_s = 0.36.
b Input value; determines α_s = 0.43.
c Input value; determines α_s = 0.37.
d Input value; determines α_s = 0.31.
TABLE II. Charmonium masses and leptonic widths in the Buchmüller-Tye potential.

Level | Mass, calculated (GeV/c²) | Mass, observed a (GeV/c²) | Leptonic width, calculated (keV) | Leptonic width, observed a (keV)
1¹S₀ (η_c) | 2.980 | 2.9788 ± 0.0019 | |
1³S₁ (ψ/J) | 3.097 | 3.09688 ± 0.00001 ± 0.00006 b | 8.00 | 4.72 ± 0.35
2³P₀ (χ_c0) | 3.436 | 3.4151 ± 0.0010 | |
2³P₁ (χ_c1) | 3.486 | 3.51053 ± 0.00004 ± 0.00012 b | |
2³P₂ (χ_c2) | 3.507 | 3.55615 ± 0.00007 ± 0.00012 b | |
2¹P₁ (h_c) | 3.493 | 3.5262 ± 0.00015 ± 0.0002 c | |
2¹S₀ (η′_c) | 3.608 | | |
2³S₁ (ψ′) | 3.686 | 3.68600 ± 0.00010 | 3.67 | 2.14 ± 0.21

a See Ref. [1]. b See Ref. [21]. c See Ref. [22].
TABLE III. bb masses and leptonic widths in the Buchmüller-Tye potential.

Level | Mass, calculated (GeV/c²) | Mass, observed a (GeV/c²) | Leptonic width, calculated (keV) | Leptonic width, observed a (keV)
1¹S₀ (η_b) | 9.377 | | |
1³S₁ (Υ) | 9.464 | 9.46032 ± 0.00022 | 1.71 | 1.34 ± 0.04
2³P₀ (χ_b0) | 9.834 | 9.8598 ± 0.0013 | |
2³P₁ (χ_b1) | 9.864 | 9.8919 ± 0.0007 | |
2³P₂ (χ_b2) | 9.886 | 9.9132 ± 0.0006 | |
2¹P₁ (h_b) | 9.873 | | |
2¹S₀ (η′_b) | 9.963 | | |
TABLE IV. cb masses (in GeV/c²) in the Buchmüller-Tye potential.

Level | Calculated mass | Eichten & Feinberg a | Gershteȋn et al. b | Chen & Kuang c
1¹S₀ (B_c) | 6.264 | 6.243 | 6.246 | 6.310
1³S₁ (B*_c) | 6.337 | 6.339 | 6.329 | 6.355
2³P₀ | 6.700 | 6.697 | 6.645 | 6.728
2 1⁺′ | 6.736 | 6.740 | 6.741 | 6.760
2 1⁺ | 6.730 | 6.719 | 6.682 | 6.764

a See Ref. [14]. b See Ref. [27]. c See Ref. [28]; the masses correspond to Potential I with Λ_MS = 150 MeV.

TABLE V. Radial wave functions at the origin and related quantities for cb mesons.

Level | |R^(ℓ)_nℓ(0)|², QCD, Ref. [8] | Power-law, Ref. [9] | Logarithmic, Ref. [10] | Cornell, Ref. [4]
1S | 1.642 GeV³ | 1.710 GeV³ | 1.508 GeV³ | 3.102 GeV³
2P | 0.201 GeV⁵ | 0.327 GeV⁵ | 0.239 GeV⁵ | 0.392 GeV⁵
2S | 0.983 GeV³ | 0.950 GeV³ | 0.770 GeV³ | 1.737 GeV³
3D | 0.055 GeV⁷ | 0.101 GeV⁷ | 0.055 GeV⁷ | 0.080 GeV⁷
3P | 0.264 GeV⁵ | 0.352 GeV⁵ | 0.239 GeV⁵ | 0.531 GeV⁵
3S | 0.817 GeV³ | 0.680 GeV³ | 0.563 GeV³ | 1.427 GeV³
TABLE VI. Statistical factor S_if for ³P_J → ³D_J′ + γ transitions.

J | J′ | S_if
0 | 1 | 2
1 | 1 | 1/2
1 | 2 | 9/10
2 | 1 | 1/50
2 | 2 | 9/50
2 | 3 | 18/25
TABLE VII. E1 transition rates in the cb system.

Transition | Photon energy (MeV) | ⟨f|r|i⟩ (GeV⁻¹) | Γ(i → f + γ) (keV)
2³P₂ → 1³S₁ + γ | 397 | 1.714 | 112.6
2(1⁺) → 1³S₁ + γ | 382 | 1.714 | 99.5
2(1⁺) → 1¹S₀ + γ | 450 | 1.714 | 0.0
2(1⁺′) → 1³S₁ + γ | 387 | 1.714 | 0.1
2(1⁺′) → 1¹S₀ + γ | 455 | 1.714 | 56.4
2³P₀ → 1³S₁ + γ | 353 | 1.714 | 79.2
2³S₁ → 2³P₂ + γ | 151 | −2.247 | 17.7
2³S₁ → 2(1⁺) + γ | 167 | −2.247 | 14.5
2³S₁ → 2(1⁺′) + γ | 161 | −2.247 | 0.0
2³S₁ → 2³P₀ + γ | 196 | −2.247 | 7.8
2¹S₀ → 2(1⁺) + γ | 125 | −2.247 | 0.0
2¹S₀ → 2(1⁺′) + γ | 119 | −2.247 | 5.2
3³D₃ → 2³P₂ + γ | 258 | 2.805 | 98.7
3³D₂ → 2³P₂ + γ | 258 | 2.805 | 24.7
3³D₂ → 2(1⁺) + γ | 274 | 2.805 | 88.8
3³D₂ → 2(1⁺′) + γ | 268 | 2.805 | 0.1
3³D₁ → 2³P₂ + γ | 258 | 2.805 | 2.7
3³D₁ → 2(1⁺) + γ | 274 | 2.805 | 49.3
3³D₁ → 2(1⁺′) + γ | 268 | 2.805 | 0.0
3³D₁ → 2³P₀ + γ | 302 | 2.805 | 88.6
3¹D₂ → 2(1⁺′) + γ | 268 | 2.805 | 92.5
3³P₂ → 1³S₁ + γ | 770 | 0.304 | 25.8
3³P₂ → 2³S₁ + γ | 249 | 2.792 | 73.8
3³P₂ → 3³D₃ + γ | 142 | −2.455 | 17.8
3³P₂ → 3³D₂ + γ | 142 | −2.455 | 3.2
3³P₂ → 3³D₁ + γ | 142 | −2.455 | 0.2
3(1⁺) → 1³S₁ + γ | 754 | 0.304 | 22.1
3(1⁺) → 2³S₁ + γ | 232 | 2.792 | 54.3
3(1⁺) → 3³D₂ + γ | 125 | −2.455 | 9.8
3(1⁺) → 3³D₁ + γ | 125 | −2.455 | 0.3
3(1⁺′) → 1³S₁ + γ | 760 | 0.304 | 2.1
3(1⁺′) → 2³S₁ + γ | 239 | 2.792 | 5.4
3(1⁺′) → 3³D₂ + γ | 131 | −2.455 | 11.5
3(1⁺′) → 3³D₁ + γ | 131 | −2.455 | 0.4
3³P₀ → 1³S₁ + γ | 729 | 0.304 | 21.9
3³P₀ → 2³S₁ + γ | 205 | 2.792 | 41.2
3³P₀ → 3³D₁ + γ | 98 | −2.455 | 6.9

TABLE VIII. M1 transition rates in the cb system.

Transition | Photon energy (MeV) | ⟨f|j₀(kr/2)|i⟩ | Γ(i → f + γ) (keV)
TABLE X. Estimated rates for two-pion E1-E1 transitions between cb levels, scaled from cc and bb measurements and calculations.

Transition | (QQ) rate (keV) | ⟨r²(cb)⟩/⟨r²(QQ)⟩ | Reduced rate (cb) (keV)
2³S₁ → 1³S₁ + ππ | (bb): 11.7 ± 2.2 a | 1.99 | A₀(0,0) = 40 ± 8
2³S₁ → 1³S₁ + ππ | (cc): 141 ± 27 a | 0.70 | A₀(0,0) = 69 ± 13
Mean | | | A₀(0,0) = 50 ± 7
3³D₁ → 1³S₁ + ππ | (cc): 37 ± 17 ± 8 b | 0.72 | A₂(2,0) = 137 ± 70
3³D₁ → 1³S₁ + ππ | (cc): 55 ± 23 ± 11 c | 0.72 | A₂(2,0) = 204 ± 94
Mean | 43 ± 15 | | A₂(2,0) = 160 ± 56
3³P₀ → 2³P₀ + ππ | (bb): 0.4 d | 1.88 | A₀(1,1) = 4.2
3³P₂ → 2³P₁ + ππ | (bb): 0.01 d | 1.88 | A₂(1,1) = 0.2

a Particle Data Group average [1]. b Measured by the Crystal Ball [38] and Mark II [39] Collaborations. c Measured by the Mark III Collaboration [40]. d Calculated by Kuang and Yan [34] using the Buchmüller-Tye potential [8].
³S₁ (Υ′)
a Should this state lie above flavor threshold, dissociation into BD will dominate over the tabulated decay modes.
We follow the nomenclature of the Particle Data Group, in which B mesons contain b̄ antiquarks. See Particle Data Group, Phys. Lett. B 239, 1 (1990); Phys. Rev. D 45, S1 (1992).
The CDF Collaboration has reconstructed the χ_c states by observing photons in coincidence with leptonic decays of J/ψ. See F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 71, 2537 (1993).
E. Eichten and C. Quigg, in preparation. For a brief preliminary account, see C. Quigg, FERMILAB-CONF-93/265-T, contributed to the Workshop on B Physics at Hadron Accelerators, Snowmass.
E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T.-M. Yan, Phys. Rev. D 17, 3090 (1978); ibid. 21, 313(E) (1980); ibid. 21, 203 (1980).
C. Quigg and J. L. Rosner, Phys. Rev. D 23, 2625 (1981).
W. Kwong, J. L. Rosner, and C. Quigg, Ann. Rev. Nucl. Part. Sci. 37, 325 (1987).
J. Richardson, Phys. Lett. B 82, 272 (1979).
W. Buchmüller and S.-H. H. Tye, Phys. Rev. D 24, 132 (1981).
A. Martin, Phys. Lett. B 93, 338 (1980); in Heavy Flavours and High Energy Collisions in the 1-100 TeV Range, edited by A. Ali and L. Cifarelli (Plenum Press, New York, 1989), p. 141.
C. Quigg and J. L. Rosner, Phys. Lett. B 71, 153 (1977).
W. Kwong and J. L. Rosner, Phys. Rev. D 44, 212 (1991).
M. Baker, J. S. Ball, and F. Zachariasen, Phys. Rev. D 45, 910 (1992), use a dual-QCD potential to calculate M_Bc = 6.287 GeV/c² and M_B*c = 6.372 GeV/c². We note that their prediction for the (cc) 1S level lies at 3.083 GeV/c², 16 MeV/c² above the observed value.
R. Roncaglia, A. R. Dzierba, D. B. Lichtenberg, and E. Predazzi, Indiana University preprint IUHET 270 (January 1994, unpublished), use the Feynman-Hellman theorem to predict M_B*c = 6.320 ± 0.010 GeV/c². S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985), estimate M_Bc = 6.27 GeV/c² and M_B*c = 6.34 GeV/c².
E. J. Eichten and F. Feinberg, Phys. Rev. Lett. 43, 1205 (1979).
E. J. Eichten and F. Feinberg, Phys. Rev. D 23, 2724 (1981).
D. Gromes, Z. Phys. C 26, 401 (1984).
General reviews of the Eichten-Feinberg-Gromes (EFG) formulation have been given by M. Peskin, in Dynamics and Spectroscopy at High Energy, Proceedings of the Eleventh SLAC Summer Institute on Particle Physics, SLAC Report No. 267, edited by P. M. McDonough (Stanford Linear Accelerator Center, Stanford, CA, 1983), p. 151; E. J. Eichten, in The Sixth Quark, Proceedings of the Twelfth SLAC Summer Institute on Particle Physics, SLAC Report No. 281, edited by P. M. McDonough (Stanford Linear Accelerator Center, Stanford, CA, 1985), p. 1 [in Eq. (2.13), V_2(R) should read dV_2(R)/dR]; D. Gromes, in Spectroscopy of Light and Heavy Quarks, edited by Ugo Gastaldi, Robert Klapisch, and Frank Close (Plenum Press, New York and London, 1987), p. 67; D. Gromes, in The Quark Structure of Matter, Proceedings of the Yukon Advanced Study Institute, edited by N. Isgur, G. Karl, and P. J. O'Donnell (World Scientific, Singapore, 1985), p. 1.
W. Buchmüller, Y. J. Ng, and S.-H. Henry Tye, Phys. Rev. D 24, 3003 (1981).
S. N. Gupta, S. F. Radford, and W. W. Repko, Phys. Rev. D 25, 3430 (1982); ibid. 26, 3305 (1982).
For the unequal-mass case relevant to the cb system, Y. J. Ng, J. Pantaleone, and S.-H. Henry Tye, Phys. Rev. Lett. 55, 916 (1985) [see also J. Pantaleone, S.-H. H. Tye, and Y. J. Ng, Phys. Rev. D 33, 777 (1986)], introduced a new, spin-dependent form factor.
T. A. Armstrong et al. (E-760 Collaboration), Phys. Rev. Lett. 69, 2337 (1992).
Up to the color factor, this relation is due to R. Van Royen and V. F. Weisskopf, Nuovo Cim. 50, 617 (1967); 51, 583 (1967).
The QCD radiative correction factor is obtained by transcription from QED. See, for example, R. Barbieri et al., Nucl. Phys. B105, 125 (1976); W. Celmaster, Phys. Rev. D 19, 1517 (1979).
If, for example, we interpret the factor (1 − 16α_s/3π) as the beginning of an expansion for (1 + 16α_s/3π)⁻¹ with α_s = 0.36, then the predictions for the ψ family agree with experiment, within errors, while those for the Υ family are about 20% low.
E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T.-M. Yan, Phys. Rev. D 21, 203 (1980), observed in the context of potential models that, in the heavy-quark limit, the state with J_light quark = 3/2 is degenerate with the ³P₂ level, while the state with J_light quark = 1/2 is degenerate with the ³P₀ level. (See their Appendix on charmed mesons.) See also J. L. Rosner, Comments Nucl. Part. Phys. 16, 109 (1986). The observation in the heavy-quark limit of QCD is due to N. Isgur and M. B. Wise, Phys. Rev. Lett. 66, 1130 (1991).
S. S. Gershteȋn, V. V. Kiselev, A. K. Likhoded, S. R. Slabospitskiȋ, and A. V. Tkabladze, Yad. Fiz. 48, 515 (1988) [Sov. J. Nucl. Phys. 48, 327 (1988)].
Yu-Qi Chen and Yu-Ping Kuang, Phys. Rev. D 46, 1165 (1992). See also Yu-Qi Chen, "The Study of B_c (B̄_c) Meson and Its Excited States," Institute for Theoretical Physics, Academia Sinica, Ph.D. thesis, October 24, 1992 (unpublished).
We adopt the standard normalization, ∫ dΩ Y*_ℓm(θ, φ) Y_ℓ′m′(θ, φ) = δ_ℓℓ′ δ_mm′. See, for example, the Appendix of Hans A. Bethe and Edwin E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms (Springer-Verlag, Berlin, 1957).
Eric Braaten, Kingman Cheung, and Tzu Chiang Yuan, Phys. Rev. D 48, R5049 (1993); Kingman Cheung, "B_c Meson Production at Hadron Colliders by Heavy Quark Fragmentation," NUHEP-TH-93-19 (unpublished); Yu-Qi Chen, "Perturbative QCD Predictions for the Fragmentation Functions of the P-Wave Mesons with Two Heavy Quarks," China Center for Advanced Science and Technology (World Laboratory) preprint CCAST-93-4 (unpublished).
E. Eichten and K. Gottfried, Phys. Lett. B 66, 286 (1977).
K. Gottfried, in Proceedings of the International Symposium on Lepton and Photon Interactions at High Energies, edited by F. Gutbrod, DESY, Hamburg (1978); Phys. Rev. Lett. 40, 598 (1978); M. B. Voloshin, Nucl. Phys. B 154, 365 (1979).
T.-M. Yan, Phys. Rev. D 22, 1652 (1980).
Y.-P. Kuang and T.-M. Yan, Phys. Rev. D 24, 2874 (1981).
Y.-P. Kuang, S. F. Tuan, and T.-M. Yan, Phys. Rev. D 37, 1210 (1988).
Y.-P. Kuang and T.-M. Yan, Phys. Rev. D 41, 155 (1990).
It must be remembered in applying these relations to the cb system that the physical eigenstates with J = ℓ ≠ 0 are linear combinations of the equal-mass spin-singlet and spin-triplet states.
R. A. Partridge, Ph.D. thesis, Caltech Report No. CALT-68-1150 (1984, unpublished).
R. H. Schindler, Ph.D. thesis, Stanford Linear Accelerator Center Report No. SLAC-219 (1979, unpublished).
J. Adler et al. (Mark III Collaboration), Phys. Rev. Lett. 60, 89 (1988).
L. S. Brown and R. N. Cahn, Phys. Rev. Lett. 35, 1 (1975).
G. S. Abrams et al. (Mark I Collaboration), Phys. Rev. Lett. 34, 1181 (1975); G. S. Abrams, in Proceedings of the 1975 International Symposium on Lepton and Photon Interactions at High Energies, edited by W. T. Kirk (SLAC, Stanford, 1975), p. 25.
D. Besson et al. (CLEO Collaboration), Phys. Rev. D 30, 1433 (1984).
T. Bowcock et al. (CLEO Collaboration), Phys. Rev. Lett. 58, 307 (1987); I. C. Brock et al. (CLEO Collaboration), Phys. Rev. D 43, 1448 (1991).
| []
|
[
"A PROOF OF MERCA'S CONJECTURES ON SUMS OF ODD DIVISOR FUNCTIONS",
"A PROOF OF MERCA'S CONJECTURES ON SUMS OF ODD DIVISOR FUNCTIONS"
]
| [
"Kaya Lakein ",
"Anne Larsen "
]
| []
| []
| In a recent paper, Merca posed three conjectures on congruences for specific convolutions of a sum of odd divisor functions with a generating function for generalized m-gonal numbers. Extending Merca's work, we complete the proof of these conjectures.Euler's partition function p(n) is defined by the number of partitions of any nonnegative integer n, and its generating function is given by ∞ n=0 p(n)q n = ∞ k=1 | 10.1017/s0004972721000678 | [
"https://arxiv.org/pdf/2107.07637v2.pdf"
]
| 236,034,012 | 2107.07637 | 69430d641a0cc00d4022fbd4b3c3ddf4b13dc94e |
A PROOF OF MERCA'S CONJECTURES ON SUMS OF ODD DIVISOR FUNCTIONS
21 Jul 2021
Kaya Lakein
Anne Larsen
A PROOF OF MERCA'S CONJECTURES ON SUMS OF ODD DIVISOR FUNCTIONS
21 Jul 2021arXiv:2107.07637v2 [math.NT]
In a recent paper, Merca posed three conjectures on congruences for specific convolutions of a sum of odd divisor functions with a generating function for generalized m-gonal numbers. Extending Merca's work, we complete the proof of these conjectures.

Euler's partition function p(n) is defined by the number of partitions of any nonnegative integer n, and its generating function is given by

\sum_{n=0}^{\infty} p(n) q^n = \prod_{k=1}^{\infty} \frac{1}{1 - q^k}, \qquad |q| < 1.
The properties of the function p(n), such as its asymptotic behavior and its parity, have been an object of study for a long time. For instance, Ballantine and Merca [1] recently made a conjecture on when the sum of p(n − k), taken over those k for which ak + 1 is a square, is odd, which was proved by Hong and Zhang [2].
The function p(n) is linked to the divisor function \sigma(n) := \sum_{d \mid n} d, whose generating function is given by

\sum_{n=1}^{\infty} \sigma(n) q^n = \sum_{n=1}^{\infty} \frac{n q^n}{1 - q^n}.
In particular, p(n) and σ(n) satisfy the following convolution identities, which differ only in the values of p(0) and σ(0):

\sum_{k=-\infty}^{\infty} (-1)^k p(n - P_5(k)) = \delta_{0,n}, \quad \text{with } p(0) = 1,

\sum_{k=-\infty}^{\infty} (-1)^k \sigma(n - P_5(k)) = 0, \quad \text{with } \sigma(0) \text{ replaced by } n,
where δ_ij is the Kronecker delta, and P_m(k) is the kth generalized m-gonal number

P_m(k) := \left(\frac{m}{2} - 1\right) k^2 - \left(\frac{m}{2} - 2\right) k.   (1)
Motivated by these identities as well as the fact that the divisor functions σ(n) and
\sigma_{\mathrm{odd}}(n) := \sum_{\substack{d \mid n \\ d \text{ odd}}} d,
where σ odd (n) := 0 for n ≤ 0, have the same parity, Merca recently studied the relationship between σ odd (n) and the generalized m-gonal numbers. More specifically, he investigated for which positive integers m the following congruences hold for all n ∈ Z + :
\sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_m(k)) \equiv \begin{cases} n \pmod 2 & \text{if } n = P_m(j),\ j \in \mathbb{Z},\\ 0 \pmod 2 & \text{otherwise}, \end{cases}   (2)

\sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_5(k)) \equiv \begin{cases} n \pmod m & \text{if } n = P_5(j),\ j \in \mathbb{Z},\\ 0 \pmod m & \text{otherwise}, \end{cases}   (3)

\sum_{k=-\infty}^{\infty} (-1)^{P_3(-k)} \sigma_{\mathrm{odd}}(n - P_5(k)) \equiv \begin{cases} (-1)^{P_3(-j)} \cdot n \pmod m & \text{if } n = P_5(j),\ j \in \mathbb{Z},\\ 0 \pmod m & \text{otherwise}. \end{cases}   (4)
In particular, Merca posed the following conjectures:
Conjecture. The following are true:
(i) The congruence (2) holds for all n ∈ Z + if and only if m ∈ {5, 6}. (ii) The congruence (3) holds for all n ∈ Z + if and only if m ∈ {2, 3, 6}. (iii) The congruence (4) holds for all n ∈ Z + if and only if m ∈ {2, 4}.
Merca showed the if condition for each of these conjectures. Using his work, we obtain the following theorem:

Theorem 1. Merca's conjectures are true.

Proof. We begin by proving (ii). Merca showed that (3) holds if m ∈ {2, 3, 6} [3, Theorem 3], hence it suffices to show that if m ∉ {2, 3, 6}, then there exists some n ∈ Z⁺ such that n ≠ P_5(j) for all j ∈ Z and
\sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_5(k)) \not\equiv 0 \pmod m.   (5)
Since σ_odd(n − P_5(k)) = 0 whenever n − P_5(k) ≤ 0, the sum in (5) is in fact finite, and we easily compute \sum_k \sigma_{\mathrm{odd}}(3 - P_5(k)) = 6, where 3 ≠ P_5(j) for all j ∈ Z. Thus, (5) holds unless 6 ≡ 0 (mod m). But this is the case only if m ∈ {2, 3, 6}.
Next, we prove (iii). Again, Merca proves that (4) holds if m ∈ {2, 4} [3, Theorem 4], hence it suffices to show that if m ∉ {2, 4}, then there exists some n ∈ Z⁺ such that n ≠ P_5(j) for all j ∈ Z and
\sum_{k=-\infty}^{\infty} (-1)^{P_3(-k)} \sigma_{\mathrm{odd}}(n - P_5(k)) \not\equiv 0 \pmod m.   (6)
We compute \sum_k (-1)^{P_3(-k)} \sigma_{\mathrm{odd}}(3 - P_5(k)) = 4, where 3 ≠ P_5(j) for all j ∈ Z, and so (6) holds unless 4 ≡ 0 (mod m). But this is the case only if m ∈ {2, 4}. Finally, we prove (i). Since σ_odd(n) is odd if and only if n is a square or twice a square (see [3, p. 3]), we have that
\sum_{n=1}^{\infty} \sigma_{\mathrm{odd}}(n) q^n \equiv \sum_{n=1}^{\infty} q^{n^2} + \sum_{n=1}^{\infty} q^{2n^2} \pmod 2.   (7)

The nth coefficient of

\left(\sum_{\ell=1}^{\infty} \sigma_{\mathrm{odd}}(\ell) q^{\ell}\right) \left(\sum_{k=-\infty}^{\infty} q^{P_m(k)}\right) = \sum_{\ell=1}^{\infty} \sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(\ell)\, q^{\ell + P_m(k)} = \sum_{n=1}^{\infty} \left(\sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_m(k))\right) q^n

is given by \sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_m(k)). On the other hand, the nth coefficient of

\left(\sum_{\ell=1}^{\infty} q^{\ell^2} + \sum_{\ell=1}^{\infty} q^{2\ell^2}\right) \left(\sum_{k=-\infty}^{\infty} q^{P_m(k)}\right) = \sum_{\ell \ge 1,\ k \in \mathbb{Z}} \left(q^{\ell^2 + P_m(k)} + q^{2\ell^2 + P_m(k)}\right)

is given by a_m(n) + b_m(n), where a_m(n) = |A_m(n)| := \#\{(\ell, k) \in \mathbb{Z}^+ \times \mathbb{Z} : \ell^2 + P_m(k) = n\} and b_m(n) = |B_m(n)| := \#\{(\ell, k) \in \mathbb{Z}^+ \times \mathbb{Z} : 2\ell^2 + P_m(k) = n\}.
Thus, due to (7) we must have

\sum_{k=-\infty}^{\infty} \sigma_{\mathrm{odd}}(n - P_m(k)) \equiv a_m(n) + b_m(n) \pmod 2.
Suppose first that m ≥ 7. Then we claim that P_m(0) = 0, P_m(1) = 1, and P_m(k) > 3 for all k ∉ {0, 1}. From (1), it is clear that P_m(0) = 0 and P_m(1) = 1. To see that P_m(k) > 3 for all k ∉ {0, 1}, note that since the leading term of P_m(x) is positive and its vertex is at 0 < (m − 4)/(2m − 4) < 1, we have P_m(k) ≥ P_m(2) = m ≥ 7 for k ≥ 2 and P_m(k) ≥ P_m(−1) = m − 3 ≥ 4 for k ≤ −1.
Now, let n = 3. Then the above shows that n is not a generalized m-gonal number for m ≥ 7, and so for (2) to hold, we must have \sum_k \sigma_{\mathrm{odd}}(3 - P_m(k)) ≡ 0 (mod 2). If (ℓ, k) ∈ A_m(3), then ℓ² = 3 − P_m(k), so that in particular ℓ² ≤ 3, which forces ℓ = 1. But then we must have P_m(k) = 2, which we have seen to be impossible. Hence, A_m(3) is empty, and a_m(3) ≡ 0 (mod 2). On the other hand, if (ℓ, k) ∈ B_m(3), we must again have ℓ = 1. It follows that P_m(k) = 1, which is the case if and only if k = 1. Hence B_m(3) = {(1, 1)}, and b_m(3) ≡ 1 (mod 2). We conclude that \sum_k \sigma_{\mathrm{odd}}(3 - P_m(k)) ≡ a_m(3) + b_m(3) ≡ 1 ≢ 0 (mod 2).
Merca showed that (2) holds for m ∈ {5, 6}, and for m ∈ {1, 2}, the sum in (2) diverges, hence it remains to consider m ∈ {3, 4}. Suppose first that m = 3, and note that P_3(k) = (k² + k)/2. We have 3 = P_3(−3) = P_3(2), so for (2) to hold, we must have \sum_k \sigma_{\mathrm{odd}}(3 - P_3(k)) ≡ 3 ≡ 1 (mod 2). If (ℓ, k) ∈ A_3(3), then ℓ = 1 and P_3(k) = 2, which is impossible. Hence A_3(3) is empty. If (ℓ, k) ∈ B_3(3), then ℓ = 1 and P_3(k) = 1, which is the case if and only if k ∈ {−2, 1}. Thus B_3(3) = {(1, −2), (1, 1)}, and \sum_k \sigma_{\mathrm{odd}}(3 - P_3(k)) ≡ a_3(3) + b_3(3) ≡ 0 ≢ 1 (mod 2).
Finally, suppose m = 4, and note that P_4(k) = k². Since 4 = P_4(2), for (2) to hold we must have \sum_k \sigma_{\mathrm{odd}}(4 - P_4(k)) ≡ 4 ≡ 0 (mod 2). If (ℓ, k) ∈ A_4(4), then either ℓ = 1 and P_4(k) = 3, which is impossible, or ℓ = 2 and P_4(k) = 0, which is the case if and only if k = 0. Thus, A_4(4) = {(2, 0)}. On the other hand, if (ℓ, k) ∈ B_4(4), then ℓ = 1 and P_4(k) = 2, which is impossible. Thus B_4(4) is empty, and \sum_k \sigma_{\mathrm{odd}}(4 - P_4(k)) ≡ a_4(4) + b_4(4) ≡ 1 ≢ 0 (mod 2).
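The explicit computations used in the proof are easy to reproduce; a minimal sketch, with P_m(k) as in (1) and σ_odd as defined above (the final line also spot-checks congruence (2) for m = 5 over small n):

```python
# Verify the convolution sums used in the proof of Merca's conjectures.
def sigma_odd(n):
    """Sum of the odd divisors of n (0 for n <= 0)."""
    return sum(d for d in range(1, n + 1, 2) if n % d == 0) if n > 0 else 0

def P(m, k):
    """Generalized m-gonal number, Eq. (1), computed with exact integers."""
    return ((m - 2) * k * k - (m - 4) * k) // 2

def S(n, m):
    """sum_k sigma_odd(n - P_m(k)); |k| <= n + 2 already covers all nonzero terms."""
    return sum(sigma_odd(n - P(m, k)) for k in range(-n - 2, n + 3))

def S_signed(n):
    """sum_k (-1)^{P_3(-k)} sigma_odd(n - P_5(k))."""
    return sum((-1) ** P(3, -k) * sigma_odd(n - P(5, k)) for k in range(-n - 2, n + 3))

print(S(3, 5), S_signed(3))        # -> 6 and 4, the values used in the proof

# Congruence (2) for m = 5: S(n, 5) = n (mod 2) exactly when n is generalized pentagonal.
pent = {P(5, k) for k in range(-20, 21)}
print(all((S(n, 5) - (n if n in pent else 0)) % 2 == 0 for n in range(1, 60)))
```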
AcknowledgementsWe would like to thank Ken Ono for suggesting this project, and for several helpful conversations. We thank William Craig and Badri Pandey, as well as the referee, for their comments on the exposition in this note. Finally, we are grateful for the generous support of the National Science Foundation (DMS 2002265 and DMS 205118), the National Security Agency (H98230-21-1-0059), the Thomas Jefferson Fund at the University of Virginia, and the Templeton World Charity Foundation.
Cristina M. Ballantine and Mircea Merca, Parity of Sums of Partition Numbers and Squares in Arithmetic Progressions, Ramanujan J. 44 (2017), 617-630.
Letong Hong and Shengtong Zhang, Proof of the Ballentine-Merca Conjecture and Theta Function Identities Modulo 2, https://arxiv.org/abs/2101.09846, 2021.
Mircea Merca, Congruence Identities Involving Sums of Odd Divisors Function, Proc. Rom. Acad. Ser. A 22, no. 2 (2021), 119-125.
|
[
"Design of a Formation of Solar Pumped Lasers for Asteroid Deflection",
"Design of a Formation of Solar Pumped Lasers for Asteroid Deflection"
]
| [
"Massimiliano Vasile ",
"Christie Alisa Maddock ",
"\nDept. of Mechanical & Aerospace Engineering\nDept. of Mechanical & Aerospace Engineering\nUniversity of Strathclyde\n75 Montrose StreetG1 1XJGlasgowUK\n",
"\nUniversity of Strathclyde\n75 Montrose StreetG1 1XJGlasgowUK\n"
]
| [
"Dept. of Mechanical & Aerospace Engineering\nDept. of Mechanical & Aerospace Engineering\nUniversity of Strathclyde\n75 Montrose StreetG1 1XJGlasgowUK",
"University of Strathclyde\n75 Montrose StreetG1 1XJGlasgowUK"
]
| []
| This paper presents the design of a multi-spacecraft system for the deflection of asteroids. Each spacecraft is equipped with a fibre laser and a solar concentrator. The laser induces the sublimation of a portion of the surface of the asteroid, and the resultant jet of gas and debris thrusts the asteroid off its natural course. The main idea is to have a formation of spacecraft flying in the proximity of the asteroid with all the spacecraft beaming to the same location to achieve the required deflection thrust. The paper presents the design of the formation orbits and the multi-objective optimisation of the formation in order to minimise the total mass in space and maximise the deflection of the asteroid. The paper demonstrates how significant deflections can be obtained with relatively small sized, easy-to-control spacecraft. | 10.1016/j.asr.2012.06.001 | [
"https://arxiv.org/pdf/1206.1336v2.pdf"
]
| 9,455,449 | 1206.1336 | efea984c9eb07b53bf82d49822c3397e4abbca73 |
Design of a Formation of Solar Pumped Lasers for Asteroid Deflection
29 Jun 2012
Massimiliano Vasile
Christie Alisa Maddock
Dept. of Mechanical & Aerospace Engineering
Dept. of Mechanical & Aerospace Engineering
University of Strathclyde
75 Montrose StreetG1 1XJGlasgowUK
University of Strathclyde
75 Montrose StreetG1 1XJGlasgowUK
Design of a Formation of Solar Pumped Lasers for Asteroid Deflection
141510529 Jun 2012arXiv:1206.1336v2 [math.OC]Asteroid deflectionLaser ablationNEO
This paper presents the design of a multi-spacecraft system for the deflection of asteroids. Each spacecraft is equipped with a fibre laser and a solar concentrator. The laser induces the sublimation of a portion of the surface of the asteroid, and the resultant jet of gas and debris thrusts the asteroid off its natural course. The main idea is to have a formation of spacecraft flying in the proximity of the asteroid with all the spacecraft beaming to the same location to achieve the required deflection thrust. The paper presents the design of the formation orbits and the multi-objective optimisation of the formation in order to minimise the total mass in space and maximise the deflection of the asteroid. The paper demonstrates how significant deflections can be obtained with relatively small sized, easy-to-control spacecraft.
Introduction
Near Earth Objects (NEO), the majority of which are asteroids, are defined as minor celestial objects with a perihelion less than 1.3 AU and an aphelion greater than 0.983 AU. A subclass of these, deemed Potentially Hazardous Asteroids (PHA), are defined as those with a Minimum Orbital Intersection Distance (MOID) from the Earth's orbit less than or equal to 0.05 AU and a diameter larger than 150 m (equivalent to an absolute magnitude of 22.0 or less). As of March 2012, 8758 NEO's have been detected (IAU Minor Planet Centre, 2012); of those, 840 are estimated to have an effective diameter larger than 1 km‡, and 1298 are categorised as potentially hazardous. Impacts from asteroids over 1 km in diameter are expected to release over 10⁵ megatons of energy with global consequences for our planet (Stokes et al., 2003), while those with an average diameter of 100 m are expected to release over 10² megatons of energy, potentially causing significant tsunamis and/or land destruction of a large city (Toon et al., 1997). It is estimated that there are between 30,000-300,000 NEO's with diameters around 100 m, meaning a large number of NEO's are still undetected.
A quantitative comparison of the various options for NEO deflection was conducted by Colombo et al. (2006); Sanchez Cuartielles et al. (2007). Examining the results of the comparison, one of the more interesting methods employed solar sublimation to actively deviate the orbit of the asteroid. The original concept, initially evaluated by Melosh et al. (1994), and later assessed by Kahle et al. (2006), envisioned a single large reflector; this idea was expanded to a formation of spacecraft orbiting in the vicinity of the NEO, each equipped with a smaller concentrator assembly capable of focusing the solar power at a distance around 1 km and greater (Maddock et al., 2007). This concept addressed the proper placement of the concentrators in close proximity to the asteroid while avoiding the plume impingement and provided a redundant and scalable solution. However, the contamination of the optics still posed a significant limitation as demonstrated by Vasile and Maddock (2010). In the same paper, the authors demonstrated that the combined effect of solar pressure and enhanced Yarkovsky effect could lead to miss (or deflection) distances of a few hundred to a thousand kilometres over eight years of deflection time. However, this deflection is orders of magnitude lower than the one achievable with a prolonged sublimation of the surface.
A possible solution is to use a collimating device that would allow for larger operational distances and protection of the optics. This paper presents an asteroid deflection method based on a formation of spacecraft each equipped with solar pumped lasers. The use of lasers has already been proposed by several authors, although always in conjunction with a nuclear power source (Phipps, 1992, 1997; Park and Mazanek, 2005). Extensive studies on the dynamics of the deflection with high power lasers were proposed by Park and Mazanek (2005) envisaging a single spacecraft with a MW laser. This paper proposes a different solution with a formation of smaller spacecraft, each supporting a kW laser system indirectly pumped by the Sun.
The paper starts with a simple sublimation model that is used to compute the deflection force. The orbits of the spacecraft formation are then designed by solving a multi-objective optimisation problem that yields an optimal compromise between distance from the target and impingement with the plume of debris. A Lyapunov controller is proposed to maintain the spacecraft in formation along the desired proximal orbit. A second multi-objective optimisation problem is then solved to compute a different type of controlled formation orbits in which the shape of the orbit is predefined. Finally, the number and size of the spacecraft is optimised to yield the maximum possible deflection.
‡ An asteroid with an effective diameter equal to or greater than 1 km is defined here to be any NEA with an absolute brightness or magnitude H ≤ 17.75, as per Stuart (2003).
Deflection Model
The orbital properties of Near Earth Asteroids (NEA) can be grouped into four general categories based on the semi-major axis a of the orbit, radius of apoapsis r a and/or radius of periapsis r p , described as follows (NASA Near Earth Object program, 2012):
Atens Earth-crossing asteroids with semi-major axes smaller than Earth (named after asteroid 2062 Aten), where a < 1 AU, r a ≥ 0.983 AU.
Apollos Earth-crossing asteroids with semi-major axes larger than Earth (named after asteroid 1862 Apollo), where a ≥ 1 AU, r p ≤ 1.0167 AU.
Amors Earth-approaching asteroids with orbits exterior to Earth's but interior to Mars (named after asteroid 1221 Amor), where a > 1 AU, 1.0167 AU < r p ≤ 1.3 AU.
Atiras Near Earth Asteroids whose orbits are contained entirely within the orbit of the Earth (named after asteroid 163693 Atira), where a < 1 AU, r_a < 0.983 AU.
Apollo is the largest class (approximately 4100 NEA's) followed by Amors (approximately 3400 NEA's). The asteroid Apophis 99942, part of the Apollos class, is taken as a test case with a relatively low aphelion such that enough solar power can be harvested. While circular, or near-circular, orbits offer a more constant level of solar radiation, as suggested by Vasile (2008); Vasile and Maddock (2010) if the mirrors have variable optics, i.e., the focal point can be changed, a constant power density can be achieved for asteroids on elliptical orbits or when the level of solar power available is low. In terms of altering an orbit, thrusting at the perihelion of elliptical orbits maximises the change in semi-major axis (and therefore the miss distance). Even if the level of solar radiation available is not sufficient to induce sublimation at aphelion, a deflection can still be achieved, as will be demonstrated in this paper.
The other benefit of basing the test case on the Apophis asteroid is its popularity in scientific literature due to the initial, relatively high impact level (2.7% chance of impacting the Earth in 2029) it was given when it was first observed in 2004. While further tracking data has reduced the threat level, ruling out the possibility of an impact in 2029 but leaving a non-zero impact probability for the 2036 and 2037 encounters, the asteroid Apophis remains a popular reference example. Note, in fact, that although Apophis is not necessarily a typical case, the interest here is to examine the effectiveness of a fractionated laser ablation system applied to the deflection of an S-type asteroid, belonging to a given size range, on a moderately eccentric orbit (although also an extension to highly eccentric orbits will be demonstrated) of which Apophis is an example. Table 1 gives the orbital and physical data of the asteroid used in this study. The asteroid shape was assumed to be tri-axial ellipsoidal,
a_{\ell} = \sqrt{2}\, d_a , \qquad b_{\ell} = d_a , \qquad c_{\ell} = \frac{d_a}{\sqrt{2}}   (1)
where a ℓ ≥ b ℓ ≥ c ℓ are the three radii along the three orthogonal axes and d a is the estimated average diameter based on the observed magnitude, given in Table 1. S-type asteroids, as used here for the test case, are moderately bright with an albedo from 0.10 to 0.22. By comparison, C-type asteroids are extremely dark with albedos typically in the range of 0.03 to 0.10. According to Delbò et al. (2007), the geometric albedo of Apophis is 0.33 however the value used here of 0.2 was chosen to give a more general test case. The minimum orbital intersection distance (MOID) is the separation distance at the closest point between two orbits, e.g., Apophis and the Earth. The deviation distance is defined here as the difference in position between the original, undeviated orbit k a 0 and the deviated orbit k a dev at t MOID (Colombo et al., 2009b) (see Fig. 1). Figure 2 illustrates the reference frames used here, where O{i, j, k} is the inertial heliocentric reference frame, and A{x, y, z} is the relative, rotating Hill reference frame (radial x, transverse y and out-of-plane z directions), centred on the asteroid.
Non-linear equations were used for determining the asteroid deviation vector ∆r dev = r a dev − r a0 as a function of the ephemeris in the Hill reference frame A, as derived by Maddock and Vasile (2008), where ∆k = k a dev − k a 0 = [∆a, ∆e, ∆i, ∆Ω, ∆ω, ∆M ] t giving the difference in Keplerian parameters between the undeviated and deviated orbits.
The change in the orbital parameters is calculated by numerically integrating the Gauss planetary equations (see e.g., Battin, 1999) using a thrust vector u dev = [u t u n u h ] t in the tangential, normal and out-of-plane (or direction of angular momentum h) reference frame, induced by the deflection method:
\Delta \mathbf{k} = \int_{t_0}^{t_{\mathrm{MOID}}} \frac{d\mathbf{k}(\mathbf{u}_{\mathrm{dev}})}{dt}\, dt   (2)
Within this study, the deflection action is assumed to be aligned with the heliocentric velocity of the asteroid, therefore u n = 0 and u h = 0. Other authors have studied the optimal direction of the deflection action in the case of laser ablation (Yoo et al., 2009), however, the main interest of this paper is in the system sizing in relation to the achievable deviation. Colombo et al. (2009b) determined that the change in angular location, in this case given by the mean anomaly M , calculated at the MOID is,
\Delta M = \int_{t_0}^{t_i} \frac{dM}{dt}\, dt + n_{a_0}(t_0 - t_{\mathrm{MOID}}) + n_{a_i}(t_{\mathrm{MOID}} - t_i)   (3)
where n A0 is the mean motion of the undeflected asteroid, n Ai is the mean motion of the asteroid at the end of the deflection action, t 0 is the beginning of the deflection action and t i is the end of the deflection action. The non-linear proximal motion equations in Vasile and Maddock (2010) together with Eq. (3) and the Gauss planetary equations give the variation of the orbit of the asteroid at the time of the MOID. Vasile and Colombo (2008) showed that an estimation of the minimum orbit interception distance can be computed by projecting the variation of the orbit at the expected impact time onto the b-plane of the Earth at the time of the MOID, i.e., computing the variation of the impact parameter b. Hence, in the test section the variation of the impact parameter will be used as a measure of the achievable deflection.
The thrust produced by the deflection method is computed assuming that the lasers are not pulsed but continuous wave and that the energy density is sufficient only to turn the matter into gas (vapour regime) but not to produce plasma (Phipps, 2010). The level of momentum coupling that can be achieved with this model is lower than what can be found in other studies (see e.g., Phipps, 2010). A further assumption is that the asteroid is absorbing part of the incoming energy without changing its temperature thus providing a constant sink for heat transmission; this might not be the case for small asteroids.
Under these assumptions, the rate of the expelled surface matter is defined as (Sanchez Cuartielles et al., 2009),
\frac{dm_{\mathrm{exp}}}{dt} = 2 n_{sc}\, v_{\mathrm{rot}} \int_{y_0}^{y_{\mathrm{max}}} \int_{t_{\mathrm{in}}}^{t_{\mathrm{out}}} \frac{1}{H}\left(P_{\mathrm{in}} - Q_{\mathrm{rad}} - Q_{\mathrm{cond}}\right) dt\, dy   (4)
where [t in , t out ] is the duration for which a point is illuminated, [y 0 , y max ] are the vertical limits of the illuminated surface area (i.e. orthogonal to the direction of rotation of the asteroid), H is the enthalpy of sublimation, v rot is the linear velocity of a point as it travels horizontally (i.e., orthogonal to y) through the illuminated spot area and n sc is the number of spacecraft in the formation. The input power per unit area due to the solar concentrators is given by,
P_{\mathrm{in}} = \eta_{\mathrm{sys}}\, C_r\, (1 - \varsigma_a)\, S_0 \left(\frac{r_{\mathrm{AU}}}{r_a}\right)^2   (5)
where ς a = 0.2 is the albedo, S 0 = 1367 W/m 2 is the solar flux at 1 AU, scaled to the Sun-asteroid distance r a , η sys is the system efficiency, and C r is the concentration ratio (the ratio between the power density from the Sun on the mirror surface, and that of the illuminated spot area on the asteroid). The heat loss due to black-body radiation and the conduction loss are defined, respectively, as,
$$Q_{rad} = \sigma \epsilon_{bb} T^4 \qquad (6)$$
$$Q_{cond} = (T_{subl} - T_0)\sqrt{\frac{c_a k_a \rho_a}{\pi t}} \qquad (7)$$
where σ is the Stefan-Boltzmann constant, ǫ bb is the black body emissivity, T is the temperature and c a , ρ a and k a are, respectively, the heat capacity, density and thermal conductivity of the asteroid. For the asteroid Apophis, c a = 750 J/kg·K based on the average value for silicate materials, k a = 2 W/K/m and ρ a = 2600 kg/m 3 (Remo, 1994). The sublimation temperature assumed is that for forsterites (Wang et al., 1999), T subl = 1800 K, with T 0 set to 278 K. The induced acceleration due to the sublimation process can then be determined by (Sanchez Cuartielles et al., 2009),
$$u_{sub} = \Lambda\,\frac{\bar{v}\,\dot{m}_{exp}}{m_a}\,\hat{v}_a \qquad (8)$$
where $m_a$ is the mass of the asteroid at a generic instant of time, $\hat{v}_a$ is the direction of the velocity vector of the NEO, $\Lambda \simeq 2/\pi$ is the scattering factor, and $\bar{v}$ is the average velocity of the debris particles according to Maxwell's distribution of an ideal gas:
$$\bar{v} = \sqrt{\frac{8 k_b T_{subl}}{\pi M_{Mg_2SiO_4}}} \qquad (9)$$
where $k_b$ is the Boltzmann constant, and $M_{Mg_2SiO_4}$ is the molecular mass of forsterite.
The scattering factor $\Lambda$ is computed as the average of all possible thrust directions assuming that the thrust can point randomly at any angle $\alpha_t$ between 0 and $\pi$, therefore $\Lambda = \frac{1}{\pi}\int_0^\pi \cos\alpha_t\, d\alpha_t$ (Sanchez Cuartielles et al., 2009). Some preliminary experiments (Gibbings et al., 2011) demonstrate that the plume is progressively focusing inwards for rocky types of asteroids, while for highly porous asteroids the plume tends to remain unfocused; hence assuming a uniform distribution of the thrust pointing direction over an angle of 180° is a conservative choice. The remaining mass of the asteroid $m_a$ is calculated by numerically integrating Eq. (4).
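The following sketch evaluates the main terms of the sublimation model (Eqs. 5-9) with the material constants quoted above. The system efficiency, concentration ratio and mass-flow value are illustrative assumptions, not results of the paper.

```python
import numpy as np

SIGMA = 5.670374419e-8       # Stefan-Boltzmann constant, W/m^2/K^4
K_B   = 1.380649e-23         # Boltzmann constant, J/K
N_A   = 6.02214076e23        # Avogadro number, 1/mol

# Asteroid / material data used in the paper (Apophis-like body, forsterite surface).
S0, ALBEDO      = 1367.0, 0.2            # solar flux at 1 AU, albedo
T_SUBL, T0      = 1800.0, 278.0          # sublimation / initial temperature, K
C_A, K_A, RHO_A = 750.0, 2.0, 2600.0     # heat capacity, conductivity, density
M_FORSTERITE    = 140.69e-3 / N_A        # kg per Mg2SiO4 molecule
M_AST           = 2.7e10                 # asteroid mass, kg

# Illustrative system parameters (assumed for this example only).
ETA_SYS, C_R, R_A_AU = 0.227, 2500.0, 1.0

def power_balance(t_illum, eps_bb=1.0):
    """Input flux and loss terms (Eqs. 5-7) at time t_illum after illumination starts."""
    p_in = ETA_SYS * C_R * (1.0 - ALBEDO) * S0 / R_A_AU**2
    q_rad = SIGMA * eps_bb * T_SUBL**4
    q_cond = (T_SUBL - T0) * np.sqrt(C_A * K_A * RHO_A / (np.pi * t_illum))
    return p_in, q_rad, q_cond

def sublimation_acceleration(mdot_exp):
    """Induced acceleration magnitude (Eqs. 8-9) for a given expelled mass flow."""
    v_bar = np.sqrt(8.0 * K_B * T_SUBL / (np.pi * M_FORSTERITE))
    scattering = 2.0 / np.pi
    return scattering * v_bar * mdot_exp / M_AST

p_in, q_rad, q_cond = power_balance(t_illum=60.0)
print(f"P_in = {p_in:.3e} W/m^2, Q_rad = {q_rad:.3e}, Q_cond = {q_cond:.3e}")
print(f"u_sub = {sublimation_acceleration(0.05):.3e} m/s^2 for mdot = 0.05 kg/s")
```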
Contamination Model
The contamination of the mirror surfaces due to the debris plume is modeled based on the work by Kahle et al. (2006). Their study made a number of initial assumptions regarding the expansion of the plume and the sublimation process. The first assumption holds that the sublimation process is comparable to the generation of tails in comets. The asteroid is assumed to contain a reservoir of material underneath the surface, with the gas expanding outwards through a throat into vacuum. Preliminary experimental results have shown that this assumption, as with others in this section, is potentially overly pessimistic and may not be valid for every type of asteroid. However, altering these assumptions does not change the fundamental results in this paper, therefore it was decided to remain consistent with the existing literature and defer any further analysis on the validity of these assumptions to future work.
The second assumption is that the plume expansion is similar to the expansion of gas of a rocket engine outside the nozzle. The density of the expelled gas ρ exp is computed analytically,
$$\rho_{exp}(\delta r_{s/sc}, \varphi) = j_c \frac{\dot{m}_{exp}}{\bar{v} A_{spot}} \left(\frac{d_{spot}}{2\delta r_{s/sc} + d_{spot}}\right)^2 (\cos\Theta)^{2/(\kappa - 1)} \qquad (10)$$
where $d_{spot}$ is the diameter of the spot area, $\delta r_{s/sc}$ is the distance between the spot on the surface of the asteroid and the spacecraft, and $\Theta = \pi\varphi/2\varphi_{max}$, where $\varphi$ is the angle between the spot-spacecraft vector and the y-axis of the Hill reference frame. The jet constant $j_c$ was set to 0.345, the maximum expansion angle $\varphi_{max} = 130.45°$, and the adiabatic index $\kappa = 1.4$ based on the values for diatomic particles (Legge and Boettcher, 1982).
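A possible implementation of the plume density model of Eq. (10) is sketched below; the mass flow, exhaust velocity and spot diameter used in the example are assumed values for illustration only.

```python
import numpy as np

J_C, PHI_MAX_DEG, KAPPA = 0.345, 130.45, 1.4   # jet constant, max expansion angle, adiabatic index

def plume_density(delta_r, phi, mdot_exp, v_bar, d_spot):
    """Gas density (Eq. 10) at distance delta_r from the spot, off-axis angle phi [rad]."""
    phi_max = np.radians(PHI_MAX_DEG)
    theta = np.pi * phi / (2.0 * phi_max)
    a_spot = np.pi * (0.5 * d_spot) ** 2
    rho_spot = mdot_exp / (v_bar * a_spot)                 # density at the spot
    geometric = (d_spot / (2.0 * delta_r + d_spot)) ** 2   # inverse-square decay with distance
    return J_C * rho_spot * geometric * np.cos(theta) ** (2.0 / (KAPPA - 1.0))

# Example: 0.05 kg/s flow, 520 m/s exhaust, 10 cm spot, spacecraft 1 km away, 30 deg off axis.
print(plume_density(1000.0, np.radians(30.0), 0.05, 520.0, 0.1))
```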
Note that this density model is in contradiction with the assumption of a uniform scattering over a hemisphere and, in fact, suggests a much more focused plume. From ongoing experiments (Gibbings et al., 2011), the plume appears to more closely match the density distribution given in Eq. (10) rather than a uniform distribution; nevertheless, in the analysis in this paper the most conservative choice was selected for the scattering factor in order to account for possible unmodeled performance degradation components.
The position vector δr s/sc from the spot to the spacecraft is defined as:
$$\delta r_{s/sc} = \begin{bmatrix} x - r_\ell \sin(w_a t)\cos(-w_a t - \theta_{va}) + r_\ell \cos(w_a t)\sin(-w_a t - \theta_{va}) \\ y - r_\ell \cos(w_a t)\cos(-w_a t - \theta_{va}) - r_\ell \sin(w_a t)\sin(-w_a t - \theta_{va}) \\ z \end{bmatrix} \qquad (11)$$
where the radius of the ellipse is given by,
$$r_\ell = \frac{a_\ell b_\ell}{\sqrt{\big(b_\ell \cos(-w_a t - \theta_{va})\big)^2 + \big(a_\ell \sin(-w_a t - \theta_{va})\big)^2}} \qquad (12)$$
and, with reference to Fig. 2, the position of the spacecraft with respect to the centre of the asteroid is δr = [x, y, z] t . We assume here that the asteroid is spinning around the z axis with a rotational velocity w a . The direction of the velocity of the asteroid in the heliocentric reference frame projected onto the Hill reference frame A is θ va . In other words, in order to have a deflection thrust aligned with the velocity of the asteroid, the spot is assumed to be at an elevation angle over the y-axis equal to θ va . The third assumption made is that all the particles impacting the surface of the mirror condense and stick to the surface. The exhaust velocity is constant, therefore the thrust depends only on the mass flow. A higher thrust results in a higher mass flow and thus in a faster contamination. This is a rather conservative assumption. The actual contamination level depends on the type of deposited material and the temperature of the optical surfaces. Following the approach used to compute the contamination of surfaces due to out-gassing, a view factor ψ vf was added equal to the angle between the normal to the mirror and the incident flow of gas. The resulting variation of the thickness of the material condensing on the mirror can be computed by,
$$\frac{dh_{cnd}}{dt} = 2\bar{v}\,\frac{\rho_{exp}}{\rho_{layer}}\cos\psi_{vf} \qquad (13)$$
The average debris velocity v is multiplied by a factor of 2 to account for the expansion of the gas in a vacuum. The layer density ρ layer was set to 1 g/cm 3 . The power density on the asteroid surface is decreased based on the contamination of the mirrors. A degradation factor τ is applied to the power beamed to the asteroid surface, based on the Lambert-Beer-Bouguer law (Kahle et al., 2006),
$$\tau = e^{-2\upsilon h_{cnd}} \qquad (14)$$
where υ = 10 4 /cm is the absorption coefficient for forsterite. Note that the values of υ and ρ layer are based on the assumption that the deposited material is dense and absorbs the light over the whole spectrum. This is again a rather conservative assumption; experiments have shown that while it appears to be valid for some silicates such as forsterite, this assumption may not hold true for all materials. As mentioned previously, further experimentation and analysis are underway, and will be the topic of future publications. Eq. (13) is numerically integrated, along with the Gauss equations, for the period of the mission.
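The contamination model of Eqs. (13)-(14) reduces to two short expressions; the plume density at the spacecraft and the exposure time in the example below are assumed purely for illustration.

```python
import numpy as np

RHO_LAYER = 1000.0   # kg/m^3 (1 g/cm^3 deposited layer)
UPSILON   = 1.0e6    # 1/m    (10^4 /cm absorption coefficient for forsterite)

def layer_growth_rate(rho_exp, v_bar, psi_vf):
    """Condensed-layer growth rate on the mirror (Eq. 13), m/s."""
    return 2.0 * v_bar * (rho_exp / RHO_LAYER) * np.cos(psi_vf)

def degradation_factor(h_cnd):
    """Lambert-Beer-Bouguer power degradation (Eq. 14)."""
    return np.exp(-2.0 * UPSILON * h_cnd)

# Example: assumed plume density of 1e-12 kg/m^3 at the mirror, normal incidence, one day of exposure.
dhdt = layer_growth_rate(1.0e-12, 520.0, 0.0)
h = dhdt * 86400.0
print(f"layer after one day: {h:.3e} m, tau = {degradation_factor(h):.3f}")
```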
Tugging Effect
The spacecraft will fly in formation with the asteroid at a distance δr, thus exerting a gravitational pull on it (Gong et al., 2009). The tugging acceleration u tug is given by:
$$u_{tug} = -n_{sc}\frac{G m_{sc}}{\delta r^2}\,\hat{\delta r} \qquad (15)$$
where G is the universal gravity constant and m sc is the mass of a spacecraft. The sum of u tug and u sub forms the total deflection acceleration u dev . The acceleration u dev is used with Gauss planetary equations in order to determine the change in the NEO orbit.
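For completeness, the tugging term of Eq. (15) is trivial to evaluate; the spacecraft mass and hovering distance below are assumed values.

```python
G = 6.674e-11  # m^3 kg^-1 s^-2, universal gravity constant

def tug_acceleration(n_sc, m_sc, delta_r):
    """Magnitude of the gravitational tug on the asteroid (Eq. 15)."""
    return n_sc * G * m_sc / delta_r**2

# Example: 10 spacecraft of 2000 kg each, hovering 1 km from the asteroid barycentre.
print(f"{tug_acceleration(10, 2000.0, 1000.0):.3e} m/s^2")
```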
The Laser System
Lasers work on the general premise of exciting electrons by stimulating them with the addition of photons (or quantum energy), which temporarily boosts them up to a higher energy state. This stimulation continues until a population inversion exists, where there are more electrons at a higher energy state, e.g., $E_1$, than at the lower (or original) state, e.g., $E_0$. The release of photons when the electrons drop back to their original base state produces an emission that, generally, has the same spectral properties as the stimulating radiation, and is therefore highly coherent. The energy that is not released as part of the output emission is instead released as heat. This means that the laser must be continually cooled, which in space means large radiators.
In this paper two general methods of powering the laser are considered and defined as: direct pumping, where the energy is directly used to excite the laser, and indirect pumping, where an intermediate step is used to first convert the energy, e.g., solar radiation, into electricity.
Indirect solar-pumped lasers convert the solar energy first into electricity, which is then used to power the laser. Photovoltaic cells are an obvious choice for space applications. The drawback, of course, is the addition of an electrical power generator meaning added mass, size and power requirements. Direct solar-pumped lasers, by comparison, do precisely what the name suggests: the laser is directly energised using solar radiation. Due to the mismatch between the wide-band emissions of the Sun with the narrow absorption bands of lasers, the loss of available solar power is currently rather high. For example, the overlap between a Nd:YAG (neodymium-doped yttrium aluminium garnet) crystal absorption spectrum and the solar radiation spectrum is around 0.14 (Weksler and Shwartz, 1988).
One option is to use high efficiency solar arrays in conjunction with a solid state laser. Solid state lasers pumped with electric power can currently reach 60% efficiency. If the solar arrays have an efficiency of 30%, then the system would have an overall efficiency of 18%. If a pumped laser is used, then the focal point can be close to the primary mirror and a high concentration factor can be obtained with a relatively small mirror. For example, if the mirror has an area of 314 m² (equivalent to a 20 m diameter circular mirror), then the collected power at 1 AU is 429.5 kW. The solar array plus laser system converts only 18% of this power, therefore only 77.3 kW is beamed to the surface of the asteroid, while the rest needs to be dissipated.
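These figures can be checked with a few lines of arithmetic; the only assumption added here is that the 314 m² area corresponds to a circular aperture of 10 m radius.

```python
import numpy as np

S0 = 1367.0                       # W/m^2, solar flux at 1 AU
area = np.pi * 10.0**2            # 20 m diameter circular mirror -> ~314 m^2
collected = area * S0             # ~429.5 kW collected at 1 AU
eta_sys = 0.30 * 0.60             # 30% solar cells x 60% solid state laser = 18%
beamed = eta_sys * collected      # ~77.3 kW delivered to the asteroid surface
print(f"collected {collected/1e3:.1f} kW, beamed {beamed/1e3:.1f} kW, "
      f"to dissipate {(collected - beamed)/1e3:.1f} kW")
```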
In a paper presented in 1994, Landis discussed the use of a directly solar pumped laser based on semiconductor technology. According to Landis, the expected efficiency of directly pumped semiconductor laser would depend on the same efficiency losses of a solar cell, therefore Landis was expecting a lasing efficiency (output/input power ratio) of 35%. Such an efficiency would be one order of magnitude higher than the best Nd:YAG laser system, which is expected to reach 6% of overall efficiency.
Direct solar pumping would represent an interesting solution in terms of complexity of the overall system. In fact no cooling system for the photovoltaic conversion and no power transmission would be required. On the other hand the Technology Readiness Level (TRL) of both solar cells and semiconductor lasers is far higher than the one of a directly pumped laser and an indirectly pumped laser can be expected to be operational much sooner.
Recent electrically pumped semiconductor lasers have demonstrated over 73% wall-plug efficiency (Crump et al., 2005; Stickley et al., 2005; nLIGHT, 2006; Peters et al., 2007) with a target efficiency of 80%. Research on fibres coupled with clusters of diodes has demonstrated slope efficiencies of up to 83% (Jeong et al., 2003, 2004). A substantial increase in cell efficiency is also to be expected. In particular, in order to achieve a 35% efficiency in direct pumping, semiconductor technology should allow the absorption of the solar spectrum over a wide range of frequencies. A high efficiency of a directly pumped laser is therefore expected to correspond to a high efficiency of solar cells. An increase of solar cell efficiency up to 50% (Luque et al., 2004) is reasonable, allowing an indirect pumping system to have an efficiency comparable to a 35% direct pumping system.
In the following, the assumption is that the overall system efficiency $\eta_{sys}$ is about 22.7%, with a 45% efficiency of the cells, 90% efficient reflectors, an 85% efficiency of the power transmission and regulation line and a 66% efficiency of the laser (given by the product of the target 80% for the laser diode and the achieved 83% slope efficiency of the fibres). A second option with a 60% laser efficiency and 40% cell efficiency is also considered.
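A quick check of the quoted efficiency chain, using only the individual efficiencies listed above:

```python
eta_laser = 0.80 * 0.83                      # diode target x fibre slope efficiency (~0.66)
eta_sys = 0.45 * 0.90 * 0.85 * eta_laser     # cells x reflectors x power line x laser
print(f"eta_laser = {eta_laser:.2f}, eta_sys = {eta_sys:.3f}")   # ~0.66 and ~0.23
```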
Formation Design
One idea for the orbital design is to have the spacecraft flying in formation with the asteroid, orbiting in tandem around the Sun (see Fig. 11). The spacecraft have to maintain their relative position with respect to the asteroid in order to keep the required power density on the same spot on the surface of the asteroid (note that the surface of the asteroid is moving under the spot light of the laser). Therefore, the formation orbits have to be periodic and in close proximity with relatively low excursion in the relative distance from the asteroid. On the other hand the spacecraft should minimise any impingement with the plume of debris and gas coming from the sublimation of the surface material.
In order to design the desired formation orbits, one can start by considering the local Hill reference frame A{x, y, z} in Fig. 2 and the associated linearised version of proximal motion equations (Schaub and Junkins, 2003) used in the calculation of the asteroid deviation vector:
$$x(\nu) = \frac{a_A e_A \sin\nu}{\eta}\,\delta M - a_A\cos\nu\,\delta e \qquad (16a)$$
$$y(\nu) = \frac{r_a}{\eta^3}(1 + e_A\cos\nu)^2\,\delta M + r_a\,\delta\omega + \frac{r_a\sin\nu}{\eta^2}(2 + e_A\cos\nu)\,\delta e + r_a\cos i_A\,\delta\Omega \qquad (16b)$$
$$z(\nu) = r_a(\sin\theta_A\,\delta i - \cos\theta_A\sin i_A\,\delta\Omega) \qquad (16c)$$
where $\eta = \sqrt{1 - e_A^2}$, $\theta_A = \nu + \omega_A$, $\nu$ is the true anomaly, $a_A$, $e_A$, $i_A$, $\omega_A$ are respectively the semi-major axis, eccentricity, inclination and argument of the perihelion of the orbit of the asteroid at a generic moment in time and $\delta k = [\delta a, \delta e, \delta i, \delta\Omega, \delta\omega, \delta M]^t$ are the variations of the orbital elements, with the imposed conditions $\delta r \ll r_a$, and $\delta a = 0$ in order to have periodic motion. These equations are a first approximation of the motion of the spacecraft and do not take into account the gravity field of the asteroid or solar pressure.
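A direct transcription of Eq. (16) is sketched below; the element variations in the example are arbitrary values of the order of those in Table 2, not an optimised solution.

```python
import numpy as np

def proximal_motion(nu, a_A, e_A, i_A, om_A, dk):
    """Linearised relative position (Eq. 16) in the asteroid Hill frame A.

    dk = [da, de, di, dOm, dom, dM], with da = 0 for periodic motion."""
    da, de, di, dOm, dom, dM = dk
    eta = np.sqrt(1.0 - e_A**2)
    r_a = a_A * eta**2 / (1.0 + e_A * np.cos(nu))     # asteroid orbital radius at nu
    theta = nu + om_A
    x = a_A * e_A * np.sin(nu) / eta * dM - a_A * np.cos(nu) * de
    y = (r_a / eta**3) * (1.0 + e_A * np.cos(nu))**2 * dM + r_a * dom \
        + r_a * np.sin(nu) / eta**2 * (2.0 + e_A * np.cos(nu)) * de \
        + r_a * np.cos(i_A) * dOm
    z = r_a * (np.sin(theta) * di - np.cos(theta) * np.sin(i_A) * dOm)
    return np.array([x, y, z])

# Example: Apophis-like orbit, small element variations of the order of Table 2 (units: m, rad).
AU = 1.495978707e11
dk = [0.0, -5e-9, 5e-9, 5e-8, -7e-8, 2e-8]
for nu in np.linspace(0.0, 2.0 * np.pi, 5):
    print(nu, proximal_motion(nu, 0.9224 * AU, 0.1912, np.radians(3.3312), np.radians(126.4), dk))
```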
If the optimal thrust direction that maximises the deviation is along the unperturbed velocity vector of the asteroid (Colombo et al., 2009b), then the exhaust gases will flow along the direction of the velocity of the asteroid projected in the Hill reference frame. Therefore, the position vector in the radial, transversal and out-of-plane reference frame was projected onto the tangential, normal, out-of-plane reference frame to give δr tnh = [x tnh , y tnh , z tnh ] t . Then, the size of the formation orbits projected in the x tnh -z tnh plane was maximised. All the requirements on the formation orbits can be formulated in mathematical terms as a multi-objective optimisation problem with two objective functions,
$$\min_{\delta k \in D}\max_\nu\, J_1 = \|\delta r\| \qquad (17a)$$
$$\min_{\delta k \in D}\max_\nu\, J_2 = -\arctan\frac{\sqrt{x_{tnh}^2 + z_{tnh}^2}}{y_{tnh}} \qquad (17b)$$
subject to the inequality constraint,
$$C_{ineq} = \min_\nu |y(\nu)| - y_{lim} > 0 \qquad (18)$$
where y lim is a minimum distance along the y-axis, and D is the search space for the solution vector δk. Table 2 defines the boundaries imposed on D.
The boundary values were obtained by progressively increasing each of the boundaries from 0 to the value in the table, looking at the value of the maximum distance from the asteroid. Larger boundaries produce solutions with a better (lower) performance index J 2 but a higher performance index J 1 .
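A hedged sketch of how the two objectives and the constraint of Eqs. (17)-(18) can be evaluated on a grid of true anomaly is given below. It reuses proximal_motion() from the previous sketch, and takes the plume axis as the asteroid velocity direction expressed in the Hill frame, which is a simplification of the full tangential-normal-out-of-plane projection used in the paper.

```python
import numpy as np

def design_objectives(dk, orbit, y_lim=1000.0, n_grid=720):
    """Evaluate J1 (max distance), J2 (worst off-plume angle) and the y-limit constraint."""
    a_A, e_A, i_A, om_A = orbit
    j1, j2, min_abs_y = -np.inf, -np.inf, np.inf
    for nu in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        dr = proximal_motion(nu, a_A, e_A, i_A, om_A, dk)
        gamma = np.arctan2(e_A * np.sin(nu), 1.0 + e_A * np.cos(nu))  # flight path angle
        v_hat = np.array([np.sin(gamma), np.cos(gamma), 0.0])          # velocity dir. in Hill frame
        cos_angle = np.clip(np.dot(dr, v_hat) / np.linalg.norm(dr), -1.0, 1.0)
        j1 = max(j1, np.linalg.norm(dr))
        j2 = max(j2, -np.arccos(cos_angle))
        min_abs_y = min(min_abs_y, abs(dr[1]))
    return j1, j2, min_abs_y - y_lim   # constraint value must be positive

orbit = (0.9224 * 1.495978707e11, 0.1912, np.radians(3.3312), np.radians(126.4))
print(design_objectives([0.0, -5e-9, 5e-9, 5e-8, -7e-8, 2e-8], orbit))
```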
Equations (17)- (18) were optimised using a memetic multi-objective optimiser MACS (Multiagent Collaborative Search) (Vasile, 2005;Maddock and Vasile, 2008;Vasile, 2008). The optimisation led to the identification of two families of formation orbits belonging to two subsets of the search space D. Figures 3, 4 and 5 show the two families in the parameter space for y lim = 1000 m. The solutions are almost perfectly symmetrically distributed about the 0-value of δi, δΩ, while there is a bias towards the negative axis for δω. Each family has been identified with the label −z or +z depending on whether the sign of the z coordinate is negative or positive at y = y lim . Figure 6, instead, shows the Pareto fronts for y lim = 500 m and y lim = 1000 m respectively. Note that in Fig. 6, the Pareto fronts for the branches in Figs. 3, 4 and 5 appear superimposed and cannot be distinguished. Therefore, the two families can be considered equally locally Pareto optimal. Figure 7 shows the formation orbits in the A Hill frame. It can be noted that the two families are symmetric with respect to the x-y plane. In the remainder of the paper these orbits will be called natural formation orbits.
Formation Dynamics and Control
In order to maintain the orbits designed in the previous section, the spacecraft need to be controlled. In the proximity of the asteroid, in a Hill rotating reference frame, the spacecraft are subject to the force due to solar pressure, the gravity of the asteroid, the gravity of the Sun, the centrifugal and Coriolis forces plus the forces induced by the impingement with the plume. An active control is therefore required to maintain the spacecraft flying in formation with the asteroid.
Following the Jacobi ellipsoid model, the minor axis $c_\ell$ of the asteroid is aligned with the vector of angular momentum, which corresponds to the z-axis of the asteroid Hill frame A (see Fig. 2). The gravity field of the asteroid is expressed as the sum of a spherical field plus a second-degree and second-order field (Hu and Scheeres, 2002; Rossi et al., 1999),
$$U_{20+22} = \frac{\mu_a}{\delta r^3}\left[C_{20}\left(1 - \frac{3}{2}\cos^2\gamma\right) + 3 C_{22}\cos^2\gamma\cos 2\lambda\right] \qquad (19)$$
where γ is the elevation over the x − y plane and the harmonic coefficients C 20 and C 22 are a function of the semi-axes,
$$C_{20} = -\frac{1}{10}\left(2c_\ell^2 - a_\ell^2 - b_\ell^2\right) \qquad (20a)$$
$$C_{22} = \frac{1}{20}\left(a_\ell^2 - b_\ell^2\right) \qquad (20b)$$
and λ is defined as,
$$\lambda = \arctan\frac{y}{x} + w_a t$$
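The harmonic coefficients and the potential correction of Eqs. (19)-(20) can be evaluated directly from the semi-axes in Table 1, as in the sketch below; the sample point is arbitrary.

```python
import numpy as np

MU_A = 1.801599e-9 * 1.0e9     # m^3/s^2 (1.801599e-9 km^3/s^2 from Table 1)
A_L, B_L, C_L = 191.0, 135.0, 95.0   # ellipsoid semi-axes, m (Table 1)
W_A = np.radians(3.3e-3)             # rotational rate, rad/s

C20 = -(2.0 * C_L**2 - A_L**2 - B_L**2) / 10.0
C22 = (A_L**2 - B_L**2) / 20.0

def u_20_22(x, y, z, t):
    """Second-degree/order gravity potential correction (Eq. 19) at Hill-frame point (x, y, z)."""
    dr = np.sqrt(x**2 + y**2 + z**2)
    gamma = np.arcsin(z / dr)                 # elevation over the x-y plane
    lam = np.arctan2(y, x) + W_A * t          # longitude including the asteroid spin
    return MU_A / dr**3 * (C20 * (1.0 - 1.5 * np.cos(gamma)**2)
                           + 3.0 * C22 * np.cos(gamma)**2 * np.cos(2.0 * lam))

print(C20, C22, u_20_22(1000.0, 500.0, 200.0, 0.0))
```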
Note that a different rotational state or shape would alter the time-varying gravity field that the spacecraft would experience. In a real scenario, the rotational state coupled with the shape of the asteroid would require an adaptive focusing of the laser beam as the distance of the spot from the source will change with time. Also the divergence of the plume will change as the laser carves a groove into the asteroid. However, within the assumptions in this paper a different rotational state and/or shape would not alter the main results. If one considers a Hill reference frame A centred in the barycentre of the asteroid (see Fig. 2), the motion of the spacecraft in the proximity of the asteroid is given by:
$$\ddot{x} = -\ddot{r}_a + 2\dot{\nu}\dot{y} + \dot{\nu}^2(r_a + x) + \ddot{\nu}\,y - \frac{\mu_{sun}(r_a + x)}{r_{sc}^3} - \frac{\mu_a}{\delta r^3}x + \frac{F_{sx}(x,y,z)}{m_{sc}} + \frac{\partial U_{20+22}}{\partial x} \qquad (21a)$$
$$\ddot{y} = -2\dot{\nu}\dot{x} - \ddot{\nu}(r_a + x) + \dot{\nu}^2 y - \frac{\mu_{sun}}{r_{sc}^3}y - \frac{\mu_a}{\delta r^3}y + \frac{F_{sy}(x,y,z)}{m_{sc}} + \frac{\partial U_{20+22}}{\partial y} \qquad (21b)$$
$$\ddot{z} = -\frac{\mu_{sun}}{r_{sc}^3}z - \frac{\mu_a}{\delta r^3}z + \frac{F_{sz}(x,y,z)}{m_{sc}} + \frac{\partial U_{20+22}}{\partial z} \qquad (21c)$$
with,
$$\ddot{\nu} = \frac{u_{dev_y} - 2\dot{r}_a r_a\dot{\nu}}{r_a^2} \qquad (22)$$
$$\ddot{r}_a = \dot{\nu}^2 r_a - \frac{\mu_{sun}}{r_a^2} + u_{dev_x} \qquad (23)$$
The force term F s = [F sx F sy F sz ] t is made of two contributions: light pressure from the emitted light from the laser F srp and the force due to the flow of gas and debris coming from the asteroid F plume . The force due to solar radiation F srp is defined as,
$$F_{srp} = 2\eta_{sys}\frac{A_{m_1} S_0}{c}\left(\frac{r_{au}}{r_{sc}}\right)^2\cos^2\beta\;\hat{n}_{steer} + (1 - \eta_m^2)\frac{A_{m_1} S_0}{c}\left(\frac{r_{au}}{r_{sc}}\right)^2\hat{x} \qquad (24)$$
where $c$ is the speed of light and $A_{m_1}$ is the cross-section area of the primary mirror (see Fig. 11). The angle $\beta$ is the half angle between the normal to the steering mirror $\hat{n}_{steer}$ and the Sun-mirror vector (which is approximated by setting it equal to the Sun-asteroid vector). The second term in Eq. (24) takes into account a non-perfect reflection of the primary and secondary mirror. The reflectivity of the two mirrors is here assumed to be $\eta_m = 0.90$. The assumption is that the energy dissipated by the radiators is emitted uniformly in every direction and does not contribute to any change in the linear momentum of the spacecraft. If the flow rate per unit area at distance $\delta r_{spot}$ is $2\rho_{exp}(\delta r_{spot}, \varphi)\bar{v}$ and all the particles stick to the surface of the mirror, then the force $F_{plume}$ is:
$$F_{plume} = 4\rho_{exp}(\delta r_{spot}, \varphi)\,\bar{v}^2 A_{eq}\cos\psi_{vf}\;\hat{\delta r}_{s/sc} \qquad (25)$$
The flow rate depends on the power density and therefore on the distance from the Sun. The part of the spacecraft exposed to the plume and to the reflected light changes along the orbit and is irregular. In order to simplify the calculations, the assumption adopted in this paper is that the total effect is equivalent to a flat surface with area $A_{eq} = A_{m_1}$ and normal unit vector $\hat{n}_{eq}$ such that $\hat{n}_{eq}\cdot\hat{\delta r}_{s/sc} = \cos\psi_{vf}$.
Given these equations, the resultant of all the forces acting on the spacecraft is not zero and in particular the difference between gravity and F s is a function of time. Therefore, an active control is required to maintain the position of the spacecraft with respect to the asteroid.
If one assumes that solar pressure, the gravity of the asteroid, and the force due to the plume impingement are the main sources of perturbation of the proximity motion of the spacecraft and that any non-spherical terms in the gravity field expansion result in only a small (second order) additional perturbation, then one can build a simple control law based on the Lyapunov control function:
$$V = \frac{1}{2}\|\delta v\|^2 + \frac{1}{2}K\left[(x - x_{ref})^2 + (y - y_{ref})^2 + (z - z_{ref})^2\right] \qquad (26)$$
where $\delta r_{ref} = [x_{ref}, y_{ref}, z_{ref}]^t$ are the coordinates of a point along the nominal formation orbit (in the Hill frame A). The assumption here is that the motion along the reference formation orbit is much slower than the control action, which is valid as the period of the spacecraft orbit is equal to the period of the asteroid (just under 1 year). Therefore, the spacecraft targets a set of static points along the formation orbit. Now if there exists a control u such that dV/dt < 0 then one can maintain the mirror in the proximity of the reference point as the reference point moves along the reference formation orbit. A possible control is given by:
$$u = -\left[-\frac{\mu_a}{\delta r^3}\delta r + \frac{F_{srp}}{m_{sc}} + \frac{F_{plume}}{m_{sc}}\right] - K(\delta r - \delta r_{ref}) - c_d\,\delta v \qquad (27)$$
The total derivative of the function V is:
$$\frac{dV}{dt} = \delta v^T\delta\dot{v} + K(\delta r - \delta r_{ref})^T\delta v \qquad (28a)$$
$$= \delta v^T\left[-\frac{\mu_a}{\delta r^3}\delta r + \frac{F_{srp}}{m_{sc}} + \frac{F_{plume}}{m_{sc}} - \left(-\frac{\mu_a}{\delta r^3}\delta r + \frac{F_{srp}}{m_{sc}} + \frac{F_{plume}}{m_{sc}}\right) - K(\delta r - \delta r_{ref}) - c_d\,\delta v\right] + K(\delta r - \delta r_{ref})^T\delta v \qquad (28b)$$
$$= -c_d\,\delta v^T\delta v < 0 \qquad (28c)$$
where δv = [ẋ,ẏ,ż] t is the relative velocity of the spacecraft in the asteroid Hill reference frame A. The control in Eq. (27) can now be introduced into the full dynamic model in Eq. (21) to test the validity of the assumption that the light coming from the asteroid and aspherical gravity field are indeed small. The elastic coefficient K for both cases was chosen to be 10 −6 while the dissipative coefficient c d was set to 10 −5 . Figure 8 shows the maximum thrust level as a function of the maximum distance from the asteroid for a 20 m diameter mirror. Figure 9 shows the propellant consumption as a function of the maximum distance from the asteroid for a 20 m diameter mirror.
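A minimal sketch of the station-keeping law of Eq. (27) is shown below; the asteroid gravitational parameter is taken from Table 1, while the forces, spacecraft mass and position offsets in the example are assumed for illustration only.

```python
import numpy as np

MU_A = 1.801599          # m^3/s^2 (1.801599e-9 km^3/s^2 from Table 1)
K, C_D = 1.0e-6, 1.0e-5  # elastic and dissipative gains used in the paper

def control_acceleration(dr, dv, dr_ref, f_srp, f_plume, m_sc):
    """Lyapunov-based station-keeping acceleration (Eq. 27) in the Hill frame A."""
    dr = np.asarray(dr, dtype=float)
    perturb = -MU_A * dr / np.linalg.norm(dr)**3 + (np.asarray(f_srp) + np.asarray(f_plume)) / m_sc
    return -perturb - K * (dr - np.asarray(dr_ref)) - C_D * np.asarray(dv)

# Example: spacecraft 10 m off its reference point, small residual velocity,
# illustrative solar-pressure and plume forces acting on a 2000 kg spacecraft.
u = control_acceleration(dr=[0.0, -1000.0, 200.0], dv=[0.001, 0.0, 0.0],
                         dr_ref=[0.0, -1010.0, 200.0],
                         f_srp=[2.0e-3, 0.0, 0.0], f_plume=[1.0e-4, -1.0e-4, 0.0],
                         m_sc=2000.0)
print(u, np.linalg.norm(u))
```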
Shaped Formation
Although the natural formation orbits are designed to minimise the impingement with the plume of gas and debris, none of them can avoid the plume completely. In order to maximise the amount of solar power collected, the mirrors should be constantly pointing directly towards the Sun, hence in a direction perpendicular to the y-axis. By following one of the natural formation orbits, the spacecraft will rise above the z-y plane (i.e., in the +x direction) once per revolution around the Sun, thus directly exposing the reflector to the plume. According to the contamination model, every surface directly exposed to the plume builds up a layer of contaminants. This is quite a strong assumption as all the impinging material is assumed to condense and only the surfaces in view of the plume are contaminated. We hold on to these assumptions in this paper, although some experimental work is underway to build a more realistic model (Gibbings et al., 2011). If one sticks to the assumptions of the contamination model, then one solution to mitigate the contamination would be to fly always below the plume of gas (i.e., −x direction, below the z-y plane). In order to make the spacecraft follow the desired proximal motion, the following shape is assigned to the formation orbit:
$$x(\nu) = x_1\cos\nu + x_2\sin\nu + x_3 \qquad (29a)$$
$$y(\nu) = y_1\cos\nu + y_2\sin\nu + y_3 \qquad (29b)$$
$$z(\nu) = z_1\cos\nu + z_2\sin\nu \qquad (29c)$$
By differentiating with respect to time and inserting Eq. (29) and its first and second derivatives into the dynamic equations in Eq. (21), one can compute the control profile and the corresponding propellant consumption. The interest now is to design formation orbits that minimise the propellant consumption required by the control system to remain below the z-y plane and operate as close as possible to the asteroid to minimise pointing requirements. The problem can be formulated as follows:
$$\min_{s\in X} J_1 = MF_c \qquad (30a)$$
$$\min_{s\in X}\max_\nu J_2 = \|\delta r\| \qquad (30b)$$
$$\min_{s\in X}\max_\nu J_3 = \|u\| \qquad (30c)$$
subject to the inequality constraints:
$$C_1 = \max_\nu x(\nu) < 0 \qquad (31a)$$
$$C_2 = \max_\nu y(\nu) < 0 \qquad (31b)$$
where the solution vector is s = [x 1 , x 2 , x 3 , y 1 , y 2 , y 3 , z 1 , z 2 ] t , and MF c is the propellant mass fraction for the control over one year of operations. The search space X is defined by the lower and upper bounds on the components of s, respectively s l = [−1, −1, −1, −1, −1, −2, −1, −1] t and s u = [1, 1, 0, 1, 1, 0, 1, 1] t . Again MACS was used to solve the constrained problem in Eq. (30) and Eq. (31). The result of the multi-objective optimisation can be found in Fig. 10, and shows the propellant mass fraction versus the maximum thrust level versus the maximum distance to the asteroid for the case of 10 spacecraft, each carrying a 20 m diameter mirror, over the first year of operations.
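A hedged sketch of how a candidate shaped orbit of Eq. (29) can be screened against the constraints of Eq. (31) is given below; the coefficients are arbitrary values inside the stated bounds, and the full control and propellant evaluation through Eq. (21) is not reproduced here.

```python
import numpy as np

def shaped_orbit(s, nu):
    """Shaped formation orbit of Eq. (29); s = [x1, x2, x3, y1, y2, y3, z1, z2] (here in km)."""
    x1, x2, x3, y1, y2, y3, z1, z2 = s
    return np.array([x1 * np.cos(nu) + x2 * np.sin(nu) + x3,
                     y1 * np.cos(nu) + y2 * np.sin(nu) + y3,
                     z1 * np.cos(nu) + z2 * np.sin(nu)])

def screen_candidate(s, n_grid=720):
    """Check C1 and C2 of Eq. (31): the orbit must stay at x < 0 and y < 0 for all nu."""
    nus = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    pts = np.array([shaped_orbit(s, nu) for nu in nus])
    feasible = pts[:, 0].max() < 0.0 and pts[:, 1].max() < 0.0
    return feasible, np.linalg.norm(pts, axis=1).max()

# Illustrative candidate inside the bounds s_l, s_u (values in km, not an optimised solution).
s = [-0.2, 0.1, -0.5, 0.1, -0.1, -1.0, 0.3, -0.2]
feasible, max_dist = screen_candidate(s)
print(f"feasible: {feasible}, max distance from asteroid: {max_dist:.2f} km")
```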
As expected, the level of thrust and the control propellant mass fraction are monotonically increasing with the distance to the asteroid. However, even for close distances the annual propellant consumption and the thrust level are quite small; only a few millinewtons of thrust are enough to maintain the orbit.
Spacecraft and System Sizing
The proposed configuration of each spacecraft is as follows: each spacecraft is made of a primary mirror that focuses the sunlight onto a secondary mirror that reflects the light onto a solar array seated behind the primary mirror (see Fig. 11). The electric power coming from the solar array pumps a semiconductor laser, and a steering mirror directs the beam. The secondary mirror, the solar array and the laser need to be maintained at an acceptable temperature. Hence the need for radiators that dissipate the excess energy that is not converted into the laser beam.
The size of the radiators can be computed considering the steady state thermal balance between the input power coming from the concentrator and the dissipated power through radiation.
Three radiating areas were considered for the design of the spacecraft: one associated to the secondary mirror with area A RM2 , one associated to the solar array with area A RS , and one associated to the laser with area A RL . The size of each radiating area can be computed from the steady state equilibrium thermal equations:
$$A_{RS} = \frac{\alpha_s\eta_M P_{iM2} - \eta_s\eta_M P_{iM2} - 2\epsilon_s\sigma A_S T_S^4}{\epsilon_r\sigma T_{rS}^4} \qquad (32a)$$
$$A_{RL} = \frac{\eta_s\eta_M P_{iM2}(1 - \eta_l)}{\epsilon_r\sigma T_l^4} \qquad (32b)$$
$$A_{RM2} = \frac{\alpha_{m_2} P_{iM2} - 2\epsilon_s\sigma \bar{A}_{m_2} T_{m_2}^4}{\epsilon_r\sigma T_{m_2}^4} \qquad (32c)$$
where α s is the absorptivity of the solar array, A S its area, ǫ s its emissivity, T S its temperature, P iM2 = η M A M1 S 0 (r AU /r A ) 2 is the input power to the secondary mirror, and σ is the Stefan-Boltzmann constant. Then, η S is the efficiency of the solar array, T r S is the temperature of the radiator associated to the solar array, and ǫ r its emissivity. Assuming the efficiency of the laser to be η l , and its temperature T l one can compute the area of the radiator A RL assuming that laser and radiator are in direct contact and that the heat can be transported with an efficiency close to 1. This is a reasonable assumption for relatively small scale systems that allow the use of a single or bi-phase passive cooling system. For large systems a dual phase active system might be required which lowers the overall efficiency and increases the system mass. Finally, the secondary mirror is assumed to operate at temperature T m 2 and has absorptivity α m 2 . The total mass of the spacecraft is m sc = m dry + m p (1 + MF t ), where the mass of the propellant m p = m dry MF p is a fraction of the dry mass m dry , augmented by the mass fraction MF t = 10% to include the mass of the tanks. The dry mass m dry = 1.2(m h + m s + m m + m l + m r + m bus ) is the sum of the mass of the laser m l , mass of the bus m bus , mass of the mirrors m m , mass of the solar array m s , mass of the radiators m r and mass of the harness m h . Given the low maturity of the technology employed for this system, we considered a system margin of 20% on the dry mass.
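The radiator sizing of Eq. (32) can be evaluated as below; the solar array area, secondary mirror area and array radiator temperature are assumptions introduced only for this example, not values from the paper.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant

# Properties from Table 3 and the text (45% cells, 66% laser, 90% mirrors).
ETA_M, ETA_S, ETA_L = 0.90, 0.45, 0.66
ALPHA_S, EPS_S, T_S = 0.8, 0.8, 373.0
ALPHA_M2, T_M2      = 0.01, 373.0
EPS_R, T_L          = 0.9, 313.0

# Assumed geometry for this example: 20 m primary, 1 m^2 secondary, 20 m^2 array, 313 K array radiator.
A_M1, A_M2, A_S, T_RS = np.pi * 10.0**2, 1.0, 20.0, 313.0
S0, R_AU_OVER_RA = 1367.0, 1.0

P_IM2 = ETA_M * A_M1 * S0 * R_AU_OVER_RA**2   # input power to the secondary mirror

A_RS  = (ALPHA_S * ETA_M * P_IM2 - ETA_S * ETA_M * P_IM2
         - 2.0 * EPS_S * SIGMA * A_S * T_S**4) / (EPS_R * SIGMA * T_RS**4)
A_RL  = ETA_S * ETA_M * P_IM2 * (1.0 - ETA_L) / (EPS_R * SIGMA * T_L**4)
A_RM2 = (ALPHA_M2 * P_IM2 - 2.0 * EPS_S * SIGMA * A_M2 * T_M2**4) / (EPS_R * SIGMA * T_M2**4)

print(f"A_RS = {A_RS:.1f} m^2, A_RL = {A_RL:.1f} m^2, A_RM2 = {A_RM2:.1f} m^2")
```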
The mass of the harness m h is a fraction of the combined mass of the laser and solar array m h = MF h (m s + m l ). The mass of the solar array is m s = 1.15̺ s A s where we considered a 15% margin given the high efficiency of the cells.
The mass of the laser is m l = 1.5̺ l P l η l where the margin is now 50% given that a semiconductor laser of this size for space applications has not flown yet. The mass of the power management and distribution unit dedicated to the laser system is included in the mass of the harness while the mass of the bus is assumed to account also for the power electronics. The power input to the laser is,
$$P_l = 0.85\,\eta_s\eta_m^2 A_{M_1} S_0\left(\frac{r_{au}}{r_a}\right)^2 \qquad (33)$$
and is a function of the input light power on the solar array, the efficiency of the solar array η s and the reflectivity of the mirrors η m = 0.90. The loss due to power regulation and transmission was considered to be 15% of the generated power.
The mass of the radiators m r = 1.2(A r S + A r M 2 + A r L )̺ r from Eq. (32) is proportional to the area and is augmented by a 20% margin. The total mass of the mirror is m m = 1.25(̺ m d A d + ̺ mĀm 1 + ̺ mĀm 2 ), whereĀ m 1 andĀ m 2 are the areas of the primary and secondary mirrors. The total mass of the mirrors is augmented by a 25% margin given the technology readiness level of the primary mirror.
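Putting the mass model together, the sketch below assembles the dry and wet mass of one spacecraft using the specific masses and margins of Table 4; the radiator areas and the steering and secondary mirror areas are assumed (e.g. taken from the previous sketch), so the printed figure is illustrative only.

```python
import numpy as np

# Specific masses, margins and bus mass from Table 4.
RHO_M, RHO_MD, RHO_L, RHO_S, RHO_R = 0.1, 0.1, 0.005, 1.0, 1.4
M_BUS, MF_H, MF_P, MF_T = 500.0, 0.2, 0.3, 0.1

def spacecraft_mass(a_m1, a_m2, a_d, a_s, a_rad_total, p_l, eta_l):
    """Assemble the wet mass of one spacecraft from the component masses defined in the text."""
    m_s   = 1.15 * RHO_S * a_s                                    # solar array (15% margin)
    m_l   = 1.5 * RHO_L * p_l * eta_l                             # laser (50% margin)
    m_h   = MF_H * (m_s + m_l)                                    # harness
    m_r   = 1.2 * a_rad_total * RHO_R                             # radiators (20% margin)
    m_m   = 1.25 * (RHO_MD * a_d + RHO_M * a_m1 + RHO_M * a_m2)   # mirrors (25% margin)
    m_dry = 1.2 * (m_h + m_s + m_m + m_l + m_r + M_BUS)           # 20% system margin
    m_p   = MF_P * m_dry                                          # propellant
    return m_dry + m_p * (1.0 + MF_T)                             # tanks add 10% of propellant mass

# Example: 20 m primary at 1 AU, laser input power from Eq. (33), assumed areas as above.
a_m1 = np.pi * 10.0**2
p_l = 0.85 * 0.45 * 0.90**2 * a_m1 * 1367.0
print(f"P_l = {p_l/1e3:.1f} kW, m_sc = {spacecraft_mass(a_m1, 1.0, 1.0, 20.0, 290.0, p_l, 0.66):.0f} kg")
```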
The thermal properties of the system are reported in Table 3 while the values of the specific masses ̺, mass factors MF and mass of the bus m bus are reported in Table 4. The margins on the mirrors and power system are considered to include the marginal use of power to control the spacecraft in proximity of the asteroid. As it will be shown later, the required thrust level is small and therefore the power demand is marginal compared to the one required for the sublimation.

Table 5: Boundary values on the design variables for the formation design.

  Design Parameter                     Lower Bound   Upper Bound
  Mirror aperture diameter, d_m (m)    2             20
  Number of spacecraft, n_sc           1             10
  Concentration ratio, C_r             1000          5000
Multiobjective Design
Once the deflection and the spacecraft models are defined, the interest is to optimise the formation in order to obtain the maximum value of the impact parameter for the minimum mass into orbit, given a warning time. The problem can be formulated as follows:
$$\min_{x\in D}\,(-b) \qquad (34a)$$
$$\min_{x\in D}\,(n_{sc}\, m_{sc}) \qquad (34b)$$
where the design vector x is defined by [d m , n sc , C r ] T and the search space D is defined in Table 5. The problem has two objectives, and a mix of integer and real variables. MACS was used to solve Eq. (34). The achievable deflection depends on the contamination of the optics, therefore the optimisation was run for both the shaped and the natural orbits. The result for the case of natural formation orbits can be seen in Fig. 12, where the impact parameter is represented against the mass of the system and the aperture diameter of the primary mirror for a laser with η L = 0.6 and solar cells with η S = 0.4, and Fig. 13, where the impact parameter is represented against the mass of the system and the aperture diameter of the primary mirror for a laser with η L = 0.66 and solar cells with η S = 0.45. Analogous solutions for the case of the shaped orbits can be found in Figs. 14 and 15.
It is interesting to note that the number of spacecraft increases when the aperture diameter increases. This is due to the fact that as the diameter of the primary mirror increases, the radiator and laser mass increases up to the point at which the mass of a single spacecraft exceeds the total mass of two or more spacecraft of smaller size. This is a very important point that is in favour of the use of a formation instead of a single large spacecraft.
Furthermore, it has to be noted that the assumption is that the system for each spacecraft is scalable. This is actually not true in general, as the technology for radiators and concentrators cannot be arbitrarily scaled up. In other words, technological solutions for small-size spacecraft cannot be applied to large-size spacecraft without modifications. This is a further reason in favour of the use of multiple spacecraft of small size. Figures 16 and 17 show the achievable impact parameter for the case of the natural formation orbits with two alternative design solutions, a 5 m diameter reflector and a 10 m diameter reflector, both with a concentration ratio of 5000, i.e., the ratio between the area of the concentrator and the area of the spot on the surface of the asteroid is 5000. Figures 18 and 19 show the achievable impact parameter for the case of the shaped formation orbits. Figure 20 shows the sensitivity to the concentration ratio for a fixed warning time of 8 years. The evident difference between the achievable impact parameter in the case of natural and shaped formations depends on the contamination effect, which stops the sublimation process quite rapidly (less than one year in some cases) when the spacecraft rises above the y-z plane. Because the sublimation stops at the beginning of the deflection operations, the efficiency of the deflection, in the case of the natural formations, is strongly affected by the position along the orbit at which the sublimation starts. This is consistent with the results presented in Colombo et al. (2009a).
Effect of Eccentricity
One may argue that the method is effective only on asteroids relatively close to the Sun, as the solar collectors need to power the laser. Indeed, if the asteroid has an aphelion far from the Sun the power can drop below the minimum required to sublimate the surface. Using the idea of the shaped orbits, one can try to apply the laser concept to asteroids with an aphelion increasing from 1 AU to 2 AU and with a perihelion decreasing from 1 AU to 0.5 AU. The assumption is that the Earth is moving on a circular planar orbit and the asteroid on a planar elliptic orbit. The impact parameter is computed at one of the two intersections with the orbit of the Earth and the deflection action starts at the perihelion of the orbit of the asteroid. Figure 21 shows the achievable impact parameter as a function of the radius of the aphelion and perihelion for 9 years of warning time, C_r = 5000 and a 20 m diameter collector. For comparison with the case of Apophis, one can notice that the achievable impact parameter is substantially high for highly elliptical asteroids. There are two good reasons for that. One is that the thrust is applied mainly at the pericentre of the orbit, but for highly elliptical orbits a variation of velocity at the pericentre produces a much higher change of the semi-major axis than for low eccentric orbits. The other is that the orbit of the asteroid has a much steeper intersection with the Earth's orbit and therefore a small variation of the arrival time generates a greater impact parameter. If one sticks to the hypothesis used above for the contamination, even in the case of natural orbits the spacecraft will experience no contamination as they fly above the plume when the sublimation is minimal or null. Therefore, laser ablation seems to be effective even for highly elliptical asteroids with high aphelion.
Conclusion
This paper presented the multidisciplinary design of a formation of spacecraft equipped with solar-pumped lasers for the deflection of asteroids.
The paper demonstrated that the use of multiple spacecraft is an optimal solution to maximise the deflection while minimising the mass of the overall system. In fact, as the diameter of the primary mirror increases, the radiator and laser mass increases up to the point at which the mass of a single spacecraft exceeds the total mass of two or more spacecraft of smaller size. This is a very important point in favour of the use of a formation instead of a single large spacecraft. A formation, or fractionated system, has the further advantage of increasing redundancy and scalability, as for a bigger asteroid the solution is simply to increase the number of spacecraft. The sizing of the spacecraft was based on a simple model in which the mass of the main bus is considered constant and the propellant mass is not optimised. These are two limiting assumptions that cause an overestimation of the mass for small systems. At the same time, the deployment and thermal control systems are assumed to be scalable within the range of variability of the design parameters. Looking at present technology, this assumption can correspond to an underestimation of the mass for large systems. The efficiencies of the laser and solar cells are at the upper limit of what is currently achievable in a lab environment. Although this is an optimistic assumption, current developments are progressing towards those limits independently of the deflection of asteroids. It is therefore reasonable to expect the system efficiencies presented in this paper in the near future. The paper also analysed the control of the spacecraft in the vicinity of the asteroid and showed that, with minimal control and propellant consumption, the spacecraft can be maintained in their desired formation orbits.
Finally, it was demonstrated that the laser ablation concept based on solar power is applicable also to highly eccentric orbits (deep crossers), with even better performance with respect to the shallow crosser case. In fact, for deep crossers the deflection action is maximal where it is most effective, i.e., around the perihelion, and the steep intersection between the orbit of the Earth and the orbit of the asteroid amplifies the deflection effect.
Acknowledgements
Figure 1: Definition of deviation distance at the MOID.
Figure 2: Definition of the reference frames, including the rotating Hill frame A centred on the asteroid.
Figure 3: a) Objective functions J_1 and J_2 versus δi, b) objective function J_2 versus δi.
Figure 4: a) Objective functions J_1 and J_2 versus δΩ, b) objective function J_2 versus δΩ.
Figure 5: a) Objective functions J_1 and J_2 versus δω, b) objective function J_2 versus δω.
Figure 6: Pareto fronts.
Figure 7: Formation orbits with minimum distance of 500 m.
Figure 8: Maximum thrust versus maximum distance for the formation orbits with minimum distance of 1000 m.
Figure 9: Propellant consumption versus maximum distance for the formation orbits with minimum distance of 1000 m.
Figure 10: Pareto front of the shaped formation problem.
Figure 11: Illustration of the spacecraft and laser system, showing the two parabolic mirrors (M_1, M_2), the directional steering mirror (M_d), the solar arrays (S) which pump the laser (L), and the radiators (R).
Figure 12: Total mass of the system against the diameter of the primary mirror of each spacecraft and the achieved impact parameter. Natural formation orbits: η_L = 0.60, η_S = 0.40.
Figure 13: Total mass of the system against the diameter of the primary mirror of each spacecraft and the achieved impact parameter. Natural formation orbits: η_L = 0.66, η_S = 0.45.
Figure 14: Total mass of the system against the diameter of the primary mirror of each spacecraft and the achieved impact parameter. Shaped formation orbits: η_L = 0.66, η_S = 0.45.
Figure 15: Total mass of the system against the diameter of the primary mirror of each spacecraft and the achieved impact parameter. Shaped formation orbits: η_L = 0.60, η_S = 0.40.
Figure 16: Impact parameter as a function of the number of spacecraft and warning time: 5 m aperture diameter and a concentration ratio of C_r = 5000. Natural formation orbits: η_L = 0.60, η_S = 0.40.
Figure 17: Impact parameter as a function of the number of spacecraft and warning time: 10 m aperture diameter and a concentration ratio of C_r = 5000. Natural formation orbits: η_L = 0.60, η_S = 0.40.
Figure 18: Impact parameter as a function of the number of spacecraft and warning time: 5 m aperture diameter and a concentration ratio of C_r = 5000. Shaped formation orbits: η_L = 0.60, η_S = 0.40.
Figure 19: Impact parameter as a function of the number of spacecraft and warning time: 10 m aperture diameter and a concentration ratio of C_r = 5000. Shaped formation orbits: η_L = 0.60, η_S = 0.40.
Figure 20: Impact parameter as a function of the number of spacecraft and the concentration ratio: 10 m aperture diameter and a warning time of 8 years. Shaped formation orbits: η_L = 0.60, η_S = 0.40.
Figure 21: Impact parameter as a function of the radius of perihelion r_P and aphelion r_A of the orbit of the asteroid.
Table 1: Orbital and physical properties of test asteroid.

  Element                  Symbol           Measured Value
  Semi-major axis          a_a0             0.9224 AU
  Eccentricity             e_a0             0.1912
  Inclination              i_a0             3.3312 deg
  RAAN                     Ω_a0             204.4428 deg
  Argument of periapsis    ω_a0             126.4002 deg
  Period                   T_a0             323.5969 days
  Mean motion              n_a0             1.2876 × 10^-5 deg/s
  Mass                     m_a0             2.7 × 10^10 kg
  Gravitational constant   µ_a              1.801599 × 10^-9 km^3/s^2
  Physical dimensions      a_ℓ, b_ℓ, c_ℓ    191 m, 135 m, 95 m
  Rotational velocity      w_a              3.3 × 10^-3 deg/s
  Albedo                   ς_a              0.2
Table 2: Boundaries on the formation orbital parameters.

                δe (10^-7)   δi (10^-7 rad)   δΩ (10^-7 rad)   δω (10^-7 rad)   δM (10^-7 rad)
  Lower bound   −0.01        −0.1             −0.9             −1.5             −0.1
  Upper bound   0            0.1              0.9              1.5              0.5
Table 3: Thermal Properties of Spacecraft Elements.

  Solar arrays:   η_s = 0.4-0.45   α_s = 0.8   ǫ_s = 0.8   T_s = 373 K
  Mirror:         T_m2 = 373 K     α_m2 = 0.01
  Radiator:       ǫ_r = 0.9
  Laser:          η_l = 0.6-0.66   T_l = 313 K
Table 4: Mass of Spacecraft Elements.

  Specific masses:   ̺_m = 0.1 kg/m²   ̺_md = 0.1 kg/m²   ̺_l = 0.005 kg/W   ̺_s = 1 kg/m²   ̺_r = 1.4 kg/m²
  Mass:              m_bus = 500 kg
  Mass fractions:    MF_h = 0.2   MF_p = 0.3   MF_t = 0.1
This research was partially supported by the ESA/Ariadna Study Grant AO/1-5387/07/NL/CB. The authors would like to thank Dr. Leopold Summerer of the ESA Advanced Concepts Team for his support.
An Introduction to the Mathematics and Methods of Astrodynamics. R H Battin, revised Edition. AIAA Education SeriesBattin, R. H., 1999. An Introduction to the Mathematics and Methods of As- trodynamics, revised Edition. AIAA Education Series.
A comparative assessment of different deviation strategies for dangerous NEO. C Colombo, J P Sanchez Cuartielles, M Vasile, G Radice, International Astronautical Congress. Colombo, C., Sanchez Cuartielles, J. P., Vasile, M., Radice, G., October 2006. A comparative assessment of different deviation strategies for dangerous NEO. In: International Astronautical Congress. Valencia, Spain.
Optimal low-thrust trajectories to asteroids through an algorithm based on differential dynamic programming. C Colombo, M Vasile, G Radice, Celestial Mechanics and Dynamical Astronomy. 1051-3Colombo, C., Vasile, M., Radice, G., November 2009a. Optimal low-thrust tra- jectories to asteroids through an algorithm based on differential dynamic pro- gramming. Celestial Mechanics and Dynamical Astronomy 105 (1-3), 75-112.
Semi-analytical solution for the optimal low-thrust deflection of Near-Earth Objects. C Colombo, M Vasile, G Radice, Journal of Guidance, Control and Dynamics. 323Colombo, C., Vasile, M., Radice, G., May-June 2009b. Semi-analytical solu- tion for the optimal low-thrust deflection of Near-Earth Objects. Journal of Guidance, Control and Dynamics 32 (3), 796-809.
Optimized performance GaAsbased diode lasers: Reliable 800 nm 125 W Bars and 83.5 efficient 975-nm single emitters. P Crump, J Wang, T Crum, S Zhang, M Grimshaw, W Dong, M Defranza, S Das, M Devito, J Farmer, 2005Crump, P., Wang, J., Crum, T., Zhang, S., Grimshaw, M., Dong, W., DeFranza, M., Das, S., DeVito, M., Farmer, J., 2005. Optimized performance GaAs- based diode lasers: Reliable 800 nm 125 W Bars and 83.5 efficient 975-nm single emitters. In: SSDLTR2005-Crump.
Albedo and size determination of potentially hazardous asteroids: (99942) Apophis. M Delbò, A Cellino, E Tedesco, Icarus. 188Delbò, M., Cellino, A., Tedesco, E., 2007. Albedo and size determination of potentially hazardous asteroids: (99942) Apophis. Icarus 188, 266-269.
On testing laser ablation processes for asteroid deflection. A Gibbings, J.-M Hopkins, D Burns, M Vasile, IAA Planetary Defense Conference. Gibbings, A., Hopkins, J.-M., Burns, D., Vasile, M., 2011. On testing laser abla- tion processes for asteroid deflection. In: IAA Planetary Defense Conference.
Formation flying solar-sail gravity tractors in displaced orbit for towing near-Earth asteroids. S Gong, J Li, H Baoyin, Celestial Mechanics and Dynamical Astronomy. 1051-3Gong, S., Li, J., BaoYin, H., November 2009. Formation flying solar-sail gravity tractors in displaced orbit for towing near-Earth asteroids. Celestial Mechan- ics and Dynamical Astronomy 105 (1-3), 159-177.
Spacecraft motion about slowly rotating asteroids. W Hu, D J Scheeres, Journal of Guidance, Control and Dynamics. 254Hu, W., Scheeres, D. J., July-August 2002. Spacecraft motion about slowly rotating asteroids. Journal of Guidance, Control and Dynamics 25 (4), 765- 775.
Online resource. IAU Minor Planet CentreIAU Minor Planet Centre, 2012. Online resource, http://www.minorplanetcenter.org/.
Cladding-pumped ytterbium-doped large-core fiber laser with 610 w of output power. Y Jeong, J K Sahu, S Baek, C Alegria, D B S Soh, C Codemard, J Nilsson, Optics Communications. 2341-6Jeong, Y., Sahu, J. K., Baek, S., Alegria, C., Soh, D. B. S., Codemard, C., Nilsson, J., 2004. Cladding-pumped ytterbium-doped large-core fiber laser with 610 w of output power. Optics Communications 234 (1-6), 315-319.
Ytterbium-doped large-core fibre laser with 272 w of output power. Y Jeong, J K Sahu, R B Williams, D J Richardson, K Furusawa, J Nilsson, Electronics Letters. 3913Jeong, Y., Sahu, J. K., Williams, R. B., Richardson, D. J., Furusawa, K., Nils- son, J., 2003. Ytterbium-doped large-core fibre laser with 272 w of output power. Electronics Letters 39 (13), 977-978.
Physical limits of solar collectors in deflecting Earth-threatening asteroids. R Kahle, E Kührt, G Hahn, J Knollenberg, Aerospace Science and Technology. 10Kahle, R., Kührt, E., Hahn, G., Knollenberg, J., 2006. Physical limits of solar collectors in deflecting Earth-threatening asteroids. Aerospace Science and Technology 10, 253-263.
Modelling control thrust plume flow and impingement. H Legge, R Boettcher, International Symposium on Rarefied Gas Dynamics. Legge, H., Boettcher, R., 1982. Modelling control thrust plume flow and im- pingement. In: International Symposium on Rarefied Gas Dynamics. pp. 983- 992.
FULLSPECTRUM: A new PV wave making more efficient use of the solar spectrum. A Luque, A Martá, L Cuadra, C Algora, P Wahnon, G Salal, P Benítez, A W Bett, A Gombert, V M Andreev, C Jassaud, J Van Roosmalen, J Alonso, A Räuber, G Strobel, W Stolz, B Bitnar, C Stanley, J Conesa, W Van Sark, K Barnham, R Danz, T Meyer, I Luque-Heredia, R Kenny, C Christofides, European Photovoltaic Solar Energy Conference. Paris, FranceLuque, A., Martá, A., Cuadra, L., Algora, C., Wahnon, P., Salal, G., Benítez, P., Bett, A. W., Gombert, A., Andreev, V. M., Jassaud, C., Van Roosmalen, J., Alonso, J., Räuber, A., Strobel, G., Stolz, W., Bitnar, B., Stanley, C., Conesa, J., Van Sark, W., Barnham, K., Danz, R., Meyer, T., Luque-Heredia, I., Kenny, R., Christofides, C., 2004. FULLSPECTRUM: A new PV wave making more efficient use of the solar spectrum. In: European Photovoltaic Solar Energy Conference. Paris, France.
Comparison of single and multi-spacecraft configurations for NEA deflection by solar sublimation. C Maddock, J P Sanchez Cuartielles, M Vasile, G Radice, New Trends in Astrodynamics and Applications III. Belbruno, E.American Institute of Physics886Maddock, C., Sanchez Cuartielles, J. P., Vasile, M., Radice, G., 2007. Com- parison of single and multi-spacecraft configurations for NEA deflection by solar sublimation. In: Belbruno, E. (Ed.), New Trends in Astrodynamics and Applications III. Vol. 886. American Institute of Physics, pp. 303-316.
Design of optimal spacecraft-asteorid formations through a hybrid global optimization approach. C Maddock, M Vasile, Journal of Intelligent Computing and Cybernetics. 12Maddock, C., Vasile, M., 2008. Design of optimal spacecraft-asteorid forma- tions through a hybrid global optimization approach. Journal of Intelligent Computing and Cybernetics 1 (2), 239-268.
Non-nuclear strategies for deflecting comets and asteroids. H J Melosh, I V Nemchinov, Y I Zetzer, Gehrels, T.University of Arizona PressHazard due to comets and asteroidsMelosh, H. J., Nemchinov, I. V., Zetzer, Y. I., 1994. Non-nuclear strategies for deflecting comets and asteroids. In: Gehrels, T. (Ed.), Hazard due to comets and asteroids. University of Arizona Press, pp. 1111-1132.
Potentially hazardous asteroids. Online resource. NASA Near Earth Object programNASA Near Earth Object program, 2012. Potentially hazardous asteroids. On- line resource, http://neo.jpl.nasa.gov/neo/pha.html.
nLIGHT demonstrates 73% wall-plug efficiency. Press ReleasenLIGHT, January 2006. nLIGHT demonstrates 73% wall-plug efficiency. Press Release, http://www.nlight.net/news/releases.
Deflection of Earth-crossing asteroids/comets using rendezvous spacecraft and laser ablation. S.-Y Park, D D Mazanek, Journal of Astronautical Sciences. 531Park, S.-Y., Mazanek, D. D., Jan.-Mar. 2005. Deflection of Earth-crossing as- teroids/comets using rendezvous spacecraft and laser ablation. Journal of As- tronautical Sciences 53 (1), 21-37.
High-power, highefficiency laser diodes at JDSU. M Peters, V Rossin, M Everett, E Zucker, Proc. SPIE 6456. SPIE 645664560Peters, M., Rossin, V., Everett, M., Zucker, E., 2007. High-power, high- efficiency laser diodes at JDSU. In: Proc. SPIE 6456, 64560G.
Laser deflection of NEO's. C R Phipps, NASA Near Earth Object: Interception Workshop. New Mexico, USAPhipps, C. R., 1992. Laser deflection of NEO's. In: NASA Near Earth Object: Interception Workshop. New Mexico, USA.
Laser deflection of near-earth asteroids and comet nuclei. C R Phipps, Proc. International Conference on Lasers 96. International Conference on Lasers 96STS PressPhipps, C. R., 1997. Laser deflection of near-earth asteroids and comet nuclei. In: Proc. International Conference on Lasers 96, STS Press. pp. 580-587.
An alternate treatment of the vapor-plasma transition. C R Phipps, International Journal of Aerospace Innovation. Phipps, C. R., 2010. An alternate treatment of the vapor-plasma transition. International Journal of Aerospace Innovation.
Classifying and modeling NEO material properties and interactions. J L Remo, Space Science Series. Gehrels, T., Matthews, M. S., Schumann, A.University of Arizona PressHazards due to comets and asteroidsRemo, J. L., 1994. Classifying and modeling NEO material properties and inter- actions. In: Gehrels, T., Matthews, M. S., Schumann, A. (Eds.), Hazards due to comets and asteroids. Space Science Series. University of Arizona Press, Tucson, AZ, pp. 551-596.
Orbital evolution around irregular bodies. A Rossi, F Marzari, P Farinella, Earth, Planets, Space. 51Rossi, A., Marzari, F., Farinella, P., 1999. Orbital evolution around irregular bodies. Earth, Planets, Space 51, 1173-1180.
A multicriteria assessment of deflection methods for dangerous NEOs. J P Sanchez Cuartielles, C Colombo, M Vasile, G Radice, New Trends in Astrodynamics and Applications III. Belbruno, E.886American Institute of PhysicsSanchez Cuartielles, J. P., Colombo, C., Vasile, M., Radice, G., 2007. A multi- criteria assessment of deflection methods for dangerous NEOs. In: Belbruno, E. (Ed.), New Trends in Astrodynamics and Applications III. Vol. 886. Amer- ican Institute of Physics, pp. 317-333.
Multi-criteria comparison among several mitigation strategies for dangerous Near Earth Objects. J P Sanchez Cuartielles, C Colombo, M Vasile, G Radice, Journal of Guidance, Control and Dynamics. 321Sanchez Cuartielles, J. P., Colombo, C., Vasile, M., Radice, G., January- February 2009. Multi-criteria comparison among several mitigation strategies for dangerous Near Earth Objects. Journal of Guidance, Control and Dynam- ics 32 (1), 121-142.
Analytical mechanics of space systems. H Schaub, J L Junkins, AIAA Education Series. AIAA. 1st EditionSchaub, H., Junkins, J. L., 2003. Analytical mechanics of space systems, 1st Edition. AIAA Education Series. AIAA, Virginia, U.S.A.
Future of high efficiency diode lasers. C M Stickley, M E Filipkowski, E Parra, E E Hach, SPIE 5991. Stickley, C. M., Filipkowski, M. E., Parra, E., Hach, E. E., 2005. Future of high efficiency diode lasers. In: SPIE 5991 59911O-1.
Study to determine the feasibility of extending the search for Near-Earth Objects to smaller limiting diameters. G H Stokes, D K Yeomans, W F Bottke, D Jewitt, S R Chesley, T S Kelso, J B Evans, R S Mcmillan, R E Gold, T B Spahr, A W Harris, S Worden, Near-Earth Object Science Definition Team, NASAStokes, G. H., Yeomans, D. K., Bottke, W. F., Jewitt, D., Chesley, S. R., Kelso, T. S., Evans, J. B., McMillan, R. S., Gold, R. E., Spahr, T. B., Harris, A. W., Worden, S., August 2003. Study to determine the feasibility of extending the search for Near-Earth Objects to smaller limiting diameters. Near-Earth Object Science Definition Team, NASA.
Observational constraints on the number, albedos, size, and impact hazards of the near-earth asteroids. J S Stuart, Massachusetts Institute of Technology. Dept. of Earth, Atmospheric, and Planetary SciencesPhd thesisStuart, J. S., 2003. Observational constraints on the number, albedos, size, and impact hazards of the near-earth asteroids. Phd thesis, Massachusetts Institute of Technology. Dept. of Earth, Atmospheric, and Planetary Sciences.
Environmental perturbations caused by the impact of asteroids and comets. O B Toon, K Zahnle, D Morrison, R P Turco, C Covey, Reviews of Geophysics. 35Toon, O. B., Zahnle, K., Morrison, D., Turco, R. P., Covey, C., 1997. Environ- mental perturbations caused by the impact of asteroids and comets. Reviews of Geophysics 35, 41-78.
Robust mission design through evidence theory and multiagent collaborative search. M Vasile, Annals of the New York Academy of Science. 1065Vasile, M., 2005. Robust mission design through evidence theory and multiagent collaborative search. Annals of the New York Academy of Science 1065, 152- 173.
A multi-mirror solution for the deflection of dangerous NEOs. M Vasile, Communications in Nonlinear Science and Numerical Simulation. Vasile, M., September 2008. A multi-mirror solution for the deflection of danger- ous NEOs. Communications in Nonlinear Science and Numerical Simulation.
Optimal impact strategies for asteroid deflection. M Vasile, C Colombo, Journal of Guidance, Control and Dynamics. 314Vasile, M., Colombo, C., July-August 2008. Optimal impact strategies for aster- oid deflection. Journal of Guidance, Control and Dynamics 31 (4), 858-872.
On the deflection of asteroids with mirrors. M Vasile, C Maddock, Celestial Mechanics and Dynamical Astronomy. 1071-2Vasile, M., Maddock, C., 2010. On the deflection of asteroids with mirrors. Celestial Mechanics and Dynamical Astronomy 107 (1-2), 265-284.
Call for ideas: NEO Encounter 2029, NEO deflection through a multi-mirror system. M Vasile, C Maddock, G Radice, C Mcinnes, ID: 08/4301Contract Number: 21665/08/NL/CB, ESA/ESTEC Advanced Concepts Team. Tech. Rep. AriadnaVasile, M., Maddock, C., Radice, G., McInnes, C., 2009. Call for ideas: NEO Encounter 2029, NEO deflection through a multi-mirror system. Tech. Rep. Ariadna ID: 08/4301, Contract Number: 21665/08/NL/CB, ESA/ESTEC Advanced Concepts Team.
Evaporation of single crystal forsterite: Evaporation kinetics, magnesium isotope fractionation, and implications of mass-dependent isotopic fractionation of a diffusion-controlled reservoir. J Wang, A Davis, R Clayton, A Hashimoto, Geochimica et Cosmochimica Acta. 636Wang, J., Davis, A., Clayton, R., Hashimoto, A., 1999. Evaporation of single crystal forsterite: Evaporation kinetics, magnesium isotope fractionation, and implications of mass-dependent isotopic fractionation of a diffusion-controlled reservoir. Geochimica et Cosmochimica Acta 63 (6), 953-966.
Solar-pumped solid-state lasers. M Weksler, J Shwartz, Journal of Quantum Electronics. 246Weksler, M., Shwartz, J., 1988. Solar-pumped solid-state lasers. Journal of Quantum Electronics 24 (6), 1222-1228.
Spacecraft formation flying for Earth-crossing object deflections using a power limited laser ablating. S.-M Yoo, Y.-J Songa, S.-Y Park, K.-H Choi, Advances in Space Research. 4312Yoo, S.-M., Songa, Y.-J., Park, S.-Y., Choi, K.-H., 2009. Spacecraft formation flying for Earth-crossing object deflections using a power limited laser ablat- ing. Advances in Space Research 43 (12), 1873-1889.
| []
|
[
"Transmission eigenvalues for strictly concave domains",
"Transmission eigenvalues for strictly concave domains"
]
| [
"Georgi Vodev "
]
| []
| []
| We show that for strictly concave domains there are no interior transmission eigenvalues in a region of the form λ ∈ C : Re λ ≥ 0, |Im λ| ≥ C ε (Re λ + 1) 1 2 +ε , C ε > 0, for every 0 < ε ≪ 1. As a consequence, we obtain Weyl asymptotics for the number of the transmission eigenvalues with an almost optimal remainder term.Then, for every 0 < ε ≪ 1 there exists a constant C ε > 0 such that there are no transmission eigenvalues in the regionRemark 1. It has been proved in[13]that, under the conditions (1.2) and (1.3), for arbitrary domains there are no transmission eigenvalues inThe assumption that Γ is strictly concave does not improve the eigenvalue-free regions in Re λ < 0. Note that it is proved in [13] that for arbitrary domains there are no transmission eigenvalues in Re λ ≤ −C for some constant C > 0 under the assumption (1.2), and in λ ∈ C : Re λ ≤ 0, |Im λ| ≥ C N (|Re λ| + 1) −N for any N > 1 under the assumption (1.3). Remark 3. When the function in the left-hand side of (1.3) is strictly positive, large eigenvaluefree regions have been proved in[13]for arbitrary domains, which however are worse than the eigenvalue-free regions in the cases considered in the present paper. It seems that in this case no improvement is possible even if the domain is supposed strictly concave. Remark 4. It has been proved recently in [11] that the total counting function N (r) = #{λ − trans. eig. : |λ| ≤ r 2 }, r > 1, satisfies the asymptotics | 10.1007/s00208-015-1329-2 | [
"https://arxiv.org/pdf/1501.00797v1.pdf"
]
| 119,310,277 | 1501.00797 | c246735b626852b58ac63fcaa05904b21f771bce |
Transmission eigenvalues for strictly concave domains
5 Jan 2015
Georgi Vodev
Transmission eigenvalues for strictly concave domains
5 Jan 2015
We show that for strictly concave domains there are no interior transmission eigenvalues in a region of the form {λ ∈ C : Re λ ≥ 0, |Im λ| ≥ C_ε(Re λ + 1)^{1/2+ε}}, C_ε > 0, for every 0 < ε ≪ 1. As a consequence, we obtain Weyl asymptotics for the number of the transmission eigenvalues with an almost optimal remainder term. Then, for every 0 < ε ≪ 1 there exists a constant C_ε > 0 such that there are no transmission eigenvalues in the region
Remark 1. It has been proved in [13] that, under the conditions (1.2) and (1.3), for arbitrary domains there are no transmission eigenvalues in
The assumption that Γ is strictly concave does not improve the eigenvalue-free regions in Re λ < 0. Note that it is proved in [13] that for arbitrary domains there are no transmission eigenvalues in Re λ ≤ −C for some constant C > 0 under the assumption (1.2), and in {λ ∈ C : Re λ ≤ 0, |Im λ| ≥ C_N(|Re λ| + 1)^{−N}} for any N > 1 under the assumption (1.3).
Remark 3. When the function in the left-hand side of (1.3) is strictly positive, large eigenvalue-free regions have been proved in [13] for arbitrary domains, which however are worse than the eigenvalue-free regions in the cases considered in the present paper. It seems that in this case no improvement is possible even if the domain is supposed strictly concave.
Remark 4. It has been proved recently in [11] that the total counting function N(r) = #{λ − trans. eig. : |λ| ≤ r^2}, r > 1, satisfies the asymptotics
N(r) = (τ_1 + τ_2) r^d + O_ε(r^{d−κ+ε}),  ∀ 0 < ε ≪ 1,
where 0 < κ ≤ 1 is such that there are no transmission eigenvalues in the region {λ ∈ C : |Im λ| ≥ C(|Re λ| + 1)^{1−κ/2}}, τ_1, τ_2 > 0 being constants, ω_d being the volume of the unit ball in R^d. Theorem 1.1 and Remark 4 imply the following
Corollary 1.2 Under the conditions of Theorem 1.1, the counting function of the transmission eigenvalues satisfies the asymptotics
N(r) = (τ_1 + τ_2) r^d + O_ε(r^{d−1+ε}),  ∀ 0 < ε ≪ 1.   (1.4)
Introduction and statement of results
Let Ω ⊂ R^d, d ≥ 2, be a bounded, connected domain with a C^∞ smooth boundary Γ = ∂Ω. A complex number λ ∈ C, λ ≠ 0, will be said to be a transmission eigenvalue if the following problem has a non-trivial solution:
(∇c_1(x)∇ + λn_1(x))u_1 = 0 in Ω,
(∇c_2(x)∇ + λn_2(x))u_2 = 0 in Ω,
u_1 = u_2,  c_1∂_ν u_1 = c_2∂_ν u_2 on Γ,   (1.1)
where ν denotes the exterior Euclidean unit normal to Γ, and c_j, n_j ∈ C^∞(Ω̄), j = 1, 2, are strictly positive real-valued functions. Let f ∈ C^∞(R^d) be such that f < 0 in Ω, f > 0 in R^d \ Ω̄, df ≠ 0 on Γ. Given a Hamiltonian g ∈ C^∞(T*Ω̄) of the form g(x, ξ) = Σ_{i,j=1}^d g^{ij}(x)ξ_iξ_j ≥ C|ξ|^2, C > 0, the boundary Γ will be said to be g-strictly concave (viewed from the interior) iff for any (x, ξ) satisfying f(x) = 0, g(x, ξ) = 1, {g, f}(x, ξ) = 0, we have {g, {g, f}}(x, ξ) > 0,
where {·, ·} denotes the Poisson brackets. Set g_j(x, ξ) = (c_j(x)/n_j(x))|ξ|^2. Our main result is the following

Theorem 1.1 Let Γ be g_j-strictly concave, j = 1, 2, and assume either the condition
c_1(x) ≡ c_2(x), ∂_ν c_1(x) ≡ ∂_ν c_2(x), n_1(x) ≠ n_2(x) on Γ,   (1.2)
or the condition
(c_1(x) − c_2(x))(c_1(x)n_1(x) − c_2(x)n_2(x)) < 0 on Γ.   (1.3)
Then, for every 0 < ε ≪ 1 there exists a constant C_ε > 0 such that there are no transmission eigenvalues in the region
{λ ∈ C : Re λ ≥ 0, |Im λ| ≥ C_ε(Re λ + 1)^{1/2+ε}}.

To prove Theorem 1.1 we follow the same strategy as in [13]. We first reduce our problem to a semi-classical one by putting h = (Re λ)^{−1/2}, z = h^2λ = 1 + ih^2 Im λ. Thus we have to show that the operator T(h, z) = c_1N_1(h, z) − c_2N_2(h, z) is invertible for |Im z| ≥ h^{1−ε}, 0 < h ≪ 1, for every 0 < ε ≪ 1 (see Theorem 7.1), where N_j is the Dirichlet-to-Neumann (DN) map associated to the operator h^2∇c_j∇ + zn_j (see Section 2 for the precise definition and the main properties). It is shown in [13] that the operator T(h, z) is invertible in the region |Im z| ≥ h^{1/2−ε} for an arbitrary domain Ω. In the present paper we show that this region can be extended to |Im z| ≥ h^{1−ε} if Γ is strictly concave with respect to both g_1 and g_2. To do so, we have to study more carefully the DN map N_j near the glancing manifold Σ_j = {(x, ξ) ∈ T*Γ : r_0(x, ξ) = m_j(x)}, where m_j denotes the restriction on Γ of the function n_j/c_j, while r_0 > 0 is the principal symbol of the Laplace-Beltrami operator on Γ with Riemannian metric induced by the Euclidean metric in R^d. We show that N_j(h, z) = O(h^{ε/4}) : L^2(Γ) → L^2(Γ) in an O(h^ε) neighbourhood of Σ_j as long as h^{1−ε} ≤ |Im z| ≤ h^ε (see Theorem 2.2). With this property in hands, the invertibility of T near Σ_j is almost immediate since the conditions (1.2) and (1.3) guarantee that N_{3−j} is elliptic on Σ_j, j = 1, 2. The invertibility of T outside an O(h^ε) neighbourhood of Σ_1 ∪ Σ_2 for |Im z| ≥ h^{1−ε} is much easier and can be done in precisely the same way as in [13] for an arbitrary domain. Indeed, the conditions (1.2) and (1.3) imply that in this region T(h, z) is an elliptic h-ΨDO, and hence easy to invert.
Thus the main (and the most difficult) point in our proof is the estimate (2.7) of Theorem 2.2 concerning the behavior of the DN map near the glancing manifold. Therefore the present paper is almost entirely devoted to the proof of Theorem 2.2. To do so, we make use of the global symplectic normal form proved in [12] in order to transform our boundary-value problem in an O(h ε ) neighbourhood of the glancing manifold to a much simpler one in which we have complete separation of the normal and tangential variables (see the model equation in Section 5). The advantage is that we can build a relatively simple parametrix in terms of the Airy function and its derivatives (see Section 5). Note that our parametrix is much simpler than the parametrix of Melrose-Taylor [4] and therefore easier to work with. In particular, it is easier to control it as |Im z| → 0. Using the properties of the Airy function (see Section 3) we show in Section 5 that our parametrix is valid in an O(h 1+ε /|Im z|) neighbourhood of the glancing manifold as long as h 1−2ε ≤ |Im z| ≤ h ε . To cover the entire O(h ε ) neighbourhood of the glancing manifold we have to build another parametrix in Section 6 following the parametrix construction in [13] and showing that it can be improved in the case of our model equation. When |Im z| ∼ h 2/3 a different parametrix, without using the Airy function, is constructed by Sjöstrand (see Section 11 of [10]). In this case, it provides another proof of the estimate (2.7). Note finally that in Section 3 we prove some properties of the Airy function which play a crucial role in the parametrix construction in Section 5. They are more or less well-known and most of them can be found in [6] and in the appendix of [4].
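To make the reduction from Theorem 7.1 back to Theorem 1.1 concrete, here is the elementary computation behind the rescaling (a worked example added for the reader; it is implicit in the discussion above). With h = (Re λ)^{−1/2} and z = h^2λ,

\[
|\mathrm{Im}\, z| = h^2|\mathrm{Im}\,\lambda| \ \ge\ h^{1-\varepsilon}
\quad\Longleftrightarrow\quad
|\mathrm{Im}\,\lambda| \ \ge\ h^{-1-\varepsilon} = (\mathrm{Re}\,\lambda)^{\frac{1}{2}+\frac{\varepsilon}{2}},
\]

so invertibility of T(h, z) for |Im z| ≥ h^{1−ε} excludes transmission eigenvalues with Re λ ≥ 1 and |Im λ| ≥ (Re λ)^{1/2+ε/2}, which is a region of the form stated in Theorem 1.1 after adjusting ε and the constant C_ε.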
The Dirichlet-to-Neumann map
Let (X, g) be a compact Riemannian manifold of dimension n = dim X ≥ 2 with a non-empty smooth boundary ∂X. Then (∂X, g) is a Riemannian manifold without boundary of dimension n−1, where g is the Riemannian metric on ∂X induced by the metric g. Denote by ∆ X and ∆ ∂X the (negative) Laplace-Beltrami operators on (X, g) and (∂X, g), respectively. The boundary ∂X is said to be strictly concave if the second fundamental form of ∂X is strictly positive. In the case when X ⊂ R n this definition coincides with that one given in the previous section. Given a function f ∈ H 1 (∂X), let u solve the equation
(h^2∆_X + 1 + iµ)u = 0 in X,  u = f on ∂X,   (2.1)
where 0 < h ≪ 1 is a semi-classical parameter and µ ∈ R, 0 < |µ| ≤ 1. Then the semi-classical Dirichlet-to-Neumann (DN) map
N (h, µ) : H 1 (∂X) → L 2 (∂X) is defined by N (h, µ)f := D ν u| ∂X ,
where D ν = −ih∂ ν , ν being the unit normal to ∂X. It is well-known that for arbitrary manifolds one has the bound
‖N(h, µ)‖_{H^1_h(∂X)→L^2(∂X)} ≤ C/|µ|   (2.2)
with a constant C > 0 independent of h and µ, where H^1_h(∂X) denotes the Sobolev space H^1(∂X) equipped with the semi-classical norm ‖(1 − h^2∆_{∂X})^{1/2}f‖_{L^2(∂X)}. It has been proved recently that better bounds are possible if µ is not too close to zero. Indeed, it follows from Theorem 3.2 of [13], still for arbitrary manifolds, that for every ε > 0 there is a constant 0 < h_0(ε) ≪ 1 such that for all 0 < h ≤ h_0, |µ| ≥ h^{1/2−ε}, we have the bound
‖N(h, µ)‖_{H^1_h(∂X)→L^2(∂X)} ≤ C   (2.3)
with a constant C > 0 independent of h, µ and ε. Note that (2.3) does not follow from (2.2). In [13] semi-classical parametrices of the operator N (h, µ) are constructed in the hyperbolic zone
H = {(x ′ , ξ ′ ) ∈ T * ∂X : r 0 (x ′ , ξ ′ ) < 1}, in the glancing zone G = {(x ′ , ξ ′ ) ∈ T * ∂X : r 0 (x ′ , ξ ′ ) = 1} and in the elliptic zone E = {(x ′ , ξ ′ ) ∈ T * ∂X : r 0 (x ′ , ξ ′ ) > 1}.
Hereafter, r 0 (x ′ , ξ ′ ) denotes the principal symbol of the operator −∆ ∂X written in the coordinates (x ′ , ξ ′ ). To be more precise,
introduce the set S^k_δ, k ∈ R, 0 ≤ δ < 1/2, of all functions a ∈ C^∞(T*∂X) satisfying
|∂^α_{x'}∂^β_{ξ'}a(x', ξ')| ≤ C_{α,β} h^{−δ(|α|+|β|)} ⟨ξ'⟩^{k−|β|}
for all multi-indices α, β with constants C α,β > 0 independent of h. We will denote by OPS k δ the set of the h-pseudo-differential operators (h-ΨDOs) with symbols in S k δ defined as follows
(Op_h(a)f)(x') = (2πh)^{−(n−1)} ∫_{T*∂X} e^{−\frac{i}{h}⟨x'−y', ξ'⟩} a(x', ξ') f(y') dy' dξ'.
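Two elementary special cases may help fix the conventions (these examples are added here and are not in the original text). If a = a(x') does not depend on ξ', integrating out ξ' produces (2πh)^{n−1}δ(x' − y'), so Op_h(a) is just multiplication, (Op_h(a)f)(x') = a(x')f(x'). If instead a = a(ξ') does not depend on x', then Op_h(a) is a semiclassical Fourier multiplier: with F_h f(ξ') = ∫ e^{\frac{i}{h}⟨y', ξ'⟩} f(y') dy' chosen to match the sign of the kernel above,

\[
Op_h(a)f = (2\pi h)^{-(n-1)} \int e^{-\frac{i}{h}\langle x',\xi'\rangle}\, a(\xi')\, (F_h f)(\xi')\, d\xi',
\]

so such operators are bounded on L^2 by sup |a|; the sign convention in the exponent only affects which Fourier transform appears, not the bound.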
Let χ − , χ 0 , χ + ∈ C ∞ (T * ∂X) be independent of h and such that χ − +χ 0 +χ + ≡ 1, supp χ − ⊂ H, supp χ + ⊂ E, χ 0 is supported in a small h-independent neighbourhood of G,
χ_0 = 1 in a smaller h-independent neighbourhood of G. Set ρ(x', ξ', µ) = (−r_0(x', ξ') + 1 + iµ)^{1/2} with Im ρ > 0. It was shown in [13] that, mod O(h^∞), the operator N(h, µ)Op_h(χ_−) belongs to OPS^0_0 for |µ| ≥ h^{1−ε}, 0 < ε ≪ 1, with a principal symbol ρχ_−, the operator N(h, µ)Op_h(χ_0) belongs to OPS^0_{1/2−ε} for |µ| ≥ h^{1/2−ε} with a principal symbol ρχ_0, and N(h, µ)Op_h(χ_+) belongs to OPS^1_0
with a principal symbol ρχ_+. Summing up, we conclude that, mod O(h^∞), the operator N(h, µ) belongs to OPS^1_{1/2−ε} for |µ| ≥ h^{1/2−ε} with a principal symbol ρ. Therefore, in this case the bound (2.3) is a consequence of well-known properties of the h-ΨDOs. In fact, a more detailed analysis of the operator N(h, µ) can be carried out allowing the functions χ_+, χ_− and χ_0 to depend on h. More generally, it follows from the analysis in [13] that given any function χ ∈ C^∞_0(T*∂X), for arbitrary ∂X, one can construct a parametrix for the operator N(h, µ)Op_h(χ) as long as
min_{supp χ} |ρ|^2 ≥ h^{1−ε}/|µ| for some ε > 0.
It is easy to see that given a parameter 0 < δ ≪ 1, there are functions
χ^−_δ, χ^0_δ, χ^+_δ ∈ S^0_δ such that χ^−_δ + χ^0_δ + χ^+_δ ≡ 1, supp χ^−_δ ⊂ {r_0 − 1 ≤ −h^δ}, supp χ^+_δ ⊂ {r_0 − 1 ≥ h^δ}, supp χ^0_δ ⊂ {|r_0 − 1| ≤ 2h^δ}, χ^0_δ = 1 on {|r_0 − 1| ≤ h^δ}.
As in [13] one can prove the following
Theorem 2.1 For every 0 < ε ≪ 1 there is h_0(ε) > 0 such that for 0 < h ≤ h_0, h^{1−ε} ≤ |µ| ≤ h^ε, we have the bound
‖N(h, µ)Op_h(χ^−_{ε/2}) − Op_h(ρχ^−_{ε/2})‖_{L^2(∂X)→L^2(∂X)} ≤ Ch^{1/2}.   (2.4)
For |µ| ≤ h ε we also have the bound
‖N(h, µ)Op_h(χ^+_{ε/2}) − Op_h(ρχ^+_{ε/2})‖_{L^2(∂X)→L^2(∂X)} ≤ Ch^{1/2}.   (2.5)
For h^{1/2−ε} ≤ |µ| ≤ h^ε, we have the bound
‖N(h, µ)Op_h(χ^0_{ε/2})‖_{L^2(∂X)→L^2(∂X)} ≤ Ch^{ε/4}.   (2.6)
When ∂X is strictly concave, Sjöstrand showed (see Section 11 of [10]) that (2.3) still holds for C 1 h 2/3 ≤ |µ| ≤ C 2 h 2/3 , C 2 > C 1 > 0 being arbitrary, independent of h and µ. We will show in the present paper that for strictly concave ∂X the bound (2.3) holds true for h 1−ε ≤ |µ| ≤ h ε , ∀ 0 < ε ≪ 1. To this end, we need to improve only the bound (2.6). We have the following
Theorem 2.2 If ∂X is strictly concave, for every 0 < ε ≪ 1 there is h_0(ε) > 0 such that for 0 < h ≤ h_0, h^{1−ε} ≤ |µ| ≤ h^ε, we have the bound
‖N(h, µ)Op_h(χ^0_{ε/2})‖_{L^2(∂X)→L^2(∂X)} ≤ Ch^{ε/4}.   (2.7)
Proof. We will make use of the symplectic normal form obtained in [12] to reduce our problem to a simpler one for which it is easier to construct a parametrix. This model problem will be studied in the next sections. Let y = (y 1 , y ′ ) ∈ X δ := (−δ, δ) × ∂X, 0 < δ ≪ 1, be the normal geodesic coordinates with respect to the Riemannian metric g. Here we identify the points in (0, δ) × ∂X with {x ∈ X : dist(x, ∂X) < δ}. Then in these coordinates we can write −h 2 ∆ X = D 2 y 1 + q(y 1 , y ′ , D y ′ ) + lower order terms,
where D y 1 = −ih∂ y 1 , D y ′ = −ih∂ y ′ , q(y 1 , y ′ , η ′ ) = |α|=2 q α (y 1 , y ′ )η ′α . Moreover q 0 (y ′ , η ′ ) := q(0, y ′ , η ′ ) is the principal symbol of −∆ ∂X written in the coordinates (y ′ , η ′ ), while
q 1 (y ′ , η ′ ) := ∂q ∂y 1 (0, y ′ , η ′ ) > 0
is the second fundamental form of ∂X supposed to be strictly positive (which is nothing else but the definition of g− strictly concavity). Then the principal symbol p of the operator P (h, µ) = −h 2 ∆ X − 1 − iµ can be written in the coordinates (y, η) ∈ T * X δ as follows
p(y, η) = η 2 1 + q(y 1 , y ′ , η ′ ) − 1 − iµ = η 2 1 + q 0 (y ′ , η ′ ) + y 1 q 1 (y ′ , η ′ ) − 1 − iµ + O(y 2 1 q 0 ).
Denote by R the set of all functions a ∈ C ∞ (T * X δ ) satisfying (with all derivatives)
a = O(x ∞ 1 ) + O(ξ ∞ 1 ) + O((1 − q 0 ) ∞ )
in a neighbourhood of K = {x 1 = ξ 1 = 1 − q 0 = 0}. We will also denote by OPR the h − ΨDOs on X δ with symbols of the form ∞ j=0 h j a j , where a j ∈ R do not depend on h. Let φ ∈ C ∞ (R), φ(σ) = 1 for |σ| ≤ 1/2, φ(σ) = 0 for |σ| ≥ 1. Given any 0 < ε ≪ 1, denote by A ε the h − ΨDO on X δ with symbol φ(
x 1 /h ε )φ((1 − q 0 )/h ε ). Clearly, if R ∈ OPR, we have RA ε , A ε R = O(h ∞ ) : L 2 (X δ ) → L 2 (X δ ).
It is shown in [12] (see Theorem 3.1) that there exists an exact symplectic map χ : T * X δ → T * X δ such that χ(x, ξ) = (y(x, ξ), η(x, ξ)) satisfies
y 1 = x 1 q 1 (x ′ , ξ ′ ) −1/3 + O(x 2 1 ) + O(x 1 (1 − q 0 )), η 1 = ξ 1 q 1 (x ′ , ξ ′ ) 1/3 + O(x 1 ) + O(ξ 1 (1 − q 0 )), (y ′ , η ′ ) = (x ′ , ξ ′ ) + O(x 1 ), (p • χ)(x, ξ) = q 1 (x ′ , ξ ′ ) 2/3 + O(x 1 ) (ξ 2 1 + x 1 − ζ(x ′ , ξ ′ )) (mod R) in a neighbourhood of K, where ζ(x ′ , ξ ′ ) = q 1 (x ′ , ξ ′ ) −2/3 + O(1 − q 0 ) (1 + iµ − q 0 (x ′ , ξ ′ )).
Thus, if U ⊂ T * X δ is a small neighbourhood of K, then χ sends U into itself. Using h− Fourier integral operators on X δ (h− FIOs) associated to the canonical relation
Λ = {(y, η, x, ξ) ∈ T * X δ × T * X δ : (y, η) = χ(x, ξ), (x, ξ) ∈ U }
one can transform the operator P into a simpler one, P ′ 0 , which can be written in the coordinates (x, ξ) as follows
P ′ 0 = D 2 x 1 + x 1 − L 1 (x ′ , D x ′ ; h) − iµL 2 (x ′ , D x ′ ; h) where L j (x ′ , ξ ′ ; h) = ∞ k=0 h k L (k) j (x ′ , ξ ′ ), j = 1, 2, with L (0) 1 (x ′ , ξ ′ ) = q 1 (x ′ , ξ ′ ) −2/3 + O(1 − q 0 ) (1 − q 0 (x ′ , ξ ′ )), L (0) 2 (x ′ , ξ ′ ) = q 1 (x ′ , ξ ′ ) −2/3 + O(1 − q 0 )
. More precisely, there exist zero-order elliptic (in U ) h − ΨDOs on X δ , A, A ′ , and a zero-order elliptic h− FIO on X δ , U , associated to Λ, such that if we set T = U A, T ′ = U A ′ , we have the relations (see Theorem 4.2 of [12]):
P T = T ′ P ′ 0 + T ′ R 0 , (2.8) ι * T = Q 1 ι * + hQ 2 ι * D x 1 + ι * V P ′ 0 + ι * R, (2.9) ι * D x 1 T = Q 1 ι * D x 1 + h Q 2 ι * + ι * V P ′ 0 + ι * R, (2.10)
where ι* denotes the restriction on x_1 = 0, Q_j, Q̃_j, j = 1, 2, are zero-order h-ΨDOs on ∂X, Q_1 and Q̃_1 being elliptic in a neighbourhood of {q_0 = 1}, V and Ṽ are zero-order h-ΨDOs on X_δ, and R_0, R, R̃ ∈ OPR. One can further simplify the operator P'_0 by making a new symplectic change of the tangential variables (
x ♯ , ξ ♯ ) = χ ♯ (x ′ , ξ ′ ) ∈ T * ∂X such that ξ ♯ n = −L (0) 1 (x ′ , ξ ′ ).
Then, in these coordinates the glancing manifold {q 0 = 1} is defined by ξ ♯ n = 0. Conjugating with a zero-order elliptic (in a neighbourhood of the glancing manifold) h−FIO operator on ∂X we get (2.8), (2.9) and (2.10) with new operators of the same type (which we will denote in the same way below) and P ′ 0 replaced by
P 0 = D 2 x 1 + x 1 + D x ♯ n − iµQ 0 (x ♯ , D x ♯ ) + Q(x ♯ , D x ♯ ; µ, h) where Q 0 (x ♯ , ξ ♯ ) > 0 in a neighbourhood of ξ ♯ n = 0, and Q = ∞ k=1 h k Q k (x ♯ , ξ ♯ ; µ).
Thus we get the model operator studied in Sections 5 and 6. Indeed, given a function f ∈ L^2(∂X), a parametrix u(x_1, x^♯) supported in 0 ≤ x_1 ≤ h^ε is constructed there such that
u|_{x_1=0} = Op_h(φ(ξ^♯_n/h^ε))f + O(h^∞)f,   ‖P_0u‖_{H^s((0,δ)×∂X)} ≤ C_M h^M ‖f‖_{L^2(∂X)}   (2.11)
for every s ≥ 0, where M ≫ 1 is an arbitrary integer independent of h. Hereafter, the Sobolev spaces H s will be equipped with the semi-classical norm. Moreover, by Theorem 6.6 the operator defined by
Nf := D_{x_1}u|_{x_1=0} satisfies the bound ‖N‖_{L^2(∂X)→H^s(∂X)} ≤ Ch^{ε/4}.   (2.12)
By (2.8), (2.9) and (2.10) (with P'_0 replaced by P_0) combined with (2.11) and (2.12) we obtain that the function ũ = Tu satisfies the bounds
‖Pũ‖_{H^s((0,δ)×∂X)} ≤ C_M h^M ‖f‖_{L^2(∂X)}   (2.13)
‖ũ|_{∂X} − (Q_1 + hQ_2N)f‖_{H^s(∂X)} ≤ C_M h^M ‖f‖_{L^2(∂X)}   (2.14)
‖D_{x_1}ũ|_{∂X}‖_{L^2(∂X)} ≤ Ch^{ε/4} ‖f‖_{L^2(∂X)}.   (2.15)
Given any function f ∈ L 2 (∂X), let v solve the equation
(h^2∆_X + 1 + iµ)v = 0 in X,  v = Op_h(φ((q_0 − 1)/h^ε))f on ∂X,   (2.16)
where the function φ is as above.
Let φ_1 ∈ C^∞_0(R) be such that φ_1 = 1 on supp φ. Since Q_1 is a zero-order h-ΨDO on ∂X, elliptic in a neighbourhood of {q_0 = 1}, there exists a zero-order h-ΨDO, Q^♭_1, elliptic on T*∂X, such that (Q^♭_1)^{−1} = O(1) and (Q^♭_1 − Q_1)Op_h(φ_1((q_0 − 1)/h^ε)) = O(h^∞) as operators on H^s(∂X), s ≥ 0. Set Z = Q^♭_1 + hQ_2N, Op_h(φ_1((q_0 − 1)/h^ε)) = O(h^{1−ε}) : H^s(∂X) → H^s(∂X).
Then, for h small enough the operator Q ♭ 1 + Z is invertible on H s (∂X) and
Q ♭ 1 + Z −1 = O(1) : H s (∂X) → H s (∂X).
Denote by u the parametrix above with
f = Op h (φ 1 ((q 0 − 1)/h ε )) (Q ♭ 1 + Z) −1 Op h (φ((q 0 − 1)/h ε )) f. We have f L 2 (∂X) ≤ O(1) f L 2 (∂X) and (Q 1 + hQ 2 N ) f = (Q ♭ 1 + hQ 2 N ) f + O(h ∞ )f = Op h (φ((q 0 − 1)/h ε )) f + Z 1 f + O(h ∞ )f where we have put Z 1 = Op h ((1 − φ 1 )((q 0 − 1)/h ε )) Z(Q ♭ 1 + Z) −1 Op h (φ((q 0 − 1)/h ε )) .
We need now the following
Lemma 2.3 For small h we have Z 1 = O(h ∞ ) : L 2 (Y ) → L 2 (Y ).
Proof. Given any integer m ≥ 1 we can write
Z(Q ♭ 1 + Z) −1 = I − Q ♭ 1 (Q ♭ 1 + Z) −1 = I − m k=0 Q ♭ 1 (−(Q ♭ 1 ) −1 Z) k (Q ♭ 1 ) −1 − Q ♭ 1 (−(Q ♭ 1 ) −1 Z) m+1 (I + (Q ♭ 1 ) −1 Z) −1 (Q ♭ 1 ) −1
where I denotes the identity. Hence, to prove the lemma it suffices to show that
Op h ((1 − φ 1 )((q 0 − 1)/h ε )) Q ♭ 1 (−(Q ♭ 1 ) −1 Z) k (Q ♭ 1 ) −1 Op h (φ((q 0 − 1)/h ε )) = O(h ∞ ) : L 2 (Y ) → L 2 (Y ) (2.17)
for every integer k ≥ 0, and all functions φ, φ 1 ∈ C ∞ 0 (R) independent of h and such that φ 1 = 1 on supp φ. For k = 0, (2.17) follows from well-known properties of the h − ΨDOs. It is easy also to see that (2.17) with k = 1 implies (2.17) for every k ≥ 1. On the other hand, to prove (2.17)
with k = 1 it suffices to prove it with N in place of Q ♭ 1 (−(Q ♭ 1 ) −1 Z) k (Q ♭ 1 ) −1 .
This property of the operator N , however, follows from Theorem 6.6. ✷ By (2.13), (2.14) and Lemma 2.3, we get
‖P(v − ũ)‖_{H^s((0,δ)×∂X)} ≤ C_M h^M ‖f‖_{L^2(∂X)}   (2.18)
‖(v − ũ)|_{∂X}‖_{H^s(∂X)} ≤ C_M h^M ‖f‖_{L^2(∂X)}   (2.19)
while (2.15) implies
‖D_{x_1}ũ|_{∂X}‖_{L^2(∂X)} ≤ Ch^{ε/4} ‖f‖_{L^2(∂X)}.   (2.20)
We will show that
‖D_{x_1}v|_{∂X}‖_{L^2(∂X)} ≤ Ch^{ε/4} ‖f‖_{L^2(∂X)}.   (2.21)
Denote by G_D the self-adjoint Dirichlet realization of the operator −∆_X on L^2(X). We have
v − ũ = E((v − ũ)|_{∂X}) + (h^2G_D − iµ)^{−1}P(v − ũ) + (h^2G_D − iµ)^{−1}(h^2∆_X + 1 + iµ)E((v − ũ)|_{∂X})
where E = O(h^{1/2}) : H^s(∂X) → H^{s+1/2}(X), s ≥ 0, is the extension map, (Ef)|_{∂X} = f, ‖f‖_{H^s(∂X)} ≤ O(h^{−1/2})‖Ef‖_{H^{s+1/2}(X)}.
By (2.18), (2.19), with D_ν = −ih∂_ν, we have
‖D_ν(v − ũ)‖_{L^2(∂X)} ≤ Ch^{1/2}‖E((v − ũ)|_{∂X})‖_{H^{3/2}(X)} + Ch^{1/2}‖(h^2G_D − iµ)^{−1}P(v − ũ)‖_{H^{3/2}(X)} + Ch^{1/2}‖(h^2G_D − iµ)^{−1}(h^2∆_X + 1 + iµ)E((v − ũ)|_{∂X})‖_{H^{3/2}(X)}
≤ C(1 + |µ|^{−1})‖(v − ũ)|_{∂X}‖_{H^1(∂X)} + Ch^{1/2}|µ|^{−1}‖P(v − ũ)‖_{H^{3/2}(X)} ≤ C_M h^{M−1} ‖f‖_{L^2(∂X)}   (2.22)
provided h^{1−ε} ≤ |µ| ≤ h^ε, where we have used the coercivity (ellipticity) of the operator G_D. Taking M big enough we deduce (2.21) from (2.20) and (2.22). Clearly, (2.21) implies (2.7). ✷
Some properties of the Airy function
It is well-known that the Airy function Ai(z) is an entire function of order 3 2 with simple zeros {ν j } ⊂ (−∞, 0), −ν j ∼ (3π/2) 2/3 j 2/3 , and satisfying the equation
(∂ 2 z − z)Ai(z) = 0. (3.1)
Differentiating (3.1) k times leads to the following equation for the derivatives of the Airy function,
Ai^{(k)}(z) = \frac{d^k Ai(z)}{dz^k},   (∂^2_z − z)Ai^{(k)}(z) = kAi^{(k−1)}(z).   (3.2)
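For completeness (this short verification is not written out in the paper), (3.2) follows by differentiating (3.1) k times with the Leibniz rule: since ∂^k_z(zAi(z)) = zAi^{(k)}(z) + kAi^{(k−1)}(z),

\[
0=\partial_z^{k}\big(Ai''(z)-zAi(z)\big)=Ai^{(k+2)}(z)-zAi^{(k)}(z)-kAi^{(k-1)}(z),
\]

which is exactly (∂^2_z − z)Ai^{(k)} = kAi^{(k−1)}.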
It is also known that the Airy function satisfies the identities
Ai(−z) = e iπ/3 Ai + (z) + e −iπ/3 Ai − (z), (3.3) Ai(−z) −1 = c ± 1 F (−z)Ai ± (z) + c ± 2 Ai ′ ± (z), (3.4)
where c ± j are some constants and we have put
Ai ± (z) = Ai(ze ±iπ/3 ), F (z) = Ai ′ (z) Ai(z) .
The functions Ai and Ai ± satisfy
Ai(z) = Ai(z), Ai + (z) = Ai − (z). (3.5)
In particular, this implies |Ai_+(z)| = |Ai_−(z)| for real z. For | arg z| < π we also have the formula
Ai(z) = exp(−(2/3)z^{3/2}) B(z),   (3.6)
B(z) = π^{−1} ∫_0^∞ e^{−t^2 z^{1/2}} cos(t^3/3) dt,
where z 1/2 is taken so that Re z 1/2 > 0, that is,
z 1/2 = |z| 1/2 exp i 1 2 arg z , z 3/2 = |z| 3/2 exp i 3 2 arg z .
Observe that
Re z^{1/2} ≥ \frac{|Im z|}{2|z|^{1/2}}.
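This elementary inequality can be checked directly (a one-line verification added here): writing z = |z|e^{iθ} with |θ| < π,

\[
\frac{|\mathrm{Im}\,z|}{2|z|^{1/2}}=\frac{|z|\,|\sin\theta|}{2|z|^{1/2}}=|z|^{1/2}\sin\tfrac{|\theta|}{2}\cos\tfrac{\theta}{2}\ \le\ |z|^{1/2}\cos\tfrac{\theta}{2}=\mathrm{Re}\,z^{1/2}.
\]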
The function B satisfies the asymptotic expansion
B(z) = z^{−1/4} Σ_{ℓ=0}^∞ (−1)^ℓ b_ℓ ξ^{−ℓ}   (3.7)
for |z| ≫ 1, | arg z| ≤ π − δ, 0 < δ ≪ 1, where ξ = 2 3 z 3/2 and b ℓ are strictly positive real numbers, b 0 = (2 √ π) −1 . In view of (3.6), (3.7) provides an asymptotic expansion for the Airy function Ai(z). Moreover (3.7) can be differentiated a finite number of times thus getting an asymptotic expansion for Ai (k) (z). In particular, we get that for | arg z| ≤ π − δ the function F (z) has the expansion
F(z) = −z^{1/2} Σ_{ℓ=0}^∞ b̃_ℓ ξ^{−ℓ},  |z| ≫ 1,   (3.8)
where b̃_0 = 1. Moreover, the function F^{(k)}(z) = \frac{d^k F(z)}{dz^k} has the expansion obtained by differentiating (3.8) k times. The behaviour of the functions Ai(z) and F(z) for z ∈ Λ_δ := C \ {| arg z| ≤ π − δ} is more complicated.
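As a quick numerical sanity check of the connection formulas (3.3) and (3.5) (this snippet is an illustration added here, not part of the original argument; it uses SciPy's Airy function, which accepts complex arguments):

import numpy as np
from scipy.special import airy

def Ai(z):
    # scipy.special.airy returns the tuple (Ai, Ai', Bi, Bi')
    return airy(z)[0]

rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.uniform(-2, 2) + 1j * rng.uniform(-2, 2)
    lhs = Ai(-z)
    # identity (3.3): Ai(-z) = e^{i pi/3} Ai_+(z) + e^{-i pi/3} Ai_-(z)
    rhs = (np.exp(1j * np.pi / 3) * Ai(z * np.exp(1j * np.pi / 3))
           + np.exp(-1j * np.pi / 3) * Ai(z * np.exp(-1j * np.pi / 3)))
    assert abs(lhs - rhs) < 1e-10 * max(1.0, abs(lhs))

# |Ai_+(x)| = |Ai_-(x)| for real x, as implied by (3.5)
x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(np.abs(Ai(x * np.exp(1j * np.pi / 3))),
                   np.abs(Ai(x * np.exp(-1j * np.pi / 3))))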
Lemma 3.1 For Im z ≠ 0 and every integer k ≥ 0, we have the bound
|F^{(k)}(z)| ≤ C_k |Im z|^{−k} (|z|^{1/2} + |Im z|^{−1}).   (3.9)
Proof. Given any z ∈ C with Im z ≠ 0, denote B(z) = {w ∈ C : |w − z| ≤ |Im z|/2}. Since the function F is analytic on B(z), by the Cauchy theorem we have
F (k) (z) ≤ C k |Im z| −k max w∈∂B(z) |F (w)|.
(3.10)
It follows from (3.10) that if (3.9) holds with k = 0, it holds for all k.
Since the function F (z) is analytic at z = 0, there exists a constant z 0 > 0 such that the bound (3.9) holds trivially for |z| ≤ z 0 . For | arg z| ≤ π − δ, |z| ≫ 1, it follows easily from (3.8). Therefore, we may suppose that z 0 ≤ |z| ≤ z 1 , z 1 > z 0 > 0 being constants, or z ∈ Λ δ , |z| ≫ 1. To deal with the first case we will use the Hadamard factorization theorem. Since the zeros of the Airy function are simple, we can write
Ai(z) = e^{C_1z + C_2} ∏_{j=1}^∞ (1 − z/ν_j) e^{z/ν_j}.
Hence we can write the function F in the form
F (z) = C 1 + ∞ j=1 (z − ν j ) −1 + ν −1 j . Since ν j is real; we have |z − ν j | −1 ≤ |Im z| −1 , while for |ν j | ≥ 2|z| we have |z − ν j | −1 ≤ 2|ν j | −1 .
Thus we obtain
|F (z)| ≤ |C 1 | + 2|z| j=1 |z − ν j | −1 + |ν j | −1 + |z| ∞ j=2|z| |z − ν j | −1 |ν j | −1 ≤ |C 1 | + 2|z| + 2|z||Im z| −1 + 2|z| ∞ j=1 |ν j | −2
which gives the desired bound for |F (z)| in this case.
In the second case we will use (3.3). Let −z ∈ Λ δ , |z| ≫ 1. Then | arg z| ≤ δ and if ξ = 2 3 z 3/2 , we have Im ξ = Im z(Re z) 1/2 (1 + O(δ)).
Hence |Im ξ| ≥ C δ |Im z||z| 1/2 , C δ > 0. (3.11) It suffices to consider the case Im z > 0 since the case Im z < 0 is similar. Then we have Im ξ > 0. In view of (3.7), the functions B ± (z) = z 1/4 e ∓iπ/12 B(e ±iπ/3 z) satisfy the asymptotics
B ± (z) = b 0 ± ib 1 ξ −1 + O(ξ −2 ), −zB ′ ± (z) = ± 3ib 1 2 ξ −1 + O(ξ −2 ),
where b 0 , b 1 > 0 are constants. In particular, we have
±Im B ± (z)B ′ ± (z) = 3b 0 b 1 2 |z| −5/2 1 + O(δ) + O(|z| −3/2 ) > 0. (3.12)
Let us see that (3.12) implies the inequality
|B + (z)| ≥ |B − (z)|.(3.13)
To this end, observe that the first derivative of the function
f (τ ) = |B + (Re z + iτ )| 2 − |B − (Re z + iτ ))| 2
is given by
f ′ (τ ) = 2Im B + (Re z + iτ )B ′ + (Re z + iτ ) − 2Im B − (Re z + iτ )B ′ − (Re z + iτ ) .
By (3.12) we get f ′ (τ ) > 0 as long as 0 ≤ τ ≤ δRe z and Re z ≫ 1. On the other hand, in view of (3.5) we have f (0) = 0. Hence f (τ ) ≥ 0 for τ ≥ 0, which proves (3.13). By (3.6) and (3.13) we have
\left|\frac{Ai_−(z)}{Ai_+(z)}\right| = e^{−2Im ξ} \left|\frac{B_−(z)}{B_+(z)}\right| ≤ e^{−2Im ξ}.   (3.14)
It is easy to see that the above asymptotics also lead to the bounds
Ai ′ − (z) Ai ′ + (z) ≤ C, Ai ′ + (z) Ai + (z) ≤ C|z| 1/2 (3.15)
with some constant C > 0. By (3.11), (3.14) and (3.15),
|F(−z)| ≤ \left|\frac{Ai'_+(z)}{Ai_+(z)}\right| \left(1 + \left|\frac{Ai'_−(z)}{Ai'_+(z)}\right|\right) \left(1 − \left|\frac{Ai_−(z)}{Ai_+(z)}\right|\right)^{−1} ≤ \frac{C|z|^{1/2}}{1 − e^{−2Im ξ}} ≤ \frac{C|z|^{1/2}}{\min\{1, 2Im ξ\}} ≤ C|z|^{1/2} + C|Im z|^{−1}. ✷
Given any integer k ≥ 0, set
Φ_k(z) = Ai(z)∂^k_z(Ai(z)^{−1}) = ∂_zΦ_{k−1}(z) − F(z)Φ_{k−1}(z)   (3.16)
where Φ_{−1} = 0. Clearly, Φ_0 = 1 and Φ_1 = −F.
Lemma 3.2 For Im z ≠ 0 and all integers k ≥ 1, ℓ ≥ 0, we have the bound
|∂^ℓ_zΦ_k(z)| ≤ C_{k,ℓ} |Im z|^{−ℓ} (|z|^{1/2} + |Im z|^{−1})^k.   (3.17)
Proof. Differentiating the identity (3.16) ℓ times we get
∂ ℓ z Φ k (z) = ∂ ℓ+1 z Φ k−1 (z) − ℓ j=0 c ℓ,j F (j) (z)∂ ℓ−j z Φ k−1 (z). (3.18)
It is easy to see by induction in k that (3.17) follows from (3.9). ✷ For t ≥ 0 and z ∈ C, | arg z| < π, set
Ψ_k(t, z) = \frac{Ai^{(k)}(t + z)}{Ai(z)},   Ψ^{(ℓ)}_k(t, z) = ∂^ℓ_zΨ_k(t, z).
Lemma 3.3 For Im z ≠ 0 and all integers k ≥ 0, ℓ ≥ 0, we have the bound
|Ψ^{(ℓ)}_k(0, z)| ≤ C_{k,ℓ} |Im z|^{−ℓ} (|z|^{1/2} + |Im z|^{−1})^k.   (3.19)
For t > 0, Im z ≠ 0 and all integers k ≥ 0, ℓ ≥ 0, we have the bound
|Ψ^{(ℓ)}_k(t, z)| ≤ C_{k,ℓ} |Im z|^{−ℓ} (|z|^{1/2} + |Im z|^{−1})^{k+1}   (3.20)
while for t ≥ |z| we have
|Ψ^{(ℓ)}_k(t, z)| ≤ C_{k,ℓ} |Im z|^{−ℓ} (|z|^{1/2} + |Im z|^{−1}) (t^{1/2} + |Im z|^{−1})^k e^{−t^{1/2}|Im z|/4}.   (3.21)
Proof. In view of (3.10) with Ψ_k in place of F, it suffices to prove these bounds with ℓ = 0. Furthermore, using (3.2) it is easy to see by induction in k that (3.9) implies the estimate
|Ai^{(k)}(z)| ≤ C_k (|z|^{1/2} + |Im z|^{−1})^k |Ai(z)|.   (3.22)
Hence
|Ψ_k(t, z)| ≤ C_k (t^{1/2} + |z|^{1/2} + |Im z|^{−1})^k |Ψ_0(t, z)|.   (3.23)
Thus, (3.23) implies that (3.19) and (3.21) with ℓ = 0, k ≥ 1, follow from (3.19) and (3.21) with ℓ = 0, k = 0. The same conclusion is still valid concerning the bound (3.20) as long as t ≤ 2|z|. For t ≥ 2|z|, (3.20) follows from (3.21) in view of the inequality t^{k/2}e^{−t^{1/2}|Im z|/4} ≤ C_k |Im z|^{−k}. Therefore, to prove the lemma we have to bound |Ψ_0|. Clearly, Ψ_0(0, z) = 1, which proves (3.19).
To bound |Ψ_0(t, z)| for t > 0, let us see that the Airy function satisfies the bounds
|Ai(z)| ≤ C⟨z⟩^{−1/4} e^{−(2/3)Re z^{3/2}},   (3.24)
|Ai(z)|^{−1} ≤ C⟨z⟩^{−1/4} (|z|^{1/2} + |Im z|^{−1}) e^{(2/3)Re z^{3/2}}.   (3.25)
Indeed, for | arg z| ≤ π − δ, (3.24) and (3.25) follow from (3.6) and (3.7), while for z ∈ Λ_δ they follow from (3.3) and (3.4) combined with Lemma 3.1. By (3.24) and (3.25),
|Ψ_0(t, z)| ≤ C (|z|^{1/2} + |Im z|^{−1}) e^{−ϕ(t,z)}   (3.26)
where
ϕ = (2/3)Re (z + t)^{3/2} − (2/3)Re z^{3/2} = ∫_0^t Re (z + τ)^{1/2} dτ ≥ \frac{1}{2}∫_0^t \frac{|Im z|}{|z + τ|^{1/2}} dτ ≥ \frac{t|Im z|}{2|z|^{1/2} + 2t^{1/2}}.   (3.27)
Hence ϕ ≥ 0 for t ≥ 0, while for t ≥ |z| we have ϕ ≥ \frac{1}{4}t^{1/2}|Im z|. Therefore, the desired bounds for |Ψ_0| follow from (3.26). ✷
Some properties of the h − Ψ DOs
Let Y be an n − 1 dimensional compact manifold without boundary or an open neighbourhood in R n−1 . In this section we will recall some useful criteria on a symbol a ′ y, η) ∈ T * Y for the h − Ψ DO, Op h (a), to be bounded on L 2 (Y ). We will make use of the analysis developed in Section 7 of [1] (see also Section 2 of [13]). We first have the following
Proposition 4.1 Let a ∈ T * Y satisfy the bounds ∂ α y a(y, η) ≤ a 0 (h)h −|α|/2 (4.1) for |α| ≤ n, where a 0 > 0 is a parameter. Then there is a constant C > 0 independent of h such that Op h (a) L 2 (Y )→L 2 (Y ) ≤ Ca 0 (h). (4.2)
This proposition follows for example from Proposition 2.1 of [13]. The next proposition can be derived from the analysis in Section 7 of [1].
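As a simple illustration of how Proposition 4.1 is applied later (an added remark, not a claim from the paper): for a cutoff a = χ(b/h^{ε}) with χ ∈ C^∞_0(R) fixed and b a fixed smooth function, such as the cutoffs χ^0_{ε/2} of Section 2 or φ(η_1/h^{ε}) of Section 6, each y-derivative costs at most a factor h^{−ε}, so

\[
|\partial_y^{\alpha} a| \le C_{\alpha} h^{-\varepsilon|\alpha|} \le C_{\alpha} h^{-|\alpha|/2} \qquad (\varepsilon \le 1/2),
\]

and (4.1) holds with a_0(h) = O(1); Proposition 4.1 then gives a bound O(1) on L^2(Y), uniformly in h.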
Proposition 4.2 Let a, b ∈ T * Y satisfy the bounds ∂ α y ∂ β η a(y, η) ≤ C α,β , (4.3) ∂ α y b(y, η) ≤ C α h −M 0 −δ|α| (4.4)
where 0 ≤ δ < 1, for all multi-indices α and β with constants C α , C α,β > 0 independent of h, and M 0 > 0 independent of h and α. Then for every integer M ≫ M 0 there is a constant C M > 0 independent of h such that
Op h (ab) − Op h M |α|=0 (−ih) |α| |α|! ∂ α η a∂ α y b L 2 (Y )→L 2 (Y ) ≤ C M h M (1−δ)/2 . (4.5)
Proof. In view of formula (7.15) of [1] the operator in the left-hand side of (4.5) whose norm we would like to bound is an h-psdo with symbol c(x, ξ, x, ξ), where the function c is given by
c(x, ξ, y, η) = e ihD ξ ·Dy a(x, ξ)b(y, η) − M |α|=0 (−ih) |α| |α|! ∂ α η a(x, ξ)∂ α y b(y, η)
where we have put D = −i∂. The inequality (7.17) of [1] together with (4.3) and (4.4) yield the estimate if M is taken large enough. ✷
|c(x, ξ, y, η)| ≤ C s,M h M |α|+|β|≤s D α ξ D β y (D ξ · D y ) M a(x, ξ)b(y, η) L 2 ≤ C s,M h M (1−δ)−M 0 −sδ
Parametrix construction for the model equation
Let the parameters h and µ be as in Section 2, h 1−2ε ≤ |µ| ≤ h ε , 0 < ε ≪ 1. Let also Y be as in Section 4. Consider the operator
P_0 = D^2_t + t + D_{y_1} + iµq(y, D_y) + h q̃(y, D_y; h, µ),  t > 0,
where D_t = −ih∂_t, D_y = −ih∂_y, y ∈ Y, the function q ∈ C^∞(T*Y), q ∈ S^0_0, is real-valued and does not depend on t, h and µ, satisfying 0 < C_1 ≤ q ≤ C_2, C_1 and C_2 being constants, and q̃ ∈ S^0_0 uniformly in h and µ. Let η = (η_1, η') be the dual variables of y = (y_1, y'). Let also the function φ be as in Section 2. We are going to build a parametrix, ũ, for the solution u of the equation
P 0 u = 0 in R + × Y, u = f 1 on Y,(5.1)
where f_1 is microlocally supported in the region G(ε) := {(µ, η_1) ∈ R^2 : |µ| + |η_1| ≤ 2h^ε}. We will first construct a parametrix in the region
G 1 (ε) := {(µ, η 1 ) ∈ R 2 : |µ| (|µ| + |η 1 |) ≤ h 1+ε }. (5.2)
More precisely, in this section we will construct a parametrix, u 1 , of the solution of the equation
(5.1) with f 1 = Op h φ(η 1 |µ|/h 1+ε ) f + O(h ∞ )f , f ∈ L 2 (Y ) being arbitrary.
The construction in the region G 2 (ε) := {(µ, η 1 ) ∈ R 2 : h 1+ε /|µ| ≤ |µ| + |η 1 | ≤ 2h ε } will be carried out in the next section.
We will be looking for u 1 in the form
u 1 = φ(t/h ε )Op h (A(t))g
where g ∈ L 2 (Y ) will be determined later on such that g L 2 (Y ) ≤ O(1) f L 2 (Y ) , and
A(t) = Σ_{k=0}^M a_k(y, η; h, µ) ψ_k(t, y, η; h, µ),
ψ_k = h^{k/3} Ψ_k(th^{−2/3}, (η_1 + iµq(y, η))h^{−2/3}),
Ψ k being the functions introduced in Section 3, M is an arbitrary integer, a 0 = φ 1 (η 1 |µ|/h 1+ε ), φ 1 ∈ C ∞ 0 (R) being such that φ 1 = 1 on supp φ, while a k , k ≥ 1, do not depend on the variable t and will be determined later on. Observe first that we have
P 0 Op h (A(t)) = Op h (D 2 t + t + η 1 + iµq(y, η) − ih∂ y 1 )A(t) +iµq(y, D y )Op h (A(t)) − iµOp h (qA(t)) + h q(y, D y )Op h (A(t)). (5.3)
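As a sanity check on this Airy-based ansatz (a computation added here; it is the k = 0 instance of the identity invoked next), write w = (η_1 + iµq(y, η))h^{−2/3}, so that ψ_0 = Ai(th^{−2/3} + w)/Ai(w). Since D^2_t = −h^2∂^2_t and Ai''(s) = sAi(s) by (3.1),

\[
D_t^2\,\psi_0=-h^{2}\,h^{-4/3}\,\frac{Ai''(th^{-2/3}+w)}{Ai(w)}
=-h^{2/3}\big(th^{-2/3}+w\big)\,\psi_0=-(t+\eta_1+i\mu q(y,\eta))\,\psi_0,
\]

so (D^2_t + t + η_1 + iµq(y, η))ψ_0 = 0, in agreement with (5.4) for k = 0.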
It is easy to see that (3.2) implies the identity
(D^2_t + t + η_1 + iµq(y, η)) Ψ_k(th^{−2/3}, (η_1 + iµq(y, η))h^{−2/3}) = −kh^{2/3} Ψ_{k−1}(th^{−2/3}, (η_1 + iµq(y, η))h^{−2/3})
and hence
(D^2_t + t + η_1 + iµq(y, η)) A(t) = −h Σ_{k=0}^{M−1} (k + 1) a_{k+1} ψ_k.   (5.4)
Using the identity ∂_zΨ_k(t, z) = Ψ_{k+1}(t, z) − F(z)Ψ_k(t, z)
we can also write
∂ y 1 Ψ k th −2/3 , (η 1 + iµq(y, η))h −2/3 = iµh −2/3 ∂ y 1 qΨ k+1 th −2/3 , (η 1 + iµq(y, η))h −2/3 −iµh −2/3 ∂ y 1 qF η 1 + iµq(y, η))h −2/3 Ψ k th −2/3 , (η 1 + iµq(y, η))h −2/3 . Hence ∂ y 1 A(t) = M k=0 ∂ y 1 a k − iµh −1 ∂ y 1 qF ♯ a k + iµh −1 ∂ y 1 qa k−1 ψ k +iµh −1 ∂ y 1 qa M ψ M +1 (5.5)
where a −1 = 0 and we have put
F^♯ = h^{1/3} F((η_1 + iµq(y, η))h^{−2/3}). Set ρ_1 = |η_1|^{1/2} + |µ|^{1/2} + h/|µ| < 1.
Lemma 5.1 For t = 0, all k ≥ 0 and multi-indices α, we have the bound
|∂^α_y ψ_k| ≤ C_{k,α} ρ_1^k.   (5.6)
For all t > 0, k ≥ 0 and multi-indices α, we have the bound
∂ α y ψ k ≤ C k,α h −1/3 ρ k 1 . (5.7)
Moreover, there exists a constant C > 0 such that for C(|µ| + |η 1 |) ≤ t ≤ 1 we have the bound
∂ α y ψ k ≤ C k,α h −1/3 e −t 1/2 |µ|/4h . (5.8)
We also have the bound
∂ α y F ♯ ≤ C α ρ 1 . (5.9)
Proof. It is easy to see by induction that
∂ α y Ψ k th −2/3 , (η 1 + iµq(y, η))h −2/3 = |α| j=0 c α,j (y, η) µ h 2/3 j Ψ (j) k th −2/3 , (η 1 + iµq(y, η))h −2/3 (5.10)
with some function c α,j independent of t, h and µ, c α,0 = 0 for |α| ≥ 1. Recall that q ≥ C 1 > 0. Now (5.6)-(5.8) follow from Lemma 3.3 and (5.10). The bound (5.9) follows from (3.9) and (5.10) applied with F ♯ in place of Ψ k . ✷
Set
E 1 (t) = iµ h M |α|=1 (−ih) |α| |α|! ∂ α η q∂ α y A(t), E 2 (t) = M |α|=0 (−ih) |α| |α|! ∂ α η q∂ α y A(t), E 1 (t) = iµ q(y, D y )Op h (A(t)) − iµ Op h (qA(t)) − h Op h (E 1 (t))
,
E 2 (t) = h q(y, D y )Op h (A(t)) − h Op h (E 2 (t))
.
Lemma 5.2 We have the identities
E j (t) = 2M k=0 k ℓ=0 k |α|=0 b (j) k,ℓ,α (y, η; h, µ)∂ α y a ℓ ψ k (5.11)
where the functions b^{(j)}_{k,ℓ,α} do not depend on a_ν, ψ_ν, and satisfy the bounds ∂^β_y b^{(j)}_{k,ℓ,α} = O_β(1) for all multi-indices β uniformly in µ and h.
Proof. Using the identity
Ψ (ℓ) k (t, z) = ℓ ν=0 γ ℓ,ν ∂ ℓ−ν z Ai(z) −1 Ai (k+ν) (t + z) = ℓ ν=0 γ ℓ,ν Φ ℓ−ν (z)Ψ k+ν (t, z)
together with (5.10), we get the identity,
∂ α y ψ k = |α| j=0 j ν=0 c α,j,ν (y, η) µ h j Φ ♯ j−ν ψ k+ν (5.12) where we have put Φ ♯ k = h k/3 Φ k (η 1 + iµq(y, η))h −2/3 .
As in the proof of Lemma 5.1, one can deduce from Lemma 3.2 that ∂ β y Φ ♯ k = O k,β (1). Therefore, using (5.12) we can write
h |α| ∂ α y A(t) = M k=0 |α 1 |+|α 2 |=|α| γ α 1 ,α 2 (h∂ y ) α 1 a k (h∂ y ) α 2 ψ k = M +|α| k=0 k ℓ=0 |α| |α 1 |=0
e k,ℓ,α 1 (y, η; h, µ)∂ α 1 y a ℓ ψ k (5.13) with functions e k,ℓ,α 1 independent of a k , ψ k , and satisfying the bounds ∂ β y e k,ℓ,α 1 = O β (1). Moreover, when |α| ≥ 1 we have c α,j,ν = 0 for j = 0 in (5.12), and hence in this case ∂ β y e k,ℓ,α 1 = O β (|µ|). Since (5.2) implies |µ| 2 ≤ h, it is easy to see that (5.13) implies (5.11). ✷
We let now the functions a k satisfy the equations
(k + 1)a k+1 = −i∂ y 1 a k + µh −1 ∂ y 1 qF ♯ a k − µh −1 ∂ y 1 qa k−1 + k ℓ=0 k |α|=0 b (1) k,ℓ,α + b (2)
k,ℓ,α ∂ α y a ℓ . (5.14)
Set
ρ 2 = |µ|ρ 1 h + |µ| h > 1.
Lemma 5.3
For all integers k ≥ 0 and all multi-indices α, we have the bound
∂ α y a k ≤ C k,α ρ k 2 . (5.15)
Proof. In view of Lemmas 5.1 and 5.2, differentiating (5.14) we get
∂ α y a k+1 = |α|+1 |α 1 |=0 O(ρ 2 2 )∂ α 1 y a k−1 + |α| |α 2 |=0 O(ρ 2 )∂ α 2 y a k + k ℓ=0 k+|α| |β|=0 O(1)∂ β y a ℓ .∂ α y (a k ψ k ) ≤ C k,α (ρ 1 ρ 2 ) k . (5.18)
For all t ≥ 0, k ≥ 0 and multi-indices α, we have the bounds
∂ α y (a k ψ k ) ≤ C k,α h −1/3 (ρ 1 ρ 2 ) k , (5.19) ∂ α y B(t) ≤ C M,α (ρ 1 ρ 2 ) M , (5.20)
Moreover, there exists a constant C > 0 such that for C(|µ| + |η 1 |) ≤ t ≤ 1 we have the bound
∂ α y (a k ψ k ) ≤ C k,α h −1/3 ρ k 2 e −t 1/2 |µ|/4h . (5.21)
Observe now that the condition (5.2) implies
ρ 1 ρ 2 ≤ C h |µ| + C |µ| h (|µ| + |η 1 |) 1/2 +C h |µ| + C |µ| h (|µ| + |η 1 |) ≤ O(h ε/2 ). (5.22)
Using Lemma 5.4 together with (5.22) we will prove the following Proposition 5.5 For all s ≥ 0, we have the bounds
P 0 u 1 H s (R + ×Y ) ≤ C s,M h M ε/2 g L 2 (Y ) , (5.23) Op h (A(0))g − Op h (a 0 )g L 2 (Y ) ≤ Ch ε/2 g L 2 (Y ) , (5.24) Op h (D t A(0))g L 2 (Y ) ≤ Ch ε g L 2 (Y ) . (5.25)
Proof. In view of (5.17) we can write On the other hand, since (5.2) implies |µ| + |η 1 | ≤ h 2ε , taking h small enough we can arrange that t ≥ C(|µ| + |η 1 |) as long as t ∈ supp D 2 t , φ(t/h ε ) . Therefore, we can use (5.21) to conclude that for t ∼ h ε we have the bounds ∂ α y D ℓ t A(t) = O α,ℓ e −ch −ε/2 , ∀α, ℓ, with some constant c > 0. Thus, Proposition 4.1 yields the bound Set Z = Op h (A(0) − a 0 ). Since the estimate (5.24) holds for every g ∈ L 2 (Y ), we have Z = O(h ε/2 ) : L 2 (Y ) → L 2 (Y ). Hence the operator I + Z is invertible on L 2 (Y ) for small h. Given any f ∈ L 2 (Y ), take now
P 0 u 1 = φ(t/h ε ) (Op h (B(t)) + E 1 (t) + E 2 (t)) g + D 2 t , φ(t/h ε ) Op h (A(t)) g. (5.26) By (5.19) we have ∂ α y D ℓ t A(t) = O α,ℓ h −1/3 , ∀α, ℓ, and hence by Proposition 4.2 we get the bound ∂ α y D ℓ t E j (t)g L 2 (R + ×Y ) ≤ C M,α,ℓ h M g L 2 (Y ) .∂ α y D ℓ t D 2 t , φ(t/h ε ) Op h (A(t)) g L 2 (R + ×Y ) ≤ C α,ℓ e −ch −ε/2 g L 2 (Y ) .g = (I + Z) −1 Op h φ(η 1 |µ|/h 1+ε ) f.
With this choice of g we have
u 1 | t=0 = Op h (A(0))g = Op h φ(η 1 |µ|/h 1+ε ) f + Z 1 f where we have put Z 1 = Op h (1 − φ 1 )(η 1 |µ|/h 1+ε ) (I + Z) −1 Op h φ(η 1 |µ|/h 1+ε ) .
Thus, to complete the parametrix construction in this case we have to prove the following
Lemma 5.6 For small h we have Z 1 = O(h ∞ ) : L 2 (Y ) → L 2 (Y ).
Proof. Given any integer m ≥ 1 we can write
(I + Z) −1 = m k=0 (−Z) k + (−Z) m+1 (I + Z) −1 .
Hence, to prove the lemma it suffices to show that
Op h (1 − φ 1 )(η 1 |µ|/h 1+ε ) Z k Op h φ(η 1 |µ|/h 1+ε ) = O(h ∞ ) : L 2 (Y ) → L 2 (Y ) (5.31)
for every integer k ≥ 0. Clearly, (5.31) holds trivially for k = 0. It is easy also to see that (5.31) with k = 1 implies (5.31) for every k ≥ 1. On the other hand, since
ZOp h φ(η 1 |µ|/h 1+ε ) = Op h (A(0) − a 0 )φ(η 1 |µ|/h 1+ε )
and φ 1 = 1 on supp φ, (5.31) with k = 1 follows from Proposition 4.2. ✷ Thus, by Proposition 5.5 we get that the parametrix u 1 has the following properties.
Theorem 5.7 For all s ≥ 0, we have the bounds
P 0 u 1 H s (R + ×Y ) ≤ C s,M h M ε/2 f L 2 (Y ) , (5.32) u 1 | t=0 − Op h φ(η 1 |µ|/h 1+ε ) f L 2 (Y ) ≤ O(h ∞ ) f L 2 (Y ) , (5.33) D t u 1 | t=0 L 2 (Y ) ≤ Ch ε f L 2 (Y ) . (5.34)
6 Parametrix construction in the region G 2 (ε)
In this section we will construct a parametrix, u 2 , of the solution of the equation (5.1) with
f 1 = Op h (φ 2 (η 1 ))f , where φ 2 ∈ C ∞ 0 (R) is such that on supp φ 2 we have |µ| |µ| + |η 1 | ≥ h 1−ε , (6.1) |µ| + |η 1 | ≤ O(h ε ). (6.2)
Let ρ be the solution to the equation
ρ 2 + η 1 + iµq(y, η) = 0
with Im ρ > 0. We will be looking for u 2 in the form
u 2 = Op h (A(t)) f, A(t) = φ(t/|ρ| 2 δ 1 )a(t, y, η; µ, h)e iϕ(t,y,η;µ)/h ,
where φ is the same function as in the previous section, δ 1 > 0 is a small constant to be fixed later on, a = φ 2 (η 1 ), ϕ = 0 for t = 0. The phase ϕ is independent of h and is of the form
ϕ = M k=1 t k ϕ k
where ϕ k do not depend on t, M ≫ 1 being an arbitrary but fixed integer. The amplitude a is of the form a = 0≤k+ν≤M h k t ν a k,ν
where the functions a k,ν do not depend on t. Note that the identity (5.3) still holds with the new function A = φ(t/|ρ| 2 δ 1 )e iϕ/h a. Moreover, we have the identity
e −iϕ/h (D 2 t + t + η 1 + iµq(y, η) − ih∂ y 1 )(e iϕ/h a) = −2ih∂ t ϕ∂ t a − h 2 ∂ 2 t a − ih∂ y 1 a + ((∂ t ϕ) 2 + ∂ y 1 ϕ + t − ρ 2 )a = −2ih 0≤k+ν≤2M −2 h k t ν ν j=0 (j + 1)(ν + 1 − j)ϕ ν+1−j a k,j+1 −h 0≤k+ν≤M −1 (ν + 1)(ν + 2)h k t ν a k−1,ν+2 − ih 0≤k+ν≤M h k t ν ∂ y 1 a k,ν +((∂ t ϕ) 2 + ∂ y 1 ϕ + t − ρ 2 )a. (6.3)
Let E j (t), E j (t), j = 1, 2 be defined as in the previous section with the new A. Given a multiindex α = (α 1 , ..., α n−1 ), set
g α (ϕ) = lim h→0 (−ih) |α| |α|! e −iϕ/h ∂ α y (e iϕ/h ) = 1 |α|! n−1 j=1 (∂ y j ϕ) α j .
The phase satisfies the eikonal equation
(∂ t ϕ) 2 + ∂ y 1 ϕ + t − ρ 2 + iµ M |α|=1 g α (ϕ) = R M (t) (6.4) where R M (t) = O(t M +1 ) as t → 0.
It is easy to see that we have the identities
(∂ t ϕ) 2 = 2M K=0 t K k+j=K (k + 1)(j + 1)ϕ k+1 ϕ j+1 , M |α|=1 g α (ϕ) = M 2 K=1 t K M j=1 k i ≥1,k 1 +...+k j =K |α i |=1 γ α 1 ,...,α j ,k 1 ,...,k j ∂ α 1 y ϕ k 1 ...∂ α j y ϕ k j
where γ α 1 ,...,α j ,k 1 ,...,k j are constants. Thus, if we choose ϕ k satisfying the equations
ϕ 2 1 − ρ 2 = 0, (6.5) k+j=K (k + 1)(j + 1)ϕ k+1 ϕ j+1 + ∂ y 1 ϕ K + ǫ K = −iµ M j=1 k i ≥1,k 1 +...+k j =K |α i |=1 γ α 1 ,...,α j ,k 1 ,...,k j ∂ α 1 y ϕ k 1 ...∂ α j y ϕ k j , K ≥ 1,(6.6)
where ǫ 1 = 1, ǫ K = 0 for K ≥ 2, then ϕ satisfies the equation (6.4) with
R M (t) = 2M K=M +1 t K k+j=K (k + 1)(j + 1)ϕ k+1 ϕ j+1 +iµ M 2 K=M +1 t K M j=1 k i ≥1,k 1 +...+k j =K |α i |=1 γ α 1 ,...,α j ,k 1 ,...,k j ∂ α 1 y ϕ k 1 ...∂ α j y ϕ k j .
Clearly, ϕ 1 = ρ is a solution of (6.5). Then, given ϕ 1 , ..., ϕ K , K ≥ 1, we can determine ϕ K+1 uniquely from (6.6).
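For illustration (a worked instance added here, with the constants γ left symbolic as in (6.6)), the first step of the recursion reads: taking K = 1 in (6.6), the quadratic term is 4ϕ_1ϕ_2, ǫ_1 = 1, and the right-hand side only involves ϕ_1 = ρ, so

\[
\varphi_2=-\frac{1}{4\rho}\Big(\partial_{y_1}\rho+1+i\mu\sum_{|\alpha|=1}\gamma_{\alpha,1}\,\partial_y^{\alpha}\rho\Big),
\]

which is O(|ρ|^{−1}), consistent with the bound (6.7) for k = 2.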
Lemma 6.1 For all integers k ≥ 2 and all multi-indices α we have the bounds
|∂ α y ϕ k | ≤ C k,α |ρ| 3−2k ,(6.7)
|Im ∂ α y ϕ k | ≤ C k,α |ρ| 2−2k Im ρ. (6.8)
We also have the bound |∂ α y (|ρ| −2 )| ≤ C α |ρ| −2 . (6.9)
Moreover, if 0 < t ≤ δ 1 |ρ| 2 with a constant δ 1 > 0 small enough, we have Im ϕ ≥ t Im ρ/2. (6.10)
Proof. The bound (6.7) with k = 1 follows easily by induction in |α| from the identity |α 1 |+|α 2 |=|α| γ α 1 ,α 2 ∂ α 1 y ρ∂ α 2 y ρ = iµ∂ α y q(y, η)
for |α| ≥ 1, γ α 1 ,α 2 = 0 being some constants, together with the fact that µ = O(|ρ| 2 ). The proof of (6.9) is similar, using that |ρ| 2 = η 2 1 + µ 2 q(y, η) 2 together with the identity
|α 1 |+|α 2 |=|α| γ α 1 ,α 2 ∂ α 1 y (|ρ| −2 )∂ α 2 y (|ρ| 2 ) = 0
for |α| ≥ 1. To prove (6.7) for all k ≥ 2 and all multi-indices α we will proceed by induction in k + |α|. Suppose first that (6.7) holds for all k ≤ K. Then the right-hand side of (6.6) is
M j=1 O(|ρ| 3j−2K ) = O(|ρ| 3−2K )
. Thus by (6.6) we get that ρϕ K+1 = O(|ρ| 2−2K ), which is the desired bound for ϕ K+1 . To bound ∂ α y ϕ K+1 we apply the operator ∂ α y to the equation (6.6) and proceed in the same way. The proof of (6.8) is similar, using that |µ| ≤ C|ρ|Im ρ together with the inequality
|Im (z 1 ...z k )| ≤ C k |z 1 |...|z k | k j=1 |Im z j | |z j | .
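This last inequality (whose short proof is omitted in the text) follows by induction on k from the two-factor case: for z_1, z_2 ∈ C,

\[
|\mathrm{Im}(z_1z_2)|=|\mathrm{Re}\,z_1\,\mathrm{Im}\,z_2+\mathrm{Im}\,z_1\,\mathrm{Re}\,z_2|
\le |z_1||z_2|\Big(\frac{|\mathrm{Im}\,z_1|}{|z_1|}+\frac{|\mathrm{Im}\,z_2|}{|z_2|}\Big).
\]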
To prove (6.10) we use (6.8) to obtain, for 0 < t ≤ δ 1 |ρ| 2 ,
Im ϕ = M k=1 t k Im ϕ k ≥ t Im ρ 1 − C M −1 k=0 t k |ρ| −2k ≥ t Im ρ (1 − O(δ 1 )) ≥ t Im ρ/2 provided δ 1 is taken small enough. ✷ Set E 1 (t) = iµ h M |α|=1 (−ih) |α| |α|! ∂ α η q e −iϕ/h ∂ α y (e iϕ/h a) − g α (ϕ)a , E 2 (t) = M |α|=0 (−ih) |α| |α|! ∂ α η qe −iϕ/h ∂ α y (e iϕ/h a).
Lemma 6.2 We have the identities
E j (t) = k+ν≤M (M +1) h k t ν M |α|=0 k k ′ =0 ν ν ′ =0 b (j) α,k,k ′ ,ν,ν ′ ∂ α y a k ′ ,ν ′ (6.11)
where the functions b (j) α,k,k ′ ,ν,ν ′ do not depend on t, h and the functions a k,ν , and satisfy the bounds
∂ β y b (j) α,k,k ′ ,ν,ν ′ ≤ C β |ρ| −2ν+2ν ′ (6.12)
for every multi-index β.
Proof. We will first prove by induction in |α| the identity
e −iϕ/h (−ih∂ y ) α (e iϕ/h ) = |α| k=0 M |α| ν=0
h k t ν c α,k,ν (6.13) with functions c α,k,ν independent of t, h and satisfying the bounds ∂ β y c α,k,ν ≤ C β |ρ| −2ν (6.14)
for every multi-index β. Let α = α 1 + α 2 with |α 1 | = 1 and suppose (6.13) fulfilled with α 2 . Then we have
e −iϕ/h (−ih∂ y ) α 1 +α 2 (e iϕ/h ) = e −iϕ/h (−ih∂ y ) α 1 e iϕ/h |α 2 | k=0 M |α 2 | ν=0 h k t ν c α 2 ,k,ν = ∂ α 1 y ϕ |α 2 | k=0 M |α 2 | ν=0 h k t ν c α 2 ,k,ν − i |α 2 | k=0 M |α 2 | ν=0 h k+1 t ν ∂ α 1 y c α 2 ,k,ν = |α 2 | k=0 M |α 2 |+M ν=0 h k t ν ν ℓ=1 ∂ α 1 y ϕ ℓ c α 2 ,k,ν−ℓ − i |α 2 |+1 k=0 M |α 2 | ν=0 h k t ν ∂ α 1 y c α 2 ,k−1,ν .
Hence (6.13) holds for α 1 + α 2 with
c α 1 +α 2 ,k,ν = ν ℓ=1 ∂ α 1 y ϕ ℓ c α 2 ,k,ν−ℓ − i∂ α 1 y c α 2 ,k−1,ν . (6.15)
It follows from (6.7) and (6.15) that if (6.14) holds with α 2 , it holds with α 1 + α 2 , which proves the assertion. Using (6.13) we can write
e −iϕ/h (−ih∂ y ) α (e iϕ/h a) = |α 1 |+|α 2 |=|α| γ α 1 ,α 2 e −iϕ/h (−ih∂ y ) α 1 (e iϕ/h )(−ih∂ y ) α 2 a = |α| k=0 M |α| ν=0 h k t ν |α 1 |+|α 2 |=|α| γ α 1 ,α 2 c α 1 ,k−|α 2 |,ν (−i∂ y ) α 2 a.
It follows from this identity and (6.14) that the functions E j are of the form
E j (t) = M k=0 M 2 ν=0 M |α|=0 h k t ν c (j) α,k,ν ∂ α y a (6.16) with functions c (j)
α,k,ν independent of t, h and a, and satisfying the bounds ∂ β y c (j) α,k,ν = O β |(ρ| −2ν ), ∀β. Now (6.11) follows from (6.16) with
b (j) α,k,k ′ ,ν,ν ′ = c (j) α,k−k ′ ,ν−ν ′ .
✷
We let now the functions a k,ν satisfy the equations 2i ν j=0 (j + 1)(ν + 1 − j)ϕ ν+1−j a k,j+1 + (ν + 1)(ν + 2)a k−1,ν+2 + i∂ y 1 a k,ν
= 2 j=1 M |α|=0 k k ′ =0 ν ν ′ =0 b (j) α,k,k ′ ,ν,ν ′ ∂ α y a k ′ ,ν ′ ,(6.17)
a 0,0 = φ 2 (η 1 ), a k,0 = 0 for k ≥ 1, a −1,ν = 0, ν ≥ 0. Let K, J ≥ 0 be any integers. Now it is clear that, given a k,ν for k ≤ K, ∀ν ≥ 0, and a K+1,ν for ν ≤ J, we can determine a K+1,J+1 from (6.17). Therefore, by (6.17) we can find all a k,ν . Moreover, using (6.7) and (6.12) one can easily prove the following Lemma 6.3 For all integers k, ν ≥ 0 and all multi-indices α we have the bounds
∂ α y a k,ν ≤ C k,ν,α |ρ| −3k−2ν . (6.18)
In view of (6.3) and (6.11), in this case we still have the identity (5.17) with a function B of the form
B(t) = e iϕ/h φ(t/|ρ| 2 δ 1 )B 1 (t) + B 2 (t),
where Proof. Note first that the condition (6.1) implies
B 1 (t) = −2ih M +1≤k+ν≤2M −2 h k t ν ν j=0 (j + 1)(ν + 1 − j)ϕ ν+1−j a k,j+1 +h k+ν=M (ν + 1)(ν + 2)h k t ν a k−1,ν+2 + R M (t)a + 2 j=1 M +1≤k+ν≤M (M +1) h k t ν M |α|=0 k k ′ =0 ν ν ′ =0 b (j) α,k,k ′ ,ν,ν ′ ∂ α y a k ′ ,ν ′ , B 2 (t) = D 2 t − ih∂ y 1 , φ(t/|ρ| 2 δ 1 ) e iϕ/h a + iµ h M |α|=1 (−ih) |α| |α|! ∂ α η q ∂ α y (φe iϕ/h a) − φ∂ α y (e iϕ/h a) + M |α|=0 (−ih) |α| |α|! ∂ α η q ∂ α y (φe iϕ/h a) − φ∂ α y (e iϕ/h a) .h |ρ| 3 ≤ C 1 h |µ||ρ| ≤ C 2 h ε (6.21)
with some constants C 1 , C 2 > 0. By (6.7), (6.18) and (6.21) we have, for 0 ≤ t ≤ δ 1 |ρ| 2 ,
h k t ν e iϕ/h a k,ν ≤ C k,ν h |ρ| 3 k t |ρ| 2 ν e −tIm ρ/2h ≤ C k,ν h |ρ| 3 k h |µ||ρ| ν ≤ C k,ν h ε(k+ν) (6.22)
where we have used that |ρ|Im ρ ≥ C|µ| with some constant C > 0. In the same way, since e −iϕ/h (h∂ y ) α (e iϕ/h ) = O α (1) for 0 ≤ t ≤ 1, one can get that for any multi-index α and for
0 ≤ t ≤ δ 1 |ρ| 2 , h k t ν (h∂ y ) α e iϕ/h a k,ν ≤ C α,k,ν h ε(k+ν) . (6.23)
It follows easily from (6.23) that, for 0 ≤ t ≤ δ 1 |ρ| 2 ,
(h∂ y ) α e iϕ/h B 1 (t) ≤ C α h εM . (6.24)
On the other hand, for δ 1 2 |ρ| 2 ≤ t ≤ δ 1 |ρ| 2 , we have
e iϕ/h ≤ e −δ 1 |ρ| 2 Im ρ/4h ≤ e −c 1 |ρ||µ|/h ≤ e −c 2 h −ε (6.25)
with some constants c 1 , c 2 > 0. In view of (6.9) we have ∂ α y φ(t/|ρ| 2 δ 1 ) = O α (1), ∀α, and ∂ ℓ t φ(t/|ρ| 2 δ 1 ) = O ℓ (|µ| −ℓ ) = O ℓ (h −ℓ ), ∀ℓ. Therefore, by (6.23) and (6.25) we obtain
∂ α y B 2 (t) ≤ C α e −ch −ε (6.26)
with some constant c > 0. Thus (6.19) follows from (6.24) and (6.26).
To prove (6.20) we need to improve the estimate (6.23) when |α| ≥ 1. To this end, observe that by Lemma 6.1 we have ∂ α y ϕ = O α (t|ρ|) = O α (|ρ| 3 ), ∀α, for 0 ≤ t ≤ δ 1 |ρ| 2 . Therefore, by induction in |α| one easily gets
e −iϕ/h ∂ α y (e iϕ/h ) ≤ C α |ρ| 3 h |α| + C α . (6.27)
By (6.2), (6.10) and (6.27), for 0 ≤ t ≤ δ 1 |ρ| 2 ,
∂ α y (e iϕ/h ) ≤ C α |ρ| 3 h |α| + C α ≤ C α h −(1−3ε)|α| . (6.28)
On the other hand, by (6.18) we have ∂ α y a = O α (1) for 0 ≤ t ≤ δ 1 |ρ| 2 . Therefore, (6.20) follows from (6.28). ✷ Lemma 6.4 implies the following Proposition 6.5 For all s ≥ 0, we have the bounds
P 0 u 2 H s (R + ×Y ) ≤ C s,M h M ε/2 f L 2 (Y ) , (6.29) D t u 2 | t=0 L 2 (Y ) ≤ Ch ε f L 2 (Y ) .((D α y D β t B(t)) = O α,β (h M ε−ℓ ) : L 2 (Y ) → L 2 (Y ),D β t E j (t) = O α,β (h M ε−ℓ ) : L 2 (Y ) → L 2 (Y ), ∀α, β.
This implies (6.29) in view of the identity (5.17).
To prove (6.30), observe that
D t u 2 | t=0 = Op h ρ − ih M −1 k=0
h k a k,1 f.
In view of (6.2) and (6.7), we have ∂ α y ρ = O α (|ρ|) = O α (h ε ), and hence by Proposition 4.1 we get Op h (ρ) = O α (h ε ) : L 2 (Y ) → L 2 (Y ). Furthermore, by (6.18) we also have h k+1 ∂ α y a k,1 = O α (|ρ|) = O α (h ε ), and we apply once again Proposition 4.1 to get (6.30). ✷
To complete the construction of our parametrix u we will consider two cases. Case 1. h^{(1+ε)/2} ≤ |µ| ≤ h^ε, 0 < ε ≪ 1. Then the condition (6.1) is fulfilled for all η_1. We take u = u_2, where u_2 is the parametrix constructed above with φ_2(η_1) = φ(η_1/h^ε). Clearly the condition (6.2) is fulfilled as long as η_1 ∈ supp φ_2.
Case 2. h 1−2ε ≤ |µ| ≤ h (1+ε)/2 . Then (µ, η 1 ) ∈ G 1 (ε) as long as η 1 ∈ supp φ(η 1 |µ|/h 1+ε ). We take u = u 1 + u 2 , where u 1 is the parametrix constructed in Section 5 and u 2 is the parametrix constructed in Section 6 with φ 2 (η 1 ) = φ(η 1 /h ε )−φ(η 1 |µ|/h 1+ε ). Clearly η 1 = O(h ε ) on supp φ 2 , and hence the condition (6.2) is fulfilled in this case. Moreover, if (µ, η 1 ) ∈ G 2 (ε), then |µ| |µ| + |η 1 | ≥ |µ| 1/2 h (1+ε)/2 ≥ h 1−ε/2 . Hence, with this choice of the function φ 2 , the condition (6.1) is satisfied (with ε/2 in place of ε) as long as η 1 ∈ supp φ 2 .
In both cases the operator N defined by N f := D t u| t=0 provides a parametrix for the DN map f → D t u| t=0 , where u is the solution to the equation (5.1) with u| t=0 = Op h (φ(η 1 /h ε ))f . It follows from Theorem 5.7 and Proposition 6.5 that u and N have the following properties. Theorem 6.6 For all s ≥ 0, we have the bounds
P 0 u H s (R + ×Y ) ≤ C s,M h M ε/2 f L 2 (Y ) , (6.31) u| t=0 − Op h (φ(η 1 /h ε )) f L 2 (Y ) ≤ O(h ∞ ) f L 2 (Y ) , (6.32) N f L 2 (Y )
≤ Ch ε/2 f L 2 (Y ) , (6.33)
Op h ((1 − φ 1 )(η 1 /h ε )) N f L 2 (Y ) ≤ O(h ∞ ) f L 2 (Y ) ,(6.34)
where φ 1 ∈ C ∞ 0 (R) is independent of h and µ, and φ 1 = 1 on supp φ.
Note that the estimate (6.34) follows from Proposition 4.2 in the same way as in the proof of Lemma 5.6.
Eigenvalue-free regions
In this section we will study the problem h 2 ∇c 1 (x)∇ + zn 1 (x) u 1 = 0 in Ω, h 2 ∇c 2 (x)∇ + zn 2 (x) u 2 = 0 in Ω,
u 1 = u 2 , c 1 ∂ ν u 1 = c 2 ∂ ν u 2 on Γ,(7.1)
where 0 < h ≪ 1, z = 1 + i Im z, 0 < |Im z| ≤ 1. Denote by N j (h, z), j = 1, 2, the Dirichlet-to-Neumann map corresponding to the Laplacian n j (x) −1 ∇c j (x)∇ introduced in Section 2 (with µ = Im z). In this section we will prove the following Theorem 7.1 Under the conditions of Theorem 1.1, given any 0 < ε ≪ 1 there is h 0 (ε) > 0 so that the operator
T (h, z) = c 1 N 1 (h, z) − c 2 N 2 (h, z) : H 1 (Γ) → L 2 (Γ) is invertible for 0 < h ≤ h 0 , |Im z| ≥ h 1−ε .
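Before the proof, it may help to record the elementary algebra behind the factorization (7.2) used below (spelled out here; the computation is not written out in the text). Since ρ_j solves ρ_j^2 = zm_j − r_0 and m_j = n_j/c_j on Γ, so that c_j^2ρ_j^2 = zc_jn_j − r_0c_j^2,

\[
c_1\rho_1-c_2\rho_2=\frac{c_1^2\rho_1^2-c_2^2\rho_2^2}{c_1\rho_1+c_2\rho_2}
=\frac{z\,(c_1n_1-c_2n_2)-r_0\,(c_1^2-c_2^2)}{c_1\rho_1+c_2\rho_2},
\]

and the conditions (1.2) and (1.3) are precisely what keeps this numerator non-vanishing; this is what lies behind the two-sided bound (7.3).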
Proof. We may suppose that |Im z| ≤ h^ε since for h^ε ≤ |Im z| ≤ 1 the theorem is proved in [13]. Let ∆_Γ be the negative Laplace-Beltrami operator on Γ with the Riemannian metric induced by the Euclidean one in R^d. Denote by r_0(x', ξ') the principal symbol of −∆_Γ written in the coordinates (x', ξ') ∈ T*Γ. Set Σ_j(ε) = {(x', ξ') ∈ T*Γ : |r_0 − m_j| ≤ h^{ε/2}}, where m_j denotes the restriction on Γ of the function n_j/c_j. It is easy to see that the conditions (1.2) and (1.3) imply Σ_1(ε) ∩ Σ_2(ε) = ∅, provided h is taken small enough. Throughout this section, ρ_j, j = 1, 2, will denote the solution to the equation
ρ^2 + r_0(x', ξ') − zm_j(x') = 0
with Im ρ > 0. Observe that
c_1ρ_1 − c_2ρ_2 = \frac{c(x')(c_0(x')r_0(x', ξ') − z)}{c_1ρ_1 + c_2ρ_2}   (7.2)
where c and c_0 are the restrictions on Γ of the functions c_1n_1 − c_2n_2 and (c_1^2 − c_2^2)/(c_1n_1 − c_2n_2), respectively. Clearly, under the conditions of Theorem 1.1, we have c(x') ≠ 0, ∀x' ∈ Γ. Moreover, (1.2) implies c_0 ≡ 0, while (1.3) implies c_0(x') < 0, ∀x' ∈ Γ. Hence, under the conditions of Theorem 1.1, c_0(x')r_0(x', ξ') − z stays away from zero on T*Γ as |Im z| → 0. It is easy to see that |ρ_j| ≥ Const > 0 on Σ_{3−j}(ε). Let χ^{(j)}_ε ∈ S^0_{ε/2} be supported in Σ_j(ε) and equal to 1 on an O(h^{ε/2}) neighbourhood of {r_0 = m_j}. Then we have ρ̃_j = (1 − χ^{(j)}_ε)ρ_j ∈ S^1_{ε/2}. By (7.2) we also have
C_1 r_0^{k/2} ≤ |c_1ρ_1 − c_2ρ_2| ≤ C_2 r_0^{k/2},  C_2 > C_1 > 0,   (7.3)
where k = −1 if (1.2) holds, k = 1 if (1.3) holds. Since χ^{(j)}_ε ρ_j = O(h^{ε/4}), (7.3) remains valid with ρ̃_j in place of ρ_j. Using this we will prove the following
(n − 1 )/ 2 .
12Similarly, for all multi-indices α and β, we have|∂ α x ∂ β y c(x, ξ, y, η)| ≤ C s,M,α,β h M (1−δ)−M 0 −sδ−|β|δ .
(x, ξ, x, ξ)| ≤ C s,M,α h M (1−δ)−M 0 −sδ−|α|δ .
some ℓ > 0 depending only on the dimension, we concludeOp h (c(x, ξ, x, ξ)) L 2 →L 2 ≤ C M h M (1−δ)−M 0 −ℓδ ≤ C M h M (1−δ)/2 (4.9)
5.15) is trivially fulfilled for k = 0, it is easy to see by induction in k that (5.16) implies (5.15) for all k.✷ With this choice of the functions a k the identity (5.3) becomes P 0 Op h (A(t)) = Op h (B(t)) + E 1 (t) + E 2 (t) (5.17) where B(t) = h(M + 1)a M +1 ψ M + µ∂ y 1 qa M ψ M +1 ℓ,α (y, η; h, µ)∂ α y a ℓ ψ k . Combining Lemmas 5.1, 5.2 and 5.3 leads to the following Lemma 5.4 For t = 0, all k ≥ 0 and multi-indices α, we have the bound
5.20) and (5.22) we have ∂ α y D ℓ t B(t) = O α,ℓ h M ε/2 , ∀α, ℓ, and hence by Proposition 4.1 we get the bound ∂ α y D ℓ t Op h (B(t)) g L 2 (R + ×Y ) ≤ C M,α,ℓ h M ε/2 g L 2 (Y ) . (5.28)
5.23) follows from (5.26)-(5.29) by taking M big enough, depending on ε. Since ψ 0 = 1 for t = 0, the bound (5.24) follows from (5.18), (5.22) and Proposition 4.1. The proof of (5.25) is similar, in view of the identity h∂ t A(t) and (5.30), we have ∂ α y D t A(0) = O α (ρ 1 ), ∀α. Therefore, since ρ 1 = O(h ε ), we get (5.25) by Proposition 4.1. ✷
For all multi-indices α we have the bounds
(t) ≤ C α h εM −|α| , (6.19)
∂ α y A(t) ≤ C α h −(1−3ε)|α| . (6.20)
6.30) Proof. By Proposition 4.1 and (6.19), there is ℓ > 0 dpending only on the dimension such that Op h
∀α, β, while by Proposition 4.2 and (6.20) we have D α y
Acknowledgements. I would like to thank Vesselin Petkov for some very useful discussions and suggestions.
The estimate (7.4) with f replaced by Op_h(1 − χ)f is proved, under the conditions (1.2) and (1.3), in Section 5 of [13] (see also [11]). Therefore, to prove (7.4) it suffices to show that ... in place of χ follows from the estimate (2.7) of Theorem 2.2, while (7.5) with χ − χ_ε in place of χ follows from the estimates (2.4) and (2.5) of Theorem 2.1. ✷
Thus we have reduced the problem to that one of inverting the operator A = Op_h(c_1ρ̃_1 − c_2ρ̃_2). This, however, is much easier since the symbol c_1ρ̃_1 − c_2ρ̃_2 ∈ S^k_{ε/2} is elliptic in view of (7.3). Hence (c_1ρ̃_1 − c_2ρ̃_2)^{−1} ∈ S^{−k}_{ε/2} and there exists an inverse, which after taking h small enough becomes (7.6). Clearly, (7.6) implies the invertibility of the operator T in the desired region. ✷
Spectral asymptotics in semi-classical limit. M Dimassi, J Sjöstrand, London Mathematical Society. 268Cambridge University PressLecture Notes SeriesM. Dimassi and J. Sjöstrand, Spectral asymptotics in semi-classical limit, London Math- ematical Society, Lecture Notes Series, 268, Cambridge University Press, 1999.
Upper bound for the counting function of interior transmission eigenvalues. M Dimassi, V Petkov, preprint 2013M. Dimassi and V. Petkov, Upper bound for the counting function of interior transmission eigenvalues, preprint 2013.
The interior transmission problem and bounds of transmission eigenvalues. M Hitrik, K Krupchyk, P Ola, L Päivärinta, Math. Res. Lett. 18M. Hitrik, K. Krupchyk, P. Ola and L. Päivärinta, The interior transmission prob- lem and bounds of transmission eigenvalues, Math. Res. Lett. 18 (2011), 279-293.
Boundary problems for wave equations with glancing and gliding rays. R Melrose, M Taylor, unpublished manuscriptR. Melrose and M. Taylor, Boundary problems for wave equations with glancing and gliding rays, unpublished manuscript.
Application of elliptic theory to the isotropic interior transmission eigenvalue problem. E Lakshtanov, B Vainberg, Inverse Problems. 29104003E. Lakshtanov and B. Vainberg, Application of elliptic theory to the isotropic interior transmission eigenvalue problem, Inverse Problems 29 (2013), 104003.
F Olver, Asymptotics and Special Functions. New York, LondonAcademic PressF. Olver, Asymptotics and Special Functions, Academic Press, New York, London, 1974.
Weyl asymptotics of the transmission eigenvalues for a constant index of refraction, Inverse problems and imagining. H Pham, P Stefanov, 8H. Pham and P. Stefanov, Weyl asymptotics of the transmission eigenvalues for a con- stant index of refraction, Inverse problems and imagining, 8(3) (2014), 795-810.
Spectral analysis of interior transmission eigenvalues. L Robbiano, Inverse Problems. 29104001L. Robbiano, Spectral analysis of interior transmission eigenvalues, Inverse Problems 29 (2013), 104001.
Counting function for interior transmission eigenvalues. L Robbiano, preprintL. Robbiano, Counting function for interior transmission eigenvalues, preprint 2013.
Weyl law for semi-classical resonances with randomly perturbed potentials, Memore de la SMF. J Sjöstrand, 136J. Sjöstrand, Weyl law for semi-classical resonances with randomly perturbed potentials, Memore de la SMF, 136 (2014).
Asymptotics of the number of the interior transmission eigenvalues. V Petkov, G Vodev, J. Spectral Theory. to appearV. Petkov and G. Vodev, Asymptotics of the number of the interior transmission eigenvalues, J. Spectral Theory, to appear.
Resonances near the real axis for transparent obstacles. G Popov, G Vodev, Comm. Math. Phys. 307G. Popov and G. Vodev, Resonances near the real axis for transparent obstacles, Comm. Math. Phys. 307 (1999), 411-438.
Transmission eigenvalue-free regions. G Vodev, Comm. Math. Phys., to appear. G. Vodev, Transmission eigenvalue-free regions, Comm. Math. Phys., to appear. G. Vodev, Université de Nantes, Département de Mathématiques, UMR 6629 du CNRS, 2, rue de la Houssinière, BP 92208, 44332 Nantes Cedex 03, France, e-mail: [email protected]
| []
|
[
"Understanding Anatomy Classification Through Visualization",
"Understanding Anatomy Classification Through Visualization"
]
| [
"Devinder Kumar [email protected] ",
"Vlado Menkovski [email protected] ",
"\nPhilips Research Eindhoven\nNetherlands\n",
"\nTechnische Universiteit Eindhoven Eindhoven\nNetherlands\n"
]
| [
"Philips Research Eindhoven\nNetherlands",
"Technische Universiteit Eindhoven Eindhoven\nNetherlands"
]
| []
| One of the main challenges for broad adoption of deep convolutional neural network (DCNN) models is the lack of understanding of their decision process. In many applications a simpler less capable model that can be easily understood is favorable to a black-box model that has superior performance. In this paper, we present an approach for designing DCNN models based on visualization of the internal activations of the model. We visualize the model's response using fractional stride convolution technique and compare the results with known imaging landmarks from the medical literature. We show that sufficiently deep and capable models can be successfully trained to use the same medical landmarks a human expert would use. The presented approach allows for communicating the model decision process well, but also offers insight towards detecting biases. | null | [
"https://arxiv.org/pdf/1611.06284v2.pdf"
]
| 5,194,590 | 1611.06284 | 4f58774979dabe80467a9f713e58bdd7de0b9ecc |
Understanding Anatomy Classification Through Visualization
Devinder Kumar [email protected]
Vlado Menkovski [email protected]
Philips Research Eindhoven
Netherlands
Technische Universiteit Eindhoven Eindhoven
Netherlands
Understanding Anatomy Classification Through Visualization
One of the main challenges for broad adoption of deep convolutional neural network (DCNN) models is the lack of understanding of their decision process. In many applications a simpler less capable model that can be easily understood is favorable to a black-box model that has superior performance. In this paper, we present an approach for designing DCNN models based on visualization of the internal activations of the model. We visualize the model's response using fractional stride convolution technique and compare the results with known imaging landmarks from the medical literature. We show that sufficiently deep and capable models can be successfully trained to use the same medical landmarks a human expert would use. The presented approach allows for communicating the model decision process well, but also offers insight towards detecting biases.
Introduction
Understanding the decision process of a deep neural network model for classification can be challenging due to the very large number of parameters and the model's tendency to represent information internally in a distributed manner. Distributed representations have significant advantages for the capability of the model to generalize well [4]; however, the trade-off is the difficulty in communicating the model's reasoning. In other words, it is difficult to represent what information was used by the model and how it arrived at its particular output. In certain applications, such as healthcare, understanding the decision process of a model can be a vital requirement.
One direction towards understanding how Convolutional Neural Networks (CNNs) process information internally is through visualization. The work of Zeiler et al. [5], Mahendran et al. [1], and Zintgraf et al. [6] has shown that the inner workings of a CNN can be projected back to the image space in a way that is comprehensible to a human expert. We build on the work of [5] and present an approach to understand the decision making process of these networks through visualizing the information used as part of this process. Our approach is based on the fractionally strided convolution technique [5], which we apply to the anatomy classification problem using X-ray images [2]. However, rather than examining the model over the whole dataset and trying to understand the sensitivity of the model to the data, we examine the model's response to individual data points. We also found that existing methods that present different saliency maps of the sensitivity of the model's output still do not provide a representation clear enough to communicate with the experts. We based our approach on visualizing the maximally activated feature maps from the last convolutional layer in an overlayed image. This depiction provided the most informative and effective way to communicate the information from the image used in the decision process of the model. Furthermore, we compared this information to medically relevant landmarks in the images, such as anatomical features that an expert would use to identify an organ. In this comparison, we found that shallow models that do not have sufficient capacity fail to use relevant landmarks. Additionally, we found that even deep models that generally perform well on test data do not necessarily use accurate landmarks. Finally, we show that adjusting training and augmentation hyperparameters based on the insight from visualization leads to models that use medically relevant landmarks while attaining superior performance on test data and giving an indication of robustness in terms of generalization.
Approach
In order to understand the decision making process of deep CNNs and to construct an informed approach to designing models, we build three different deep CNN models with different architectures and hyper-parameters: a shallow CNN, a deeper CNN without data augmentation, and a deeper CNN with data augmentation inspired by the work of Razavian et al. [3]. The network architecture of each model is described in the Appendix (Section 5). After successfully training the above-mentioned networks, we examine which part of a particular input image from an anatomy class, particularly the spatially distributed information, is used in the decision process of the CNN. This is done by visualizing the top n most activated filters of the last convolutional layer in the models described above. The top n filters are used to visualize the parts of the input image that the network considers important. The visualization itself is done by projecting the top filter activations back to image space. The back-projection to input space is achieved by using the fractionally strided convolution technique [5] through another parallel network. The parallel network is constructed similarly to the one being analyzed but with transposed convolution filters and switches for un-pooling. This leads to an unsupervised construction of a hierarchical image representation in the visual domain for any filter in a particular layer of the network.
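The back-projection step can be sketched in a few lines. The snippet below is only an illustration of the mechanism (un-pooling with the stored switches, then a transposed convolution with the layer's own weights), not the authors' implementation; the tiny one-layer "network", the input size and the choice of n are placeholders.

```python
# Illustrative sketch of projecting the top-n activated feature maps of one
# layer back to image space (deconvnet-style); not the authors' code.
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(1, 32, kernel_size=3, padding=1)      # placeholder layer
pool = torch.nn.MaxPool2d(2, stride=2, return_indices=True)  # keep the switches
unpool = torch.nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 1, 224, 224)                               # placeholder input
a = torch.relu(conv(x))
p, switches = pool(a)

n = 5                                                         # top-n filters
strength = p.abs().flatten(2).sum(-1).squeeze(0)              # per-filter activation mass
top = strength.topk(n).indices

masked = torch.zeros_like(p)                                  # keep only the n strongest maps
masked[0, top] = p[0, top]
u = unpool(masked, switches)                                  # undo pooling with the switches
# a transposed convolution with the same weights projects back towards input space
recon = F.conv_transpose2d(torch.relu(u), conv.weight, padding=1)
print(recon.shape)                                            # (1, 1, 224, 224)
```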
Next, we examine the correlation of those regions obtained through visualization with identified regions and shapes of image landmarks that are mentioned in the medical radiology literature. With this qualitative assessment, we can establish that the same landmarks that are described in the medical imaging literature are also used by the deep CNN. For example, we observe that the particular outlines of bones are used to detect the organ in the image rather than some background information. We use this to guide the decisions for the model architecture and training. We can furthermore use this method to detect biases in the models. In certain examples of mis-classification (Fig. 4), we can observe that the information used for making decisions is part of an artifact rather than the object in the image. This understanding can inform us about the possible adjustments to the pre-processing or data augmentation procedures needed to remove the bias from the model.
Experiments and Results
To visualize and understand the decision process of a deep neural network, we used anatomy classification from X-ray images as an example use-case. To train our three different convolutional neural networks, radiographs from the ImageClef 2009 Medical Image Annotation task 3 were used. This data set consists of a wide range of X-ray images from clinical routine, along with a detailed anatomical classification. For uniform training without any bias, we removed the hierarchical class representation and removed the classes consisting of fewer than 50 examples. Using this, we ended up with 24 unique classes, e.g. foot, knee, hand, cranium, thoracic spine, etc., from the full body anatomy.
For training the three networks described in Section 2, we resized the images to 224 × 224. For evaluation, we divided the ImageClef dataset (14,676 images) into a randomly selected training and test set with 90% and 10% of the data respectively. For the third (deeper) network specifically, we used various data augmentation techniques ranging from cropping, rotation, translation, shearing and stretching to flipping. We train the three networks for all 24 classes simultaneously. The results obtained by training the three models are shown in Table 1.
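The exact augmentation ranges are not reported, so the following torchvision pipeline only illustrates the kind of cropping/rotation/translation/shearing/stretching/flipping mentioned above; all parameter values are placeholders.

```python
from torchvision import transforms

# Illustrative augmentation pipeline; ranges are placeholders, not the paper's values.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                      # flipping
    transforms.RandomRotation(degrees=10),                       # rotation
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05),   # translation
                            shear=5, scale=(0.9, 1.1)),          # shearing, stretching
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),         # cropping to 224x224
    transforms.ToTensor(),
])
```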
We visualized the internal activations of the models on test data. More particularly, we combined the visualization of the top n = 25 filter responses from the last convolutional layer and overlayed them on the original image. In this way we construct focused heat maps that can be easily examined by a human expert. The value n = 25 was chosen empirically as it produced heat maps closest to the anatomical landmarks with the least number of filters. The results thus obtained are shown in Fig. 1, 2 and 3 for the foot and hand classes from the ImageClef dataset.
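The overlay itself can be sketched as follows: sum the n most activated maps of the last conv layer, upsample to the input resolution, and blend over the radiograph. The helper below is a simplified stand-in; the tensor `activations` (that layer's output for one image) and the colormap choice are assumptions.

```python
# Sketch of the focused heat-map overlay; not the authors' code.
import torch.nn.functional as F
import matplotlib.pyplot as plt

def overlay_top_n(image, activations, n=25):
    # activations: (C, h, w) feature maps of the last conv layer; image: (H, W) grayscale
    strength = activations.abs().flatten(1).sum(1)           # per-filter activation mass
    top = strength.topk(n).indices
    heat = activations[top].sum(0, keepdim=True)             # (1, h, w)
    heat = F.interpolate(heat[None], size=image.shape, mode="bilinear",
                         align_corners=False)[0, 0]
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    plt.imshow(image, cmap="gray")
    plt.imshow(heat.detach().numpy(), cmap="jet", alpha=0.4)  # focused heat map
    plt.axis("off")
```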
In Fig. 1 we show a correspondence between the obtained heat maps and the anatomical landmarks from the medical literature 4 . More particularly, for the foot image we can observe that the edges of the metatarsals' shafts have been used together with the distal phalanges, navicular, cuboid, tibia, and fibula. Similarly for the hand, three of the distal phalanxes, many of the heads of joints, metacarpals' shafts as well as certain carpals. In contrast to this, in Fig. 3 and Fig. 2 we can see that the shallow network and the deep network trained without specific data augmentation fail to learn such specific landmarks. These models use broader regions that are clearly not as specific as the information used in the first model. From the above visual results 5 as well as the performance of the final model we come to the conclusion that a sufficiently deep neural network model can be successfully trained to use the same medical landmarks as a human expert while attaining superior performance.
Conclusion
We propose an approach that allows for evaluating the decision making process of CNNs. We show that the design of the model architectures for deep CNNs and the training procedure does not need to be a trial-and-error process solely focused on optimizing the test set accuracy. Through visualization we managed to incorporate domain knowledge and achieve a much more informed decision process, which finally resulted in a model with superior performance. This approach is applicable to many different image analysis applications of deep learning that are unable to easily leverage the potentially large amount of available domain knowledge. Furthermore, visually understanding the information involved in the model decision allows for more confidence in its performance on unseen data.
Appendix
map from top 25 filters from last conv layer of network overlayed on original image. (d) Anatomy description of hand found in literature, unique bone structures pertaining to the class are indicated. (e) Hand X-ray image from ImageClef dataset. (f) Heat map from top 25 filters from last conv layer of network overlayed on original image.
Figure 1 :
Figure showing the correspondence between the anatomical description found in the literature that is used by human experts ((a) & (d)) and the heat maps overlayed on the original images ((b) & (e)) from the last conv layer of the deeper network with data augmentation ((c) & (f)) for the foot and hand class. It can be observed that the deeper neural network uses the same landmarks as a human expert for anatomy classification. Best viewed in color.
Figure 2 :
Figure showing heat maps (a) & (b) from the top 25 filters of the last conv layer of the deeper network with no data augmentation, overlayed on the original images for the hand and foot classes: (a) Heat map from top 25 filters from last conv layer of network for the hand class. (b) Heat map from top 25 filters from last conv layer of network for the foot class. It can be observed that this network fails to use the same landmarks as a human expert for anatomy classification, as shown in Fig. 1 (a) & (c). Best viewed in color.
Figure 3 :
Figure showing the focus area of the top 5 filters of the last conv layer of the shallow network. For clarity, instead of the top 25, only the top 5 filters are shown separately. It is evident that the network doesn't learn any medically relevant landmark. Best viewed in color.
Figure 4 :
Figure showing the results from the last conv layer of the deeper network with augmentation for an example from the hand class mis-classified as cranium. From the figure, it is evident that the top 9 most activated filters are focusing on wrong information present in the signal. Best viewed in color.
Table 1 :
Results: Accuracy in percent for the three different networks trained on the ImageClef 2009 annotation task.
Shallow Network: 71.1
Deeper Network: 90.36
Deeper Network + data aug: 95.62
Table 2 :
Architecture of Shallow Network:
Conv Layer (3x3, 32x)
Conv Layer (3x3, 64x)
MaxPool Layer + Dropout (2x2, 2x2 stride)
FC Layer + Dropout (128, 0.25p)
Softmax (24)
Table 3 :
Architecture of Deeper Network without data augmentation:
Conv Layer (3x3, 32x)
Conv Layer (3x3, 16x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer (3x3, 64x)
Conv Layer (3x3, 32x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer (3x3, 128x)
Conv Layer (3x3, 128x)
Conv Layer (3x3, 64x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer (3x3, 256x)
Conv Layer (3x3, 256x)
Conv Layer (3x3, 128x)
MaxPool Layer (2x2, 2x2 stride)
FC Layer (128x)
FC Layer (128x)
Softmax (24)
Table 4 :
Architecture of Deeper Network with data augmentation:
Conv Layer + BN (3x3, 32x)
Conv Layer + BN (3x3, 16x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer + BN (3x3, 64x)
Conv Layer + BN (3x3, 32x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer + BN (3x3, 128x)
Conv Layer + BN (3x3, 128x)
Conv Layer + BN (3x3, 64x)
MaxPool Layer (2x2, 2x2 stride)
Conv Layer + BN (3x3, 256x)
Conv Layer + BN (3x3, 256x)
Conv Layer + BN (3x3, 128x)
MaxPool Layer + Dropout (2x2, 2x2 stride, 0.5p)
FC Layer + Dropout (128x, 0.5p)
FC Layer + Dropout (128x, 0.5p)
Softmax (24)
http://www.imageclef.org/2009/medanno
http://www.meddean.luc.edu/lumen/meded/radio/curriculum/bones/Strcture_Bone_teach_f.htm
5 We obtained similar results for the other classes as well, but due to space constraints only results for two classes are shown.
Understanding deep image representations by inverting them. Aravindh Mahendran, Andrea Vedaldi, 2015 IEEE conference on computer vision and pattern recognition (CVPR). IEEEAravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In 2015 IEEE conference on computer vision and pattern recognition (CVPR), pages 5188-5196. IEEE, 2015.
Axel Saalbach, and Hannes Nickisch. Can pretrained neural networks detect anatomy?. Vlado Menkovski, Zharko Aleksovski, arXiv:1512.05986arXiv preprintVlado Menkovski, Zharko Aleksovski, Axel Saalbach, and Hannes Nickisch. Can pretrained neural networks detect anatomy? arXiv preprint arXiv:1512.05986, 2015.
A baseline for visual instance retrieval with deep convolutional networks. Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson, arXiv:1412.6574arXiv preprintAli Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. A baseline for visual instance retrieval with deep convolutional networks. arXiv preprint arXiv:1412.6574, 2014.
Dropout: a simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of Machine Learning Research. 151Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Visualizing and understanding convolutional networks. D Matthew, Rob Zeiler, Fergus, European Conference on Computer Vision. SpringerMatthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818-833. Springer, 2014.
Luisa M Zintgraf, S Taco, Max Cohen, Welling, arXiv:1603.02518A new method to visualize deep neural networks. arXiv preprintLuisa M Zintgraf, Taco S Cohen, and Max Welling. A new method to visualize deep neural networks. arXiv preprint arXiv:1603.02518, 2016.
| []
|
[
"Poncelet Spatio-Temporal Surfaces and Tangles",
"Poncelet Spatio-Temporal Surfaces and Tangles"
]
| [
"Claudio Esperança [email protected] \nPESC, Fed. Univ. Rio\nRio de JaneiroBrazil\n",
"Ronaldo Garcia [email protected] \nMath and Stats dept., Fed. Univ. Goiás\nGoiânia\n",
"Dan Reznik [email protected] \nData Science Consulting\nRio de JaneiroBrazil\n"
]
| [
"PESC, Fed. Univ. Rio\nRio de JaneiroBrazil",
"Math and Stats dept., Fed. Univ. Goiás\nGoiânia",
"Data Science Consulting\nRio de JaneiroBrazil"
]
| []
| We explore geometric properties of 3d surfaces swept by a family of Poncelet triangles, as well as tangles produced by space curves they define. | null | [
"https://arxiv.org/pdf/2201.09300v1.pdf"
]
| 246,240,993 | 2201.09300 | a0fe46dc870e25941565b43ab37b0fc3e94c5807 |
Poncelet Spatio-Temporal Surfaces and Tangles
Claudio Esperança [email protected]
PESC, Fed. Univ. Rio
Rio de JaneiroBrazil
Ronaldo Garcia [email protected]
Math and Stats dept., Fed. Univ. Goiás
Goiânia
Dan Reznik [email protected]
Data Science Consulting
Rio de JaneiroBrazil
Poncelet Spatio-Temporal Surfaces and Tangles
We explore geometric properties of 3d surfaces swept by a family of Poncelet triangles, as well as tangles produced by space curves they define.
Introduction
Depicted in Figure 1(left) is Poncelet's closure theorem in the special case of triangles. The theorem states that if two conics E and E′ are chosen so that a polygon can be drawn with all vertices on E and all sides tangent to E′, then a porism of such polygons exists: any point on E can be used as an initial vertex for a polygon with identical incidence/tangency properties with respect to E, E′. For more details, see [5,6,7].
Referring to Figure 1(right), a property-rich choice for E, E′ is when they are confocal ellipses, i.e., with shared foci. If such a pair admits a Poncelet porism, two immediate consequences ensue: (i) consecutive sides are bisected by the normal to E and Poncelet polygons can therefore be regarded as the periodic path of a particle bouncing elastically against E (this is known as the "elliptic billiard", see [16]), and (ii) all polygons in the porism have the same perimeter [16]. Dozens of other properties and invariants can be derived from these; an interesting one is the constant sum of the internal angle cosines, proved in [1,4]. For more properties of the confocal family, see [10,12].
Summary: in Section 2 we define a ruled surface based on Poncelet triangles, and discuss properties of its curvature. In Section 3 we study the link topology of space curves swept by points of contact, and triangle centers. In Section 4, we list several unexplored experimental alternatives. To facilitate reproducibility, in Appendices A and B we include explicit expressions for both the Poncelet triangle parametrization and Gaussian and mean curvature. The pages listed in Table 1 allow for live interaction with some objects mentioned herein.
A Poncelet Spatio-Temporal Surface (PSTS)
To achieve a homogeneous traversal of the Poncelet family, we parametrize it with Jacobi elliptic functions, as explained in Appendix A. Let t be its parameter, t ∈ [0, T], where T is the period. Let P i (t) be a vertex of the family and Q i (t, s) be a point on edge P i (t)P i+1 (t), namely, Q i (t, s) = (1 − s)P i (t) + s P i+1 (t), s ∈ [0, 1]. Referring to Figure 2(left): Definition 1. The Poncelet Spatio-Temporal Surface S (PSTS) is the union of the 3 parametric ruled surfaces S i = [t, Q i (t, s)], i = 1, 2, 3. Note that the parameter t is periodic. (Recall that the two conics of a Poncelet pair can be ellipses, hyperbolas, parabolas, and other degenerate specimens, see [8, chapter 5]; in general, finding such a pair requires that a certain "Cayley" determinant vanish, see [7].) Recall that the Gaussian K (resp. mean H) curvature of a surface is the product (resp. average) of its principal curvatures, see [14]. Referring to Figure 2(right), and using the expressions in Appendix B, we have derived rather long analytic expressions for both curvatures. Laborious analysis reveals that: Proposition 1. The three facets S i of S are hyperbolic, i.e., each has negative Gaussian curvature everywhere.
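Definition 1 can be turned into a mesh directly: sample t and s, interpolate between consecutive vertices, and prepend t as the third coordinate. In the sketch below the vertex paths P1, P2 are placeholder ellipse parametrizations with a phase lag, not the actual Jacobi parametrization of Appendix A.

```python
# Minimal sketch of one facet S_i = [t, Q_i(t, s)] of the PSTS as a ruled surface.
import numpy as np

def facet(P_i, P_next, n_t=200, n_s=20, T=2 * np.pi):
    t = np.linspace(0.0, T, n_t)
    s = np.linspace(0.0, 1.0, n_s)
    tt, ss = np.meshgrid(t, s, indexing="ij")
    # Q_i(t, s) = (1 - s) P_i(t) + s P_{i+1}(t)
    Q = (1 - ss)[..., None] * P_i(tt) + ss[..., None] * P_next(tt)
    # embed as a spatio-temporal surface: (t, x(t, s), y(t, s))
    return np.stack([tt, Q[..., 0], Q[..., 1]], axis=-1)

# placeholder vertex paths (plain ellipse with a 120-degree phase lag)
P1 = lambda t: np.stack([2.0 * np.cos(t), np.sin(t)], axis=-1)
P2 = lambda t: np.stack([2.0 * np.cos(t + 2 * np.pi / 3),
                         np.sin(t + 2 * np.pi / 3)], axis=-1)
S1 = facet(P1, P2)
print(S1.shape)   # (200, 20, 3)
```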
Consider one facet S 1 of S, and mark on it the four points (0, 1/2), (T/4, 1/2), (T/2, 1/2), (3T/4, 1/2). These four points correspond to the isosceles configurations shown in Figure 4(right). Referring to Figure 5: Proposition 2. The first and third of these points (resp. the second and fourth) are non-degenerate (Morse type) local minima (resp. saddle points) of H. Conversely, the second and fourth (resp. the first and third) are non-degenerate (Morse type) local minima (resp. saddle points) of K.
Analogous statements can be made for facets S 2 and S 3 . It is worth noting that in general the critical points of the Gaussian and mean curvatures do not coincide. We currently think this is a feature of any Poncelet triangle family defined between a pair of concentric, axis-aligned ellipses.
Space Curve Tangles
Consider the surface obtained by identifying the t = 0 and t = T cross sections of S, shown in Figure 3. Each contact point of Figure 1(right) will sweep a wiggly ring; their union will form a tangle known as a 3-link of "Hopf" rings [2, 13], distinct from the Borromean tangle, see Figure 6. Indeed, the same tangle is swept by the 3 vertices of the family, and it is independent of the family being a Poncelet one. The surface whose boundary is a 3-link tangle is a type of Seifert surface [17].
Referring to Figure 7, more tangle topologies are obtained if one also considers the relative motion of notable points of the triangle (e.g., the incenter, the barycenter, etc.) over the Poncelet family. See [11] for a 2d analysis of such loci.
Next Steps
To continue this exploration one could consider:
1. Different Poncelet families, see examples in Figure 8;
2. Picking a hyperbola or parabola for either E, E′, e.g., as in this video;
3. Non-closing Poncelet polylines, Figure 9(left);
4. Poncelet N-gons, N > 3, including self-intersected ones as in Figure 9(right), see [9];
5. Families of derived triangles, e.g., the excentral, orthic, medial triangles [18].

A Jacobi Parametrization
We parametrize Poncelet triangles using Jacobi elliptic functions since an application of the Poncelet map corresponds to a unit translation in the argument of said functions. As seen in Figure 4(left), this entails that the angular positions of the vertices are identical, time-delayed functions.
Following the notation in [3], let k ∈ [0, 1] denote the elliptic modulus:
Definition 2. The incomplete elliptic integral of the first kind F(φ, k) is given by:
F(φ, k) = ∫ 0 φ dθ / √(1 − k 2 sin 2 θ)    (1)
Mathematica (resp. Maple) expects m = k 2 (resp. k) for the second parameter to its elliptic functions.
The complete elliptic integral of the first kind K(k) is simply F(π/2, k). Definition 3. The elliptic sine sn, cosine cn, and delta-amplitude dn are given by: sn(u, k) = sin φ, cn(u, k) = cos φ, dn(u, k) = √(1 − k 2 sin 2 φ), where φ = am(u, k) is the amplitude, i.e., the upper limit in the integral in Equation (1) such that F(φ, k) = u.
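Definitions 2 and 3 can be checked numerically with SciPy, which (like Mathematica) takes the parameter m = k² rather than the modulus k. The shift by 4K/3 at the end only illustrates the "unit translation" property mentioned above for a 3-periodic; the modulus and argument values are arbitrary placeholders.

```python
# Numerical illustration of Definitions 2-3 with SciPy.
import numpy as np
from scipy.special import ellipj, ellipkinc, ellipk

k = 0.8
m = k**2
K = ellipk(m)                      # complete integral K(k) = F(pi/2, k)
u = 0.37 * K
sn, cn, dn, phi = ellipj(u, m)     # phi is the amplitude am(u, k)

assert np.isclose(sn, np.sin(phi)) and np.isclose(cn, np.cos(phi))
assert np.isclose(dn, np.sqrt(1 - m * np.sin(phi)**2))
assert np.isclose(ellipkinc(phi, m), u)     # F(am(u, k), k) = u

# the Poncelet map acts as a fixed shift of the argument; e.g. three
# 4K/3-spaced arguments for a 3-periodic (period 4K as in Theorem 1)
us = u + np.arange(3) * 4 * K / 3
print(ellipj(us, m)[0])            # sn at the three shifted arguments
```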
As derived in [15]: Theorem 1. A billiard N-periodic trajectory ( = 1, . . . , ) of period with turning number , where gcd( , ) = 1 can be parametrized on with period 4 where:
= [− sn ( + Δ , ) , cn ( + Δ , )] with: = 2 = 2 − 2 2 , Δ = 4 , = √︃ 2 + 2 − 2 , = cn( Δ 2 , )
B Review: Gaussian and Mean Curvatures
Let x : M → R 3 be a smooth immersion or embedding of a smooth oriented surface. The differential of x, x ∗ , is defined by x ∗ (v) = dx(v). The induced metric, known as the first fundamental form, is given by:
I(u, v) = ⟨x ∗ (u), x ∗ (v)⟩.
Here ⟨·, ·⟩ denotes the canonical inner product defining the Euclidean metric of R 3 . Consider a unit normal field N to the map x. The second fundamental form (shape operator) S is defined by:
dN(v) = −x ∗ (S v).
The map S is symmetric relative to the induced metric, i.e., ⟨S u, v⟩ = ⟨u, S v⟩. The eigenvalues k 1 ≤ k 2 of S are called the principal curvatures relative to N and the eigenspaces are called principal directions. The mean curvature H and Gaussian curvature K are given by [14]:
H = (1/2) tr(S) = (k 1 + k 2 )/2, K = det(S) = k 1 k 2 .
In a local chart (u, v) it follows that:
H = (Eg − 2F f + Ge)/(2(EG − F 2 )), K = (eg − f 2 )/(EG − F 2 )
where I = E du 2 + 2F du dv + G dv 2 and II = e du 2 + 2f du dv + g dv 2 are the first and second fundamental forms of the surface. Consider the Poncelet spatio-temporal surface S 1 . It follows that H = ( Δ − 3 2 )/2, and K = −( /Δ) 2 . Explicitly:
Figure 1 :
Left: Poncelet's closure theorem for triangles. Right: Ellipses E, E′ are confocal. Consecutive sides of the Poncelet family (blue) are bisected by the normals n̂, and the perimeter is constant. Also shown are the three points of contact with the inner ellipse, or caustic.
Figure 2 :
Left: the PSTS swept by Poncelet triangles in the confocal family. It is a union of three ruled surfaces (red, green, and blue), each of which has negative curvature. Also shown is the 3d curve swept by the points of contact (white). Right: the PSTS colored by Gaussian curvature. The centers of the blue areas represent minima.
Figure 3 :
Left: identifying the t = 0 with t = T cross-sections of the PSTS, we obtain an orientable Seifert surface [17]. Also shown is the path of the contact points (white) with the caustic. Right: The same surface now colored by the torsion of the straight line elements sweeping the surface. Logo applications accepted.
Figure 4 :
Left top: under the "standard parametrization" P 1 (t) = [a cos t, b sin t] the angular positions of the vertices of Poncelet triangles in the confocal pair are three different curves. Left bottom: under Jacobi's parametrization, the curves become 120-degree delayed copies of one another. Right: The confocal family has four isosceles triangles, with a vertex on either the top (T), bottom (B), left (L) or right (R) vertices of the outer ellipse E. The Gaussian (resp. mean) curvature of the spatio-temporal surface has minima when its cross section is one of said isosceles triangles. The critical point occurs at the midpoint of the base (thick segment) when the apex is on L or R (resp. T or B).
Figure 5 :
Left: Gaussian curvature of Jacobi-parametrized Poncelet triangles in the confocal family (horizontal is the position along a given side, and vertical is one revolution of the family). The marked points denote the curvature minima and saddle points. Right: The mean curvature, with minima and saddle points marked as before.
Figure 6 :
Left: The contact points of the identified PSTS sweep a triad of "Hopf" rings forming a 3-link tangle [2]. Right: Two types of 3-ring tangles: Borromean (left), and the Hopf 3-link (right) [2], homeomorphic to the curves swept by the contact points. Note that by removing one of the rings in the former (resp. latter) case, the other two are free (resp. remain tangled).
Figure 7 :
Left: the confocal family (rotated 90°), and the locus of the incenter X 1 (green) and barycenter X 2 (red). Also shown are the three contact points with the caustic. Right: In the endpoint-identified PSTS, the space curve swept by X 1 (green) forms an individual 2-link tangle with each individual contact point ring (gray). The same is true for the X 2 space curve (red). X 1 and X 2 form a link thrice twisted about each other.
Figure 8 :
8From left to right, three additional examples of Poncelet triangle families in (i) a homothetic pair of ellipses, (ii) inscribed in a circle and circumscribing a concentric ellipse, and (iii) interscribed between two non-concentric circles (aka., the "bicentric" pair). Video
Figure 9 :
Left: a non-closing Poncelet 3-polyline. Right: a self-intersected N = 5 Poncelet family (pentagrams).
= 2 − 8 4
28( 4 + 8 ) ( 4 8 + 4 8 − 1) [((( 2 − 2 ) + 1] 2 − 2 8 − 2 8 4 − 2 4 + 2] 2 + 2 ( 8 − 4 ) 2 = − ( 4 + 8 ) ( 4 8 + 4 8 − 1)where = dn( /3 + , ), = sn( /3 + , ), = cn( /3 + , ), = 4, 8, and is one quarter of the period in Theorem 1, i.e., the complete elliptic integral of the first kind, see Equation(1).
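As a generic numerical companion to this appendix, the standard curvature formulas above can be evaluated symbolically for any parametric surface; the sketch below uses a torus as a placeholder (the PSTS facets themselves would require the Jacobi parametrization of Appendix A).

```python
# Symbolic check of K = (eg - f^2)/(EG - F^2) and H = (Eg - 2Ff + Ge)/(2(EG - F^2))
# on a placeholder torus of radii R0, r0.
import sympy as sp

u, v = sp.symbols("u v", real=True)
R0, r0 = 2, 1
x = sp.Matrix([(R0 + r0*sp.cos(v))*sp.cos(u),
               (R0 + r0*sp.cos(v))*sp.sin(u),
               r0*sp.sin(v)])
xu, xv = x.diff(u), x.diff(v)
N = xu.cross(xv); N = N / N.norm()
xuu, xuv, xvv = xu.diff(u), xu.diff(v), xv.diff(v)
E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)       # first fundamental form
e, f, g = xuu.dot(N), xuv.dot(N), xvv.dot(N)       # second fundamental form
K = sp.simplify((e*g - f**2) / (E*G - F**2))
H = sp.simplify((E*g - 2*F*f + G*e) / (2*(E*G - F**2)))
print(K)   # should reduce to cos(v)/(r0*(R0 + r0*cos(v)))
print(H)   # sign of H depends on the chosen orientation of N
```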
Table 1 :
Pages with interactive simulations of the various phenomena mentioned in the article (all under http://observablehq.com/):
PSTS Visualization (live): @esperanc/3-periodic-elliptical-billiards-3d-sweep
PSTS Visualization (static): @dan-reznik/elliptic-billiard-triangle
Poncelet's Closure Theorem: @dan-reznik/poncelet-iteration
Jacobi's Elliptic Functions: @dan-reznik/jacobi-elliptic-functions
Billiards in ellipses revisited. A , A , S , R , T , S , Eur. J. Math. 9A , A., S , R., T , S. Billiards in ellipses revisited. Eur. J. Math. (9
Borromean entanglement of the GHZ state. In Potentiality, entanglement and passion-at-a-distance. A , P , J. S. R. Cohen, M. HorneSpringerBerlinA , P. Borromean entanglement of the GHZ state. In Potentiality, entanglement and passion-at-a-distance, . J. S. R. Cohen, M. Horne, Ed. Springer, Berlin, 1997, pp. 53--59.
. A , J V , E , W F Elliptic Functions, Cambridge University PressLondonA , J. V., E , W. F. Elliptic Functions. Cambridge University Press, London, 2006.
Reznik's identities and more. B , M , T , S Dan, Eur. J. Math. B , M., T , S. Dan Reznik's identities and more. Eur. J. Math. (9 2020).
Poncelet's closure theorem. B , H J M , K , F , R , D W , Exposition. Math. 5B , H. J. M., K , C., O , F., R , D. W. Poncelet's closure theorem. Exposition. Math. 5, 4 (1987), 289-364.
Poncelet's porism: a long story of renewed discoveries i. C , A , Arch. Hist. Exact Sci. 70C , A. Poncelet's porism: a long story of renewed discoveries i. Arch. Hist. Exact Sci. 70, 2 (2016), 1-122.
D , V , R , M , Poncelet Porisms and Beyond: Integrable Billiards, Hyperelliptic Jacobians and Pencils of Quadrics. BaselSpringerD , V., R , M. Poncelet Porisms and Beyond: Integrable Billiards, Hyperelliptic Jacobians and Pencils of Quadrics. Frontiers in Mathematics. Springer, Basel, 2011.
Geometric Methods and Applications for Computer Science and Engineering. G , J , SpringerBasel2nd editionG , J. Geometric Methods and Applications for Computer Science and Engineering (2nd edition). Springer, Basel, 2011.
Invariants of self-intersected and inversive n-periodics in the elliptic billiard. G , R , R , D , 10G , R., R , D. Invariants of self-intersected and inversive n-periodics in the elliptic billiard, 10 2020.
New properties of triangular orbits in elliptic billiards. G , R , R , D , K , J , G , R., R , D., K , J. New properties of triangular orbits in elliptic billiards.
. Amer. Math. Monthly. 128Amer. Math. Monthly 128, 10 (2021), 898-910.
Can the elliptic billiard still surprise us?. R , D , G , R , K , J , Math Intelligencer. 42R , D., G , R., K , J. Can the elliptic billiard still surprise us? Math Intelligencer 42 (2020), 6-17.
Fifty new invariants of n-periodics in the elliptic billiard. R , D , G , R , K , J , R , D., G , R., K , J. Fifty new invariants of n-periodics in the elliptic billiard.
. Arnold Math, . J , Arnold Math. J. (2 2021).
Knots and links. R , D , Mathematics Lecture Series. 7Publish or Perish, IncR , D. Knots and links. Mathematics Lecture Series, No. 7. Publish or Perish, Inc., Berkeley, Calif., 1976.
A comprehensive introduction to differential geometry. S , M , Publish or Perish, IncIIIWilmington, DelS , M. A comprehensive introduction to differential geometry. Vol. III, second ed. Publish or Perish, Inc., Wilmington, Del., 1979.
The geometry of billiards in ellipses and their Poncelet grids. S , H , J. Geom. 11229Paper No. 40S , H. The geometry of billiards in ellipses and their Poncelet grids. J. Geom. 112, 3 (2021), Paper No. 40, 29.
of Student Mathematical Library. T , S Geometry, Billiards, Mathematics Advanced Study Semesters. 30American Mathematical SocietyT , S. Geometry and Billiards, vol. 30 of Student Mathematical Library. American Mathematical Society, Providence, RI, 2005. Mathematics Advanced Study Semesters, University Park, PA.
Visualization of seifert surfaces. W , J , C , A , IEEE Trans. on Vis. and Comp. Graphics. 124W , J., C , A. Visualization of seifert surfaces. IEEE Trans. on Vis. and Comp. Graphics 12, 4 (2006), 485-496.
. W , E Mathworld, MathWorld-A Wolfram Web ResourceW , E. Mathworld. MathWorld-A Wolfram Web Resource (2019).
| []
|
[
"\"Twist-Controlled\" force amplification & Spinning tension transition in yarn",
"\"Twist-Controlled\" force amplification & Spinning tension transition in yarn"
]
| [
"Antoine Seguin \nLaboratoire FAST\nUniversité Paris-Saclay\nCNRS\nF-91405OrsayFrance\n",
"Jérôme Crassous *[email protected] \nIPR (Institut de Physique de Rennes) -UMR 6251\nUniv Rennes\nCNRS\nF-35000RennesFrance\n"
]
| [
"Laboratoire FAST\nUniversité Paris-Saclay\nCNRS\nF-91405OrsayFrance",
"IPR (Institut de Physique de Rennes) -UMR 6251\nUniv Rennes\nCNRS\nF-35000RennesFrance"
]
| []
| Combining experiments and numerical simulations with a mechanical/statistical model of twisted yarns, we discuss the spinning transition between a cohesion-less assembly of fibers into a yarn. We show that this transition is continuous but very sharp due to a giant amplification of frictional forces which scales as exp θ 2 , where θ is the twist angle. We demonstrate that this transition is controlled solely by a non-dimensional number H involving twist, friction coefficient, and geometric lengths. A critical value of this number Hc ≃ 30 can be linked to a locking of the fibers together as the tensile strength is reached. This critical value imposes that yarns must be very slender structures with a given pitch. It also induces the existence of an optimal yarn radius. Predictions of our theory are successfully compared to yarns made from natural cotton fibers. | 10.1103/physrevlett.128.078002 | [
"https://arxiv.org/pdf/2110.04206v2.pdf"
]
| 238,531,372 | 2110.04206 | ee725ed9ab9c180ff4cd66fb440c1d07453c6f4b |
"Twist-Controlled" force amplification & Spinning tension transition in yarn
15 Dec 2021
Antoine Seguin
Laboratoire FAST
Université Paris-Saclay
CNRS
F-91405OrsayFrance
Jérôme Crassous *[email protected]
IPR (Institut de Physique de Rennes) -UMR 6251
Univ Rennes
CNRS
F-35000RennesFrance
"Twist-Controlled" force amplification & Spinning tension transition in yarn
15 Dec 2021(Dated: December 16, 2021)arXiv:2110.04206v2 [cond-mat.soft]
Combining experiments and numerical simulations with a mechanical/statistical model of twisted yarns, we discuss the spinning transition between a cohesion-less assembly of fibers into a yarn. We show that this transition is continuous but very sharp due to a giant amplification of frictional forces which scales as exp θ 2 , where θ is the twist angle. We demonstrate that this transition is controlled solely by a non-dimensional number H involving twist, friction coefficient, and geometric lengths. A critical value of this number Hc ≃ 30 can be linked to a locking of the fibers together as the tensile strength is reached. This critical value imposes that yarns must be very slender structures with a given pitch. It also induces the existence of an optimal yarn radius. Predictions of our theory are successfully compared to yarns made from natural cotton fibers.
Yarns made from natural fibers are one of the first materials ever processed by humans, including Neanderthals [1]. They are made by forming bundles of initially aligned fibers which are then stuck together by twisting. The fact that many individual fibers a few centimeters long may form yarns of tens of meters drew early attention from scientists. Galileo [2] argued that the twist "binds" the filaments together, but did not discuss the origin of this cohesion. We now know that the binding forces are created by the tension throughout the filaments, which creates normal forces due to the curvatures of the fibers, and that tangential frictional forces prevent the sliding of fibers [3][4][5]. If the twist is large enough, the relative sliding of fibers is totally blocked, and the rupture of the yarn is then a problem of statistics of rupture of individual fibers [6,7]. The description of the transition between fibers which are "free to slide" without spinning, to "blocked by spinning" is still an open problem. Experimentally, only very few studies addressed the dependence of yarn strength on twist level [8]. Theoretically, despite numerous attempts, the mechanism linking twist and strengthening has not been clearly understood [9][10][11][12][13]. Recently, an analogy with the percolation transition has been suggested [14]. An assembly of fibers is an example of an assembly of objects that interact through numerous frictional contacts. For such systems, the geometrical arrangement of the contact points may generate huge stresses throughout the system. Some examples of such systems are granular materials in proximity to a solid wall (Janssen effect [15,16]), assemblies of parallel sheets in contact (interleaved phone book experiment [17,18]), or contact points distributed around a cylinder (capstan). In all those examples, the proportionality between the tangential and the normal stress at contact means that the mechanical stress in the system decreases exponentially with the distance to the applied load, and can then have drastic effects on the mechanical equilibrium of such systems. We show in this letter that an assembly of fibers belongs to the same class of systems. For this we consider model yarns made of entangled twisted fibers. The tension necessary to unravel the fibers is shown to vary continuously, but very rapidly with the twist. This sharp evolution of the disentanglement force creates a phase-transition-like transition between the free-fiber and stuck-fiber phases. A simple mechanical model of frictional helicoidal fibers allows us to define a non-dimensional number whose value characterizes this transition. These results can be successfully applied to real yarns. Experimental model yarn system. Our starting point is the demonstration experiment of friction forces in yarns proposed by Bouasse [4]. We consider two brushes of N/2 identical fibers (see fig.1(a)). The fibers are passed through rings which are connected to puller jaws (N/2 fibers in each jaw). The model fibers are flexible strings of cotton (diameter d = 1 mm, linear density λ = 0.48 g.m −1 , friction coefficient µ m = 0.35, bending modulus B ∼ 10 −6 N.m 2 ), or flax (d = 1 mm, λ = 1.03 g.m −1 , µ m = 0.53, B ∼ 4.10 −6 N.m 2 ). The twist of the elementary yarns composing each string is always very large compared to the twist that we apply. We first prepare the entanglement by alternately aligning the brushes roughly parallel. The brushes are then zipped together with two plastic cable clamps, and twisted by an angle θ ( fig.1(b,c)). The puller jaws are attached to a traction measurement apparatus (Instron 5965, 5 kN force sensor) and elongated at a fixed velocity of 50 mm.min −1 . Fig.1(d) shows the force variations for two different twist angles.
If the twist angle is low enough, the force first increases, reaches a peak value (noted T M ) and then decreases slowly. Such variations are associated to a smooth relative sliding of the two brushes. For large enough twist, a force drop is measured after the maximum force (noted T r ). This is associated to the rupture of one or many strings that we may observe by postmortem inspection. The figure 1(e) shows the evolution of T M as a function of the twist angle. This value is likely constant up to θ/2π ≃ 5 revolutions for this yarn, and increases rapidly up to 9 revolutions where T M reaches T r at point C.
Scaling laws for maximum traction. We first limit our analysis to the maximum force T M and we do not discuss rupture. Since we expect that the maximum force is dependent on friction, T M should depend on µ m and on the geometric characteristics of the yarn: θ, L, R and N . We define the twist rate γ = Rθ/L ≪ 1.
We first discuss the γ dependence of T M . Noting T 0 the traction force at vanishing twist, we must have T M (γ) = T 0 F (γ), or ln(T M ) = ln(T 0 ) + f (γ) with f = ln(F ) an even function vanishing at γ = 0. The leading term of the expansion at small twist is f ∼ γ 2 . This dependence is experimentally verified, as shown in fig.2(a). It follows that:
ln(T M /T 0 ) = γ 2 g(L/R, N, µ m )    (1)
where g is a non-dimensional function of non-dimensional parameters. The L/R dependence of g is obtained by considering the evolution of the traction force at fixed θ, R and N and for various lengths L. We found (see fig.2
(b)) that g(L/R, N, µ m ) ∼ L/R, so that ln(T M /T 0 ) = (γ 2 L/R) h(N, µ m ).
Numerical yarn. We use discrete element method simulations [19] to obtain the function h. Fibers are modeled as sets of point masses connected by elongational springs/dashpots without torsional or bending restoring forces. Successive masses are connected with cylinders of diameter d. The contact points between cylinders (belonging to the same or different fibers) are calculated, and the contact forces are computed considering normal stiffness and damping, and tangential stiffness with Coulomb friction coefficient µ m . Equations of motion are integrated using a Verlet algorithm. The steps for making numerical yarns are depicted in fig.2. We first stretch the N fibers under a force t 0 ( fig.2(d)) such that the strain of each fiber is 10 −4 . A torque is then applied to the yarn by submitting both ends of the fibers to orthoradial forces s ( fig.2(e)). During this preloading phase, µ m is kept to a low value 0.05 which ensures a uniform twist along the yarn ( fig.2(g)). Finally, while keeping the forces t 0 and s applied, the tension t of half the fibers on the bottom and of the other half at the top ( fig.2(f)) is slowly increased until a value t = t M where the brush separates. Full symbols of Fig.2(c) show the evolution of t M /t 0 with the twist angle for different values of µ m and N . First, we obtain that ln(t M /t 0 ) ∼ θ 2 as for the experimental data. We have also checked (data not shown here) that g ∼ L/R. The friction coefficient µ m is varied, and the N-dependency is obtained from simulations of N fibers of radius a N such that R = a N √(N/φ) (with φ = 0.80 the packing fraction), ensuring a fixed string radius R. We did not identify significant variations with N between N = 20 and N = 100 ( fig.2(c)).
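The contact law used in this kind of discrete-element fiber model can be sketched as follows: a linear spring-dashpot normal force and an elastic tangential force capped by Coulomb friction. The stiffnesses, damping and time step are placeholders, not the values used in the simulations above.

```python
# Minimal sketch of a spring-dashpot + Coulomb contact law for DEM fibers.
import numpy as np

def contact_force(overlap, v_n, xi_t, v_t,
                  kn=1e4, kt=1e4, eta=1.0, mu_m=0.5, dt=1e-5):
    """overlap: normal interpenetration (>0 if touching); v_n, v_t: relative
    normal/tangential velocities; xi_t: accumulated tangential stretch."""
    if overlap <= 0.0:
        return 0.0, 0.0, 0.0
    f_n = kn * overlap - eta * v_n        # normal spring + dashpot (repulsive)
    xi_t = xi_t + v_t * dt                # elastic tangential displacement
    f_t = -kt * xi_t
    f_max = mu_m * abs(f_n)               # Coulomb cap
    if abs(f_t) > f_max:                  # sliding: saturate and rescale xi_t
        f_t = np.sign(f_t) * f_max
        xi_t = -f_t / kt
    return f_n, f_t, xi_t
```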
Finally, fig.2(c) shows that all the experimental and numerical data may be collapsed using the single law:
T M /T 0 = exp(0.75 µ θ 2 R/L)    (2)
with µ = 0.63 µ m for laboratory and µ = 1.13 µ m for numerical experiments. The experimental dependence on µ m may be viewed on fig.2(c) where data for flax and cotton collapse when plotted as function of µθ 2 R/L. Finally, the amplification of the tension in the yarn is thus exponential, and only related to a dimensionless number H = µθ 2 R/L that we name "Hercules twist Number". Mechanical model. We develop a mechanical model for deriving (2). We consider a yarn made of N helicoidal fibers ( fig.3(a)) with some rising and descending fibers. We consider first a twisted fiber at a distance r from the axis: r = re ρ + ze z in cylindrical coordinates (ρ, ϕ, z) ( fig.3(b)). The geometry of the helix of constant pitch P gives ϕ/2π = z/P and we define the reduced pitch as p = P/2π. For pitch large compared to r, the tangent vector of the fiber is e t (z) ≃ r/p e ϕ + e z . The tension is t(z) = t(z)e t (z), and:
dt dz = dt dz e t − r p 2 t(z)e ρ ≃ dt dz e z − r p 2 t(z)e ρ(3)
We first consider the force equilibrium, in a section of the yarn, for a portion of fiber between z and z + dz. The force −(rdz/p 2 )t(z)e ρ is a linear restoring force towards the axis of the yarn: the torsion of the yarn is then equivalent to putting the fiber into a twist-controlled harmonic potential V (r) = t(z)dz (r 2 /2p 2 ). At mechanical equilibrium, contact forces must balance this confining force. The equilibrium of forces in the plane perpendicular to the fiber writes:
(r/p 2 ) t(z) e ρ = Σ j=1..N f n (j) e n (j)    (4)
with N the number of contacts, f n (j) the contact force between z and z + dz exerted by fiber j, and e n (j) the normal vectors at the contact points. Let f n be the order of magnitude of the normal forces f n (j) . Since these have random orientations, the r.h.s. of (4) may be viewed as a 2d random walk in force space, and we should have: t(z)r/p 2 ∼ √N f n . We now consider the force along z on a rising fiber due to the N/2 fibers that do not rise. Each contact exerts a sliding force ≃ µ m f n , and then (dt/dz) ≃ (N/2)µ m f n ≃ (√N/2)µ m t(z)r/p 2 . We finally obtain:
dt/dz = µ (r/p 2 ) t(z)    (5)
with µ = (√N/2)µ m . The coordination number for a random close packing of disks being 4 [20], we should have µ ≃ µ m , in agreement with laboratory and numerical experiments. Integrating (5) along z gives t(L) = t 0 exp(µrL/p 2 ). Using θ = L/p, and dN (r)/dr = N r/R 2 for the density of rising fibers, the force on the yarn section is:
T M = ∫ r=0 r=R t 0 exp(µθ 2 r/L) dN (r)    (6a)
= T 0 · 2[(H − 1) exp H + 1]/H 2    (6b)
where T 0 = N t 0 /2, and with H the Hercules twist Number previously defined. Since t 0 appears only as a prefactor of the exponential amplification, the scaling ln(T M /T 0 ) ∼ H is expected to hold if (6a) is extended to a radius-dependent tension t 0 (r), as is the case for dense packings of twisted fibers [21], or if there is disorder in the values of t 0 . Staple yarn. We now apply our results to a yarn made of an assembly of fibers of length L as shown in fig.4. Fig.4(c) shows a yarn which separates into two parts at an arbitrary plane z = 0. A fiber with its center located above this plane rises. Let z e be the distance between the end of the fiber and the plane, and t 0 the tension at the end of the fiber. Integrating (5) from −z e to 0 gives t(z = 0) = t 0 exp(µrz e /p 2 ). By symmetry, the relation is the same for a descending fiber. Noting P(z e ) dz e the probability that a fiber ends at a distance between z e and z e + dz e , the total separating force is then:
T M = ∫ r=0 r=R dN (r) ∫ z e t 0 exp(µrz e /p 2 ) P(z e ) dz e    (7a)
= N t 0 · 2[exp(H/2) − (1 + H/2)]/(H/2) 2    (7b)
where H = µRL/p 2 . We used dN (r) = 2N r dr/R 2 with N the number of fibers in one section, and assumed a uniform distribution of the ends of fibers, P(z e ) = 2/L for 0 ≤ z e ≤ L/2. The tension N t 0 that the yarn may support without twist is then amplified by a factor A(H) = 2[exp(H/2) − (1 + H/2)]/(H/2) 2 . We expect that the exponential amplification still occurs for various distributions P(z e ): i.e. taking P(z e ) as a Dirac distribution δ(z e − L) in (7a), we recover (6b). Exponential amplification should also occur in case of disordered values of t 0 , or if the fiber trajectories are not perfectly helicoidal. Critical Hercules twist Number and Spinning Transition. This amplification factor A(H) increases nearly exponentially with H. However, the maximum traction T M cannot be larger than the force T r for which the rupture of the fibers occurs. We note H c the critical value of the Hercules twist Number which verifies T r = N t 0 A(H c ). It occurs at the point C on fig.1(e). H c separates weakly twisted yarns (H < H c ) that fail by sliding of fibers, from highly twisted yarns (H > H c ) that fail by breaking of fibers.
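The closed forms (6b) and (7b) can be checked against direct quadrature of (6a) and (7a); the snippet below does so for arbitrary illustrative values of N, t0 and of the relevant Hercules number (H stands for µθ²R/L in (6) and for µRL/p² in (7)).

```python
# Numerical check of eqs. (6b) and (7b) against the integrals (6a) and (7a).
import numpy as np
from scipy.integrate import quad

N_f, t0, H = 100, 1.0, 20.0

# eq. (6): dN(r) = N r dr / R^2; with x = r/R the integrand is t0 * x * exp(H x)
TM_6a = N_f * t0 * quad(lambda x: x * np.exp(H * x), 0, 1)[0]
TM_6b = (N_f * t0 / 2) * 2 * ((H - 1) * np.exp(H) + 1) / H**2
print(np.isclose(TM_6a, TM_6b))    # True

# eq. (7): dN(r) = 2 N r dr / R^2 and P(z_e) = 2/L on [0, L/2]; y = z_e/L
TM_7a = N_f * t0 * quad(lambda x: 2 * x * quad(
        lambda y: 2 * np.exp(H * x * y), 0, 0.5)[0], 0, 1)[0]
TM_7b = N_f * t0 * 2 * (np.exp(H / 2) - 1 - H / 2) / (H / 2)**2
print(np.isclose(TM_7a, TM_7b))    # True
```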
A typical value of H c for a yarn made of identical fibers of diameter d and of length L may be evaluated. Noting E the Young modulus and ε r the deformation of fibers at rupture, and dropping constant numerical factors, the rupture tension is t r ∼ ε r Ed 2 for a fiber, and T r = N t r for a yarn. Since fibers are slender objects, we take t 0 as the force necessary to straighten into a yarn the fibers that are initially bent. Noting ξ the initial flexion of the fibers ( fig.4b) we have t 0 ∼ Ed 4 ξ/L 3 . It follows that A(H c ) = t r /t 0 ∼ ε r L 3 /ξd 2 . For cotton fibers with L = 30 mm, d = 16 µm, µ m = 0.48 [22,23], ε r ≃ 0.08, and ξ ∼ L/3: A(H c ) ∼ 10 5 , and H c ≃ 33. The associated pitch for a yarn of radius R = 80 µm is P = 2π √(µRL/H c ) ≃ 1.2 mm. From a microscopic inspection of the yarn, we measured a similar value of the pitch, P ≃ 1.5 mm. For fibers made of an identical material with ξ ∼ L, and dropping the non-exponential term in A(H) ∼ exp(H/2), we obtain the simple scaling H c ∼ 4 ln(L/d): H c is in the range 20 − 40 when L/d varies between 10 2 and 10 4 .
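For the cotton parameters quoted above, the estimate of H c (and the corresponding R opt of eq. (8) and pitch) can be reproduced numerically; since the text drops order-one prefactors, this literal evaluation of A(H c ) = ε r L 3 /(ξ d 2 ) lands in the mid-to-high 30s rather than exactly at 33.

```python
# Order-of-magnitude estimate of H_c, R_opt and pitch for the cotton parameters.
import numpy as np
from scipy.optimize import brentq

L, d, mu, eps_r = 30e-3, 16e-6, 0.48, 0.08
xi = L / 3

A = lambda H: 2 * (np.exp(H / 2) - 1 - H / 2) / (H / 2)**2   # factor of eq. (7b)
ratio = eps_r * L**3 / (xi * d**2)                           # t_r / t_0 estimate

Hc = brentq(lambda H: A(H) - ratio, 1.0, 100.0)              # solve A(H_c) = t_r/t_0
R_opt = 2 * mu * eps_r * L / Hc                              # eq. (8)
P = 2 * np.pi * np.sqrt(mu * 80e-6 * L / Hc)                 # pitch for R = 80 um

print(f"t_r/t_0 ~ {ratio:.1e}, H_c ~ {Hc:.0f}, "
      f"R_opt ~ {R_opt*1e6:.0f} um, P ~ {P*1e3:.1f} mm")
```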
Optimal yarn. The maximum resistance of a yarn is attained for H ≥ H c , but is it possible to attain this value? Indeed, twisting a yarn elongates the fibers, which may break: twisting a yarn too much reduces its strength, a fact already noticed by Galileo [2]. The elongation may be evaluated: a length dz of an initially straight fiber at r = R becomes ds = dz √(1 + γ 2 ) after the twist of the yarn. The deformation ε = (ds − dz)/dz ≃ γ 2 /2 should be lower than ε r , so that the twist must verify γ 2 < 2ε r . The maximum attainable value of H without breaking of fibers is then H r = 2µε r L/R. For a maximal resistance without breaking due to twist we must have H c ≤ H ≤ H r , so that:
R ≤ R opt = 2µε r L/H c    (8)
where we introduced R opt as the value of the yarn radius R which verifies H r = H c . R opt is the largest radius of yarn which may reach H c without breaking of fibers. For cotton fibers, with H c ≃ 30, we obtain R opt ≃ 80 µm, which is the value of the radius that we measure for our cotton yarn. Thicker simple yarns may be processed, but will not reach their maximal resistance. Making larger yarns with maximal resistance must be done by putting together elementary yarns of radius R opt , as it is done in practice [24,25]. Concluding remarks. From our experiments and our statistical model, a relatively simple picture emerges to properly describe the spinning transition of yarn: the twist on the fibers creates a confining potential. The tangential force variations are then proportional to the tension, creating an exponential decay of the tension. Although the model is very simple, the experimental variations on model yarns are very well captured. This means that a more refined description of the disorder in the fiber arrays, potential deviations from helicoidal structures of the fibers, or non-linearities arising when the curvature is not small (i.e., when r ≪ p no longer holds) are presumably of weak importance.
A crucial result of our study is that the force amplification may be properly described with a single non-dimensional number H that we named the Hercules twist Number. Although it appears to be a quantity of fundamental interest for yarn processing, this non-dimensional number has apparently not been previously defined. This name echoes the situation of the interleaved phone book experiment [17,18]. In those studies the authors considered a "Hercules Number" 2µM 2 ε/d, with µ the friction coefficient, M the number of pages, ε the sheet thickness and d the distance of overlap between leaves. Writing H as µθ 2 R/L, the structure of these two non-dimensional numbers appears similar, but with the noticeable difference that θ is controlled by the deformation of the yarn, whereas M is fixed. It should be interesting to investigate in detail whether assemblies of frictional objects with different symmetries, such as packings of non-aligned fibers [26] or twisted sheets [27], show similar exponential force variations. Also, it should be interesting to see if recent results on friction effects on the bending of layered structures [28] may be extended to fibrous structures.
Finally, it should be noted that our theory is not only qualitative, but also quantitative, since H c ≃ 30 corresponds to the twist value for real yarns. The exponential increase of the force amplification factor A(H), together with the quadratic dependence H ∼ θ 2 on the twist angle, implies that the spinning process appears in practice as a sharp twist-controlled phase transition.
FIG. 1 .
(a,b) Preparation of the model yarn before (a) and after (b) twisting. (c) Photo of a yarn made from cotton strings after twisting. (d) Traction forces as a function of displacement for a cotton yarn with L = 800 mm: (blue) θ/2π = 11, (red) θ/2π = 3. (e) Symbols: maximum traction force as a function of the full twist angle (cotton yarn, L = 800 mm); the dotted line is a guide for the eye, and the dashed line is the rupture force.
FIG. 2 .
(a) Scaling law f (γ 2 ) for cotton yarn at fixed R and L. Line is a linear fit. (b) Scaling law g(L/R) for flax yarn at fixed twist θ = 2.5 turns. Line is a linear fit. (c) ln(T M /T 0 ) as a function of H. Dashed line is eq.(2), plain curve is eq.(6b). For (a,b,c) crosses and open symbols are experimental data. Cotton, N = 20, R = 3.15 mm: L = 200 mm (▽), L = 400 mm (♦), L = 200 mm (▽). Flax, N = 20, R = 4.15 mm: L = 400 mm, various θ ( ), θ = 2.5 turns, various L (×). Plain symbols are numerical data with L/R = 60: µ m = 1, N = 40 ( ), µ m = 0.5, N = 40 (•), µ m = 0.5, N = 20 (◭), µ m = 0.5, N = 100 (◮), µ m = 0.2, N = 40 ( ). (d,e,f) Schematic drawing of the preparation of the numerical yarn: (d) uniform tension t 0 is applied; (e) shear force s is applied to twist the yarn; (f) tension is increased to t on the top of half the fibers, and on the bottom of the other fibers. (g) Snapshot of a brush of fibers after twisting, and during the increase of t (N = 20, L/R = 60). Note the difference of vertical and horizontal scales.
FIG. 3 .
3(a) Section of a yarn of radius R composed of fibers of diameter d. Gray fibers go downwards and white fibers go upwards. (b) A fiber twisted on a cylinder of radius r.
FIG. 4 .
4(a)Fibers of cotton. (b)Length L and tortuosity ξ of fiber. (c) Separation of a yarn at a plane z = 0. Arrows show the directions relatively to the plane z = 0.
Direct evidence of neanderthal fibre technology and its cognitive and behavioral implications. B L Hardy, M.-H Moncel, C Kerfant, M Lebon, L Bellot-Gurlet, N Mélard, Scientific Reports. 1014889B. L. Hardy, M.-H. Moncel, C. Kerfant, M. Lebon, L. Bellot-Gurlet, and N. Mélard. Direct evidence of neanderthal fibre technology and its cognitive and behavioral implications. Scientific Reports, 10(1):4889, Apr 2020.
Discorsi e dimostrazioni matematiche, intornoà due nuove scienze attenenti alla mecanica & i movimenti locali. G Galilei, Ludovico Elseviro. 1638G. Galilei. Discorsi e dimostrazioni matematiche, intornoà due nuove scienze attenenti alla mecanica & i movimenti locali. Ludovico Elseviro, Leida, Spain, 1638.
Résistance des fibres végétales filées ou commises. Birebent, Annales de la Faculté des sciences de Toulouse : Mathématiques, 3e série. 21Birebent. Résistance des fibres végétales filées ou commises. Annales de la Faculté des sciences de Toulouse : Mathématiques, 3e série, 21:43-137, 1929.
Cordes et membranes. Bouasse Henri, Paris, DelagraveBouasse Henri. Cordes et membranes. Paris, Delagrave, 1926.
Cohesion phenomena in cotton rovings and yarns: Part i: General study. Alberto Barella, Antonio Sust, Textile Research Journal. 323Alberto Barella and Antonio Sust. Cohesion phenomena in cotton rovings and yarns: Part i: General study. Textile Research Journal, 32(3):217-226, 1962.
Tensile tests for cotton yarns, part v: "the weakest link" theorems on the strength of long and of composite specimens. F T Peirce, Journal of the Textile Institute Transactions. 177F.T. Peirce. Tensile tests for cotton yarns, part v: "the weakest link" theorems on the strength of long and of composite specimens. Journal of the Textile Institute Transactions, 17(7):T355-T368, 1926.
Failure processes in elastic fiber bundles. Alex Srutarshi Pradhan, Bikas K Hansen, Chakrabarti, Rev. Mod. Phys. 82Srutarshi Pradhan, Alex Hansen, and Bikas K. Chakrabarti. Failure processes in elastic fiber bundles. Rev. Mod. Phys., 82:499-555, Mar 2010.
Force et elasticité des filés en coton. G Gegauff, Bulletin de la Société industrielle de Mulhouse. 77153G Gegauff. Force et elasticité des filés en coton. Bulletin de la Société industrielle de Mulhouse, 77:153, 1907.
A theoretical approach to the problem of yarn strength. R R Sullivan, Journal of Applied Physics. 133R. R. Sullivan. A theoretical approach to the problem of yarn strength. Journal of Applied Physics, 13(3):157- 167, 1942.
Theoretical analysis of the mechanics of twisted staple fiber yarns. J W S Hearle, Textile Research Journal. 3512J.W.S. Hearle. Theoretical analysis of the mechanics of twisted staple fiber yarns. Textile Research Journal, 35(12):1060-1071, 1965.
Mechanism of yarn failure. Roy M BroughtonJr, Jr Yehia El Mogahzy, Jr D M Hall, Textile Research Journal. 623Jr. Roy M. Broughton, Jr. Yehia El Mogahzy, and Jr. D. M. Hall. Mechanism of yarn failure. Textile Research Journal, 62(3):131-134, 1992.
Development of a constitutive theory for short fiber yarns: Mechanics of staple yarn without slippage effect. Ning Pan, Textile Research Journal. 6212Ning Pan. Development of a constitutive theory for short fiber yarns: Mechanics of staple yarn without slippage effect. Textile Research Journal, 62(12):749-765, 1992.
Development of a constitutive theory for short fiber yarns part ii: Mechanics of staple yarn with slippage effect. Ning Pan, Textile Research Journal. 639Ning Pan. Development of a constitutive theory for short fiber yarns part ii: Mechanics of staple yarn with slippage effect. Textile Research Journal, 63(9):504-514, 1993.
Why clothes don't fall apart: Tension transmission in staple yarns. Patrick B Warren, Robin C Ball, Raymond E Goldstein, Phys. Rev. Lett. 120158001Patrick B. Warren, Robin C. Ball, and Raymond E. Gold- stein. Why clothes don't fall apart: Tension transmission in staple yarns. Phys. Rev. Lett., 120:158001, Apr 2018.
Granular Media: Between Fluid and Solid. Bruno Andreotti, Yoël Forterre, Olivier Pouliquen, Cambridge University PressBruno Andreotti, Yoël Forterre, and Olivier Pouliquen. Granular Media: Between Fluid and Solid. Cambridge University Press, 2013.
Dynamical janssen effect on granular packing with moving walls. Yann Bertho, Frédérique Giorgiutti-Dauphiné, Jean-Pierre Hulin, Physical review letters. 9014144301Yann Bertho, Frédérique Giorgiutti-Dauphiné, and Jean- Pierre Hulin. Dynamical janssen effect on granular packing with moving walls. Physical review letters, 90(14):144301, 2003.
Self-amplification of solid friction in interleaved assemblies. Thomas Héctor Alarcón, Christophe Salez, Jean-Francis Poulard, Élie Bloch, Kari Raphaël, Frédéric Dalnoki-Veress, Restagno, Phys. Rev. Lett. 11615502Héctor Alarcón, Thomas Salez, Christophe Poulard, Jean-Francis Bloch,Élie Raphaël, Kari Dalnoki-Veress, and Frédéric Restagno. Self-amplification of solid friction in interleaved assemblies. Phys. Rev. Lett., 116:015502, Jan 2016.
Nonlinear amplification of adhesion forces in interleaved books. Raphaelle Taub, Salez, Thomas, Alarcòn, Hector, Raphaël, Élie, Christophe Poulard, Frédéric Restagno, Eur. Phys. J. E. 44571Taub, Raphaelle, Salez, Thomas, Alarcòn, Hector, Raphaël,Élie, Poulard, Christophe, and Restagno, Frédéric. Nonlinear amplification of adhesion forces in interleaved books. Eur. Phys. J. E, 44(5):71, 2021.
. J Crassous, to be publishedJ. Crassous. to be published.
Jamming of soft particles: geometry, mechanics, scaling and isostaticity. Martin Van Hecke, Journal of Physics: Condensed Matter. 22333101Martin van Hecke. Jamming of soft particles: geometry, mechanics, scaling and isostaticity. Journal of Physics: Condensed Matter, 22(3):033101, 2009.
Measuring geometric frustration in twisted inextensible filament bundles. Andreea Panaitescu, Gregory M Grason, Arshad Kudrolli, Phys. Rev. E. 9552503Andreea Panaitescu, Gregory M. Grason, and Arshad Kudrolli. Measuring geometric frustration in twisted in- extensible filament bundles. Phys. Rev. E, 95:052503, May 2017.
Woodhead Publishing Series in Textiles. W E Morton, J W S Hearle, Woodhead Publishingfourth edition editionW.E. Morton and J.W.S. Hearle. Woodhead Publishing Series in Textiles. Woodhead Publishing, fourth edition edition, 2008.
Fictional properties of cotton fibers. Richard Baker Belser, James Lester Taylor, Richard Baker Belser and James Lester Taylor. Fictional properties of cotton fibers, 1969.
Structural mechanics of fibers, yarns, and fabrics. W S John, Percy Hearle, Stanley Grosberg, Backer, John Wiley & Sons IncJohn WS Hearle, Percy Grosberg, and Stanley Backer. Structural mechanics of fibers, yarns, and fabrics. John Wiley & Sons Inc., 1969.
Exploring the significance of structural hierarchy in material systems-a review. Ning Pan, Applied Physics Reviews. 1221302Ning Pan. Exploring the significance of structural hier- archy in material systems-a review. Applied Physics Reviews, 1(2):021302, 2014.
Structure and mechanics of aegagropilae fiber network. Gautier Verhille, Sébastien Moulinet, Nicolas Vandenberghe, Mokhtar Adda-Bedia, Patrice Le Gal, Proceedings of the National Academy of Sciences. 11418Gautier Verhille, Sébastien Moulinet, Nicolas Vanden- berghe, Mokhtar Adda-Bedia, and Patrice Le Gal. Structure and mechanics of aegagropilae fiber net- work. Proceedings of the National Academy of Sciences, 114(18):4607-4612, 2017.
Tensional twistfolding of sheets into multilayered architectures and scrolled yarns. Julien Chopin, Arshad Kudrolli, Julien Chopin and Arshad Kudrolli. Tensional twist- folding of sheets into multilayered architectures and scrolled yarns, 2020.
Bending response of a book with internal friction. Samuel Poincloux, Tian Chen, Basile Audoly, Pedro M Reis, Phys. Rev. Lett. 126218004Samuel Poincloux, Tian Chen, Basile Audoly, and Pe- dro M. Reis. Bending response of a book with internal friction. Phys. Rev. Lett., 126:218004, May 2021.
| []
|
[
"Extended finite operator calculus -an example of algebraization of analysis",
"Extended finite operator calculus -an example of algebraization of analysis"
]
| [
"A K Kwaśniewski \nHigher School of Mathematics and Applied Informatics\nul.Kamienna 17BialystokPoland\n\nInstitute of Computer Science\nBia lystok, ul.Sosnowa 64\nBia lystok University\nPOLAND\n",
"E Borak ",
"Herman Weyl ",
"\nIntroduction\n\n"
]
| [
"Higher School of Mathematics and Applied Informatics\nul.Kamienna 17BialystokPoland",
"Institute of Computer Science\nBia lystok, ul.Sosnowa 64\nBia lystok University\nPOLAND",
"Introduction\n"
]
| [
"Central European Journal of Mathematics"
]
| A Calculus of Sequences" started in 1936 by Ward constitutes the general scheme for extensions of classical operator calculus of Rota -Mullin considered by many afterwards and after Ward. Because of the notation we shall call the Ward's calculus of sequences in its afterwards elaborated form -a ψ-calculus.The ψ-calculus in parts appears to be almost automatic, natural extension of classical operator calculus of Rota -Mullin or equivalently -of umbral calculus of Roman and Rota.At the same time this calculus is an example of the algebraization of the analysis -here restricted to the algebra of polynomials. Many of the results of ψ-calculus may be extended to Markowsky Q-umbral calculus where Q stands for a generalized difference operator, i.e. the one lowering the degree of any polynomial by one. This is a review article based on the recent first author contributions [1]. As the survey article it is supplemented by the short indicatory glossaries of notation and terms used by Ward [2], Viskov[7,8], Markowsky [12], Roman [28-32] on one side and the Rota-oriented notation on the other side [9-11,1,3,4,35] (see also[33]).We shall call the Wards calculus of sequences [2] in its afterwards last century elaborated form -a ψ-calculus because of the Viskov's efficient notation [3]-[8]-adopted from Boas and Buck . The efficiency of the Rota oriented language and our notation used has been already exemplified by easy proving of ψ-extended counterparts of all representation independent statements of ψ-calculus [2]. Here these are ψlabelled representations of Graves-Heisenberg-Weyl (GHW)[3],[1],[16],[17] algebra of linear operators acting on the algebra P of polynomials.As a matter of fact ψ-calculus becomes in parts almost automatic extension of Rota -Mullin calculus[9]or equivalently -of umbral calculus of Roman and Rota [9, 10, 11]. The ψ-extension relies on the notion of ∂ ψ -shift invariance of operators with ψ-derivatives ∂ ψ staying for equivalence classes representatives of special differential operators lowering degree of polynomials by one[7,8,12]. Many of the results of ψ-calculus may be extended to Markowsky Q-umbral calculus[12]where Q stands for arbitrary generalized difference operator, i.e. the one lowering the degree of any polynomial by one. Q-umbral calculus [12] -as we call it -includes also those generalized difference operators, which are not series in ψ-derivative ∂ ψ whatever an admissible ψ sequence would be (for -"admissible" -see next section).The survey proposed here reviews the operator formulation of "A Calculus of Sequences" started in 1936 by Ward [2] with the indication of the decisive role the ψ-representations of Graves-Heisenberg-Weyl (GHW) algebra account for formulation and derivation of principal statements of the ψ-extension of finite operator calculus of Rota and its extensions.Restating what was said above let us underline that all statements of standard finite operator calculus of Rota are valid also in the case of ψ-extension under the almost mnemonic , automatic replacement of {D,x, id} generators of GHW by their ψ-representation correspondents {∂ ψ ,x ψ , id} -see definitions 2.1 and 2.5. Naturally any specification of admissible ψ -for example the famous one defining | null | [
"https://arxiv.org/pdf/math/0412233v1.pdf"
]
| 7,864,816 | math/0412233 | 5f4a1457253ded827587e5ec247ce18c595a8422 |
Extended finite operator calculus -an example of algebraization of analysis
2005
A K Kwaśniewski
Higher School of Mathematics and Applied Informatics
ul.Kamienna 17BialystokPoland
Institute of Computer Science
Bia lystok, ul.Sosnowa 64
Bia lystok University
POLAND
E Borak
Herman Weyl
Introduction
Extended finite operator calculus -an example of algebraization of analysis
Central European Journal of Mathematics
2005. arXiv:math/0412233v1 [math.CO]. "The modern evolution... has on the whole been marked by a trend of algebraization." Keywords: extended umbral calculus, Graves-Heisenberg-Weyl algebra
A Calculus of Sequences" started in 1936 by Ward constitutes the general scheme for extensions of classical operator calculus of Rota -Mullin considered by many afterwards and after Ward. Because of the notation we shall call the Ward's calculus of sequences in its afterwards elaborated form -a ψ-calculus.The ψ-calculus in parts appears to be almost automatic, natural extension of classical operator calculus of Rota -Mullin or equivalently -of umbral calculus of Roman and Rota.At the same time this calculus is an example of the algebraization of the analysis -here restricted to the algebra of polynomials. Many of the results of ψ-calculus may be extended to Markowsky Q-umbral calculus where Q stands for a generalized difference operator, i.e. the one lowering the degree of any polynomial by one. This is a review article based on the recent first author contributions [1]. As the survey article it is supplemented by the short indicatory glossaries of notation and terms used by Ward [2], Viskov[7,8], Markowsky [12], Roman [28-32] on one side and the Rota-oriented notation on the other side [9-11,1,3,4,35] (see also[33]).We shall call the Wards calculus of sequences [2] in its afterwards last century elaborated form -a ψ-calculus because of the Viskov's efficient notation [3]-[8]-adopted from Boas and Buck . The efficiency of the Rota oriented language and our notation used has been already exemplified by easy proving of ψ-extended counterparts of all representation independent statements of ψ-calculus [2]. Here these are ψlabelled representations of Graves-Heisenberg-Weyl (GHW)[3],[1],[16],[17] algebra of linear operators acting on the algebra P of polynomials.As a matter of fact ψ-calculus becomes in parts almost automatic extension of Rota -Mullin calculus[9]or equivalently -of umbral calculus of Roman and Rota [9, 10, 11]. The ψ-extension relies on the notion of ∂ ψ -shift invariance of operators with ψ-derivatives ∂ ψ staying for equivalence classes representatives of special differential operators lowering degree of polynomials by one[7,8,12]. Many of the results of ψ-calculus may be extended to Markowsky Q-umbral calculus[12]where Q stands for arbitrary generalized difference operator, i.e. the one lowering the degree of any polynomial by one. Q-umbral calculus [12] -as we call it -includes also those generalized difference operators, which are not series in ψ-derivative ∂ ψ whatever an admissible ψ sequence would be (for -"admissible" -see next section).The survey proposed here reviews the operator formulation of "A Calculus of Sequences" started in 1936 by Ward [2] with the indication of the decisive role the ψ-representations of Graves-Heisenberg-Weyl (GHW) algebra account for formulation and derivation of principal statements of the ψ-extension of finite operator calculus of Rota and its extensions.Restating what was said above let us underline that all statements of standard finite operator calculus of Rota are valid also in the case of ψ-extension under the almost mnemonic , automatic replacement of {D,x, id} generators of GHW by their ψ-representation correspondents {∂ ψ ,x ψ , id} -see definitions 2.1 and 2.5. Naturally any specification of admissible ψ -for example the famous one defining
q-calculus - has its own characteristic properties not pertaining to the standard case of the Rota calculus realization. Nevertheless the overall picture and the system of statements depending only on the GHW algebra is the same, modulo some automatic replacements in formulas demonstrated in the sequel. A large part of that kind of job was already done in [1,3,35].
The aim of this presentation is to give a general picture ( see: Section 3) of the algebra of linear operators on polynomial algebra. The picture that emerges discloses the fact that any ψ-representation of finite operator calculus or equivalently -any ψ-representation of GHW algebra makes up an example of the algebraization of the analysis with generalized differential operators [12] acting on the algebra of polynomials.
We shall delimit all our considerations to the algebra P of polynomials or sometimes to the algebra of formal series. Therefore the distinction between difference and differentiation operators disappears. All linear operators on P are both difference and differentiation operators if the degree of differentiation or difference operator is unlimited.
If all this is extended to Markowsky Q-umbral calculus [12] then many of the results of ψ-calculus may be extended to Q-umbral calculus [12]. This is achieved under the almost automatic replacement of {D,x, id} generators of GHW or their ψ-representation {∂ ψ ,x ψ , id} by their Q-representation correspondents {Q,x Q , id} -see definition 2.5.
The article is supplemented by the short indicatory glossaries of notation and terms used by Ward [1], Viskov [7], [8], Markowsky [12], Roman [28]- [31] on one side and the Rota-oriented [9]- [11] notation on the other side [3], [4,35,1].
Primary definitions, notation and general observations
In the following we shall consider the algebra P of polynomials P =F[x] over the field F of characteristic zero. All operators or functionals studied here are to be understood as linear operators on P . It shall be easy to see that they are always well defined. Throughout the note while saying "polynomial sequence {p n } ∞ 0 " we mean deg p n = n; n ≥ 0 and we adopt also the convention that deg p n < 0 iff p n ≡ 0.
Consider ℑ - the family of functions' sequences (in conformity with the Viskov [7], [8], [3] notation) such that: ℑ = {ψ; R ⊃ [a, b]; q ∈ [a, b]; ψ(q) : Z → F; ψ_0(q) = 1; ψ_n(q) ≠ 0; ψ_{-n}(q) = 0; n ∈ N}. We shall call ψ = {ψ_n(q)}_{n≥0}, with ψ_n(q) ≠ 0 for n ≥ 0 and ψ_0(q) = 1, an admissible sequence. Let now n_ψ denote [3,4]
n_ψ ≡ ψ_{n-1}(q) ψ_n^{-1}(q), n ≥ 0.
Then (note that for admissible ψ, 0_ψ = 0)
n_ψ! ≡ ψ_n^{-1}(q) ≡ n_ψ (n-1)_ψ (n-2)_ψ (n-3)_ψ ... 2_ψ 1_ψ; 0_ψ! = 1,
n^{\underline{k}}_ψ = n_ψ (n-1)_ψ ... (n-k+1)_ψ, \binom{n}{k}_ψ ≡ n^{\underline{k}}_ψ / k_ψ!, and exp_ψ{y} = Σ_{k=0}^∞ y^k / k_ψ!.
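For orientation, an elementary worked example (added here for illustration): the choice ψ_n(q) = 1/n! is admissible and gives n_ψ = ψ_{n-1}/ψ_n = n!/(n-1)! = n, hence n_ψ! = n!, \binom{n}{k}_ψ = \binom{n}{k} and exp_ψ{y} = exp{y}, so in this case all the ψ-objects introduced below collapse to their classical counterparts.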
Definition 2.1. Let ψ be admissible. Let ∂ ψ be the linear operator lowering degree of polynomials by one defined according to ∂ ψ x n = n ψ x n−1 ; n ≥ 0. Then ∂ ψ is called the ψ-derivative.
Remark 2.1. a) For any rational function R the corresponding factorial R(q^n)! of the sequence R(q^n) is defined naturally [3,4,1], as it is defined for the n_ψ sequence, i.e.
R(q^n)! = R(q^n) R(q^{n-1}) ... R(q^1).
The choice ψ_n(q) = [R(q^n)!]^{-1} with R(x) = (1-x)/(1-q) results in the well known q-factorial n_q! = n_q (n-1)_q!; 1_q! = 0_q! = 1, while the ψ-derivative ∂_ψ becomes now (n_ψ = n_q) the Jackson derivative [25,26,27,2,3] ∂_q:
(∂_q φ)(x) = (φ(x) - φ(qx)) / ((1-q)x).
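As a quick check of part a) (a worked example added here): for a monomial,
∂_q x^n = (x^n - (qx)^n)/((1-q)x) = ((1 - q^n)/(1 - q)) x^{n-1} = n_q x^{n-1},
so for instance ∂_q x^2 = (1+q)x, which reduces to the ordinary derivative 2x as q → 1.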
b) Note also that if ψ = {ψ n (q)} n≥0 and ϕ = {ϕ n (q)} n≥0 are two admissible sequences then [∂ ψ , ∂ ϕ ]= 0 iff ψ = ϕ. Here [,] denotes the commutator of operators.
Definition 2.2. Let E^y(∂_ψ) ≡ exp_ψ{y∂_ψ} = Σ_{k=0}^∞ y^k ∂_ψ^k / k_ψ!. E^y(∂_ψ) is called the generalized translation operator.
Note 2.1. [3, 4, 1] E^a(∂_ψ) f(x) ≡ f(x +_ψ a); (x +_ψ a)^n ≡ E^a(∂_ψ) x^n; E^a(∂_ψ) f = Σ_{n≥0} (a^n / n_ψ!) ∂_ψ^n f; and in general (x +_ψ a)^n ≠ (x +_ψ a)^{n-1} (x +_ψ a).
Note also [1] that (1 +_ψ (-1))^{2n+1} = 0 for n ≥ 0, though in general (1 +_ψ (-1))^{2n} ≠ 0 for n ≥ 1.
Note 2.2. [1] exp_ψ(x +_ψ y) ≡ E^x(∂_ψ) exp_ψ{y} - while in general exp_ψ{x + y} ≠ exp_ψ{x} exp_ψ{y}.
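Returning to Note 2.1 for a moment, a worked example (added here for illustration): the lowest nontrivial ψ-binomial reads
(x +_ψ a)^2 = E^a(∂_ψ) x^2 = x^2 + (2_ψ/1_ψ) a x + a^2,
since ∂_ψ x^2 = 2_ψ x and ∂_ψ^2 x^2 = 2_ψ 1_ψ = 2_ψ!. In the q-case (1_q = 1, 2_q = 1+q) this is x^2 + (1+q) a x + a^2, which for q ≠ 1 already differs from (x + a)^2 and illustrates why (x +_ψ a)^n does not factor as (x +_ψ a)^{n-1}(x +_ψ a) in general.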
Possible consequent use of the identity exp_ψ(x +_ψ y) ≡ exp_ψ{x} exp_ψ{y} is quite encouraging. It leads among others to "ψ-trigonometry", either ψ-elliptic or ψ-hyperbolic, via introducing cos_ψ, sin_ψ [1], cosh_ψ, sinh_ψ or, in general, the ψ-hyperbolic functions of m-th order {h_j^{(ψ)}(α)}_{j∈Z_m} defined according to [13]
R ∋ α → h_j^{(ψ)}(α) = (1/m) Σ_{k∈Z_m} ω^{-kj} exp_ψ{ω^k α}; j ∈ Z_m, ω = exp(i 2π/m),
where 1 < m ∈ N and Z_m = {0, 1, ..., m - 1}.
Definition 2.3. A polynomial sequence {p_n}_0^∞ is of ψ-binomial type if it satisfies the recurrence
E^y(∂_ψ) p_n(x) ≡ p_n(x +_ψ y) ≡ Σ_{k≥0} \binom{n}{k}_ψ p_k(x) p_{n-k}(y).
Polynomial sequences of ψ-binomial type [3,4,1] are known to correspond in a one-to-one manner to special generalized differential operators Q, namely to those Q = Q(∂_ψ) which are ∂_ψ-shift invariant operators [3,4,1]. We shall deal in this note mostly with this special case, i.e. with ψ-umbral calculus. However, before proceeding let us supply basic information referring to the general case of Q-umbral calculus.
Definition 2.4. Let P = F[x]. Let Q be a linear map Q : P → P such that ∀_{p∈P} deg(Qp) = (deg p) - 1 (with the convention that deg p = -1 means p = const = 0). Q is then called a generalized difference-tial operator [12] or Gel'fond-Leontiev [7] operator.
Right from the above definitions we infer that the following holds; here, as in Observation 2.1, Qx^n = Σ_{k=1}^n b_{n,k} x^{n-k} with b_{1,1} = 1:
Q = ∂_ψ + Σ_{k≥2} q_k ∂_ψ^k     (2.1)
if and only if
b_{n,k} = \binom{n}{k}_ψ b_{k,k}; n ≥ k ≥ 1; b_{n,1} ≠ 0; b_{1,1} = 1.     (2.2)
If {q_k}_{k≥2} and an admissible ψ exist then these are unique.
Notation 2.1. In the case (2.2) is true we shall write Q = Q(∂_ψ), because then and only then the generalized differential operator Q is a series in powers of ∂_ψ.
Remark 2.2. Note that operators of the form (2.1) constitute a group under superposition of formal power series (compare with the formula (S) in [13]). Of course not all generalized difference-tial operators satisfy (2.1), i.e. are series just in the corresponding ψ-derivative ∂_ψ (see Proposition 3.1). For example [15] let Q = (1/2) D x̂ D - (1/3) D^3. Then Qx^n = (1/2) n^2 x^{n-1} - (1/3) n(n-1)(n-2) x^{n-3}, so according to Observation 2.1 n_ψ = (1/2) n^2, and there exists no admissible ψ such that Q = Q(∂_ψ). Here x̂ denotes the operator of multiplication by x, while n^{\underline{k}} is the special case of n^{\underline{k}}_ψ for the choice n_ψ = n.
Observation 2.2. From Theorem 3.1 in [12] we infer that generalized differential operators give rise to subalgebras Σ_Q of linear maps (plus the zero map of course) commuting with a given generalized difference-tial operator Q. The intersection of two different algebras Σ_{Q_1} and Σ_{Q_2} is just the zero map added.
The importance of the above Observation 2.2, as well as of the definition below, may be further fully appreciated in the context of Theorem 2.1 and Proposition 3.1 to come.
Definition 2.5. Let {p_n}_{n≥0} be a normal polynomial sequence [12], i.e. p_0(x) = 1 and p_n(0) = 0; n ≥ 1. Then we call it the ψ-basic sequence of the generalized difference-tial operator Q if in addition Q p_n = n_ψ p_{n-1}. In parallel we define a linear map x̂_Q : P → P such that x̂_Q p_n = ((n+1)/(n+1)_ψ) p_{n+1}; n ≥ 0. We call the operator x̂_Q the dual to Q operator.
When Q = Q(∂_ψ) = ∂_ψ we write for short: x̂_{Q(∂_ψ)} ≡ x̂_{∂_ψ} ≡ x̂_ψ (see Definition 2.9).
Of course [Q, x̂_Q] = id, therefore {Q, x̂_Q, id} provide us with a continuous family of generators of GHW in - as we call it - the Q-representation of the Graves-Heisenberg-Weyl algebra. In the following we shall restrict to the special case of generalized differential operators Q, namely to those Q = Q(∂_ψ) which are ∂_ψ-shift invariant operators [3, 4, 1] (see Definition 2.6).
At first let us start with the appropriate ψ-Leibnitz rules for the corresponding ψ-derivatives. ψ-Leibnitz rules:
It is easy to see that the following hold for any formal series f and g:
for ∂_q: ∂_q(f · g) = (∂_q f) · g + Q̂f · (∂_q g), where Q̂f(x) = f(qx);
for ∂_R = R(qQ̂) ∂_0: ∂_R(f · g)(z) = R(qQ̂){(∂_0 f)(z) · g(z) + f(0)(∂_0 g)(z)},
where - note - R(qQ̂) x^{n-1} = n_R x^{n-1}; (n_ψ = n_R = n_{R(q)} = R(q^n)); and finally
for ∂_ψ = n̂_ψ ∂_0: ∂_ψ(f · g)(z) = n̂_ψ {(∂_0 f)(z) · g(z) + f(0)(∂_0 g)(z)},
where n̂_ψ x^{n-1} = n_ψ x^{n-1}; n ≥ 1.
Example 2.1. Let Q(∂_ψ) = D x̂ D, where x̂f(x) = xf(x) and D = d/dx. Then ψ = {[(n!)^2]^{-1}}_{n≥0} and Q = ∂_ψ. Let Q(∂_ψ) = R(qQ̂) ∂_0 ≡ ∂_R. Then ψ = {[R(q^n)!]^{-1}}_{n≥0} and Q = ∂_ψ ≡ ∂_R. Here R(z) is any formal Laurent series; Q̂f(x) = f(qx) and n_ψ = R(q^n). ∂_0 is the q = 0 Jackson derivative which - being a difference operator - is at the same time a differential operator of infinite order:
∂_0 = Σ_{n=1}^∞ (-1)^{n+1} (x^{n-1}/n!) d^n/dx^n.     (2.3)
Naturally, with the choice ψ_n(q) = [R(q^n)!]^{-1} and R(x) = (1-x)/(1-q), the ψ-derivative ∂_ψ becomes the Jackson derivative [25, 26, 27, 2, 3] ∂_q:
(∂_q φ)(x) = ((1 - qQ̂)/(1 - q)) ∂_0 φ(x).
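As a quick illustration of the first rule above (a check added here for the reader), take f = g = x in the q-case:
∂_q(x · x) = (x^2 - q^2 x^2)/((1-q)x) = (1+q)x, while (∂_q f) · g + Q̂f · (∂_q g) = 1·x + qx·1 = (1+q)x,
so both sides agree, and reduce to the ordinary product-rule value 2x as q → 1.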
The form of the Bernoulli-Taylor expansion equivalent to (2.3) may be found [16] in Acta Eruditorum from November 1694 under the name "series universalissima". (Taylor's expansion was presented in his "Methodus incrementorum directa et inversa" in 1715 - edited in London.)
Definition 2.6. Let us denote by End(P) the algebra of all linear operators acting on the algebra P of polynomials. Let
Σ_ψ = {T ∈ End(P); ∀ α ∈ F: [T, E^α(∂_ψ)] = 0}.
Then Σ_ψ is a commutative subalgebra of End(P) of F-linear operators. We shall call these operators T the ∂_ψ-shift invariant operators.
We are now in a position to define further basic objects of "ψ-umbral calculus" [3,4,1].
Definition 2.7. Let Q(∂_ψ) : P → P; the linear operator Q(∂_ψ) is a ∂_ψ-delta operator iff a) Q(∂_ψ) is ∂_ψ-shift invariant; b) Q(∂_ψ)(id) = const ≠ 0, where id(x) = x.
The strictly related notion is that of the ∂_ψ-basic polynomial sequence:
Definition 2.8. Let Q(∂_ψ) : P → P be a ∂_ψ-delta operator. A polynomial sequence {p_n}_{n≥0}, deg p_n = n, such that: 1) p_0(x) = 1; 2) p_n(0) = 0, n > 0; 3) Q(∂_ψ) p_n = n_ψ p_{n-1}, is called the ∂_ψ-basic polynomial sequence of the ∂_ψ-delta operator Q(∂_ψ).
Identification 2.1. It is easy to see that the following identification takes place:
∂_ψ-delta operator Q(∂_ψ) = ∂_ψ-shift invariant generalized differential operator Q.
Of course not every generalized differential operator might be considered to be such.
Note 2.3. Let Φ(x; λ) = Σ_{n≥0} (λ^n / n_ψ!) p_n(x) denote the ψ-exponential generating function of the ∂_ψ-basic polynomial sequence {p_n}_{n≥0} of the ∂_ψ-delta operator Q ≡ Q(∂_ψ), and let Φ(0; λ) = 1. Then QΦ(x; λ) = λΦ(x; λ), and Φ is the unique solution of this eigenvalue problem. If in addition (2.2) is satisfied then there exists an admissible sequence φ such that Φ(x; λ) = exp_φ{λx} (see Example 3.1).
The notation and naming established by Definitions 2.7 and 2.8 serve the purpose of preserving and broadening the simplicity of Rota's finite operator calculus also in its extended "ψ-umbral calculus" case [3,4,1]. As an illustration of the efficiency of this notation let us quote after [3] the important Theorem 2.1, which may be proved using the fact that ∀ Q(∂_ψ) ∃! invertible S ∈ Σ_ψ such that Q(∂_ψ) = ∂_ψ S. (For Theorem 2.1 see also Theorem 4.3 in [12], which holds for the operators introduced by Definition 2.5.) Let us first define what follows.
Definition 2.9. (compare with (17) in [8]) The Pincherle ψ-derivative is the linear map ' : Σ_ψ → Σ_ψ; T' = T x̂_ψ - x̂_ψ T ≡ [T, x̂_ψ], where the linear map x̂_ψ : P → P is defined in the basis {x^n}_{n≥0} as follows:
x̂_ψ x^n = (ψ_{n+1}(q) (n+1) / ψ_n(q)) x^{n+1} = ((n+1)/(n+1)_ψ) x^{n+1}; n ≥ 0.
Then the following theorem is true [3].
Theorem 2.1. (ψ-Lagrange and ψ-Rodrigues formulas [34,11,12,23,3]) Let {p_n(x)}_{n=0}^∞ be the ∂_ψ-basic polynomial sequence of the ∂_ψ-delta operator Q(∂_ψ). Let Q(∂_ψ) = ∂_ψ S. Then for n > 0:
(1) p_n(x) = Q(∂_ψ)' S^{-n-1} x^n;
(2) p_n(x) = S^{-n} x^n - (n_ψ/n) (S^{-n})' x^{n-1};
(3) p_n(x) = (n_ψ/n) x̂_ψ S^{-n} x^{n-1};
(4) p_n(x) = (n_ψ/n) x̂_ψ (Q(∂_ψ)')^{-1} p_{n-1}(x)  (← the Rodrigues ψ-formula).
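As a minimal consistency check (added here for illustration), take Q(∂_ψ) = ∂_ψ itself, so that S = id. Formula (3) then gives
p_n(x) = (n_ψ/n) x̂_ψ x^{n-1} = (n_ψ/n)·(n/n_ψ) x^n = x^n,
i.e. the ∂_ψ-basic sequence of ∂_ψ is {x^n}_{n≥0}, as it must be, since ∂_ψ x^n = n_ψ x^{n-1} together with x^0 = 1 and x^n(0) = 0 for n > 0.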
For the proof one uses typical properties of the Pincherle ψ-derivative [3]. Because ∂_ψ' = id we arrive at the simple and crucial Observation 2.3. One derives the above ψ-Leibnitz rule from the ψ-Heisenberg-Weyl exponential commutation rules exactly the same way as in the {D, x̂, id} GHW representation (compare with Proposition 2.2.1 in [18]). The ψ-Heisenberg-Weyl exponential commutation relations read:
exp{t∂ ψ } exp{ax ψ } = exp{at} exp{ax ψ } exp{t∂ ψ }. (2.5)
To this end let us introduce a pertinent ψ-multiplication * ψ of functions as specified below.
Notation 2.2. x *_ψ x^n = x̂_ψ(x^n) = ((n+1)/(n+1)_ψ) x^{n+1}; n ≥ 0; hence x *_ψ 1 = (1/1_ψ) x. Also x^n *_ψ x = x̂_ψ^n(x) = (1_ψ (n+1)!/(n+1)_ψ!) x^{n+1}; n ≥ 0; hence 1 *_ψ x = x. Therefore x *_ψ α1 = x *_ψ α = α (1/1_ψ) x and α1 *_ψ x = α *_ψ x = αx, and ∀ x, α ∈ F: f(x) *_ψ x^n = f(x̂_ψ) x^n.
For k ≠ n, x^n *_ψ x^k ≠ x^k *_ψ x^n, and in general x^n *_ψ x^k ≠ x^{n+k}; compare this with (x +_ψ a)^n ≠ (x +_ψ a)^{n-1}(x +_ψ a). In order to facilitate in the future the formulation of observations accounted for on the basis of the ψ-calculus representation of GHW algebra we shall use what follows.
Definition 2.10. With Notation 2.2 adopted, let us define the *_ψ powers of x according to
x^{n*_ψ} ≡ x *_ψ x^{(n-1)*_ψ} = x̂_ψ(x^{(n-1)*_ψ}) = x *_ψ x *_ψ ... *_ψ x (n factors) = (n!/n_ψ!) x^n; n ≥ 0.
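For orientation, a worked example (added here): for n = 2 one has x^{2*_ψ} = x *_ψ x^{1*_ψ} = (2!/2_ψ!) x^2 = (2/(2_ψ 1_ψ)) x^2. In the q-case (1_q = 1, 2_q = 1+q) this reads x^{2*_q} = (2/(1+q)) x^2, and indeed ∂_q x^{2*_q} = (2/(1+q))(1+q) x = 2x = 2 x^{1*_q}, in agreement with Observation 2.5(a) below; everything collapses to the ordinary powers as q → 1.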
Note that x^{n*_ψ} *_ψ x^{k*_ψ} = (n!/n_ψ!) x^{(n+k)*_ψ} ≠ x^{k*_ψ} *_ψ x^{n*_ψ} = (k!/k_ψ!) x^{(n+k)*_ψ} for k ≠ n, and x^{0*_ψ} = 1. This noncommutative ψ-product *_ψ is devised so as to ensure the following observations.
Observation 2.5.
(a) ∂_ψ x^{n*_ψ} = n x^{(n-1)*_ψ}; n ≥ 0;
(b) exp_ψ[αx] ≡ exp{α x̂_ψ} 1;
(c) exp[αx] *_ψ (exp_ψ{β x̂_ψ} 1) = (exp_ψ{[α + β] x̂_ψ}) 1;
(d) ∂_ψ(x^k *_ψ x^{n*_ψ}) = (D x^k) *_ψ x^{n*_ψ} + x^k *_ψ (∂_ψ x^{n*_ψ}); hence
(e) ∂_ψ(f *_ψ g) = (Df) *_ψ g + f *_ψ (∂_ψ g); f, g - formal series;
(f) f(x̂_ψ) g(x̂_ψ) 1 = f(x) *_ψ g̃(x), where g̃(x) = g(x̂_ψ) 1.
which is the unique solution (up to a constant factor) of the ∂_ψ-difference equations system
∂_ψ p_m(x) + λ p_m(x) = λ p_{m-1}(x), m > 0; ∂_ψ p_0(x) = -λ p_0(x).     (2.7)
Naturally N(λ, x) = exp[λx] *_ψ exp_ψ[-λx].
As announced -the rules of ψ -product * ψ are accounted for on the basis of ψ-calculus representation of GHW algebra. Indeed,it is enough to consult Observation 2.5 and to introduce ψ-Pincherle derivation∂ ψ of series in powers of the symbolx ψ as below. Then the correspondence between generic relative formulas turns out evident.
Observation 2.6. Let ∂̂_ψ ≡ ∂/∂x̂_ψ be defined according to ∂̂_ψ f(x̂_ψ) = [∂_ψ, f(x̂_ψ)]. Then ∂̂_ψ x̂_ψ^n = n x̂_ψ^{n-1}; n ≥ 0, and ∂̂_ψ x̂_ψ^n 1 = ∂_ψ x^{n*_ψ}; hence [∂̂_ψ f(x̂_ψ)] 1 = ∂_ψ f(x),
where f is a formal series in powers of x̂_ψ or, equivalently, in *_ψ powers of x.
As an example of application note how the solution of (2.7) is obtained from the obvious solution p_m(x̂_ψ) of the ∂̂_ψ-Pincherle differential equation (2.8), formulated within the GHW algebra generated by {∂_ψ, x̂_ψ, id}:
∂̂_ψ p_m(x̂_ψ) + λ p_m(x̂_ψ) = λ p_{m-1}(x̂_ψ), m > 0; ∂̂_ψ p_0(x̂_ψ) = -λ p_0(x̂_ψ).     (2.8)
Namely, due to Observation 2.5(f), p_m(x) = p_m(x̂_ψ) 1, where
p_m(x̂_ψ) = ((λ x̂_ψ)^m / m!) exp_ψ[-λ x̂_ψ].     (2.9)
3 The general picture of the algebra End(P ) from GHW algebra point of view
The general picture from the title above relates to the algebra End(P) of operators on P; as before, we consider the algebra P of polynomials P = F[x] over the field F of characteristic zero. With a series of Propositions from [1,3,35,21] we shall draw an overview picture of the situation, distinguished by the possibility to develop further umbral calculus in its operator form for arbitrary polynomial sequences {p_n}_0^∞ [12] instead of those of traditional binomial type only.
In 1901 it was proved [20] that every linear operator mapping P into P may be represented as an infinite series in the operators x̂ and D. In 1986 the authors of [21] supplied the explicit expression for such a series in the most general case of polynomials in one variable (for many variables see [22]). Thus according to Proposition 1 from [21] one has:
Proposition 3.1. Let Q be a linear operator that reduces by one the degree of each polynomial. Let {q_n(x̂)}_{n≥0} be an arbitrary sequence of polynomials in the operator x̂. Then T̂ = Σ_{n≥0} q_n(x̂) Q^n defines a linear operator that maps polynomials into polynomials. Conversely, if T̂ is a linear operator that maps polynomials into polynomials then there exists a unique expansion of the form
T̂ = Σ_{n≥0} q_n(x̂) Q^n.
It is also rather a matter of an easy exercise to prove Proposition 2 from [21] (Proposition 3.2 below): there the unique formal series Φ(x; λ), Φ(0; λ) = 1, satisfies
QΦ(x; λ) = λΦ(x; λ),
and then also P(x; λ) = Φ(x; λ)^{-1} T̂ Φ(x; λ).
Example 3.1. Note that ∂_ψ exp_ψ{λx} = λ exp_ψ{λx}; exp_ψ[x]|_{x=0} = 1.   (*)
Hence for the indicator of T̂, with T̂ = Σ_{n≥0} q_n(x̂) ∂_ψ^n, we have:
P(x; λ) = [exp_ψ{λx}]^{-1} T̂ exp_ψ{λx}.   (**)
After choosing ψ_n(q) = [n_q!]^{-1} we get exp_ψ{x} = exp_q{x}. In this connection note that exp_0(x) = 1/(1-x) and exp(x) are mutual limit deformations for |x| < 1, due to
(exp_0(z) - 1)/z = exp_0(z) ⇒ exp_0(z) = 1/(1-z) = Σ_{k=0}^∞ z^k; |z| < 1,
i.e. exp_q(x) = Σ_{n=0}^∞ x^n/n_q! tends to exp(x) as q → 1 and to 1/(1-x) as q → 0.
Therefore the corresponding specifications of (*), such as exp_0(λx) = 1/(1-λx) or exp(λx), lead to corresponding specifications of (**) for the divided difference operator ∂_0 and the D operator, including special cases from [21].
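As a small check of (*) in the q = 0 case (added here for illustration): with ∂_0 φ(x) = (φ(x) - φ(0))/x one finds
∂_0 [1/(1 - λx)] = (1/(1 - λx) - 1)/x = λ/(1 - λx),
i.e. exp_0(λx) = 1/(1 - λx) is indeed an eigenfunction of the divided difference operator ∂_0 with eigenvalue λ.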
To be complete let us still introduce [3,4] an important operator x̂_{Q(∂_ψ)} dual to Q(∂_ψ). It is now obvious that the following holds (Proposition 3.3): every linear operator T that maps polynomials into polynomials has a unique expansion of the form
T = Σ_{n≥0} q_n(x̂_{Q(∂_ψ)}) Q(∂_ψ)^n.     (3.1)
Comment 3.2. The pair Q(∂_ψ), x̂_{Q(∂_ψ)} of dual operators is expected to play a role in the description of quantum-like processes apart from the q-case now vastly exploited [3,4].
Naturally Proposition 3.2 for the Q(∂_ψ) and x̂_{Q(∂_ψ)} dual operators is also valid. Summing up: we have the following picture for End(P) - the algebra of all linear operators acting on the algebra P of polynomials.
Σ(P) ≡ ⋃_Q Σ_Q ⊂ End(P),
and of course Σ(P) ≠ End(P), where the subfamily Σ(P) (with the zero map) breaks up into the sum of subalgebras Σ_Q according to commutativity of these generalized difference-tial operators Q (see Definition 2.4 and Observation 2.2). Also to each subalgebra Σ_ψ, i.e. to each Q(∂_ψ) operator, there corresponds its dual operator x̂_{Q(∂_ψ)}; x̂_{Q(∂_ψ)} ∉ Σ_ψ, and both the Q(∂_ψ) and x̂_{Q(∂_ψ)}
End(P ) =[{∂ ψ ,x ψ }] = [{Q (∂ ψ ) ,x Q(∂ ψ ) }] = [{Q ,x Q }]
i.e. the algebra End(P ) is generated by any dual pair {Q ,x Q } including any dual pair {Q (∂ ψ ) ,x Q(∂ ψ ) } or specifically by {∂ ψ ,x ψ } which in turn is determined by a choice of any admissible sequence ψ.
As a matter of fact, and in other words: we have bijective correspondences between different commutation classes of ∂_ψ-shift invariant operators from End(P), different abelian subalgebras Σ_ψ, distinct ψ-representations of the GHW algebra, different ψ-representations of the reduced incidence algebra R(L(S)) - isomorphic to the algebra Φ_ψ of ψ-exponential formal power series [3] - and finally distinct ψ-umbral calculi [8,12,15,24,34,3,35]. These bijective correspondences may be naturally extended to encompass also Q-umbral calculi [12,1], Q-representations of the GHW algebra [1] and abelian subalgebras Σ_Q. (Recall: R(L(S)) is the reduced incidence algebra of L(S), where L(S) = {A; A ⊂ S; |A| < ∞}, S is countable, and (L(S); ⊆) is a partially ordered set ordered by inclusion [11,3].) This is the way Rota's device has been carried into effect. The device "much is the iteration of the few" [11] - much of the properties of literally all polynomial sequences, as well as of the GHW algebra representations, is the application of a few basic principles of the ψ-umbral difference operator calculus [3,35,1].
ψ-Integration Remark:
Recall: ∂_0 x^n = x^{n-1}. ∂_0 is identical with the divided difference operator. ∂_0 is identical with ∂_ψ for ψ = {ψ_n(q)}_{n≥0}; ψ_n(q) = 1; n ≥ 0. Let Q̂f(x) = f(qx).
Recall also that there corresponds to the "∂_q difference-ization" the q-integration [25,26,27], which is a right inverse operation to the "q-difference-ization" [35,1]. Namely
F(z) :≡ ∫_q φ(z) := (1-q) z Σ_{k=0}^∞ φ(q^k z) q^k,     (3.2)
i.e.
F(z) ≡ ∫_q φ(z) = (1-q) z Σ_{k=0}^∞ q^k Q̂^k φ(z) = (1-q) z (1/(1 - qQ̂)) φ(z).     (3.3)
Of course
∂_q ∘ ∫_q = id     (3.4)
as
((1 - qQ̂)/(1 - q)) ∂_0 (1-q) ẑ (1/(1 - qQ̂)) = id.     (3.5)
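As a concrete illustration (a worked example added here), applying (3.2) to φ(z) = z^n gives
∫_q z^n = (1-q) z Σ_{k=0}^∞ q^k (q^k z)^n = (1-q) z^{n+1} Σ_{k=0}^∞ q^{k(n+1)} = ((1-q)/(1-q^{n+1})) z^{n+1} = z^{n+1}/(n+1)_q,
and indeed ∂_q [z^{n+1}/(n+1)_q] = z^n, in agreement with (3.4); as q → 1 one recovers the ordinary antiderivative z^{n+1}/(n+1).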
Naturally (3.5) might serve to define a right inverse operation to the "q-difference-ization"
(∂_q φ)(x) = ((1 - qQ̂)/(1 - q)) ∂_0 φ(x)
and consequently the "q-integration" as represented by (3.2) and (3.3). As is well known, the definite q-integral is a numerical approximation of the definite integral obtained in the q → 1 limit. Following the q-case example we introduce now an R-integration (consult Remark 2.1):
∫_R x^n = x̂ (1/R(qQ̂)) x^n = (1/R(q^{n+1})) x^{n+1}; n ≥ 0.     (3.6)
Of course ∂_R ∘ ∫_R = id, as
R(qQ̂) ∂_0 x̂ (1/R(qQ̂)) = id.     (3.7)
Let us then finally introduce the analogous representation for the ∂_ψ difference-ization:
∂_ψ = n̂_ψ ∂_0; n̂_ψ x^{n-1} = n_ψ x^{n-1}; n ≥ 1.     (3.8)
Then
∫_ψ x^n = x̂ (1/n̂_ψ) x^n = (1/(n+1)_ψ) x^{n+1}; n ≥ 0,     (3.9)
and of course
∂_ψ ∘ ∫_ψ = id.     (3.10)
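A one-line check of (3.9)-(3.10), added here for illustration:
∂_ψ ∫_ψ x^n = ∂_ψ [x^{n+1}/(n+1)_ψ] = ((n+1)_ψ/(n+1)_ψ) x^n = x^n,
and in the q-case (3.9) reproduces the q-integral of the previous example, ∫_q x^n = x^{n+1}/(n+1)_q.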
Closing Remark:
The picture that emerges discloses the fact that any ψ-representation of finite operator calculus - or, equivalently, any ψ-representation of the GHW algebra - makes up an example of the algebraization of analysis, naturally when constrained to the algebra of polynomials. We restricted all our considerations to the algebra P of polynomials. Therefore the distinction between difference and differentiation operators disappears: all linear operators on P are both difference and differentiation operators if the degree of the differentiation or difference operator is unlimited.
For example
d/dx = Σ_{k≥1} (d_k/k!) ∆^k, where d_k = [(d/dx) x(x-1)...(x-k+1)]|_{x=0} = (-1)^{k-1}(k-1)!,
or
∆ = Σ_{n≥1} (δ_n/n!) d^n/dx^n, where δ_n = [∆ x^n]|_{x=0} = 1.
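A quick numerical check of the first expansion (added here for illustration):
d_1 = 1, d_2 = [(d/dx) x(x-1)]|_{x=0} = [2x - 1]|_{x=0} = -1, d_3 = [(d/dx) x(x-1)(x-2)]|_{x=0} = 2,
so d/dx = ∆ - ∆^2/2 + ∆^3/3 - ..., which is the familiar expansion d/dx = ln(1 + ∆).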
Thus the difference and differential operators and equations are treated on the same footing. For new applications due to the first author see [4,1,36,37,38,39,40,41]. Our goal here was to deliver the general scheme of the "ψ-umbral" algebraization of the analysis of general differential operators [12]. Most of the general features presented here are known to be pertinent to the Q-representation of finite operator calculus (Viskov, Markowsky, Roman), where Q is any linear operator lowering the degree of any polynomial by one. So it is the most general example of the algebraization of analysis for general differential operators [12].
Glossary
In order to facilitate for the reader simultaneous access to the quoted references of the classic masters of umbral calculus, there now follow short indicatory glossaries of the notation used by Ward [2], Viskov [7,8], Markowsky [11], Roman [28]-[32] on one side and the Rota-oriented notation on the other side. See also [33].
Ward ↔ Rota-oriented (this note)
[k]! ↔ k_ψ!
D = D_x, the operator D ↔ ∂_ψ, the ψ-derivative
D x^n = [n] x^{n-1} ↔ ∂_ψ x^n = n_ψ x^{n-1}
(x + y)^n ↔ (x +_ψ y)^n
(x + y)^n ≡ Σ_{r=0}^n [n, r] x^{n-r} y^r ↔ (x +_ψ y)^n = Σ_{k=0}^n \binom{n}{k}_ψ x^k y^{n-k}
Ward ↔ Rota-oriented (this note)
basic displacement symbol ↔ generalized shift operator
E^t; t ∈ Z ↔ E^y(∂_ψ) ≡ exp_ψ{y ∂_ψ}; y ∈ F
Eφ(x) = φ(x + 1) ↔ E(∂_ψ)φ(x) = φ(x +_ψ 1)
E^t φ(x) = φ(x + t) ↔ E^y(∂_ψ) x^n ≡ (x +_ψ y)^n
basic difference operator ↔ ψ-difference delta operator
∆ = E - id, ∆ = ε(D) - id = Σ_{n=0}^∞ D^n/[n]! - id ↔ ∆_ψ = E^y(∂_ψ) - id
Roman ↔ Rota-oriented (this note)
t; t x^n = n x^{n-1} ↔ ∂_ψ, the ψ-derivative; ∂_ψ x^n = n_ψ x^{n-1}
⟨t^k | p(x)⟩ = p^{(k)}(0) ↔ [∂_ψ^k p(x)]|_{x=0}
Roman ↔ Rota-oriented (this note)
evaluation functional ↔ generalized shift operator
ε_y(t) = exp{yt} ↔ E^y(∂_ψ) = exp_ψ{y ∂_ψ}
⟨t^k | x^n⟩ = n! δ_{n,k}; ⟨ε_y(t) | p(x)⟩ = p(y) ↔ [E^y(∂_ψ) p_n(x)]|_{x=0} = p_n(y)
ε_y(t) x^n = Σ_{k≥0} \binom{n}{k} x^k y^{n-k} ↔ E^y(∂_ψ) p_n(x) = Σ_{k≥0} \binom{n}{k}_ψ p_k(x) p_{n-k}(y)
formal derivative: f'(t) ≡ (d/dt) f(t) ↔ Pincherle derivative: [Q(∂_ψ)]' ≡ (d/d∂_ψ) Q(∂_ψ)
f̄(t), compositional inverse of the formal power series f(t) ↔ Q^{-1}(∂_ψ), compositional inverse of the formal power series Q(∂_ψ)
θ_t; θ_t x^n = x^{n+1}; n ≥ 0 ↔ x̂_ψ; x̂_ψ x^n = ((n+1)/(n+1)_ψ) x^{n+1}; n ≥ 0
θ_t t = x̂ D ↔ x̂_ψ ∂_ψ = x̂ D = N̂
Σ_{k≥0} (s_k(x)/k!) t^k = [g(f̄(t))]^{-1} exp{x f̄(t)} ↔ Σ_{k≥0} (s_k(x)/k_ψ!) z^k = s(q^{-1}(z)) exp_ψ{x q^{-1}(z)}
{s_n(x)}_{n≥0} - Sheffer sequence for (g(t), f(t)) ↔ q(t), s(t) - indicators of Q(∂_ψ) and S_{∂_ψ}
Roman ↔ Rota-oriented (this note)
g(t) s_n(x) = q_n(x), q_n - the sequence associated for f(t) ↔ s_n(x) = S^{-1}_{∂_ψ} q_n(x), q_n - the ∂_ψ-basic sequence of Q(∂_ψ)
The expansion theorem ↔ The First Expansion Theorem
h(t) = Σ_{k=0}^∞ (⟨h(t) | p_k(x)⟩/k!) f(t)^k ↔ T = Σ_{n≥0} ([T p_n(z)]|_{z=0}/n_ψ!) Q(∂_ψ)^n
p_n(x) - sequence associated for f(t) ↔ ∂_ψ-basic polynomial sequence {p_n}_0^∞
exp{y f̄(t)} = Σ_{k=0}^∞ (p_k(y)/k!) t^k ↔ exp_ψ{y Q^{-1}(z)} = Σ_{k≥0} (p_k(y)/k_ψ!) z^k
The Sheffer Identity ↔ The Sheffer ψ-Binomial Theorem
s_n(x + y) = Σ_{k=0}^n \binom{n}{k} p_k(y) s_{n-k}(x) ↔ s_n(x +_ψ y) = Σ_{k≥0} \binom{n}{k}_ψ s_k(x) q_{n-k}(y)
Viskov ↔ Rota-oriented (this note)
θ_ψ - the ψ-derivative; θ_ψ x^n = (ψ_{n-1}/ψ_n) x^{n-1} ↔ ∂_ψ - the ψ-derivative; ∂_ψ x^n = n_ψ x^{n-1}

Viskov ↔ Rota-oriented (this note)
A_p (p = {p_n}_0^∞); A_p p_n = p_{n-1} ↔ Q; Q p_n = n_ψ p_{n-1}
B_p (p = {p_n}_0^∞); B_p p_n = (n + 1) p_{n+1} ↔ x̂_Q; x̂_Q p_n = ((n+1)/(n+1)_ψ) p_{n+1}
E^y_p (p = {p_n}_0^∞); E^y_p p_n(x) = Σ_{k=0}^n p_{n-k}(x) p_k(y) ↔ E^y(∂_ψ) ≡ exp_ψ{y ∂_ψ}; E^y(∂_ψ) p_n(x) = Σ_{k≥0} \binom{n}{k}_ψ p_k(x) p_{n-k}(y)
T - ε_p-operator: T A_p = A_p T, ∀_{y∈F} T E^y_p = E^y_p T ↔ T - ∂_ψ-shift invariant operator: ∀_{α∈F} [T, E^α(∂_ψ)] = 0; E^y - shift operator: E^y φ(x) = φ(x +_ψ y)
Q - δ_ψ-operator: Q - ε_p-operator and Qx = const ≠ 0 ↔ Q(∂_ψ) - ∂_ψ-delta operator: Q(∂_ψ) - ∂_ψ-shift-invariant and Q(∂_ψ)(id) = const ≠ 0
Observation 2.1. Let Q be as in Definition 2.4. Let Qx^n = Σ_{k=1}^n b_{n,k} x^{n-k}, where b_{n,1} ≠ 0 of course. Without loss of generality take b_{1,1} = 1. Then there exist {q_k}_{k≥2} ⊂ F and an admissible ψ such that
Observation 2.3. The triples {∂_ψ, x̂_ψ, id}, for any admissible ψ, constitute the set of generators of the ψ-labelled representations of the Graves-Heisenberg-Weyl (GHW) algebra [17,18,19,35,1]. Namely, as easily seen, [∂_ψ, x̂_ψ] = id (compare with Definition 2.5).
Observation 2.4. In view of Observation 2.3 the general Leibnitz rule in the ψ-representation of the Graves-Heisenberg-Weyl algebra may be written (compare with Proposition 2.2.2 in [18])
Now the consequences of the Leibniz rule (e) for difference-ization of the product are easily feasible. For example the Poisson ψ-process distribution π_m(x
Proposition 3.2. Let Q be a linear operator that reduces by one the degree of each polynomial. Let {q_n(x̂)}_{n≥0} be an arbitrary sequence of polynomials in the operator x̂. Let a linear operator that maps polynomials into polynomials be given by T̂ = Σ_{n≥0} q_n(x̂) Q^n. Let P(x; λ) = Σ_{n≥0} q_n(x) λ^n denote the indicator of T̂. Then there exists a unique formal series Φ(x; λ); Φ(0; λ) = 1 such that:
Let {p_n}_{n≥0} be the ∂_ψ-basic polynomial sequence of the ∂_ψ-delta operator Q(∂_ψ). A linear map x̂_{Q(∂_ψ)} : P → P; x̂_{Q(∂_ψ)} p_n = ((n+1)/(n+1)_ψ) p_{n+1}; n ≥ 0 is called the operator dual to Q(∂_ψ).
Comment 3.1. Dual in the above sense corresponds to adjoint in the ψ-umbral calculus language of the linear functionals' umbral algebra (compare with Proposition 1.1.21 in [23]).
Proposition 3 . 3 .
33Let {q n x Q(∂ ψ ) } n≥0 be an arbitrary sequence of polynomials in the operatorx Q(∂ ψ ) . Then T = n≥0 q n x Q(∂ ψ ) Q (∂ ψ ) n defines a linear operator that maps polynomials into polynomials. Conversely, if T is linear operator that maps polynomials into polynomials then there exists a unique expansion of the form T
Rota -oriented (this note){p n (x), n ≥ 0} -(Q, ψ)-basic {p n } n≥0 -∂ ψ -basic polynomial sequence of the polynomial sequence of theδ ψ -operator Q ∂ ψ -delta-operator Q(∂ ψ )ψ-binomiality property ψ-binomiality propertyΨ y s n (x) = E y (∂ ψ )p n (p k (x)p n−k (y) E a -shift-operator: E y -∂ ψ -shift operator: E a f (x) = f (x + a) E y ϕ(x) = ϕ(x + ψ y) G -shift-invariant operator: T -∂ ψ -shift invariant operator: EG = GE ∀ α∈F [T, E(Q)] = 0 G -delta-operator: L = L(Q) -Q ψ -delta operator:G -shift-invariant and [L, Q] = 0 andGx = const = 0 L(id) = const = 0 D L (G) G ′ = [G(Q),x Q ] L -Pincherle derivative of G Q -Pincherle derivative D L (G) = [G, M ]{Q 0 , Q 1 , ...} -basic family {p n } n≥0 -ψ-basic for differential operator L polynomial sequence of the generalized difference operator Q binomiality property Qψ-binomiality property P n (x + y) = E y (Q)p n (p k (x)p n−k (y)
Acknowledgements: The authors thank the Referee for suggestions , which have led us to improve the presentation of the paper. The authors express also their gratitude to Katarzyna Kwaśniewska for preparation the L A T E Xversion of this contribution.
[1] A. K. Kwaśniewski: Bulletin de la Soc. des Sciences et des Lettres de Lódź 52, Serie: Recherches sur les deformations 36, 45 (2002). ArXiv: math.CO/0312397.
[2] M. Ward: Amer. J. Math. 58, 255 (1936).
[3] A. K. Kwaśniewski: Rep. Math. Phys. 48 (3), 305 (2001). ArXiv: math.CO/0402078.
[4] A. K. Kwaśniewski: Integral Transforms and Special Functions 2 (4), 333 (2001).
[5] R. P. Boas and R. C. Buck: Am. Math. Monthly 63, 626 (1959).
[6] R. P. Boas and R. C. Buck: Polynomial Expansions of Analytic Functions, Springer, Berlin 1964.
[7] O. V. Viskov: Soviet Math. Dokl. 16, 1521 (1975).
[8] O. V. Viskov: Soviet Math. Dokl. 19, 250 (1978).
[9] G.-C. Rota and R. Mullin: On the foundations of combinatorial theory, III. Theory of Binomial Enumeration, in "Graph Theory and Its Applications", Academic Press, New York 1970.
[10] G.-C. Rota, D. Kahaner and A. Odlyzko: J. Math. Anal. Appl. 42, 684 (1973).
[11] G.-C. Rota: Finite Operator Calculus, Academic Press, New York 1975.
[12] G. Markowsky: J. Math. Anal. Appl. 63, 145 (1978).
[13] A. K. Kwaśniewski: Advances in Applied Clifford Algebras 9, 41 (1999).
[14] O. V. Viskov: Trudy Matiematicz'eskovo Instituta AN SSSR 177, 21 (1986).
[15] A. Di Bucchianico and D. Loeb: J. Math. Anal. Appl. 92, 1 (1994).
[16] N. Ya. Sonin: Izw. Akad. Nauk 7, 337 (1897).
[17] C. Graves: Proc. Royal Irish Academy 6, 144 (1853-1857).
[18] P. Feinsilver and R. Schott: Algebraic Structures and Operator Calculus, Kluwer Academic Publishers, New York 1993.
[19] O. V. Viskov: Integral Transforms and Special Functions 1, 2 (1997).
[20] S. Pincherle and U. Amaldi: Le operazioni distributive e le loro applicazioni all'analisi, N. Zanichelli, Bologna 1901.
[21] S. G. Kurbanov and V. M. Maximov: Dokl. Akad. Nauk Uz. SSSR 4, 8 (1986).
[22] A. Di Bucchianico and D. Loeb: Integral Transforms and Special Functions 4, 49 (1996).
[23] P. Kirschenhofer: Sitzunber. Abt. II Oster. Akad. Wiss. Math. Naturw. Kl. 188, 263 (1979).
[24] A. Di Bucchianico and D. Loeb: J. Math. Anal. Appl. 199, 39 (1996).
[25] F. H. Jackson: Quart. J. Pure and Appl. Math. 41, 193 (1910).
[26] F. H. Jackson: Messenger of Math. 47, 57 (1917).
[27] F. H. Jackson: Quart. J. Math. 2, 1 (1951).
[28] S. M. Roman: J. Math. Anal. Appl. 87, 58 (1982).
[29] S. M. Roman: J. Math. Anal. Appl. 89, 290 (1982).
[30] S. M. Roman: J. Math. Anal. Appl. 95, 528 (1983).
[31] S. M. Roman: The Umbral Calculus, Academic Press, New York 1984.
[32] S. M. Roman: J. Math. Anal. Appl. 107, 222 (1985).
[33] A. K. Kwasniewski and E. Gradzka: Rendiconti del Circolo Matematico di Palermo Serie II, Suppl. 69, 117 (2002).
[34] J. F. Steffensen: Acta Mathematica 73, 333 (1944).
[35] A. K. Kwasniewski: Integral Transforms and Special Functions 14, 499 (2003).
[36] A. K. Kwasniewski: The logarithmic Fib-binomial formula, Advan. Stud. Contemp. Math. 9, No 1 (2004), 19-26. ArXiv: math.CO/0406258.
[37] A. K. Kwasniewski: On basic Bernoulli-Ward polynomials, Bulletin de la Societe des Sciences et des Lettres de Lodz 54, Serie: Recherches sur les Deformations, Vol. 45 (2004), 5-10. ArXiv: math.CO/0405577.
[38] A. K. Kwasniewski: ψ-Appell polynomials' solutions of the -difference calculus nonhomogeneous equation, Bulletin de la Societe des Sciences et des Lettres de Lodz 54, Serie: Recherches sur les Deformations, Vol. 45 (2004), 11-15. ArXiv: math.CO/0405578.
[39] A. K. Kwasniewski: On ψ-umbral difference Bernoulli-Taylor formula with Cauchy type remainder, Bulletin de la Societe des Sciences et des Lettres de Lodz 54, Serie: Recherches sur les Deformations, Vol. 44 (2004), 21-29. ArXiv: math.GM/0312401.
[40] A. K. Kwasniewski: First contact remarks on umbra difference calculus references streams, Bull. Soc. Sci. Lett. Lodz, to appear. ArXiv: math.CO/0403139.
[41] A. K. Kwasniewski: On extended umbral calculus, oscillator-like algebras and Generalized Clifford Algebra, Advances in Applied Clifford Algebras 11, No 2 (2001), 267-279. ArXiv: math.QA/0401083.
| []
|
[
"Clamp cell with in situ pressure monitoring for low-temperature neutron scattering measurements",
"Clamp cell with in situ pressure monitoring for low-temperature neutron scattering measurements"
]
| [
"A Podlesnyak \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"M Loguillo \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"G M Rucker \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"B Haberl \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"R Boehler \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"G Ehlers \nNeutron Technologies Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"L L Daemen \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"D Armitage \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"M. DFrontzek \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n",
"M Lumsden \nNeutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA\n"
]
| [
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Technologies Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA",
"Neutron Scattering Division\nOak Ridge National Laboratory\n37831Oak RidgeTennesseeUSA"
]
| []
| A clamp pressure cell for neutron scattering experiments at low temperatures and in external magnetic fields under pressure up to 2 GPa has been fabricated and tested. The cell provides optical access to the sample space that allows instantaneous pressure determination during sample loading, cooling and measuring using ruby and/or samarium doped strontium tetraborate fluorescence monitoring. A new calibration curve of the pressure-induced shift of the 7 D0 − 5 F0 (0-0) line in the fluorescent spectrum of SrB4O7:Sm 2+ for moderate pressures, P 2 GPa, is given. | 10.1080/08957959.2018.1519560 | [
"https://arxiv.org/pdf/1904.11529v1.pdf"
]
| 104,669,503 | 1904.11529 | 565f0a896b4b846ae4b09923cbb9822f09f2ec64 |
Clamp cell with in situ pressure monitoring for low-temperature neutron scattering measurements
A Podlesnyak
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
M Loguillo
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
G M Rucker
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
B Haberl
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
R Boehler
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
G Ehlers
Neutron Technologies Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
L L Daemen
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
D Armitage
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
M. DFrontzek
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
M Lumsden
Neutron Scattering Division
Oak Ridge National Laboratory
37831Oak RidgeTennesseeUSA
Clamp cell with in situ pressure monitoring for low-temperature neutron scattering measurements
ARTICLE HISTORY Compiled April 29, 2019. Keywords: neutron scattering, pressure measurements, clamp cell, fluorescence
A clamp pressure cell for neutron scattering experiments at low temperatures and in external magnetic fields under pressure up to 2 GPa has been fabricated and tested. The cell provides optical access to the sample space that allows instantaneous pressure determination during sample loading, cooling and measuring using ruby and/or samarium doped strontium tetraborate fluorescence monitoring. A new calibration curve of the pressure-induced shift of the 7 D0 − 5 F0 (0-0) line in the fluorescent spectrum of SrB4O7:Sm 2+ for moderate pressures, P 2 GPa, is given.
Introduction
High-pressure investigations by means of neutron scattering give valuable information about crystal and magnetic structure transformations, as well as lattice and magnetic dynamics of the materials under study. Pressure, as an external parameter, can tune quantum fluctuations and cause quantum phase transitions, in contrast to a classical phase transition induced by thermal fluctuations. Pressures up to ∼ 90 GPa have been reached in neutron diffraction experiments using state-of-the-art diamond anvil cells (DAC) [1][2][3]. However, alternative techniques for applying pressure in a more moderate range, P ≲ 2 GPa, are still widely used in many areas of solid and soft matter physics, chemistry and biology, simply because pressure in this range still has a large impact on many physical properties of matter. Examples include, but are not limited to, unconventional superconductivity [4,5], exotic quantum states and quantum phase transitions [6][7][8], colossal magnetoresistance [9], insulator-metal transitions [10], spin crossover [11], and others. The study of many of these physical phenomena demands cryogenic temperatures and high magnetic fields. Therefore, pressure cells with pressures up to ∼ 2 GPa and suitable for neutron scattering techniques in combination with low temperatures and external magnetic fields open up a wide experimental area and are an essential tool for condensed matter studies.
Pressures up to 2 GPa can be achieved in neutron scattering using piston-cylinder (or clamp) type pressure cells. High pressure neutron-diffraction and especially inelastic neutron scattering (INS) measurements traditionally require large sample volumes. In a clamp cell the sample volume can be as large as 1 cm 3 , that is much larger than in a DAC (typically less than 1 mm 3 ) [12]. On the other hand, the clamp cell can obviously reach higher pressures than a gas pressure cell (typically less than 0.8 GPa). One of the main drawbacks of the clamp cell is that the sample is loaded, pressurized and locked at room temperature outside the neutron instrument. An accurate pressure monitoring at all stages of a measurement, and not only at sample loading time, is vital for a successful experiment. A traditional way to determine the pressure in a clamp cell during an experiment is to measure the unit-cell parameters of a material with a well characterized equation of state, such as NaCl, MgO, or Pb, mixed with the sample [12,13]. This method has several disadvantages: i) The calibrant reduces much needed sample volume; ii) might absorb neutrons; and iii) it can react with the sample. Also, such diffraction measurements are difficult with a neutron spectrometer, which may have insufficient Q-resolution, and require therefore an additional time consuming study using a neutron diffractometer.
The ruby fluorescence method [14] is widely used for in-situ pressure determination in a DAC, where optical access to the sample is available. For the measurements, a small amount of ruby chips is added to the sample together with the pressure medium. The pressure shift of the laser-excited ruby R1 line is well calibrated at room temperature [15]. The major challenge in ruby fluorescence measurements is that the temperature coefficient dλ/dT is large, that is, the wavelength of the excitation depends not only on the pressure but also on the temperature [16][17][18]. Besides, the R1 ruby line belongs to the R1&R2 doublet (∼6942 and 6928 Å, respectively), and the excited peaks are broadened under non-hydrostatic stress and changes of temperature. This leads to an overlap of the R1 and R2 peaks, significantly reducing the accuracy of the pressure measurement.
Samarium-doped strontium tetraborate, SrB4O7:Sm2+, is an alternative material that largely avoids the limitations of ruby, owing to the very small temperature dependence of the wavelength of the 7D0 − 5F0 excited line (0-0 hereafter), dλ/dT ∼ −0.001 Å/K compared to dλ/dT ∼ 0.068 Å/K for the ruby excitation [19][20][21]. The 0-0 Sm2+ excitation is a single line which is well isolated from the other fluorescence peaks [20], which makes it more suitable for accurate measurements. The excitation is also only weakly sensitive to non-hydrostatic stress [19].
The primary goal of our work was to build a reliable clamp pressure cell with optical access to the sample space for accurate (∆P < 0.1 GPa) in-situ pressure monitoring during inelastic neutron scattering (INS) experiments using the suite of time-of-flight instruments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) [22], namely the cold neutron chopper spectrometer CNCS [23,24], the hybrid spectrometer HYSPEC [25], the fine resolution Fermi chopper spectrometer SEQUOIA [26], the vibrational spectrometer VISION [27], and the wide angular-range chopper spectrometer ARCS [28]. Our pressure cells can also be used for diffraction measurements at low temperature, albeit with limitations due to strong elastic background scattering. To increase the reliability and accuracy of the pressure determination, we use two calibrants, ruby and Sr-borate, simultaneously. The measuring system allows us to monitor pressure in the clamp cell during the loading process as well as during low temperature neutron scattering experiments. We also present here a new calibration curve of the pressure-induced shift of the 0-0 line of SrB4O7:Sm2+ for moderate pressures P ≲ 2.0 GPa.
Experiment
A schematic view of the clamp cell, which has been designed with the particular needs of neutron scattering experiments in mind, is shown in Fig. 1(a). The outer body, made from Al-alloy (7075 T651), is small enough (diameter 32 mm) to fit a 3He insert, which allows the sample to be cooled down to ∼0.3 K. The inner sleeve is made of either copper beryllium alloy [12] or Ni-Cr-Al alloy [29,30], allowing pressures up to 1.6 GPa and 2.0 GPa, respectively. The inner sleeve is slightly conical and is pushed into the outer body to provide radial support.
The piston (Fig. 1(b)) has a hole (1.8 mm diameter) in order to provide optical access to the sample space. The piston is made of Bohler S390PM, a micrograin tool steel hardened to Rockwell Rc-67. It is produced through powder metallurgy techniques and machined to tight tolerances. The diamond seat is machined at a 60-degree included angle and honed to precisely fit the diamond. The hole through the piston is sized to position the fiber optic probe for good optical access to the sample. The diamond is installed by seating it squarely and applying a small amount of optically clear Stycast epoxy around the top edge of the diamond where it contacts the piston.
The sample is placed in a teflon tube of ∼ 4.5 mm diameter and ∼ 15 mm length, allowing about 250 mm 3 of sample volume. During the measurements the top and bottom edges of the pressure cell are covered with cadmium to reduce the background from residual scattering. As a pressure transmitting medium we use either Fluorinert FC-770 or deuterated methanol. Both fluids ensure quasi-hydrostatic conditions in a pressure range of our interest [31,32] and we did not find any difference between them. Samples to be measured can be either oriented single crystals or poly-crystalline material. All materials used in the construction of the pressure cell are nonmagnetic. Both pressure cells (with either Cu-Be or Ni-Cr-Al sleeve) were successfully tested in a cryomagnet in fields up to 8 T and temperatures down to 1.7 K.
The ruby powder is commercially available. The Sr-borate was prepared by reacting strontium carbonate with metaboric acid at high temperature. Metaboric acid was prepared by heating boric acid in air for 24 hours at a temperature of 130−150 °C,

3B(OH)3 −→ (BOH)3O3 + 3H2O.

In a typical synthesis 5 grams of boric acid were placed in a tall cylindrical alumina crucible heated in air in a convection oven. The metaboric acid thus prepared was mixed with strontium carbonate and a small amount of samarium oxide corresponding to the desired level of doping,

3SrCO3 + 4(BOH)3O3 −→ 3SrB4O7 + 6H2O + 3CO2.
In a typical synthesis, 4.86 g of SrCO 3 (20 mmol; MW 147.63) were mixed with 3.51 g of (BOH) 3 O 3 (27 mmol; MW 131.43). After mixing and grinding the powders together, the samarium dopant was added and grinding was continued. The resulting mixture was pressed into several pellets (1 cm diameter die; 6 tons). The pellets were placed in a cylindrical alumina crucible (3 cm diameter × 5 cm height), and heated in a programmable muffle furnace. The temperature was ramped from room temperature to 800 • C over 60 minutes. The furnace temperature was then held at 800 • C for 6 hours. The furnace temperature was then further raised to 850 • C over a period of 30 minutes and held at 850 • C for 12 hours. The furnace temperature was then finally increased again to 880 • C over 30 minutes and held at that temperature for another 6 hours, at which point the sample was cooled to room temperature over a period of 3 hours (about 5 • C/min). The solid mass recovered in the crucible was ground up and x-ray diffraction showed the expected strontium tetraborate structure. An additional annealing step above 900 • C (but below the 994 • C melting point) after grinding improved the crystallinity slightly.
The excitation light was provided by a 5320Å line of a LRS-0532 Diode-Pumped Solid-State (DPSS) Laser from Laserglow [33] with a maximum power of 200 mW. For detection we use a HR4000 spectrometer from Ocean Optics [34] calibrated for the wavelength 6800 − 7170Å with Toshiba TCD1304AP linear CCD array (3648 pixels). The optical fiber line is incorporated into the standard sample stick that allows us to monitor pressure in the cryostat, the cryomagnet or the closed cycle refrigerator (CCR).
We used a DAC for the calibration of the pressure-induced shift of the 0-0 line of SrB 4 O 7 :Sm 2+ . The DAC allowed us to cover a pressure range exceeding the clamp cell limit in order to ensure the accuracy of the obtained calibration curve.
Results and Discussion
Empty cell background
The materials used to construct the pressure cell absorb and scatter neutrons (elastically and inelastically; coherently and incoherently), reducing the incident neutron flux on the sample and contributing to the scattered signal. This beam attenuation and background scattering increases counting time, reduces signal to noise ratio and produces overlapping peaks. In order to estimate the background and scattering profile of the sample environment, we performed both neutron diffraction and INS measurements of the empty pressure cells. Neutron diffraction test measurements were done using the wide-angle neutron diffractometer WAND 2 [35] at the High Flux Isotope Reactor (HFIR) reactor at ORNL. An incident neutron beam with a wavelength of 1.4827Å (37.2 meV) was selected with a Ge (113) monochromator. Fig. 2 shows the room temperature neutron diffraction patterns of the empty pressure cell with (a) Cu-Be and (b) Ni-Cr-Al inner sleeve. Strong Bragg peaks at scattering angles 2Θ > 40 degrees make diffraction measurements and the Rietveld refinement challenging. However, the low-angle background 2Θ < 40 degrees is almost free of elastic reflections, except scattering from the pressure transmitting medium. Therefore, the pressure cell can be used for structural studies of magnetic materials provided the empty cell is measured under identical experimental conditions and the scattering signal subtracted. We are planning to build a clamp cell specifically for neutron diffraction, made of titanium- zirconium alloy with null scattering composition [12], which will not produce Bragg reflections in a neutron beam.
A study of neutron beam attenuation by the pressure cells and inelastic scattering background measurements were performed at CNCS. The aluminum alloys of the pressure body have high neutron transparency due to the small absorption and relatively small incoherent and coherent cross sections of Al (σ tot = 1.503 barn, σ abs = 0.231 barn at E i = 25.5 meV) [36]. It is Ni (σ tot = 18.5 barn, σ abs = 4.49 barn) and Cu (σ tot = 8.03 barn, σ abs = 3.78 barn) that gives the major contribution to the beam attenuation. To determine the neutron transmission of the pressure cell we measured the Bragg reflections of a powder sample of the yttrium iron garnet Y 3 Fe 5 O 12 (YIG) outside and in the pressure cell at room temperature. Figure 3 compares the integrated intensities of the (420) Bragg peak of the YIG sample, measured without and in the cells with Ni-Cr-Al and Cu-Be sleeves. At the incident energy E i = 3.3 meV (λ = 4.96Å), the ratio of the integrated intensity for the Cu-Be (Ni-Cr-Al) cell turns out to be I Cu-Be /I no cell = 0.35 (I Ni-Cr-Al /I no cell = 0.24), indicating a reasonable neutron transmission for both pressure cells.
Since experiments with magnetic materials often require cryogenic temperatures, we carried out background measurements at two temperatures T = 10 and 200 K. Figure 4 summarizes the INS background of the two empty cells with Cu-Be and Ni-Cr-Al sleeves for two incident neutron energies E i = 3.3 meV (λ = 4.96Å) and E i = 12.0 meV (λ = 2.61Å) often used in magnon and phonon measurements. We conclude that the inelastic background is rather smooth and strongly dependent on the temperature. The clear asymmetry of the elastic line is instrumental and also due to multiple scattering.
Pressure calibration at room temperature
The pressure shift of the ruby excitation line ∆λ_{R1} at room temperature is well calibrated in a wide pressure range [15,37] by the following equation:

P = 248.4\left[\left(\frac{\lambda_{R_1}(P)}{\lambda_{R_1}(0)}\right)^{7.665} - 1\right], \qquad (1)

where λ_{R1}(0) = 6942.2 Å is the wavelength at ambient pressure, P is expressed in GPa and λ in Å. Note that the nonlinearity of the variation of λ_{R1} as a function of pressure is negligible for P < 2.0 GPa. We also assume that the slope of the temperature dependence of the ruby R1 line is pressure independent in our pressure range [18]. Therefore, for all practical purposes we can adopt the linear relation ∆λ_{R1}/dP = 3.65 ± 0.05 Å/GPa [19,38], which is indistinguishable from Equation 1 in this pressure range (shown as a dashed line in Figure 5).
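As an illustration (not part of the original text), Equation 1 and the linear approximation quoted above are easy to evaluate numerically; the helper names in the short Python sketch below are ours, and the coefficients are simply the room-temperature values given in this section.

```python
# Illustrative helpers (our own function names) for converting a measured ruby R1
# wavelength into pressure, using Eq. (1) and its linear approximation.

LAMBDA_R1_0 = 6942.2   # Angstrom, ambient-pressure R1 wavelength
A, B = 248.4, 7.665    # GPa prefactor and exponent of Eq. (1)
SLOPE_R1 = 3.65        # Angstrom/GPa, linear approximation

def pressure_eq1(lambda_r1):
    """Pressure in GPa from the measured R1 wavelength (Angstrom), Eq. (1)."""
    return A * ((lambda_r1 / LAMBDA_R1_0) ** B - 1.0)

def pressure_linear(delta_lambda):
    """Pressure in GPa from the R1 shift (Angstrom) using the linear slope."""
    return delta_lambda / SLOPE_R1

# For a hypothetical 5 A shift both forms agree to better than 0.01 GPa:
print(pressure_eq1(LAMBDA_R1_0 + 5.0), pressure_linear(5.0))   # ~1.37 GPa each
```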
The calibration of the samarium-doped strontium tetraborate fluorescence was conducted against the ruby fluorescence in a DAC. Ruby chips and Sr-borate powder were placed on the top anvil of a Boehler plate DAC [39] (650 µm diameter culets, 301 stainless steel gasket with a chamber diameter of 300 µm and a height of 75 µm). The cell was then filled with deuterated glycerin as hydrostatic pressure transmitting medium [40]. The pressure was gradually increased using the plate DAC gears. The fluorescence was measured using the SNAP Raman stand equipped with a 532 nm laser and an 1800 mm−1 grating. Spectra were typically acquired within 0.5 s. A spot in the cell was identified that allowed for good quality detection of the ruby and Sr-borate excitations simultaneously (see inset in Figure 5). The same spot was used for all measurements. The ruby as well as the Sr-borate shifts are given relative to a measurement acquired prior to compression. We found that for pressures P < 2.0 GPa the wavelength pressure shift of the 0-0 line is well fitted by the linear relation ∆λ0−0/dP = 2.41(1) Å/GPa, see Figure 5. Note that the linear coefficient we determined in this work is slightly different from the values obtained by Lacam et al. [20] and Datchi et al. [41].
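In the same spirit, the 0-0 coefficient determined here can be used to cross-check the two calibrants against each other. The following sketch is our own illustration (function names and the example shifts are hypothetical), assuming both shifts are measured relative to an ambient-pressure reference spectrum.

```python
# Cross-check of the two calibrants: pressure from the Sm2+ 0-0 shift
# (this work, 2.41 A/GPa) versus the ruby R1 shift (3.65 A/GPa).

SLOPE_00 = 2.41   # Angstrom/GPa, SrB4O7:Sm2+ 0-0 line
SLOPE_R1 = 3.65   # Angstrom/GPa, ruby R1 line

def pressure_00(delta_lambda_00):
    """Pressure (GPa) from the 0-0 line shift in Angstrom."""
    return delta_lambda_00 / SLOPE_00

def cross_check(delta_lambda_00, delta_lambda_r1, tol=0.1):
    """Return both pressures and flag disagreement beyond `tol` (GPa)."""
    p00 = delta_lambda_00 / SLOPE_00
    pr1 = delta_lambda_r1 / SLOPE_R1
    return p00, pr1, abs(p00 - pr1) <= tol

print(cross_check(3.1, 4.7))   # ~1.29 GPa from each calibrant -> consistent
```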
Temperature effect on the pressure
The temperature effect on the R-line emission of ruby has been known for decades [16,42]. A detailed analysis of the physical properties of ruby and of the origin of the R-line shift with temperature and pressure has been performed by K. Syassen [17]. We used the pressure cell with the Cu-Be sleeve for the temperature dependent measurements. For the pressure application the sample space of the cell was filled with the pressure medium and a mixture of ruby chips and SrB4O7:Sm2+ powder. Pressure was applied by a hydraulic press and monitored during the loading. The deviation of the real pressure from the nominal pressure is about 20-30%. For the temperature measurements the pressure cell was mounted on the cold plate of a standard closed cycle refrigerator. Fig. 6 shows the temperature effect on both the R1 and the 0-0 lines, at ambient pressure as well as at P = 1.3 GPa. With decreasing temperature, the R1 line shifts to lower wavelength while the 0-0 line does not shift. Our data for the R1 line agree well with an earlier published model [17], in which the temperature dependence of the shift is fitted by the analytical expression
\nu(T) = \nu_0 - \frac{\alpha_\nu}{\exp(\Theta/T) - 1}, \qquad (2)
with parameters ν0 = 14423.4 cm−1 (corresponding to λ0 = 6933.2 Å), αν = 76.6 cm−1 and Θ = 482 K. It follows from Fig. 6(b) that the pressure drop at P = 1.3 GPa is negligible between ambient temperature and 10 K.
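For completeness, Equation 2 with the parameters quoted above can be evaluated directly, which is convenient for separating the temperature contribution of the R1 line from its pressure shift. The short sketch below is ours; it simply codes Eq. (2) and converts the wavenumber to a wavelength.

```python
import math

# Fit parameters of Eq. (2) for the ruby R1 line at ambient pressure
NU0   = 14423.4   # cm^-1, corresponds to 6933.2 Angstrom
ALPHA = 76.6      # cm^-1
THETA = 482.0     # K

def r1_wavenumber(T):
    """R1 line position in cm^-1 at temperature T (K), Eq. (2)."""
    return NU0 - ALPHA / (math.exp(THETA / T) - 1.0)

def r1_wavelength(T):
    """Same position converted to wavelength in Angstrom."""
    return 1.0e8 / r1_wavenumber(T)

# The R1 line moves by roughly 9 A between 300 K and 10 K:
print(r1_wavelength(300.0) - r1_wavelength(10.0))
```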
Conclusions
To conclude, we fabricated and tested nonmagnetic clamp pressure cells with Cu-Be (P ≲ 1.6 GPa) and Ni-Cr-Al (P ≲ 2.0 GPa) inner sleeves, which are suitable for low temperature neutron diffraction and inelastic neutron scattering experiments, including measurements in external magnetic fields.
Summarizing our neutron scattering measurements of the pressure cell background, we observe significant coherent elastic scattering, as expected. This limits the applicability of the cells for diffraction measurements. However, the refinement of magnetic structure transformations, based mainly on the low-angle magnetic Bragg peaks, is often possible when the empty cell is measured under identical conditions and its signal is subtracted. The neutron transmission is reasonably good over a large energy interval down to Ei ∼ 3 meV. The inelastic background scattering, although high, is smooth at low energy transfers ℏω < 10 meV. There is a large number of scientific subjects which fit well within these parameters, for example low-energy phonons, magnons and crystal-field excitations, as well as quantum phenomena such as quantum tunneling and quantum spin fluctuations. In general, a measurement of the empty cell and its subtraction from the experimental data is highly recommended for both diffraction and INS experiments.
The pressure cells have optical access to the sample space for fluorescence measurements. The temperature independent linear pressure shift ∆λ0−0/dP = 2.41(1) Å/GPa of the 0-0 fluorescence line of SrB4O7:Sm2+ allows an accurate in-situ pressure determination in the entire temperature range from room temperature down to 1.5 K. Cross-checking the 0-0 excitation against the ruby line increases the reliability of the measurements. The pressure applied to the sample can be determined with an accuracy better than ±0.05 GPa.
Currently this equipment is accessible at ORNL's neutron scattering research facilities -the Spallation Neutron Source and the High Flux Isotope Reactor, through ORNL's general user program [43].
Figure 1. a) Schematic view and full assembly of the clamp cell used in the current study. (1) Cryostat sample stick; (2) Optical fiber; (3) Locking nut (Al-alloy); (4,11) Pistons (WC). The left piston has a hole for an optical fiber and (5) a diamond anvil; (6) Anti-extrusion rings (Cu); (7,8) Sample container (Teflon); (9) Inner sleeve (CuBe or NiCrAl alloy); (10) Main body (Al-alloy); (12) Support spacers (WC). b) Enlarged view of the piston (4).
Figure 2. The room temperature neutron diffraction patterns of a) the empty cell with Cu-Be sleeve (the blue bottom pattern) and the empty cell loaded with a Teflon container and Fluorinert (the red upper pattern); b) the empty cell with Ni-Cr-Al sleeve. In (a) the diffraction patterns are shifted for clarity. Note the low angle scattering, 2Θ ∼ 15 degrees, from the pressure transmitting medium.
Figure 3. Integrated intensities of the (420) Bragg reflection of the YIG polycrystalline sample inside and outside of the pressure cell, characterizing the neutron transmission. The sample was measured without the cell (open squares) and inside the cell with Cu-Be (circles) and Ni-Cr-Al sleeve (triangles) at Ei = 3.3 meV (λ = 4.96 Å). Solid lines are Gaussian fits to the data. The pressure cell background was subtracted.
Figure 4. The INS spectra measured from the empty pressure cells with Cu-Be sleeve (a,b) and Ni-Cr-Al sleeve (c,d) at temperatures T = 10 and 200 K. The spectra were obtained with incident energies Ei = 3.3 meV (a,c) and Ei = 12.0 meV (b,d) by integration over a low-Q (1.0 < Q < 1.5 Å−1) and a high-Q (3.0 < Q < 3.5 Å−1) range, respectively.
Figure 5. Calibration of the Sr-borate 0-0 line wavelength shift with pressure at room temperature. The circles and squares are the experimental data for the R1 and the 0-0 excitations, respectively. The solid red line is a linear fit of the 0-0 line shift P = ∆λ/x with x = 2.41. The dashed line is the linear equation for the R1 ruby excitation, ∆λR1/dP = 3.65 Å/GPa [19,38]. The dotted line is the linear extrapolation from the high pressure data obtained by Lacam et al. [20], ∆λ0−0/dP = 2.55 Å/GPa. The inset shows a typical luminescence spectrum of the 0-0 Sm2+ and R1&R2 ruby excitations.
Figure 6. Shift of the R1 (a) and 0-0 (b) emission lines as a function of temperature at ambient pressure (black circles) and at P = 1.3 GPa (red squares). Solid lines are fits of the experimental data to Equation 2, see text.
Acknowledgement
This research used resources at the Spallation Neutron Source and the High Flux Isotope Reactor, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
[1] Boehler R, Guthrie M, Molaison J, et al. Large-volume diamond cells for neutron diffraction above 90 GPa. High Press Res. 2013;33(3):546-554.
[2] Guthrie M, Boehler R, Tulk CA, et al. Neutron diffraction observations of interstitial protons in dense ice. Proc Natl Acad Sci U S A. 2013;110(26):10552-10556.
[3] Boehler R, Molaison JJ, Haberl B. Novel diamond cells for neutron diffraction using multi-carat CVD anvils. Rev Sci Instrum. 2017;88(8):083905.
[4] Tafti FF, Juneau-Fecteau A, Delage ME, et al. Sudden reversal in the pressure dependence of Tc in the iron-based superconductor KFe2As2. Nature Physics. 2013;9:349-352.
[5] Chen Y, Jiang WB, Guo CY, et al. Reemergent Superconductivity and Avoided Quantum Criticality in Cd-Doped CeIrIn5 under Pressure. Phys Rev Lett. 2015;114:146403.
[6] Luo Y, Ronning F, Wakeham N, et al. Pressure-tuned quantum criticality in the antiferromagnetic Kondo semimetal CeNi2−δAs2. Proc Natl Acad Sci U S A. 2015;112(44):13520-13524.
[7] Fujiwara N, Kawaguchi N, Iimura S, et al. Quantum phase transition under pressure in the heavily hydrogen-doped iron-based superconductor LaFeAsO. Phys Rev B. 2017;96:140507.
[8] Li W, Wei XY, Zhu JX, et al. Pressure-induced topological quantum phase transition in Sb2Se3. Phys Rev B. 2014;89:035101.
[9] Cai PL, Hu J, He LP, et al. Drastic Pressure Effect on the Extremely Large Magnetoresistance in WTe2: Quantum Oscillation Study. Phys Rev Lett. 2015;115:057202.
[10] Cheng J, Kweon KE, Larregola SA, et al. Charge disproportionation and the pressure-induced insulator-metal transition in cubic perovskite PbCrO3. Proc Natl Acad Sci U S A. 2015;112(6):1670-1674.
[11] Oka K, Azuma M, Chen W, et al. Pressure-Induced Spin-State Transition in BiCoO3. J Am Chem Soc. 2010;132(27):9438-9443.
[12] Klotz S. Techniques in High Pressure Neutron Scattering. CRC Press; 2013.
[13] Decker DL. High-Pressure Equation of State for NaCl, KCl, and CsCl. J Appl Phys. 1971;42(8):3239-3244.
[14] Forman RA, Piermarini GJ, Barnett JD, et al. Pressure Measurement Made by the Utilization of Ruby Sharp-Line Luminescence. Science. 1972;176(4032):284-285.
[15] Mao HK, Xu J, Bell PM. Calibration of the ruby pressure gauge to 800 kbar under quasi-hydrostatic conditions. J Geophys Res Solid Earth. 1986;91(B5):4673-4676.
[16] Vos WL, Schouten JA. On the temperature correction to the ruby pressure scale. J Appl Phys. 1991;69(9):6744-6746.
[17] Syassen K. Ruby under pressure. High Press Res. 2008;28(2):75-126.
[18] Goncharov AF, Zaug JM, Crowhurst JC, et al. Optical calibration of pressure sensors for high pressures and temperatures. J Appl Phys. 2005;97(9):094917.
[19] Datchi F, LeToullec R, Loubeyre P. Improved calibration of the SrB4O7:Sm2+ optical pressure gauge: Advantages at very high pressures and high temperatures. J Appl Phys. 1997;81(8):3333-3339.
[20] Lacam A, Chateau C. High-pressure measurements at moderate temperatures in a diamond anvil cell with a new optical sensor: SrB4O7:Sm2+. J Appl Phys. 1989;66(1):366-372.
[21] Barnett JD, Block S, Piermarini GJ. An Optical Fluorescence System for Quantitative Pressure Measurement in the Diamond-Anvil Cell. Rev Sci Instrum. 1973;44(1):1-9.
[22] Stone MB, Niedziela JL, Abernathy DL, et al. A comparison of four direct geometry time-of-flight spectrometers at the Spallation Neutron Source. Rev Sci Instrum. 2014;85(4):045113.
[23] Ehlers G, Podlesnyak A, Niedziela JL, et al. The new cold neutron chopper spectrometer at the Spallation Neutron Source: design and performance. Rev Sci Instrum. 2011;82:085108.
[24] Ehlers G, Podlesnyak A, Kolesnikov AI. The cold neutron chopper spectrometer at the Spallation Neutron Source - A review of the first 8 years of operation. Rev Sci Instrum. 2016;87:093902.
[25] Winn B, Filges U, Garlea VO, et al. Recent progress on HYSPEC, and its polarization analysis capabilities. EPJ Web Conf. 2015;83:03017.
[26] Granroth G, Kolesnikov A, Sherline T, et al. SEQUOIA: A newly operating chopper spectrometer at the SNS. J Phys: Conf Ser. 2010;251(1):012058.
[27] Seeger PA, Daemen LL, Larese JZ. Resolution of VISION, a Crystal-Analyzer Spectrometer. Nucl Instr Meth A. 2009;604:719-728.
[28] Abernathy DL, Stone MB, Loguillo MJ, et al. Design and operation of the wide angular-range chopper spectrometer ARCS at the Spallation Neutron Source. Rev Sci Instrum. 2012;83(1):015114.
[29] Uwatoko Y, Todo S, Ueda K, et al. Material properties of NiCrAl alloy and design of a 4 GPa class non-magnetic high-pressure cell. J Phys: Condens Matter. 2002;14(44):11291.
[30] Fujiwara N, Matsumoto T, Nakazawa K, et al. Fabrication and efficiency evaluation of a hybrid NiCrAl pressure cell up to 4 GPa. Rev Sci Instrum. 2007;78(7):073905.
[31] Piermarini GJ, Block S, Barnett J. Hydrostatic limits in liquids and solids to 100 kbar. J Appl Phys. 1973;44(12):5377-5382.
[32] Klotz S, Chervin JC, Munsch P, et al. Hydrostatic limits of 11 pressure transmitting media. J Appl Phys. 2009;42(7):075413.
[33] LASERGLOW TECHNOLOGIES, 873 St. Clair Ave West, Toronto, ON, Canada, M6C1C4.
[34] OCEAN OPTICS, 8060 Bryan Dairy Rd, Largo FL 33777, USA.
[35] Frontzek M, Andrews K, Jones A, et al. The Wide Angle Neutron Diffractometer squared (WAND2) - Possibilities and Future. Physica B: Condens Matter. 2017; in press.
[36] Sears VF. Neutron scattering lengths and cross sections. Neutron News. 1992;3(3):26-37.
[37] Piermarini GJ, Block S, Barnett JD, et al. Calibration of the pressure dependence of the R1 ruby fluorescence line to 195 kbar. J Geophys Res Solid Earth. 1975;46(6):2774-2780.
[38] Leger JM, Chateau C, Lacam A. SrB4O7:Sm2+ pressure optical sensor: Investigations in the megabar range. J Appl Phys. 1990;68(5):2351-2354.
[39] Boehler R. New diamond cell for single-crystal x-ray diffraction. Rev Sci Instrum. 2006;77(11):115103.
[40] Klotz S, Takemura K, Strässle T, et al. Freezing of glycerol-water mixtures under pressure. J Phys: Condens Matter. 2012;24:325103.
[41] Datchi F, Dewaele A, Loubeyre P, et al. Optical pressure sensors for high-pressure-high-temperature studies in a diamond anvil cell. High Press Res. 2007;27(4):447-463.
[42] McCumber DE, Sturge MD. Linewidth and Temperature Shift of the R Lines in Ruby. J Appl Phys. 1963;34(6):1682-1684.
| []
|
[
"Exploring the Hyperchargeless Higgs Triplet Model up to the Planck Scale",
"Exploring the Hyperchargeless Higgs Triplet Model up to the Planck Scale"
]
| [
"Najimuddin Khan \nDiscipline of Physics\nIndian Institute of Technology Indore\nKhandwa Road, Simrol, Indore -453 552India\n"
]
| [
"Discipline of Physics\nIndian Institute of Technology Indore\nKhandwa Road, Simrol, Indore -453 552India"
]
| []
| We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the Higgslike particle discovery at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering-matrix. Considering with and without Z 2 -symmetry on the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation. * A detailed study of the HTM (Y = 0) parameter space which is valid up to 1 TeV, has been | 10.1140/epjc/s10052-018-5766-4 | [
"https://arxiv.org/pdf/1610.03178v3.pdf"
]
| 119,275,648 | 1610.03178 | 3df70484f953087d566d86d2ef2984403a7fff1c |
Exploring the Hyperchargeless Higgs Triplet Model up to the Planck Scale
21 May 2018
Najimuddin Khan
Discipline of Physics
Indian Institute of Technology Indore
Khandwa Road, Simrol, Indore -453 552India
Exploring the Hyperchargeless Higgs Triplet Model up to the Planck Scale
21 May 2018
We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the Higgs-like particle discovery at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without a Z2-symmetry on the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.
I. INTRODUCTION
The discovery of the Higgs boson [1][2][3] in 2012 at the Large Hadron Collider (LHC) confirmed the existence of all the Standard Model (SM) particles and established that the Higgs mechanism is responsible for electroweak symmetry breaking (EWSB). So far, the LHC, operated at pp collision energies of √s ∼ 8 and 13 TeV, has not found any signature of new physics beyond the standard model (BSM). However, various theoretical issues, such as the hierarchy problem related to the mass of the Higgs and the mass hierarchies and mixing patterns in the leptonic and quark sectors, suggest the need for new physics beyond the SM. Different experimental observations, such as the non-zero neutrino mass, the baryon-antibaryon asymmetry of the Universe, the mysterious nature of dark matter (DM) and dark energy, and inflation in the early Universe, indicate the existence of new physics. Moreover, the measured properties of the Higgs boson with mass ∼125 GeV are consistent with those of the scalar doublet as predicted by the SM. However, the experimental data [4] still comfortably allow an extended scalar sector, which may also be responsible for the EWSB.
The present experimental values of the SM parameters of the Lagrangian indicate that, if the validity of the SM is extended up to the Planck mass (MPl = 1.2 × 10^19 GeV), a second, deeper minimum is located near the Planck mass, such that the EW vacuum is metastable. The transition lifetime of the EW vacuum to the deeper minimum is finite, τEW ∼ 10^300 years [5][6][7][8][9][10][11][12][13][14][15][16]. The EW vacuum remains metastable even after adding extra scalar particles to the SM, as has been discussed in Refs. [15][16][17][18][19].
In this work, we add a real hypercharge Y = 0 scalar triplet to the SM. In the literature, this model is termed the hyperchargeless Higgs triplet model, HTM (Y = 0) [20]. We consider that both the neutral CP-even component of the SM doublet and the extra scalar triplet take part in the EWSB. Including radiative corrections, we check the validity of the parameters of the model up to the Planck mass MPl. We review various theoretical and experimental bounds on this model. In particular, we discuss the unitarity bounds on the quartic couplings of the scalar potential; to the best of our knowledge, the unitarity bounds of this model have not been discussed in the literature. Next, we impose a Z2-symmetry such that an odd number of scalar particles of the triplet do not couple with the SM particles. The lightest neutral scalar particle then does not decay and becomes stable. This scalar field can be taken as a viable DM candidate which may account for the relic abundance of the Universe. In this context, it is instructive to explore whether these extra scalars can also prolong the lifetime of the Universe. We find new regions in the parameter space of this model in which the EW vacuum remains metastable. We also consider that the extra neutral scalar field (also compatible with being a viable dark matter candidate) can act as an inflaton, and we show that this scalar field is able to explain the inflationary observables.
* A detailed study of the HTM (Y = 0) parameter space, valid up to 1 TeV, has been performed in Ref. [21]. Two different renormalization schemes, electroweak precision and decoupling of the Higgs triplet, have been discussed in Ref. [22]. Using the electroweak precision test (EWPT) data and the one-loop correction to the ρ parameter, the Higgs mass range has been predicted in Refs. [23][24][25][26][27]. The detailed structure of the vacuum of the scalar potential at tree level has been studied in Ref. [28]. The constraints on the parameter space from the recent LHC µγγ and µZγ data have been discussed in Ref. [29]. The LHC and future collider experiments with high luminosity can be used as a useful tool to detect these extra scalar particles through vector boson scattering [30]. More recently, the inert scalar triplet has been investigated in the context of dark matter direct and indirect detection [31][32][33]. The heavier inert fields can decay through one loop via extra Majorana fermions [34,35]. This model has the required ingredients to realize a successful leptogenesis which can explain the matter asymmetry of the Universe [34,35]. Multi-component dark matter has been investigated [36,37] in the HTM with extra scalar multiplets of SU(2).
The paper is organized as follows. Section II gives a detailed description of the HTM (Y = 0). We discuss the constraints on the model in detail in Sec. III. Considering the lightest Z2-odd neutral particle as a viable DM candidate, we analyze the scalar potential up to the Planck mass and identify the regions of parameter space corresponding to a stable and a metastable EW vacuum in Sec. IV. We discuss inflation in Sec. V. Finally, we conclude in Sec. VI.
II. MODEL
We consider a model with the Higgs doublet, Φ, and a real, isospin I = 1, hypercharge Y = 0 triplet T. The extra scalar triplet consists of a pair of singly charged fields and a CP-even neutral scalar field. The doublet and triplet scalars are conventionally written as [22]
\Phi = \begin{pmatrix} G_1^+ \\ \frac{1}{\sqrt{2}}\left(v_1 + h^0 + i G^0\right) \end{pmatrix}, \qquad T = \begin{pmatrix} \eta^+ \\ v_2 + \eta^0 \\ -\eta^- \end{pmatrix}. \qquad (2.1)
The kinetic part of the Lagrangian is given by
\mathcal{L}_k = |D_\mu \Phi|^2 + \frac{1}{2}\,|D_\mu T|^2, \qquad (2.2)
where the covariant derivatives are defined as,
D_\mu \Phi = \left(\partial_\mu + i\frac{g_2}{2}\,\sigma^a W^a_\mu + i\frac{g_1}{2}\, Y B_\mu\right)\Phi \quad \text{and} \quad D_\mu T = \left(\partial_\mu + i g_2\, t^a W^a_\mu\right) T, \qquad (2.3)

where W^a_µ (a = 1, 2, 3) are the SU(2)_L gauge bosons corresponding to the three generators of the SU(2)_L group and B_µ is the U(1)_Y gauge boson. σ^a (a = 1, 2, 3) are the Pauli matrices, and the t^a can be written as

t^1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad t^2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \qquad t^3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \qquad (2.4)
The scalar potential is such that both the neutral CP-even component of the SM doublet and the extra scalar triplet receive vacuum expectation values (VEVs), and thus take part in the EWSB. After EWSB, one linear combination of the charged scalar fields of the doublet and the triplet is eaten by the W boson, which becomes massive; the orthogonal combination of these fields becomes a massive charged scalar. Similarly, a pseudoscalar of the doublet becomes the longitudinal part of the massive Z gauge boson. This scalar sector may give rise to a signature through the scattering of vector bosons [30] in collider experiments. The spontaneous EWSB generates masses for the W and Z bosons as
M_W^2 = \frac{g_2^2}{4}\left(v_1^2 + 4 v_2^2\right), \qquad M_Z^2 = \frac{g_2^2}{4 c_W^2}\, v_1^2,

where c_W ≡ cos θ_W = g_2/\sqrt{g_1^2 + g_2^2} and s_W ≡ sin θ_W. The scalar doublet VEV v_1 and the triplet VEV v_2 are related to the SM VEV by v_SM (≡ 246.221 GeV) = \sqrt{v_1^2 + 4 v_2^2}. One can see that this model violates custodial symmetry at tree level:

\rho = \frac{M_W^2}{M_Z^2\, c_W^2} = 1 + \frac{4 v_2^2}{v_1^2}. \qquad (2.5)
The experimental value of ρ is 1.0004 ± 0.00024 [38] at 1σ. Hence, δρ ≈ 0.0004 ± 0.00024, and we adopt the bound δρ ≤ 0.001. This puts a stringent constraint on v_2: we find that v_2 should be less than 4 GeV.
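As a quick numerical cross-check (ours, not part of the original analysis), the tree-level relation of Eq. 2.5 combined with δρ ≤ 0.001 indeed caps the triplet VEV at about 4 GeV:

```python
import math

V_SM = 246.221        # GeV, v_SM = sqrt(v1^2 + 4 v2^2)
DELTA_RHO_MAX = 0.001

# delta_rho = 4 v2^2 / v1^2 with v1^2 = V_SM^2 - 4 v2^2, so
# 4 v2^2 (1 + delta_rho) = delta_rho * V_SM^2.
v2_max = math.sqrt(DELTA_RHO_MAX * V_SM**2 / (4.0 * (1.0 + DELTA_RHO_MAX)))
print(round(v2_max, 2), "GeV")   # ~3.89 GeV, i.e. v2 < 4 GeV
```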
The tree-level scalar potential with the Higgs doublet and the real scalar triplet is invariant under SU (2) L × U (1) Y transformation. This is given by
V(\Phi, T) = \mu_1^2\, |\Phi|^2 + \frac{\mu_2^2}{2}\, |T|^2 + \lambda_1 |\Phi|^4 + \frac{\lambda_2}{4}\, |T|^4 + \frac{\lambda_3}{2}\, |\Phi|^2 |T|^2 + \lambda_4\, \Phi^\dagger \sigma^a \Phi\, T^a. \qquad (2.6)
We have the following minimization conditions of the tree-level scalar potential
\mu_1^2 = \frac{1}{2}\left\{2\lambda_4 v_2 - \left(2\lambda_1 v_1^2 + \lambda_3 v_2^2\right)\right\}, \qquad (2.7)

\mu_2^2 = \frac{1}{2 v_2}\left\{\lambda_4 v_1^2 - \lambda_3 v_1^2 v_2 - 2\lambda_2 v_2^3\right\}. \qquad (2.8)
After electroweak symmetry breaking, the squared mass matrix can be expressed as 6 × 6 for the scalar fields (G ± 1 , η ± , η 0 and h 0 ). This matrix is composed of three 2 × 2 submatrices with bases, (G + 1 , η + ), (G − 1 , η − ) and (h 0 , η 0 ). After rotating these fields into the mass basis, we get four physical mass eigenstates (H ± , h, H). The remaining two states (G ± ) and G 0 become the massless Goldstone bosons.
The physical masses of the particles are given by
M_h^2 = \frac{1}{2}\left[(B + A) - \sqrt{(B - A)^2 + 4C^2}\right], \qquad M_H^2 = \frac{1}{2}\left[(B + A) + \sqrt{(B - A)^2 + 4C^2}\right], \qquad (2.9)

M_{H^\pm}^2 = \frac{\lambda_4\left(v_1^2 + 4 v_2^2\right)}{2 v_2},

where

A = 2\lambda_1 v_1^2, \qquad B = \frac{\lambda_4 v_1^2 + 4\lambda_2 v_2^3}{2 v_2}, \qquad C = -\lambda_4 v_1 + \lambda_3 v_1 v_2. \qquad (2.10)
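For a given parameter point, the tree-level spectrum of Eqs. (2.9)-(2.10) can be evaluated directly. The Python sketch below is our own illustration (the input numbers are arbitrary examples, with λ4 carrying mass dimension and given in GeV); for this particular point it returns M_h ≈ 125 GeV and nearly degenerate heavy states.

```python
import math

def scalar_spectrum(lam1, lam2, lam3, lam4, v1, v2):
    """Tree-level masses (GeV) from Eqs. (2.9)-(2.10) and the CP-even mixing angle."""
    A = 2.0 * lam1 * v1**2
    B = (lam4 * v1**2 + 4.0 * lam2 * v2**3) / (2.0 * v2)
    C = -lam4 * v1 + lam3 * v1 * v2
    root = math.sqrt((B - A)**2 + 4.0 * C**2)
    Mh  = math.sqrt(0.5 * ((B + A) - root))
    MH  = math.sqrt(0.5 * ((B + A) + root))
    MHc = math.sqrt(lam4 * (v1**2 + 4.0 * v2**2) / (2.0 * v2))
    sin_gamma = math.sqrt((root - (B - A)) / (2.0 * root))
    return Mh, MH, MHc, sin_gamma

# Illustrative point only: v2 = 3 GeV, lambda_4 = 10 GeV;
# gives M_h ~ 125 GeV, M_H ~ M_H+- ~ 318 GeV, sin(gamma) ~ 0.03.
v2 = 3.0
v1 = math.sqrt(246.221**2 - 4.0 * v2**2)
print(scalar_spectrum(0.13, 0.1, 0.1, 10.0, v1, v2))
```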
The mixing between the doublet and the triplet in the charged and CP-even scalar sectors is given by

\begin{pmatrix} h \\ H \end{pmatrix} = \begin{pmatrix} c_\gamma & s_\gamma \\ -s_\gamma & c_\gamma \end{pmatrix} \begin{pmatrix} h^0 \\ \eta^0 \end{pmatrix}, \qquad (2.11)

\begin{pmatrix} G^\pm \\ H^\pm \end{pmatrix} = \begin{pmatrix} c_\beta & s_\beta \\ -s_\beta & c_\beta \end{pmatrix} \begin{pmatrix} G_1^\pm \\ \eta^\pm \end{pmatrix}, \qquad (2.12)

where

s_\gamma (\equiv \sin\gamma) = \sqrt{\frac{\sqrt{(B-A)^2 + 4C^2} - (B-A)}{2\sqrt{(B-A)^2 + 4C^2}}} \qquad \text{and} \qquad \tan\beta = \frac{2 v_2}{v_1}.

In the large µ_2^2 and small v_2 limit, one can express sin γ and sin β as

s_\gamma = \sqrt{\frac{1}{2} - \frac{1}{2\sqrt{1 + 16\, v_2^2/v_1^2}}} \approx 0 \qquad \text{and} \qquad s_\beta = \frac{2 v_2}{\sqrt{v_1^2 + 4 v_2^2}} \approx 0.

In these limits, the quartic couplings λ_{1,2,3} and λ_4 can be written as

\lambda_1 = \frac{M_h^2}{2 v_1^2}, \qquad \lambda_2 = \frac{2\left(M_H^2 - M_{H^\pm}^2\right)}{v_1^2\, s_\beta^2}, \qquad \lambda_3 = \frac{2\left(M_{H^\pm}^2 - (s_\gamma/s_\beta)\, M_H^2\right)}{v_1^2}, \qquad \lambda_4 = \frac{s_\beta\, M_{H^\pm}^2}{v_1}. \qquad (2.13)
In the same limit, if M_{H^\pm} and M_H are very heavy compared to M_h, then M_{H^\pm} and M_H become degenerate (see Eqs. 2.9 and 2.10). If the mass difference between M_{H^\pm} and M_H is large, the quartic couplings λ_{2,3} violate the perturbativity and unitarity bounds (see Subsections III B and III C).
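Conversely, in the small-v2 limit one can start from target masses and read off the approximate quartic couplings via Eq. (2.13). A minimal sketch (our own helper; purely illustrative inputs) is given below.

```python
import math

def couplings_from_masses(Mh, MH, MHc, v2, v_sm=246.221):
    """Approximate quartics from Eq. (2.13), valid for small v2 and large mu2^2.
    Masses and v2 in GeV; lam4 is returned in GeV."""
    v1 = math.sqrt(v_sm**2 - 4.0 * v2**2)
    s_beta  = 2.0 * v2 / math.sqrt(v1**2 + 4.0 * v2**2)
    s_gamma = math.sqrt(0.5 - 0.5 / math.sqrt(1.0 + 16.0 * v2**2 / v1**2))
    lam1 = Mh**2 / (2.0 * v1**2)
    lam2 = 2.0 * (MH**2 - MHc**2) / (v1**2 * s_beta**2)
    lam3 = 2.0 * (MHc**2 - (s_gamma / s_beta) * MH**2) / v1**2
    lam4 = s_beta * MHc**2 / v1
    return lam1, lam2, lam3, lam4

# Degenerate heavy states at 318 GeV with v2 = 3 GeV roughly recover
# lambda_1 ~ 0.13 and lambda_4 ~ 10 GeV of the previous illustration.
print(couplings_from_masses(125.0, 318.0, 318.0, 3.0))
```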
The SM gauge symmetry, SU
III. CONSTRAINTS ON THE HYPERCHARGELESS HIGGS TRIPLET MODEL
The parameter space of this model is constrained by theoretical considerations like the absolute vacuum stability, perturbativity, and unitarity of the scattering matrix. In the following, we will discuss these theoretical bounds and the constraints of the Higgs to diphoton signal strength from the LHC and the electroweak precision measurements.
A. Vacuum stability bounds
A necessary condition for the stability of the vacuum comes from requiring that the scalar potential is bounded from below, i.e., it should not approach negative infinity along any direction of field space for large field values. For h^0, η^{0,±} ≫ v_{1,2}, the terms µ_1^2|Φ|^2, (µ_2^2/2)|T|^2 and λ_4 Φ^†σ^aΦT^a of the scalar potential in Eq. 2.6 are negligibly small compared to the quartic terms, so the scalar potential is given by

V(h^0, \eta^0, \eta^\pm) = \frac{1}{4}\left[\lambda_1 (h^0)^4 + \lambda_2 \left((\eta^0)^2 + 2\eta^+\eta^-\right)^2 + \lambda_3 (h^0)^2 \left((\eta^0)^2 + 2\eta^+\eta^-\right)\right]. \qquad (3.1)
The potential can be written in a symmetric matrix with basis {(h 0 ) 2 , (η 0 ) 2 , η − η + }. Using the copositivity criteria [101], one can calculate the required conditions for the absolute stability/bounded from below of the scalar potential. The tree-level scalar potential V (Φ, T ) ≡ V (h 0 , η 0 , η ± ) is absolutely stable if
\lambda_1(\Lambda) \geq 0, \qquad \lambda_2(\Lambda) \geq 0, \qquad \lambda_3(\Lambda) \geq -2\sqrt{\lambda_1(\Lambda)\,\lambda_2(\Lambda)}. \qquad (3.2)
The coupling constants are evaluated at a scale Λ using RGEs. In this study, we use the SM RGEs up to three loops, which have been given in Refs. [50][51][52][53]. The triplet contributions are taken up to two loops and are presented in Appendix A. If quantum corrections are included in the scalar potential, then there is a possibility that a new minimum forms along the Higgs field direction near the Planck mass MPl. For negative λ1(Λ) the minimum at the energy scale Λ becomes deeper than the EW minimum, and vice versa. In this situation, the above conditions in Eq. 3.2 become more complicated; these modifications will be shown in Subsection IV B. As λ3 gives a positive contribution to the running of λ2, λ2 remains positive up to the Planck mass MPl. Hence, it is clear that no extra minimum will develop along the new scalar field directions. The sign and the value of λ3 can change the Higgs diphoton signal strength and the stability of the EW vacuum. The importance of the sign of λ3 will be discussed in Subsections III E and IV C.
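The copositivity conditions of Eq. (3.2) are straightforward to monitor along the RG flow. A minimal sketch (our own helper; in practice the arguments would be the running couplings λi(Λ) obtained from the RGEs) is:

```python
import math

def vacuum_is_stable(lam1, lam2, lam3):
    """Tree-level bounded-from-below (copositivity) conditions of Eq. (3.2)."""
    return (lam1 >= 0.0 and lam2 >= 0.0
            and lam3 >= -2.0 * math.sqrt(lam1 * lam2))

print(vacuum_is_stable(0.13, 0.10, 0.10))    # True
print(vacuum_is_stable(0.13, 0.10, -0.30))   # False: lam3 < -2*sqrt(lam1*lam2)
```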
B. Perturbativity bounds
To ensure that the radiatively improved scalar potential V (Φ, T ) remains perturbative at any given energy scale (Λ), one must impose the following conditions,
| \lambda_{1,2,3} | \lesssim 4\pi \qquad \text{and} \qquad \lambda_4/\Lambda \lesssim 4\pi. \qquad (3.3)
C. Unitarity bounds
Unitarity bounds on extended scalar sectors can be calculated from the scattering matrix (S-matrix) of different processes. The technique was developed in Refs. [39,40] for the SM, and it can also be applied to the HTM (Y = 0). The S-matrix for the HTM (Y = 0) consists of different scalar-scalar, gauge boson-gauge boson, and gauge boson-scalar scattering amplitudes. Using the Born approximation, the scattering cross-section for any process can be written as

\sigma = \frac{16\pi}{s} \sum_{l=0}^{\infty} (2l + 1)\, |a_l(s)|^2, \qquad (3.4)
where s = 4E_CM^2 is the Mandelstam variable and E_CM is the center-of-mass energy of the incoming particles. a_l is the partial wave coefficient corresponding to the angular momentum l. This leads to the unitarity constraint Re(a_l) < 1/2. At high energy, the dominant contribution to the amplitude a_l of the two-body scattering processes a, b → c, d comes from the diagrams involving the quartic couplings. Far away from resonances, the other contributions to the amplitude, from the scalar-mediated s-, t- and u-channel processes, are negligibly small. Also, in the high energy limit, the amplitude of scattering processes involving longitudinal gauge bosons can be approximated by the scalar amplitude in which the gauge bosons are replaced by their corresponding Goldstone bosons. For example, the amplitude of
W + L W − L → W + L W − L scattering is equivalent to G + G − → G + G − .
This is known as the equivalence theorem [40,41]. To test the unitarity of the HTM (Y = 0), we therefore construct the S-matrix from the scalar quartic couplings only.
The scalar quartic couplings in the physical bases G ± , G 0 , H ± , h and H are complicated functions of λ's, γ, β. The hhhh vertex is 6(λ 1 cos 4 γ + λ 3 cos 2 γ sin 2 γ + λ 2 sin 4 γ). It is difficult to calculate the unitary bounds in the physical bases. One can consider the non-physical scalar fields bases, i.e., G ± 1 , η ± , G 0 , h 0 and η 0 before the EWSB. Here the crucial point is that the S-matrix which is expressed in terms of the physical fields can be transformed into a S-matrix for the non-physical fields by making an unitary transformation [42,43].
Different quartic couplings in non-physical bases are obtained by expanding the scalar potential of eqn. 2.6 which are given by,
{G^0 G^0 G^0 G^0} = 6λ_1, {G_1^+ G_1^+ G_1^- G_1^-} = 4λ_1, {G_1^+ G_1^- h^0 h^0} = 2λ_1, {G^0 G^0 η^0 η^0} = λ_3, {h^0 h^0 η^0 η^0} = λ_3, {G^0 G^0 η^+ η^-} = λ_3, {h^0 h^0 η^+ η^-} = λ_3, {G^0 G^0 G_1^+ G_1^-} = 2λ_1, {G^0 G^0 h^0 h^0} = 2λ_1, {h^0 h^0 h^0 h^0} = 6λ_1, {G_1^+ G_1^- η^0 η^0} = λ_3, {η^0 η^0 η^0 η^0} = 6λ_2, {G_1^+ G_1^- η^+ η^-} = λ_3, {η^0 η^0 η^+ η^-} = 2λ_2, {η^+ η^+ η^- η^-} = 4λ_2. \qquad (3.5)
The full set of these non-physical scalar scattering processes can be expressed as a 16×16 S-matrix. This matrix is composed of three submatrices of dimensions 6 × 6, 5 × 5, and 5 × 5 which have different initial and final states.
The first 6 × 6 sub-matrix M 1 corresponds to scattering processes whose initial and final states are one of these:
h 0 G + 1 , G 0 G + 1 , η 0 G + 1 , h 0 G + 1 , G 0 η + , and η 0 η + . Using the Feynman rules in eqns. 3.5, one can obtain M 1 =diag( 2λ 1 , 2λ 1 , 2λ 1 , λ 3 , λ 3 , λ 3 ).
The sub-matrix M 2 corresponds to scattering processes with one of the following initial and final states:
h 0 G 0 , G + 1 η − , η + G − 1 , η 0 G 0 , and h 0 η 0 . Similarly, one can calculate M 2 =diag( 2λ 1 , λ 3 , λ 3 , λ 3 , λ 3 ).
The third sub-matrix M_3 corresponds to the scattering of the states (G_1^+ G_1^-, η^+ η^-, G^0 G^0/\sqrt{2}, h^0 h^0/\sqrt{2}, and η^0 η^0/\sqrt{2}). The factor 1/\sqrt{2} appears due to the statistics of identical particles. M_3 is given by

M_3 = \begin{pmatrix}
4\lambda_1 & \lambda_3 & \sqrt{2}\lambda_1 & \sqrt{2}\lambda_1 & \lambda_3/\sqrt{2} \\
\lambda_3 & 4\lambda_2 & \lambda_3/\sqrt{2} & \lambda_3/\sqrt{2} & \sqrt{2}\lambda_2 \\
\sqrt{2}\lambda_1 & \lambda_3/\sqrt{2} & 3\lambda_1 & \lambda_1 & \lambda_3/2 \\
\sqrt{2}\lambda_1 & \lambda_3/\sqrt{2} & \lambda_1 & 3\lambda_1 & \lambda_3/2 \\
\lambda_3/\sqrt{2} & \sqrt{2}\lambda_2 & \lambda_3/2 & \lambda_3/2 & 3\lambda_2
\end{pmatrix}. \qquad (3.6)
The eigenvalues of M_3 are 2\lambda_1, 2\lambda_1, 2\lambda_2, and \frac{1}{2}\left[6\lambda_1 + 5\lambda_2 \pm \sqrt{(6\lambda_1 - 5\lambda_2)^2 + 12\lambda_3^2}\right]. Unitarity of the scattering processes demands that the eigenvalues of the S-matrix be less than 8π.
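The unitarity constraint can be checked numerically by diagonalising the submatrices. The sketch below (our own code, using numpy) builds M1, M2 and M3 from Eqs. (3.5)-(3.6) and verifies that all eigenvalues stay below 8π:

```python
import numpy as np

def unitarity_ok(l1, l2, l3):
    """Check that all eigenvalues of M1, M2, M3 (Eqs. 3.5-3.6) are below 8*pi."""
    m1 = np.diag([2*l1, 2*l1, 2*l1, l3, l3, l3])
    m2 = np.diag([2*l1, l3, l3, l3, l3])
    s2 = np.sqrt(2.0)
    m3 = np.array([[4*l1,  l3,    s2*l1, s2*l1, l3/s2],
                   [l3,    4*l2,  l3/s2, l3/s2, s2*l2],
                   [s2*l1, l3/s2, 3*l1,  l1,    l3/2 ],
                   [s2*l1, l3/s2, l1,    3*l1,  l3/2 ],
                   [l3/s2, s2*l2, l3/2,  l3/2,  3*l2 ]])
    eigs = np.concatenate([np.diag(m1), np.diag(m2), np.linalg.eigvalsh(m3)])
    return bool(np.all(np.abs(eigs) < 8*np.pi))

print(unitarity_ok(0.13, 0.10, 0.10))   # True
print(unitarity_ok(0.13, 0.10, 15.0))   # False: largest eigenvalue of M3 exceeds 8*pi
```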
D. Bounds from electroweak precision experiments
Electroweak precision data impose severe bounds on new physics models via the Peskin-Takeuchi S, T, U parameters [44]. The additional contributions from this model are given by [21,26]

S \simeq 0, \qquad (3.7)

T = \frac{1}{8\pi}\,\frac{1}{\sin^2\theta_W \cos^2\theta_W}\left[\frac{M_H^2 + M_{H^\pm}^2}{M_Z^2} - \frac{2 M_{H^\pm}^2 M_H^2}{M_Z^2\left(M_H^2 - M_{H^\pm}^2\right)}\,\log\frac{M_H^2}{M_{H^\pm}^2}\right] \simeq \frac{1}{6\pi}\,\frac{1}{\sin^2\theta_W \cos^2\theta_W}\,\frac{(\Delta M)^2}{M_Z^2}, \qquad (3.8)

U = -\frac{1}{3\pi}\left[\frac{M_H^4\left(3 M_{H^\pm}^2 - M_H^2\right)}{\left(M_H^2 - M_{H^\pm}^2\right)^3}\,\log\frac{M_H^2}{M_{H^\pm}^2} + \frac{5\left(M_H^4 + M_{H^\pm}^4\right) - 22\, M_{H^\pm}^2 M_H^2}{6\left(M_H^2 - M_{H^\pm}^2\right)^2}\right] \simeq \frac{\Delta M}{3\pi M_{H^\pm}}, \qquad (3.9)

where ∆M = M_{H^\pm} − M_H. S is proportional to sin β.
The experimental value of the ρ parameter demands that the triplet VEV v_2 be less than 4 GeV [38]. Hence, the contributions to the S parameter from the triplet scalar fields are negligible. M_{H^\pm} and M_H are almost degenerate for M_{H^\pm,H} ≫ M_h. The contributions to the T and U parameters from this model are therefore also negligibly small [45].
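To make the smallness of the oblique corrections explicit, the approximate form of T quoted in Eq. (3.8) can be evaluated for a typical radiative mass splitting. A rough numerical sketch (ours, with standard input values for sin²θW and MZ) is:

```python
import math

SIN2_THETA_W = 0.231
MZ = 91.19          # GeV

def t_parameter(delta_m):
    """Approximate T parameter of Eq. (3.8) for a small splitting delta_m (GeV)."""
    s2 = SIN2_THETA_W
    c2 = 1.0 - SIN2_THETA_W
    return delta_m**2 / (6.0 * math.pi * s2 * c2 * MZ**2)

print(t_parameter(0.15))   # ~8e-7 for a ~150 MeV splitting -> negligible
```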
E. Bounds from LHC diphoton signal strength
As the dominant production of h at the LHC proceeds through gluon fusion, the Higgs to diphoton signal strength µ_{γγ} can be written as

\mu_{\gamma\gamma} = \frac{\sigma(gg \to h \to \gamma\gamma)_{\rm HTM}}{\sigma(gg \to h \to \gamma\gamma)_{\rm SM}} = \frac{\sigma(gg \to h)_{\rm HTM}}{\sigma(gg \to h)_{\rm SM}}\,\frac{{\rm Br}(h \to \gamma\gamma)_{\rm HTM}}{{\rm Br}(h \to \gamma\gamma)_{\rm SM}}. \qquad (3.10)

We use the narrow width approximation, as Γ_h^{\rm total}/M_h → 0. The couplings of h to f\bar{f} and V V (V stands for the vector bosons) are proportional to cos γ, so µ_{γγ} can be simplified to

\mu_{\gamma\gamma} = \cos^2\gamma\,\frac{\Gamma^{\rm total}_{h,{\rm SM}}}{\Gamma^{\rm total}_{h,{\rm HTM}}}\,\frac{\Gamma(h \to \gamma\gamma)_{\rm HTM}}{\Gamma(h \to \gamma\gamma)_{\rm SM}}. \qquad (3.11)
The charged Higgs H^± alters the decay widths Γ(h → γγ) and Γ(h → Zγ) at one loop, which contribute to the total width Γ_h^total. Also, if the extra scalar particles (H_T = H, H^±) happen to be lighter than M_h/2, they might contribute to the invisible decay of the Higgs boson. From the global fit analysis [46], such an invisible branching ratio is less than ∼ 20%. In Eq. 3.11, the first ratio provides a suppression of ∼ 0.8 − 1. For M_{H,H^±} > M_h/2, the ratio becomes \Gamma^{\rm total}_{h,{\rm SM}}/\Gamma^{\rm total}_{h,{\rm HTM}} \approx 1/\cos^2\gamma. Hence, the Higgs to diphoton signal strength can be written as

\mu_{\gamma\gamma} \approx \frac{\Gamma(h \to \gamma\gamma)_{\rm HTM}}{\Gamma(h \to \gamma\gamma)_{\rm SM}}. \qquad (3.12)
In the HTM, the additional contribution to Γ(h → γγ) at one loop due to the H^± is given by [47]

\Gamma(h \to \gamma\gamma)_{\rm HTM} = \frac{\alpha^2 M_h^3}{256 \pi^3 v^2}\left|\sum_f N_c^f Q_f^2\, y_f\, F_{1/2}(\tau_f) + y_W F_1(\tau_W) + \frac{Q_{H^\pm}^2\, v\, \mu_{hH^+H^-}}{2 M_{H^\pm}^2}\, F_0(\tau_{H^\pm})\right|^2, \qquad (3.13)

where τ_i = M_h^2/4M_i^2, and Q_f, Q_{H^±} denote the electric charges of the corresponding particles. N_c^f is the color factor. y_f and y_W denote the Higgs couplings to f\bar{f} and W^+W^-. The coupling constant of the hH^+H^- vertex is µ_{hH^+H^-} = 2λ_4 \sin\beta\cos\beta\cos\gamma + \cos^2\beta\left(\lambda_3 v_1 \cos\gamma + 4\lambda_2 v_2 \sin\gamma\right) + \sin^2\beta\left(\lambda_4 \sin\gamma + \lambda_1 v_1 \cos\gamma + \lambda_3 v_2 \sin\gamma\right) \approx \lambda_3 v_{\rm SM}.
The loop functions F (0, 1/2, 1) can be found in Ref [47].
Recently, the ATLAS [48] and CMS [49] collaborations have measured the ratio of the diphoton rate µ γγ of the observed Higgs to the SM prediction. The present combined value of µ γγ is 1.14 +0.19 −0.18 from these experiments [4].
In Γ(h → γγ)_HTM (see Eq. 3.13), a positive λ_3 leads to a destructive interference between the HT and SM contributions, and vice versa. One can see from Eq. 3.13 that the H^± contribution to the Higgs diphoton channel is proportional to λ_3/M_{H^±}^2. If the charged scalar mass is greater than 300 GeV, the contribution of H^± to the diphoton signal is negligibly small. We take the triplet VEV v_2, λ_4 and the other quartic couplings λ_{1,2,3} as input parameters. Depending on these parameters, the mixing angle γ can vary between 0 and π/2, and the triplet scalar masses can become arbitrarily heavy. Here, we assume that no new physics shows up below the Planck mass M_Pl. We examine the renormalization group (RG) flow of all couplings and establish bounds on the heavy scalar masses under the assumption that the parameters remain valid up to the Planck mass M_Pl. In this calculation, we use the SM RGEs up to three loops [50][51][52][53] and the triplet contributions up to two loops. We first calculate all couplings at M_t; to find their values at M_t, one needs to take into account the different threshold corrections up to M_t [5,6,15,16,74,75]. Using the RGEs, we then evolve all the coupling constants from M_t to the Planck mass M_Pl. By this procedure we obtain the regions of parameter space which remain valid up to the Planck mass M_Pl.
We show the allowed region (green) in M H ± − M H plane for this model in Fig. 1. We demand that the EW vacuum of the scalar potential remain absolutely stable and do not violate the perturbative-unitarity up to the Planck mass M Pl . One can also obtain the parameter spaces, corresponding to the metastable EW vacuum which are visibly small in this plane. Furthermore, we impose the EWPT constraints on the parameters so that the region between the black-dashed lines survives.
In Fig. 1, we show the allowed region for fixed central values of all the SM parameters. In the left panel, we present the plot for the choice of the quartic couplings λ 2,3 = 0.1 and triplet VEV v 2 = 3 GeV. Whereas in the right panel, we use the value of triplet VEV v 2 = 1 GeV. We vary the quartic coupling λ 1 and dimensionful mass parameter λ 4 to calculate the neutral CP -even Higgs mass M H , the charged Higgs mass M H ± and the mixing angle γ. These scalar masses increase, whereas mixing angle decreases with λ 4 . We find that the EW vacuum becomes unbounded from below for λ 1 0.128. The theory also violates unitarity bounds for λ 1 0.238 before the Planck mass M Pl . One can see from the Fig. 1 (a), the allowed region becomes smaller for the larger values of heavy scalar masses. In most of the parameter space the running couplings either violate unitary or perturbativity bounds before the Planck mass M Pl .
As λ 2,3 stabilize the scalar potential, we will get a wider green region for smaller scalar masses but it will violate the unitarity bound in the higher mass region. We find that the EW vacuum becomes unbounded from below for the values of the quartic couplings λ 1 0.027 and λ 2,3 = 0.285. We also check that the choice of the quartic couplings λ 1 0.05 and λ 2,3 = 0.285 will violate unitary and perturbativity bounds before the Planck mass M Pl . One can also understand from the expressions of eqns. 2.13 that if we decrease the value of v 2 , the area of allowed region from the stability, unitary and perturbativity bounds will increase. We show the plot in Fig. 1 (b) for the choice of v 2 = 1 GeV.
If the vacuum expectation value of the scalar triplet becomes zero, then the minimization condition of the scalar potential given in eqn. 2.8 is no longer valid. The mass parameter µ 2 becomes free and the parameter λ 4 does not play any role in the stability analysis. In the next section, we will show the detailed stability analysis in the presence of extra Z 2 -symmetry in this model.
IV. DARK MATTER IN HTM (Y = 0)
We impose a Z_2 symmetry on this model such that the scalar triplet is odd under this transformation, i.e., T → −T, whereas the SM fields are even. In the literature, the HTM including the Z_2-symmetry is known as the inert triplet model (ITM) [31]. In this model, the term λ_4 Φ^†σ^aΦT^a is absent from the scalar potential in Eq. 2.6, which implies λ_4 = 0. The Z_2-symmetry prevents the triplet scalar from acquiring a VEV, i.e., v_2 = 0. The potential can have a minimum along the Higgs field direction only, and the EWSB is driven by the SM Higgs doublet. The scalar fields of the triplet do not mix with the scalar fields of the SM doublet. After the EWSB, the scalar potential in Eq. 2.6 is then given by
V(h, H, H^\pm) = \frac{1}{4}\Big[2\mu_1^2 (h + v)^2 + \lambda_1 (h + v)^4 + 2\mu_2^2 \left(H^2 + 2H^+H^-\right) + \lambda_2 \left(H^2 + 2H^+H^-\right)^2 + \lambda_3 (h + v)^2 \left(H^2 + 2H^+H^-\right)\Big]. \qquad (4.1)
(4.1)
Here, v ≡ v_SM, and the masses of these scalar fields h, H and H^± (cf. Eq. 2.9) are given by

M_h^2 = 2\lambda_1 v^2, \qquad M_H^2 = \mu_2^2 + \frac{\lambda_3}{2}\, v^2, \qquad M_{H^\pm}^2 = \mu_2^2 + \frac{\lambda_3}{2}\, v^2. \qquad (4.2)

At tree level the mass of the neutral scalar H and that of the charged particles H^± are degenerate. If we include the one-loop radiative correction, the charged particles become slightly heavier [54,55] than the neutral one. The mass difference between them is given by

\Delta M = \left(M_{H^\pm} - M_H\right)_{\text{1-loop}} = \frac{\alpha M_H}{4\pi}\left[f\!\left(\frac{M_W}{M_H}\right) - c_W^2\, f\!\left(\frac{M_Z}{M_H}\right)\right], \qquad (4.3)

with f(x) = -\frac{x}{4}\left[2x^3 \log(x) + \left(x^2 - 4\right)^{3/2} \log\!\left(\frac{x^2 - 2 - x\sqrt{x^2 - 4}}{2}\right)\right].

It has been shown in Refs. [54,55] that the mass splitting between the charged and neutral scalars remains ∼ 150 MeV for M_H = 0.1 − 5 TeV. In Fig. 2(a), we show the variation of ∆M (green line) with the mass M_H (≡ M_DM). As the Z_2-symmetry also prohibits couplings of an odd number of triplet scalar fields to the SM particles, H can serve as a viable DM candidate which may saturate the measured DM relic density of the Universe. In this work, we use the software package FeynRules [58] along with micrOMEGAs [59,60] to calculate the relic density of the DM. As ∆M is very small, the effective annihilation cross-section is dominated by the co-annihilation channels HH^± → SM particles [57]. Although it is dominated by the co-annihilation channels, we need a very small Higgs portal coupling λ_3 to obtain the correct relic density. The effective annihilation cross-section (see the black line in Fig. 2(a)) decreases rapidly with ∆M for DM masses below 500 GeV and becomes ∼ 10^{-26} cm^3 s^{-1} around M_DM = 2000 GeV. We obtain the relic density in the right ballpark.
In Fig. 2 (b), we present the relic density as a function of the DM mass for the fixed Higgs portal coupling λ3(M_Z) = 0.10. The light red band is excluded by the Higgs invisible decay width [56]. There are two deep regions in the relic density band (red line). The first one is situated near the DM mass M_DM ≈ 45 GeV and is due to the resonance of the s-channel HH^± → SM fermions processes mediated by the vector bosons W^±. The second one is situated near M_DM ≈ M_h/2 and corresponds to the Higgs-mediated HH → SM fermions processes. There is another, shallower region located around M_DM = 100 GeV, which is due to the dominant contributions coming from the HH^±, HH → gauge bosons channels.
For M_DM ≈ 500 GeV, we find that the total cross-section is σv ∼ 10^{-25} cm³ s^{-1}, so the relic density becomes ∼ 0.01. In this region, the dominant channels are H H^± → Z W^±, γ W^± (∼ 35%, ∼ 10%) and H^± H^± → Z W^± (∼ 25%). We also check that, for smaller dark matter masses, varying the Higgs portal coupling λ3 (within the perturbative limit) alters the relic density only in the third decimal place. If we increase the DM mass, the effective annihilation cross-section decreases, mainly due to the mass suppression. We get a DM relic density in the right ballpark for DM masses greater than 1.8 TeV. One can see that the mass splitting ∆M saturates for M_DM > 700 GeV. Hence, the relic density is mainly regulated by the Higgs-mediated s-channel processes, although their contributions are small. We check that the Higgs portal coupling λ3 can be varied between 0 and 1 for DM masses of 1850 GeV to 2200 GeV to obtain the right relic density. For example, we obtain the relic density Ωh² = 0.1198 for λ3 = 0.001 and M_DM = 1894.5 GeV, and the same relic density for λ3 = 0.8 and M_DM = 2040 GeV. However, the running couplings violate the unitarity and perturbativity bounds for λ3 ≳ 0.6.
Non-observation of DM signals in the direct detection experiments XENON 100 [62,63], LUX [64] and LUX-2016 [65] puts severe restrictions [33] on the Higgs portal coupling λ3 for a given DM mass. In this model, we check that the parameter regions which satisfy the relic density constraint are also allowed by the recent LUX-2016 [65] and XENON1T-2017 [66] data.
A. Metastability in ITM (Y = 0)
As the EW vacuum is metastable in the SM, it is important to explore whether the ITM has any remedy in its reserve. Since the scalar WIMP H, protected by the Z2-symmetry, can serve as a viable DM candidate, it is interesting to explore whether it also helps prolong the lifetime of the Universe. The effective Higgs potential gets modified in the presence of these new extra scalars.
The one-loop effective Higgs potential in the \overline{MS} scheme and the Landau gauge is given by
V_1^{SM+IT}(h) = V_1^{SM}(h) + V_1^{IT}(h), \quad (4.4)
where [67-71]
V_1^{SM}(h) = \sum_{i=1}^{5} \frac{n_i}{64\pi^2}\, M_i^4(h) \left[ \ln\frac{M_i^2(h)}{\mu^2(t)} - c_i \right]. \quad (4.5)
Here n_i is the number of degrees of freedom and M_i^2(h) = \kappa_i(t)\, h^2(t) - \kappa_i'(t). The coefficients n_i, c_i, \kappa_i and \kappa_i' can be found in Eqn. (4) of Ref. [67]. t is a dimensionless parameter expressed in terms of the running scale \mu(t) = M_Z \exp(t).
The contributions to the effective Higgs potential from the new scalars (H, H ± ) of the inert scalar triplet are given by [21]
V_1^{IT}(h) = \sum_{j=H,H^+,H^-} \frac{1}{64\pi^2}\, M_j^4(h) \left[ \ln\frac{M_j^2(h)}{\mu^2(t)} - \frac{3}{2} \right], \quad (4.6)
where M_j^2(h) = \frac{1}{2}\lambda_j(t)\, h^2(t) + \mu_2^2(t), with \lambda_{H,H^\pm}(t) = \lambda_3(t).
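As an illustration, the inert-triplet piece of eqn. (4.6) is simple enough to evaluate directly. The sketch below is not the authors' code and uses purely hypothetical numerical inputs for λ3, μ2² and the renormalization scale; in the full analysis these are of course the running quantities described below.

import numpy as np

def V1_IT(h, lam3, mu2sq, mu):
    """Inert-triplet one-loop contribution of eqn. (4.6).

    The three states H, H^+ and H^- share the same field-dependent mass
    M_j^2(h) = lam3*h^2/2 + mu2^2, hence the overall factor of 3.
    """
    Mj2 = 0.5 * lam3 * h**2 + mu2sq
    return 3.0 * Mj2**2 / (64.0 * np.pi**2) * (np.log(Mj2 / mu**2) - 1.5)

# hypothetical example: contribution at h = mu = 10^10 GeV for lam3 = 0.1 and mu2 ~ 1.9 TeV
print(V1_IT(1e10, 0.1, 1897.0**2, 1e10))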
In the present work, the SM contributions to the Higgs effective potential are taken up to the two-loop level [5,6,72,73], while the IT scalar contributions are considered up to one-loop only [21].
For h ≫ v, the quantum corrections to the Higgs potential are reabsorbed in the effective running coupling λ_{1,eff}, such that the effective potential becomes
V_{eff}^{SM+IT}(h) \simeq \lambda_{1,eff}(h)\, \frac{h^4}{4}, \quad (4.7)
with
\lambda_{1,eff}(h) = \lambda_{1,eff}^{SM}(h) + \lambda_{1,eff}^{IT}(h), \quad (4.8)
where the expression for \lambda_{1,eff}^{SM}(h) up to two-loop quantum corrections can be found in Ref. [5] and
\lambda_{1,eff}^{IT}(h) = e^{4\Gamma(h)}\, \frac{3\lambda_3^2}{256\pi^2} \left[ \ln\frac{\lambda_3}{2} - \frac{3}{2} \right], \quad \text{with} \quad \Gamma(h) = \int_{M_t}^{h} \gamma(\mu)\, d\ln\mu.
The wave function renormalization of the Higgs field is taken into account by the anomalous dimension γ(µ). Here, all running coupling constants are evaluated at µ = h, ensuring the potential remains within the perturbative domain.
We first calculate all couplings with the threshold corrections [5,6,15,16,74,75] at M_t. Then we evolve all the couplings up to the Planck mass M_Pl using our own computer codes incorporating the RG equations. Here, the SM contributions to the RGEs are taken up to three-loop [50-53] and the IT contributions are considered up to two-loop (see appendix A). We choose a specific benchmark point, listed in Table I, and show the evolution of the running quartic couplings (λ1,2,3) for it in Fig. 3. We find that this choice of benchmark point, with the top mass² M_t = 173.1 GeV and the central values of the other SM parameters, leads to a metastable EW vacuum. The β-function of the Higgs quartic coupling λ1 becomes zero at a very high energy scale and remains positive up to the Planck mass M_Pl, and a deeper minimum is situated at that high energy scale before M_Pl. We also check that the EW vacuum remains metastable (one-sided) for the quartic coupling λ2 ≤ 0.1, Higgs portal coupling λ3 ≤ 0.15 and DM mass M_DM ≥ 1900 GeV. We obtain a stable (> 99.99% confidence level, one-sided) EW vacuum for the choice of parameters λ2 = 0.1, λ3 = 0.3 and M_DM = 1915 GeV. The running couplings violate the unitarity and perturbativity bounds for λ3 ≳ 0.6. In the following subsections, we discuss the metastability of the EW vacuum of this scalar potential.
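The running itself is straightforward to set up numerically. The sketch below is not the authors' three-loop plus two-loop code: it integrates only the dominant one-loop SM pieces of β_{λ1} in t = ln(μ/M_Z), leaves the triplet contribution and the shift of β_{g2} above M_H as commented placeholders to be filled in from appendix A, and uses illustrative rather than threshold-corrected boundary values at M_t.

import numpy as np
from scipy.integrate import solve_ivp

def betas(t, y):
    lam1, yt, g1, g2, g3 = y
    k = 1.0 / (16.0 * np.pi**2)
    b_lam1 = k * (24*lam1**2 + 12*lam1*yt**2 - 6*yt**4
                  - 3*lam1*(g1**2 + 3*g2**2)
                  + 3.0/8.0*(2*g2**4 + (g1**2 + g2**2)**2))
    # + k * c_T * lam3**2            # triplet piece for mu > M_H (coefficient from appendix A)
    b_yt = k * yt * (4.5*yt**2 - 8*g3**2 - 2.25*g2**2 - 17.0/12.0*g1**2)
    b_g1 = k * (41.0/6.0) * g1**3
    b_g2 = k * (-19.0/6.0) * g2**3    # shifted by the triplet above M_H (not included here)
    b_g3 = k * (-7.0) * g3**3
    return [b_lam1, b_yt, b_g1, b_g2, b_g3]

y0 = [0.126, 0.94, 0.36, 0.65, 1.16]                   # illustrative values at M_t
t_span = (np.log(173.1/91.1876), np.log(1.2e19/91.1876))
sol = solve_ivp(betas, t_span, y0, rtol=1e-8)
print("lambda_1(M_Pl) ~", sol.y[0, -1])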
B. Tunneling Probability
Using the experimentally measured values of the SM parameters at the EW scale, one encounters the so-called metastability of the EW vacuum when analyzing the SM scalar potential at higher energy scales [5-7, 15, 16]. Since a second (true) minimum, deeper than the EW minimum, is situated near the Planck mass, there exists a non-zero probability that the EW minimum will tunnel into this second minimum. The tunneling probability of the EW vacuum to the true vacuum at the present epoch can be expressed as [5,76,77]
P_0 = 0.15\, \frac{\Lambda_B^4}{H^4}\, e^{-S(\Lambda_B)}, \quad (4.9)
where S(\Lambda_B) is the minimum action of the Higgs potential for a bounce of size R = \Lambda_B^{-1}, given by
S(\Lambda_B) = \frac{8\pi^2}{3\,|\lambda_1(\Lambda_B)|}. \quad (4.10)
The action is minimized when λ1(Λ_B) is minimum, i.e., when β_{λ1}(Λ_B) = 0. In this work, we neglect loop [76] and gravitational corrections [78,79] to the action, as in Refs. [15,16]. Finite temperature also affects the EW vacuum stability [76,80,81]; here we consider the field theory in the zero-temperature limit.
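Given λ1(Λ_B), eqns. (4.9)-(4.10) reduce to a one-line estimate. The sketch below is not the authors' code; it works with log10 of P_0 to avoid numerical underflow and assumes a present-day Hubble rate H ≈ 1.4 × 10^{-42} GeV, and both the value of λ1(Λ_B) and the scale Λ_B in the example are hypothetical.

import numpy as np

H0 = 1.44e-42   # present Hubble rate in GeV (assumed input)

def bounce_action(lam1_at_LB):
    """Eqn. (4.10)."""
    return 8.0 * np.pi**2 / (3.0 * abs(lam1_at_LB))

def log10_P0(lam1_at_LB, Lambda_B):
    """log10 of the tunneling probability of eqn. (4.9)."""
    S = bounce_action(lam1_at_LB)
    return np.log10(0.15) + 4.0 * np.log10(Lambda_B / H0) - S / np.log(10.0)

# hypothetical example: lambda_1(Lambda_B) = -0.01 at Lambda_B = 10^17 GeV
print(log10_P0(-0.01, 1e17))   # hugely negative, i.e. an extremely long-lived vacuum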
In the ITM, the additional scalar fields give a positive contribution to β_{λ1} (see Eqns. A1, A2). Due to the presence of these extra scalars, a metastable EW vacuum moves towards stability, i.e., the tunneling probability P_0 becomes smaller. We first calculate the minimum value of λ_{1,eff} of eqn. 4.8 and, putting this minimum value into eqn. 4.10, compute the tunneling probability P_0. As the stability of the EW vacuum is very sensitive to the top mass M_t, we show the variation of the tunneling probability P_0 as a function of M_t in Fig. 4(a). The right band in Fig. 4(a) corresponds to the tunneling probability for our benchmark point; P_0 for the SM is shown as the left band to illustrate the effect of the additional IT scalar. We also display 1σ error bands in α_s (light-grey) and M_h (light-red). One can see from this figure that the effect of α_s on the tunneling probability is larger than that of M_h. To see the effect of the ITM parameter space, we plot P_0 as a function of the Higgs portal coupling λ3(M_Z) in Fig. 4(b) for different choices of λ2(M_Z), keeping all SM parameters fixed at their central values. Here, the DM mass M_DM is also varied along with λ3 to keep the DM relic density at Ωh² = 0.1198.
The additional IT scalar fields improve the stability of the EW vacuum. The modified (meta)stability conditions read:
• If 0 > λ1(Λ_B) > λ_{1,min}(Λ_B), then the vacuum is metastable.
• If λ1(Λ_B) < λ_{1,min}(Λ_B), then the vacuum is unstable.
• If λ2 < 0, the potential is unbounded from below along the H and H^± directions.
• If λ3(Λ_I) < 0, the potential is unbounded from below along a direction in between H and h, and also in between H^± and h.
In the above, λ_{1,min}(Λ_B) = −0.06488/(1 − 0.00986 ln(v/Λ_B)), and Λ_I represents any energy scale at which λ1 is negative [15,16].
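For convenience, the conditions above can be wrapped into a small helper. The snippet below is not from the paper; it simply encodes the λ_{1,min} expression just quoted and returns the corresponding label for a given λ1(Λ_B), with a hypothetical example value.

import numpy as np

v = 246.0   # EW vacuum expectation value in GeV

def lam1_min(Lambda_B):
    return -0.06488 / (1.0 - 0.00986 * np.log(v / Lambda_B))

def classify(lam1_at_LB, Lambda_B):
    """Stable / metastable / unstable according to the criteria listed above."""
    if lam1_at_LB >= 0.0:
        return "stable"
    return "metastable" if lam1_at_LB > lam1_min(Lambda_B) else "unstable"

print(lam1_min(1e17), classify(-0.01, 1e17))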
C. Phase diagrams
In order to show the explicit dependence of the electroweak stability on the different parameters of the ITM, we present various kinds of phase diagrams. In Fig. 5 (a), we calculate the confidence level for our benchmark point M_DM = 1897 GeV, λ2(M_Z) = 0.10 and λ3(M_Z) = 0.10 by drawing an ellipse passing through the stability line λ = β_λ = 0 in the M_t − M_h plane. The area of this ellipse is χ times the area of the ellipse representing the 1σ error in the same plane; this factor χ gives the confidence level of the stability of the EW vacuum. We develop a proper method to calculate this factor and the tangency point with the stability line. In this case, the confidence level of metastability decreases (one-sided) with α_s(M_Z), i.e., the EW vacuum moves towards the stability region. We obtain a similar factor in the α_s(M_Z) − M_t plane, where the confidence level decreases with M_h. One can see from the phase diagrams in Fig. 5 that a stable EW vacuum is excluded at 1.2 σ (one-sided).
If the ITM is valid up to the Planck mass and also saturates the DM abundance of the Universe, then the confidence level vs λ3(M_Z) phase diagram becomes important for establishing where the present EW vacuum resides (see Fig. 6).

V. INFLATION IN HTM (Y = 0)

Observations of super-horizon anisotropies in the CMB data, measured by various experiments such as WMAP and Planck, have established that the early Universe underwent a period of rapid expansion, known as inflation. Inflation can solve a number of cosmological problems such as the horizon problem, the flatness problem and the magnetic monopole problem of the present Universe. If the electroweak vacuum is metastable, then the Higgs is unlikely to play the role of the inflaton [82-90] in the SM. Therefore, extra new degrees of freedom are needed in addition to the SM ones to explain inflation in the early Universe [91-96].
Here, we study an extension of the Higgs sector with a real triplet scalar T in the presence of large couplings ζ_{h,H} to the Ricci scalar curvature R. This theory can explain inflation in the early Universe at large field values, where the Einstein-frame potential becomes approximately scale invariant.
In this model, the action of the fields in the Jordan frame is given by
S_J = \int \sqrt{-g}\, d^4x \left[ \mathcal{L}_{SM} + \frac{1}{2}(\partial_\mu \Phi)^\dagger(\partial^\mu \Phi) + \frac{1}{2}(\partial_\mu T)^\dagger(\partial^\mu T) - \zeta_h R |\Phi|^2 - \zeta_H R |T|^2 - V(\Phi, T) \right]. \quad (5.1)
In the present work, we consider H as the inflaton; the Higgs h could also act as an inflaton if the EW vacuum were stable. In order to calculate the inflationary observables, such as the tensor-to-scalar ratio r, the spectral index n_s and the running of the spectral index n_rs, we perform a conformal transformation from the Jordan frame to the Einstein frame so that the non-minimal coupling ζ_H of the scalar field to the Ricci scalar disappears.
The transformation is given by [97]
\tilde{g}_{\mu\nu} = \Omega^2 g_{\mu\nu}, \qquad \Omega^2 = 1 + \frac{\zeta_H H^2}{M_{Pl}^2}. \quad (5.2)
The action of eqn. 5.1 in the Einstein frame can be written as
S = \int \sqrt{-\tilde{g}}\, d^4x \left[ \frac{1}{2}(\partial_\mu \chi)^\dagger(\partial^\mu \chi) - V(\chi) \right], \quad (5.3)
where,
\frac{d\chi}{dH} = \sqrt{\frac{\Omega^2 M_{Pl}^2 + 6\zeta_H^2 H^2}{\Omega^4 M_{Pl}^2}}. \quad (5.4)
The scalar potential V (χ) is then given by
V(\chi) = \frac{\lambda_2 M_{Pl}^4}{4\zeta_H^2} \left[ 1 + \exp\!\left(-\sqrt{\frac{2}{3}}\,\frac{\chi}{M_{Pl}}\right) \right]^{-2}. \quad (5.5)
We plot this potential in Fig. 8 for the benchmark choice ζ_H = 1 and λ2 = 10^{-9}. One can obtain the same plot for the parameters ζ_H = 10^4 and λ2 = 0.1; however, this choice of parameters violates the unitarity bound. One can see that the potential is able to support slow-roll inflation. The slow-roll parameters ε, η and ζ are defined in terms of the potential as
\epsilon = \frac{1}{2}\left(\frac{1}{V}\frac{dV}{d\chi}\right)^2, \qquad \eta = \frac{1}{V}\frac{d^2V}{d\chi^2}, \qquad \zeta = \frac{1}{V^2}\frac{dV}{d\chi}\frac{d^3V}{d\chi^3}.
The inflationary observables, namely the tensor-to-scalar ratio r, the spectral index n_s and the running of the spectral index n_rs, are given by
r = 16\epsilon, \qquad n_s = 1 - 6\epsilon + 2\eta, \qquad n_{rs} = -2\zeta - 24\epsilon^2 + 16\eta\epsilon, \quad (5.6)
and the number of e-folds is given by
N = \int_{\chi_{end}}^{\chi_{start}} \frac{V}{dV/d\chi}\, d\chi, \quad (5.7)
where χ_start (χ_end) is the value of the field when inflation starts (ends). At χ_start, ε is equal to one. We calculate χ_end from the above eqn. 5.7 for N = 60.
At the end of inflation, we get
r = 0.0037, \qquad n_s = 0.9644, \qquad n_{rs} = -6.24 \times 10^{-4}, \quad (5.8)
which is allowed by the present experimental data at 1σ [98,99]. Hence, the neutral component of the triplet scalar can simultaneously serve as the inflaton and as a dark matter particle.
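The numbers in eqn. (5.8) follow from a standard slow-roll analysis of the potential (5.5). The sketch below is not the authors' code: it works in units M_Pl = 1, drops the overall normalization λ2 M_Pl^4/4ζ_H^2 (which cancels in ε, η and N), locates the field value where ε = 1 and the point 60 e-folds away, and evaluates r and n_s there. It should land close to the values quoted in eqn. (5.8).

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

k = np.sqrt(2.0 / 3.0)

def u(chi):                    # u = exp(-sqrt(2/3) chi), with M_Pl = 1
    return np.exp(-k * chi)

def V(chi):                    # eqn. (5.5) up to its constant prefactor
    return (1.0 + u(chi)) ** -2

def dV(chi):
    return 2.0 * k * u(chi) * (1.0 + u(chi)) ** -3

def d2V(chi):
    return -2.0 * k**2 * u(chi) * (1.0 - 2.0 * u(chi)) * (1.0 + u(chi)) ** -4

def eps(chi): return 0.5 * (dV(chi) / V(chi)) ** 2
def eta(chi): return d2V(chi) / V(chi)

chi_slow_roll_end = brentq(lambda c: eps(c) - 1.0, -10.0, 10.0)   # where epsilon = 1

def efolds(chi):               # eqn. (5.7), integrated from the epsilon = 1 point
    return quad(lambda c: V(c) / dV(c), chi_slow_roll_end, chi)[0]

chi_60 = brentq(lambda c: efolds(c) - 60.0, chi_slow_roll_end + 1e-3, 20.0)

r = 16.0 * eps(chi_60)
n_s = 1.0 - 6.0 * eps(chi_60) + 2.0 * eta(chi_60)
print(f"r = {r:.4f},  n_s = {n_s:.4f}")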
VI. DISCUSSION AND CONCLUSIONS
The measurements of the properties of the Higgs-like scalar boson detected at the Large Hadron Collider on 4th July 2012 are consistent with the minimal choice of the scalar sector. However, the experimental data on the Higgs signal strengths and the uncertainties in the measurement of other standard model parameters still allow an extended scalar sector. We have considered an extra hyperchargeless scalar triplet as new physics. First, we have assumed that the extra neutral CP-even component of the scalar triplet also participates in the EWSB. We have presented the detailed structure of the tree-level scalar potential and the mixing of the scalar fields, and discussed the bounds on the VEV (v2) of the neutral CP-even component of the scalar triplet from the ρ-parameter. To the best of our knowledge, the full expressions for the unitarity bounds on the quartic couplings of the scalar potential in this model had not yet been presented in the literature; we have provided these unitarity bounds here. As the SM gauge symmetry SU(2)_L prohibits the coupling of SM neutrinos with the neutral CP-even component (η0) of the scalar triplet, the model does not generate neutrino masses. The model is nevertheless interesting, as it can play a role in improving the stability of the Higgs potential. We have taken into account various threshold corrections to calculate all the couplings at M_t. Then, using three-loop SM RGEs and two-loop triplet RGEs, we have evolved all the couplings up to the Planck mass M_Pl. We have shown the allowed region in the M_{H±} − M_H plane, demanding that the EW vacuum of the scalar potential remains absolutely stable and does not violate perturbative unitarity up to the Planck mass M_Pl. We have discussed the constraints on the parameter space from the recent LHC µ_γγ and µ_Zγ data. Furthermore, only a very small region of the parameter space is shown to survive on imposing the EWPT constraints.
Various kinds of astrophysical observations, such as anomalies in the galactic rotation curves, gravitational lensing effects in the Bullet Cluster, etc., indicate the existence of DM in the Universe. In the ITM, the extra scalar fields are protected by a discrete Z2-symmetry which ensures the stability of the lightest neutral particle. We have verified that the mass of the neutral scalar particle (H) is slightly smaller than the mass of the charged particle (H^±), so that the contributions coming from co-annihilation between H and H^± play a significant role in the relic density calculation. In the low mass region, the co-annihilation rates are quite high, so that the dark matter density is found to be much smaller than the observed relic density Ωh² = 0.1198 ± 0.0026 of the Universe. We obtain a relic density in the right ballpark for DM masses greater than 1.8 TeV. In this context, we have shown how the presence of an additional hyperchargeless scalar triplet improves the stability of the Higgs potential. In this study, we have used state-of-the-art next-to-next-to-leading order (NNLO) SM calculations: the SM Higgs scalar potential is taken up to two-loop quantum corrections and improved by three-loop renormalization group running of the SM couplings, while the contributions of the new scalars to the effective Higgs potential are included at one-loop and improved by two-loop renormalization group running of the new parameters. We have explored the stability of the EW minimum of the new effective Higgs potential up to the Planck mass M_Pl and presented the new modified stability conditions for the metastable EW vacuum. We have also shown various phase diagrams to exhibit the explicit dependence of the EW (meta)stability on the different parameters. For the first time, we have identified new regions of parameter space that correspond to a stable or metastable EW vacuum and simultaneously provide the relic density of the DM in the Universe as measured by the WMAP and Planck experiments. Finally, we have shown that the extra neutral scalar field H can play the role of the inflaton while serving as a dark matter candidate: the scalar potential can explain inflation for large scalar field values, and we have obtained inflationary observables consistent with the measured values.
The SM gauge symmetry SU(2)_L prohibits direct coupling of the SM fermions to the scalar fields of the triplet. The couplings of the new scalar fields (H, H^±) to the SM fermions are generated after the EWSB. The strength of the Hff̄ coupling (f being the up-, down-quarks and charged leptons) is proportional to sin γ, while the couplings H^+ ν̄_l l^- and H^+ ū d are proportional to sin β.
FIG. 1. The allowed region (green) from the unitarity, perturbativity and absolute stability bounds, valid up to the Planck mass M_Pl. The region between the black-dashed lines is allowed by the EWPT data at 2σ.
We now present our results for the central values of the SM parameters: the Higgs mass M_h = 125.7 GeV, the top mass M_t = 173.1 GeV, the Z boson mass M_Z = 91.1876 GeV and the strong coupling constant α_s = 0.1184.
FIG. 2. The thin blue band corresponds to the relic density Ωh² = 0.1198 ± 0.0026 (3σ) from the combined data of WMAP and Planck [61]. (a) The mass difference ∆M (green line) and the effective annihilation cross-section (black line) as functions of the dark matter mass for the portal coupling λ3(M_Z) = 0.10. (b) The relic density Ωh² as a function of the DM mass M_DM (≡ M_H) (red line) for λ3(M_Z) = 0.10.
TABLE I. A set of values of all quartic coupling constants at M_t and M_Pl for M_DM = 1897 GeV. At M_Pl: λ1 = −0.00339962, λ2 = 0.267706, λ3 = 0.206306.
We choose a specific benchmark point M_DM (≡ M_H) = 1897 GeV, M_h = 125.7 GeV and α_s(M_Z) = 0.1184 such that it gives the right DM density of the Universe. The corresponding values of all quartic couplings λ1,2,3 at M_t = 173.1 GeV and M_Pl = 1.2 × 10^19 GeV are presented in Table I.
FIG. 3. RG evolution of the couplings λ1,2,3 for the set of parameters in Table I, with DM mass M_DM = 1897 GeV.
FIG. 4. (a) Tunneling probability P_0 dependence on M_t. The left band (between dashed lines) corresponds to the SM; the right one (between dotted lines) is for the IT model with DM mass M_H = 1897 GeV. Dark matter constraints are respected for this specific choice of parameters. The light-green band stands for M_t at ±1σ. (b) P_0 plotted against the Higgs-DM coupling λ3(M_Z) for different values of λ2(M_Z).
In Fig. 6, we vary the DM mass with λ3(M_Z) to keep the relic density at Ωh² = 0.1198. One can see that the EW vacuum approaches stability for larger values of λ2,3(M_Z); it becomes absolutely stable for λ3(M_Z) ≥ 0.154 and λ2(M_Z) ≈ 0.10 (see the blue line in Fig. 6). We show this phase diagram for the central values of the SM parameters. Moreover, if we increase the top mass and/or decrease the Higgs mass along with α_s(M_Z), the size of the region corresponding to the metastable EW vacuum increases. We find that the DM mass M_DM ≥ 1912 GeV, λ3(M_Z) ≥ 0.31 and λ2(M_Z) ≥ 0.1 are required to stabilize the EW vacuum for M_t = 174.9 GeV, M_h = 124.8 GeV and α_s(M_Z) = 0.1163. In Fig. 7, we show the allowed parameter space in the λ3(M_Z) − M_{H±} plane for the central values of the SM parameters and λ2(M_Z) = 0.1. The lower (red) region is excluded since the scalar potential becomes unbounded from below along a direction in between H^± and h; in this region, the effective Higgs quartic coupling is negative and at the same time λ3 remains negative up to the Planck mass M_Pl. We also obtain a region with negative λ3(M_Z) which is allowed from metastability; in this case, λ3 becomes positive at the scale Λ_B and remains positive up to the Planck mass M_Pl. The EW vacuum is absolutely stable in the green region. The upper red region violates the unitarity bounds. The right side of the black dotted line is allowed from µ_γγ at 1σ.
FIG. 5. Phase diagrams in the (a) M_h − M_t plane and (b) M_t − α_s(M_Z) plane in the ITM. Regions of absolute stability (green), metastability (yellow) and instability (red) of the EW vacuum are marked. The gray zones represent error ellipses at 1, 2 and 3σ. The three boundary lines (dotted, solid and dotted red) correspond to α_s(M_Z) = 0.1184 ± 0.0007.
FIG. 6. Dependence of the confidence level at which the EW vacuum stability is excluded (one-sided) or allowed on λ3(M_Z) and λ2(M_Z) in the ITM. Regions of absolute stability (green) and metastability (yellow) of the EW vacuum are shown for λ2(M_Z) = 0.1.
FIG. 7. Phase diagram in the λ3(M_Z) − M_{H±} plane in the ITM. The right side of the black-dotted line is allowed by the signal strength ratio µ_γγ within the 68% confidence level and the left side is excluded at 1σ. In the metastable region, the Higgs portal coupling λ3(M_Z) is negative; however, beyond the scale Λ_B it is greater than zero.
FIG. 8. Inflationary potential in Planck units for ζ_H = 1 and λ2 = 10^{-9}.
¹ For v2 = 0, in the notation of eqn. 2.1, H ≡ η^0 and H^± ≡ η^± are the physical scalar fields.
² As the β-function of the Higgs quartic coupling λ1 contains −6y_t^4/16π² (see eqn. A1), the value of λ1 at very high energies is extremely sensitive to M_t.
Acknowledgements: The work of N.K. is supported by a fellowship from the University Grants Commission. This work is supported by a grant from the Department of Science and Technology, India via Grant No. EMR/2014/001177. I would like to thank Subhendu Rakshit, Amitava Raychaudhuri, Amitava Datta, Subhendra Mohanty and Girish K. Chakravarty for useful discussions.
Appendix A: Two-loop beta functions for the IT Model
In this study, we use the SM RGEs up to three-loop, which have been given in Refs. [50-53]. The triplet contributions (λ2,3) are taken up to two-loop and have been generated using SARAH [100]. In the HTM (Y = 0), the RGEs of the couplings (χ_i = g_{1,2,3}, λ_{1,2,3} and Y_{l,u,d}) and of the dimensionful mass parameters (µ_{1,2} and λ4) are defined in terms of the corresponding β-functions. For µ > M_H, the RGEs of the scalar quartic couplings λ_{1,2,3} and of the mass parameter λ4 include the triplet contributions, while for µ < M_H, β_{λ1} = β_{λ1}(λ_{2,3} = 0) and β_{λ2,3,4} = 0. Here Y_u = y_u, y_c, y_t are the Yukawa couplings of the up-, charm- and top-quark, Y_d = y_d, y_s, y_b those of the down-, strange- and bottom-quark, and Y_l represents the Yukawa couplings of the charged leptons. In our work, we have included the contribution only from the top quark; since the other Yukawa couplings are very small, they do not alter our results. We have also taken into account the new-physics contributions to the beta functions of the gauge couplings g_{1,2,3}. The mass parameters µ_{1,2} and λ4 are found to be negligible in the stability analysis.
. G Aad, ATLAS CollaborationarXiv:1207.7214Phys. Lett. B. 7161hep-exG. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716, 1 (2012) [arXiv:1207.7214 [hep-ex]].
. S Chatrchyan, CMS CollaborationarXiv:1207.7235Phys. Lett. B. 71630hep-exS. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716, 30 (2012) [arXiv:1207.7235 [hep-ex]].
. P P Giardino, K Kannike, I Masina, M Raidal, A Strumia, arXiv:1303.3570JHEP. 140546hep-phP. P. Giardino, K. Kannike, I. Masina, M. Raidal and A. Strumia, JHEP 1405, 046 (2014) [arXiv:1303.3570 [hep-ph]].
. G Aad, ATLAS and CMS CollaborationsarXiv:1606.02266JHEP. 160845hepexG. Aad et al. [ATLAS and CMS Collaborations], JHEP 1608, 045 (2016) [arXiv:1606.02266 [hep- ex]].
. D Buttazzo, G Degrassi, P P Giardino, G F Giudice, F Sala, A Salvio, A Strumia, arXiv:1307.3536JHEP. 131289D. Buttazzo, G. Degrassi, P. P. Giardino, G. F. Giudice, F. Sala, A. Salvio and A. Strumia, JHEP 1312, 089 (2013) [arXiv:1307.3536].
. G Degrassi, S Di Vita, J Elias-Miro, J R Espinosa, G F Giudice, G Isidori, A Strumia, arXiv:1205.6497JHEP. 120898hep-phG. Degrassi, S. Di Vita, J. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori and A. Strumia, JHEP 1208, 098 (2012) [arXiv:1205.6497 [hep-ph]].
. I Masina, arXiv:1209.0393Phys. Rev. D. 8753001hep-phI. Masina, Phys. Rev. D 87, 053001 (2013) [arXiv:1209.0393 [hep-ph]].
. J Elias-Miro, J R Espinosa, G F Giudice, G Isidori, A Riotto, A Strumia, arXiv:1112.3022Phys. Lett. B. 709222hep-phJ. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori, A. Riotto and A. Strumia, Phys. Lett. B 709, 222 (2012) [arXiv:1112.3022 [hep-ph]].
. V Branchina, E Messina, arXiv:1307.5193Phys. Rev. Lett. 111241801hep-phV. Branchina and E. Messina, Phys. Rev. Lett. 111, 241801 (2013) [arXiv:1307.5193 [hep-ph]].
. V Branchina, E Messina, A Platania, arXiv:1407.4112JHEP. 1409182hep-phV. Branchina, E. Messina and A. Platania, JHEP 1409, 182 (2014) [arXiv:1407.4112 [hep-ph]].
. V Branchina, E Messina, M Sher, arXiv:1408.5302Phys. Rev. D. 9113003hep-phV. Branchina, E. Messina and M. Sher, Phys. Rev. D 91, 013003 (2015) [arXiv:1408.5302 [hep-ph]].
. V Branchina, E Messina, D Zappala, arXiv:1601.06963EPL. 116221001hepphV. Branchina, E. Messina and D. Zappala, EPL 116, no. 2, 21001 (2016) [arXiv:1601.06963 [hep- ph]].
. V Branchina, E Messina, arXiv:1507.08812EPL. 117661002hep-phV. Branchina and E. Messina, EPL 117, no. 6, 61002 (2017) [arXiv:1507.08812 [hep-ph]].
. E Bentivegna, V Branchina, F Contino, D , arXiv:1708.01138JHEP. 1712100hep-phE. Bentivegna, V. Branchina, F. Contino and D. Zappal, JHEP 1712, 100 (2017) [arXiv:1708.01138 [hep-ph]].
. N Khan, S Rakshit, arXiv:1407.6015Phys. Rev. D. 90113008hep-phN. Khan and S. Rakshit, Phys. Rev. D 90, 113008 (2014) [arXiv:1407.6015 [hep-ph]].
. N Khan, S Rakshit, arXiv:1503.03085Phys. Rev. D. 9255006hep-phN. Khan and S. Rakshit, Phys. Rev. D 92, 055006 (2015) [arXiv:1503.03085 [hep-ph]].
. A Datta, N Ganguly, N Khan, S Rakshit, arXiv:1610.00648hep-phA. Datta, N. Ganguly, N. Khan and S. Rakshit, arXiv:1610.00648 [hep-ph].
. L Basso, O Fischer, J J Van Der, Bij, arXiv:1309.6086Phys. Lett. B. 730326hep-phL. Basso, O. Fischer and J. J. van Der Bij, Phys. Lett. B 730, 326 (2014) [arXiv:1309.6086 [hep-ph]].
. O Fischer, arXiv:1607.00282hep-phO. Fischer, arXiv:1607.00282 [hep-ph].
. T Blank, W Hollik, hep-ph/9703392Nucl. Phys. B. 514113T. Blank and W. Hollik, Nucl. Phys. B 514, 113 (1998) [hep-ph/9703392].
. J R Forshaw, A Vera, B E White, hep-ph/0302256JHEP. 030659J. R. Forshaw, A. Sabio Vera and B. E. White, JHEP 0306, 059 (2003) [hep-ph/0302256].
. M C Chen, S Dawson, C B Jackson, arXiv:0809.4185Phys. Rev. D. 7893001hep-phM. C. Chen, S. Dawson and C. B. Jackson, Phys. Rev. D 78, 093001 (2008) [arXiv:0809.4185 [hep-ph]].
. M C Chen, S Dawson, T Krupovnickas, hep-ph/0604102Phys. Rev. D. 7435001M. C. Chen, S. Dawson and T. Krupovnickas, Phys. Rev. D 74, 035001 (2006) [hep-ph/0604102].
. M C Chen, S Dawson, T Krupovnickas, hep- ph/0504286Int. J. Mod. Phys. A. 214045M. C. Chen, S. Dawson and T. Krupovnickas, Int. J. Mod. Phys. A 21, 4045 (2006) [hep- ph/0504286].
. P H Chankowski, S Pokorski, J Wagner, hep-ph/0605302Eur. Phys. J. C. 50919P. H. Chankowski, S. Pokorski and J. Wagner, Eur. Phys. J. C 50, 919 (2007) [hep-ph/0605302].
. J R Forshaw, D A Ross, B E White, hep-ph/0107232JHEP. 01107J. R. Forshaw, D. A. Ross and B. E. White, JHEP 0110, 007 (2001) [hep-ph/0107232].
. Z U Khandker, D Li, W Skiba, arXiv:1201.4383Phys. Rev. D. 8615006hep-phZ. U. Khandker, D. Li and W. Skiba, Phys. Rev. D 86, 015006 (2012) [arXiv:1201.4383 [hep-ph]].
. P Perez, H H Patel, M J Ramsey-Musolf, K Wang, arXiv:0811.3957Phys. Rev. D. 7955024hep-phP. Fileviez Perez, H. H. Patel, M. J. Ramsey-Musolf and K. Wang, Phys. Rev. D 79, 055024 (2009) [arXiv:0811.3957 [hep-ph]].
. L Wang, X F Han, arXiv:1303.4490JHEP. 140310hep-phL. Wang and X. F. Han, JHEP 1403, 010 (2014) [arXiv:1303.4490 [hep-ph]].
. N Khan, B Mukhopadhyaya, S Rakshit, A Shaw, arXiv:1608.05673hep-phN. Khan, B. Mukhopadhyaya, S. Rakshit and A. Shaw, arXiv:1608.05673 [hep-ph].
. T Araki, C Q Geng, K I Nagao, arXiv:1102.4906Phys. Rev. D. 8375014hep-phT. Araki, C. Q. Geng and K. I. Nagao, Phys. Rev. D 83, 075014 (2011) [arXiv:1102.4906 [hep-ph]].
. S Y Ayazi, S M Firouzabadi, arXiv:1501.06176hep-phS. Y. Ayazi and S. M. Firouzabadi, arXiv:1501.06176 [hep-ph].
. S Y Ayazi, S M Firouzabadi, arXiv:1408.0654JCAP. 14115hep-phS. Y. Ayazi and S. M. Firouzabadi, JCAP 1411, 005 (2014) [arXiv:1408.0654 [hep-ph]].
. F X Josse-Michaux, E Molinaro, arXiv:1210.7202Phys. Rev. D. 8736007hep-phF. X. Josse-Michaux and E. Molinaro, Phys. Rev. D 87, 036007 (2013) [arXiv:1210.7202 [hep-ph]].
. W B Lu, P H Gu, 10.1088/1475-7516/2016/05/040arXiv:1603.05074JCAP. 16050540hep-phW. B. Lu and P. H. Gu, JCAP 1605, no. 05, 040 (2016) doi:10.1088/1475-7516/2016/05/040 [arXiv:1603.05074 [hep-ph]].
. O Fischer, J J Van Der, Bij, Mod. Phys. Lett. A. 262039O. Fischer and J. J. van der Bij, Mod. Phys. Lett. A 26, 2039 (2011).
. O Fischer, J J Van Der, Bij, arXiv:1311.1077JCAP. 140132hep-phO. Fischer and J. J. van der Bij, JCAP 1401, 032 (2014) [arXiv:1311.1077 [hep-ph]].
. K A Olive, Particle Data Group CollaborationChin. Phys. C. 3890001K. A. Olive et al. [Particle Data Group Collaboration], Chin. Phys. C 38, 090001 (2014).
. B W Lee, C Quigg, H B Thacker, Phys. Rev. Lett. 38883B. W. Lee, C. Quigg and H. B. Thacker, Phys. Rev. Lett. 38, 883 (1977).
. B W Lee, C Quigg, H B Thacker, Phys. Rev. D. 161519B. W. Lee, C. Quigg and H. B. Thacker, Phys. Rev. D 16, 1519 (1977).
. Y P Yao, C P Yuan, Phys. Rev. D. 382237Y. P. Yao and C. P. Yuan, Phys. Rev. D 38, 2237 (1988);
. H G J Veltman, Phys. Rev. D. 412294H. G. J. Veltman, Phys. Rev. D 41, 2294 (1990);
. H J He, Phys. Rev. Lett. 692619H. J. He et al., Phys. Rev. Lett. 69, 2619 (1992).
. S Kanemura, Phys. Lett. B. 313S. Kanemura et al., Phys. Lett. B 313, 155-160 (1993).
. A Arhrib, hep-ph/0012353A. Arhrib, hep-ph/0012353.
. M E Peskin, T Takeuchi, Phys. Rev. D. 46381M. E. Peskin and T. Takeuchi, Phys. Rev. D 46, 381 (1992).
. M Baak, Gfitter Group CollaborationarXiv:1407.3792Eur. Phys. J. C. 743046hep-phM. Baak et al. [Gfitter Group Collaboration], Eur. Phys. J. C 74, 3046 (2014) [arXiv:1407.3792 [hep-ph]].
. G Belanger, B Dumont, U Ellwanger, J F Gunion, S Kraml, Phys. Rev. D. 8875008G. Belanger, B. Dumont, U. Ellwanger, J. F. Gunion, and S. Kraml, Phys. Rev. D 88, 075008(2013).
. A Djouadi, hep-ph/0503173Phys. Rept. 459A. Djouadi, Phys. Rept. 459, 1 (2008) [hep-ph/0503173].
. G Aad, ATLAS CollaborationarXiv:1408.7084Phys. Rev. D. 90112015hep-exG. Aad et al. [ATLAS Collaboration], Phys. Rev. D 90, 112015 (2014) [arXiv:1408.7084 [hep-ex]].
. V Khachatryan, CMS CollaborationarXiv:1407.0558Eur. Phys. J. C. 743076hep-exV. Khachatryan et al. [CMS Collaboration], Eur. Phys. J. C 74, 3076 (2014) [arXiv:1407.0558 [hep-ex]].
. K G Chetyrkin, M F Zoller, arXiv:1205.2892JHEP. 120633hep-phK. G. Chetyrkin and M. F. Zoller, JHEP 1206, 033 (2012) [arXiv:1205.2892 [hep-ph]].
. M F Zoller, arXiv:1209.5609hep-phM. F. Zoller, arXiv:1209.5609 [hep-ph].
. K G Chetyrkin, M F Zoller, arXiv:1303.2890JHEP. 130491Erratum-ibid. 1309, 155 (2013). hep-phK. G. Chetyrkin and M. F. Zoller, JHEP 1304, 091 (2013), [Erratum-ibid. 1309, 155 (2013)] [arXiv:1303.2890 [hep-ph]].
. M Zoller, arXiv:1311.5085PoS EPS-HEP2013. 322hep-phM. Zoller, PoS EPS-HEP2013, 322 (2014) [arXiv:1311.5085 [hep-ph]].
. M Cirelli, N Fornengo, A Strumia, hep-ph/0512090Nucl. Phys. B. 753178M. Cirelli, N. Fornengo and A. Strumia, Nucl. Phys. B 753, 178 (2006) [hep-ph/0512090].
. M Cirelli, A Strumia, arXiv:0903.3381New J. Phys. 11105005hep-phM. Cirelli and A. Strumia, New J. Phys. 11, 105005 (2009) [arXiv:0903.3381 [hep-ph]].
. G Belanger, B Dumont, U Ellwanger, J F Gunion, S Kraml, arXiv:1306.2941Phys. Rev. D. 8875008hep-phG. Belanger, B. Dumont, U. Ellwanger, J. F. Gunion and S. Kraml, Phys. Rev. D 88, 075008 (2013) [arXiv:1306.2941 [hep-ph]].
. K Griest, D Seckel, Phys. Rev. D. 433191K. Griest and D. Seckel, Phys. Rev. D 43, 3191 (1991).
. A Alloul, N D Christensen, C Degrande, C Duhr, B Fuks, arXiv:1310.1921Comput. Phys. Commun. 1852250hep-phA. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, Comput. Phys. Commun. 185, 2250 (2014) [arXiv:1310.1921 [hep-ph]].
. G Belanger, F Boudjema, P Brun, A Pukhov, S Rosier-Lees, P Salati, A Semenov, arXiv:1004.1092Comput. Phys. Commun. 182hep-phG. Belanger, F. Boudjema, P. Brun, A. Pukhov, S. Rosier-Lees, P. Salati and A. Semenov, Comput. Phys. Commun. 182, 842 (2011) [arXiv:1004.1092 [hep-ph]].
. G Belanger, F Boudjema, A Pukhov, A Semenov, arXiv:1305.0237Comput. Phys. Commun. 185hep-phG. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. 185, 960 (2014) [arXiv:1305.0237 [hep-ph]].
. P A R Ade, Planck CollaborationarXiv:1303.5076[astro-ph.COP. A. R. Ade et al. [Planck Collaboration], arXiv:1303.5076 [astro-ph.CO].
. E Aprile, XENON100 CollaborationarXiv:1104.2549Phys. Rev. Lett. 107131302astro-ph.COE. Aprile et al. [XENON100 Collaboration], Phys. Rev. Lett. 107, 131302 (2011) [arXiv:1104.2549 [astro-ph.CO]].
. E Aprile, XENON100 CollaborationarXiv:1207.5988Phys. Rev. Lett. 109181301astro-ph.COE. Aprile et al. [XENON100 Collaboration], Phys. Rev. Lett. 109, 181301 (2012) [arXiv:1207.5988 [astro-ph.CO]].
. D S Akerib, LUX CollaborationarXiv:1310.8214Phys. Rev. Lett. 11291303astro-ph.COD. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 112, 091303 (2014) [arXiv:1310.8214 [astro-ph.CO]].
Results from a search for dark matter in the complete LUX exposure. D S Akerib, Phys. Rev. Lett. 118221303D. S. Akerib et al. Results from a search for dark matter in the complete LUX exposure. Phys. Rev. Lett., 118(2):021303, 2017.
. E Aprile, XENON CollaborationarXiv:1705.06655Phys. Rev. Lett. 119181301astro-ph.COE. Aprile et al. [XENON Collaboration], Phys. Rev. Lett. 119, 181301 (2017) [arXiv:1705.06655 [astro-ph.CO]].
. J A Casas, J R Espinosa, M Quiros, hep-ph/9409458Phys. Lett. B. 342171J. A. Casas, J. R. Espinosa and M. Quiros, Phys. Lett. B 342, 171 (1995) [hep-ph/9409458].
. G Altarelli, G Isidori, Phys. Lett. B. 337141G. Altarelli and G. Isidori, Phys. Lett. B 337, 141 (1994).
. J A Casas, J R Espinosa, M Quiros, A Riotto, hep-ph/9407389Nucl. Phys. B. 436466Erratum-ibid. BJ. A. Casas, J. R. Espinosa, M. Quiros and A. Riotto, Nucl. Phys. B 436, 3 (1995), [Erratum-ibid. B 439, 466 (1995)] [hep-ph/9407389].
. J A Casas, J R Espinosa, M Quiros, hep-ph/9603227Phys. Lett. B. 382J. A. Casas, J. R. Espinosa and M. Quiros, Phys. Lett. B 382, 374 (1996) [hep-ph/9603227].
. M Quiros, hep-ph/9703412M. Quiros, hep-ph/9703412.
. C Ford, I Jack, D R T Jones, hep-ph/0111190Nucl. Phys. B. 387551Erratum-ibid. BC. Ford, I. Jack and D. R. T. Jones, Nucl. Phys. B 387, 373 (1992), [Erratum-ibid. B 504, 551 (1997)] [hep-ph/0111190].
. S P Martin, hep-ph/0111209Phys. Rev. D. 65116003S. P. Martin, Phys. Rev. D 65, 116003 (2002) [hep-ph/0111209].
. A Sirlin, R Zucchini, Nucl. Phys. B. 266389A. Sirlin and R. Zucchini, Nucl. Phys. B 266, 389 (1986).
. F Bezrukov, M Y Kalmykov, B A Kniehl, M Shaposhnikov, arXiv:1205.2893JHEP. 1210140hep-phF. Bezrukov, M. Y. Kalmykov, B. A. Kniehl and M. Shaposhnikov, JHEP 1210, 140 (2012) [arXiv:1205.2893 [hep-ph]].
. G Isidori, G Ridolfi, A Strumia, hep-ph/0104016Nucl. Phys. B. 609387G. Isidori, G. Ridolfi and A. Strumia, Nucl. Phys. B 609, 387 (2001) [hep-ph/0104016].
. S R Coleman, Phys. Rev. D. 152929Erratum-ibid. D 16, 1248 (1977)S. R. Coleman, Phys. Rev. D 15, 2929 (1977), [Erratum-ibid. D 16, 1248 (1977)].
. S R Coleman, F De Luccia, Phys. Rev. D. 213305S. R. Coleman and F. De Luccia, Phys. Rev. D 21, 3305 (1980).
. G Isidori, V S Rychkov, A Strumia, N Tetradis, arXiv:0712.0242Phys. Rev. D. 7725034hep-phG. Isidori, V. S. Rychkov, A. Strumia and N. Tetradis, Phys. Rev. D 77, 025034 (2008) [arXiv:0712.0242 [hep-ph]].
. L Delle Rose, C Marzo, A Urbano, arXiv:1507.06912JHEP. 160550hep-phL. Delle Rose, C. Marzo and A. Urbano, JHEP 1605, 050 (2016) [arXiv:1507.06912 [hep-ph]].
. J R Espinosa, M Quiros, hep-ph/9504241Phys. Lett. B. 353257J. R. Espinosa and M. Quiros, Phys. Lett. B 353, 257 (1995) [hep-ph/9504241].
. F L Bezrukov, M Shaposhnikov, arXiv:0710.3755Phys. Lett. B. 659703hep-thF. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B 659, 703 (2008) [arXiv:0710.3755 [hep-th]].
. F Bezrukov, D Gorbunov, M Shaposhnikov, arXiv:0812.3622JCAP. 090629hepphF. Bezrukov, D. Gorbunov and M. Shaposhnikov, JCAP 0906, 029 (2009) [arXiv:0812.3622 [hep- ph]].
. F L Bezrukov, A Magnin, M Shaposhnikov, arXiv:0812.4950Phys. Lett. B. 67588hep-phF. L. Bezrukov, A. Magnin and M. Shaposhnikov, Phys. Lett. B 675, 88 (2009) [arXiv:0812.4950 [hep-ph]].
. F Bezrukov, M Shaposhnikov, arXiv:0904.1537JHEP. 090789hep-phF. Bezrukov and M. Shaposhnikov, JHEP 0907, 089 (2009) [arXiv:0904.1537 [hep-ph]].
. A O Barvinsky, A Y Kamenshchik, A A Starobinsky, arXiv:0809.2104JCAP. 081121hep-phA. O. Barvinsky, A. Y. Kamenshchik and A. A. Starobinsky, JCAP 0811, 021 (2008) [arXiv:0809.2104 [hep-ph]].
. A O Barvinsky, A Y Kamenshchik, C Kiefer, A A Starobinsky, C Steinwachs, arXiv:0904.1698JCAP. 09123hep-phA. O. Barvinsky, A. Y. Kamenshchik, C. Kiefer, A. A. Starobinsky and C. Steinwachs, JCAP 0912, 003 (2009) [arXiv:0904.1698 [hep-ph]].
. A Simone, M P Hertzberg, F Wilczek, arXiv:0812.4946Phys. Lett. B. 6781hep-phA. De Simone, M. P. Hertzberg and F. Wilczek, Phys. Lett. B 678, 1 (2009) [arXiv:0812.4946 [hep-ph]].
. J Garcia-Bellido, D G Figueroa, J Rubio, arXiv:0812.4624Phys. Rev. D. 7963531hep-phJ. Garcia-Bellido, D. G. Figueroa and J. Rubio, Phys. Rev. D 79, 063531 (2009) [arXiv:0812.4624 [hep-ph]].
. S C Park, S Yamaguchi, 10.1088/1475-7516/2008/08/009arXiv:0801.1722JCAP. 08089hep-phS. C. Park and S. Yamaguchi, JCAP 0808, 009 (2008) doi:10.1088/1475-7516/2008/08/009 [arXiv:0801.1722 [hep-ph]].
. R N Lerner, J Mcdonald, arXiv:0909.0520Phys. Rev. D. 80123507hep-phR. N. Lerner and J. McDonald, Phys. Rev. D 80, 123507 (2009) [arXiv:0909.0520 [hep-ph]].
. O Lebedev, H M Lee, arXiv:1105.2284Eur. Phys. J. C. 711821hep-phO. Lebedev and H. M. Lee, Eur. Phys. J. C 71, 1821 (2011) [arXiv:1105.2284 [hep-ph]].
. G K Chakravarty, S Mohanty, arXiv:1405.1321Phys. Lett. B. 746242hep-phG. K. Chakravarty and S. Mohanty, Phys. Lett. B 746, 242 (2015) [arXiv:1405.1321 [hep-ph]].
. G K Chakravarty, G Gupta, G Lambiase, S Mohanty, arXiv:1604.02556Phys. Lett. B. 760263hep-phG. K. Chakravarty, G. Gupta, G. Lambiase and S. Mohanty, Phys. Lett. B 760, 263 (2016) [arXiv:1604.02556 [hep-ph]].
. G K Chakravarty, U K Dey, G Lambiase, S Mohanty, arXiv:1607.06904Phys. Lett. B. 763501hep-phG. K. Chakravarty, U. K. Dey, G. Lambiase and S. Mohanty, Phys. Lett. B 763, 501 (2016) [arXiv:1607.06904 [hep-ph]].
. J Ellis, arXiv:1702.05436hep-phJ. Ellis, arXiv:1702.05436 [hep-ph].
. F Kahlhoefer, J Mcdonald, arXiv:1507.03600JCAP. 15111115astro-ph.COF. Kahlhoefer and J. McDonald, JCAP 1511, no. 11, 015 (2015) [arXiv:1507.03600 [astro-ph.CO]].
. P A R Ade, Planck CollaborationarXiv:1502.01589Astron. Astrophys. 594astro-ph.COP. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A13 (2016) [arXiv:1502.01589 [astro-ph.CO]].
. P A R Ade, Planck CollaborationarXiv:1502.02114Astron. Astrophys. 594astro-ph.COP. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A20 (2016) [arXiv:1502.02114 [astro-ph.CO]].
. F Staub, arXiv:1309.7223Comput. Phys. Commun. 1851773hep-phF. Staub, Comput. Phys. Commun. 185, 1773 (2014) [arXiv:1309.7223 [hep-ph]];
. F Staub, arXiv:1503.04200hep-phF. Staub, arXiv:1503.04200 [hep-ph].
. K Kannike, arXiv:1205.3781Eur. Phys. J. C. 722093hep-phK. Kannike, Eur. Phys. J. C 72, 2093 (2012) [arXiv:1205.3781 [hep-ph]].
| []
|
[
"PARSING AS TREE TRAVERSAL",
"PARSING AS TREE TRAVERSAL"
]
| [
"Dale Gerdemann \nSeminar ffir Sprachwissenschaft\nUniversiti t T bingen t\n\n"
]
| [
"Seminar ffir Sprachwissenschaft\nUniversiti t T bingen t\n"
]
| []
| This paper presents a unified approach to parsing, in which top-down, bottom-up and left-corner parsers are related to preorder, postorder and inorder tree traversals. It is shown that the simplest bottom-up and left-corner parsers are left recursive and must be converted using an extended Greibach normal form. With further partial execution, the bottom-up and left-corner parsers collapse together as in the BUP parser of Matsumoto. | 10.3115/991886.991955 | null | 2,298,682 | cmp-lg/9407027 | d311dbbdf170420648cbed7e2981f3e493118ebc
PARSING AS TREE TRAVERSAL
Dale Gerdemann
Seminar für Sprachwissenschaft
Universität Tübingen†
PARSING AS TREE TRAVERSAL
This paper presents a unified approach to parsing, in which top-down, bottom-up and left-corner parsers are related to preorder, postorder and inorder tree traversals. It is shown that the simplest bottom-up and left-corner parsers are left recursive and must be converted using an extended Greibach normal form. With further partial execution, the bottom-up and left-corner parsers collapse together as in the BUP parser of Matsumoto.
INTRODUCTION
In this paper, I present a unified approach to parsing, in which top-down, bottom-up and left-corner parsers are related to preorder, postorder and inorder tree traversals. To some extent, this connection is already clear since for each parsing strategy the nodes of the parse tree are constructed according to the corresponding tree traversal. It is somewhat trickier, though, to actually use a tree traversal program as a parser, since the resulting parser may be left recursive. This left recursion can be eliminated, however, by employing a version of Greibach Normal Form which is extended to handle argument instantiations in definite clause grammars. The resulting parsers resemble the standard Prolog versions of such parsers. One can then go one step further and partially execute the parser with respect to a particular grammar, as is normally done with definite clause grammars (Pereira & Warren [10]). A surprising result of this partial execution is that the bottom-up and left-corner parsers become identical when they are both partially executed. This may explain why the BUP parser of Matsumoto et al. [6][7] was referred to as a bottom-up parser even though it clearly follows a left-corner strategy.
*The research presented in this paper was partially sponsored by Teilprojekt Bd "Constraints on Grammar for Efficient Generation" of the Sonderforschungsbereich 340 of the Deutsche Forschungsgemeinschaft. I would also like to thank Guido Minnen and Dieter Martini for helpful comments. All mistakes are of course my own.
TREE TRAVERSAL PRO G RAM S
Following O'Keefe [8], we can implement preorder, postorder and inorder tree traversals as DCGs, which will then be converted directly into top-down, bottom-up and left-corner parsers, respectively. The general schema is:
x_order(Tree) --> (x_ordered node labels in Tree).
Note that in this case, since we are most likely to call x_order with the Tree variable instantiated, we are using the DCG in generation mode rather than as a parser. When used as a parser on the string S, the procedure will return all trees whose x_order traversal produces S. The three instantiations of this procedure are as follows:
DIRECT ENCODING OF PARSING STRATEGIES
Analogous to these three traversal programs, there are three parsing strategies, which differ from the tree traversal programs in only two respects. First, the base case for a parser should be to parse a lexical item rather than to parse an empty string. And second, in the recursive clauses, the mother category fits into the parse tree and is licensed by the auxiliary predicate rule/3, but it does not figure into the string that is parsed.
As was the case for the three tree traversal programs, the three parsers differ from each other only with respect to the right hand side order. For simplicity, I assume that phrase structure rules are binary branching, though the approach can easily be generalized to non-binary branching.¹
% top-down parser
td(node(PreTerm,lf(Word))) -->
   [Word], {word(PreTerm,Word)}.
td(node(Mother,Left,Right)) -->
   {rule(Mother,Left,Right)},
   td(Left), td(Right).
% bottom-up parser
bu(node(PreTerm,lf(Word))) -->
   [Word], {word(PreTerm,Word)}.
bu(node(Mother,Left,Right)) -->
   bu(Left), bu(Right),
   {rule(Mother,Left,Right)}.
% left-corner parser
lc(node(PreTerm,lf(Word))) -->
   [Word], {word(PreTerm,Word)}.
lc(node(Mother,Left,Right)) -->
   lc(Left),
   {rule(Mother,Left,Right)},
   lc(Right).
As seen here, the only difference between the three strategies concerns the choice of when to select a phrase structure rule.² Do you start with a rule and then try to satisfy it, as in the top-down approach; do you parse the daughters of a rule first before selecting the rule, as in the bottom-up approach; or do you take an intermediate strategy, as in the left-corner approach?
¹The only problematic case is for left corner, since the corresponding tree traversal, inorder, is normally defined only for binary trees. But inorder is easily extended to non-binary trees as follows: i. visit the left daughter in inorder, ii. visit the mother, iii. visit the rest of the daughters in inorder.
²As opposed to, say, a choice of whether to use operations of expanding and matching or operations of shifting and reducing.
[2]), however, the simplicity of the parsers here does not justify the extra complication in Dymetman's procedure. Using this transformation, the bottom-up parser then becomes as follows:⁴
³EGNF is similar to normal GNF except that the arguments attached to non-terminals must be manipulated so that the original instantiations are preserved. For specific grammars, it is pretty easy to see that such a manipulation is possible. It is much more difficult (and beyond the scope of this paper) to show that there is a general rule for such manipulations.
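As an aside, the traversal/parser correspondence developed in the two previous sections can also be illustrated outside of Prolog. The following small sketch in Python is not the paper's DCG code; the Node type and the tiny example tree are made up purely for the illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def preorder(n):    # mother first, then daughters: the order of top-down parsing
    return [] if n is None else [n.label] + preorder(n.left) + preorder(n.right)

def postorder(n):   # daughters first, then mother: the order of bottom-up parsing
    return [] if n is None else postorder(n.left) + postorder(n.right) + [n.label]

def inorder(n):     # left daughter, mother, right daughter: the order of left-corner parsing
    return [] if n is None else inorder(n.left) + [n.label] + inorder(n.right)

t = Node("s", Node("np", Node("det"), Node("n")), Node("vp", Node("v"), Node("np")))
print(preorder(t))
print(postorder(t))
print(inorder(t))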
GREIBACH NORMAL FORM PARSERS
⁴The Greibach NF conversion introduces one auxiliary predicate, which (following Hopcroft & Ullman [4]) I have called b. Of course, the GNF conversion also does not tell us what to do with the auxiliary procedures in curly brackets. What I've done here is simply to put these auxiliary procedures in the transformed grammar in positions corresponding to where they occurred in the original grammar. It's not clear that one can always find such a "corresponding" position, though in the case of the bottom-up and left-corner parsers such a position is easy to identify.
PARTIAL EXECUTION
The improved EGNF bottom-up and left-corner parsers differ now only in the position of the auxiliary predicate in curly brackets. If this auxiliary predicate is partially executed out with respect to a particular grammar, the two parsers will become identical. Such a unified approach to parsing is mostly useful simply to understand how the different parsers are related. It is surprising to see, for example, that with partial execution the bottom-up and left-corner parsers become the same. The similarity between bottom-up and left-corner parsing has caused a certain amount of confusion in the literature.
CONCLUSION
For example, the so-called "bottom-up" chart parser presented (among other places) in Gazdar & Mellish [3] in fact uses a left-corner strategy. This was pointed out by Wiren [11] but has not received much attention in the literature. It is hoped that the unified approach to parsing presented here will help to clear up other such confusions.
Finally, one might mention a connection to Government-Binding parsing, as presented in Johnson & Stabler [5]. These authors present a generate and test approach, in which X-bar structures are randomly generated and then tested against GB principles. Once the logic of the program is expressed in such a manner, efficiency considerations are used in order to fold the testing procedures into the generation procedure.
One could view the strategy taken in this paper as rather similar. Running a tree traversal program in reverse is like randomly generating phrase structure. Then these randomly generated structures are tested against the constraints, i.e., the phrase structure rules. What I have shown here is that the decision as to where to fold in the constraints is very significant. Folding in the constraints at different positions actually gives completely different parsing strategies.
This, however, is not very efficient, since the two clauses of both bu and b differ only in whether or not there is a final call to b. We can reduce the amount of backtracking by encoding this optionality in the b procedure itself.
% Improved EGNF bottom-up
bu(Node) --> [Word], {word(PreTerm,Word)},
   b(node(PreTerm,lf(Word)),Node).
b(Node,Node) --> [].
b(L,Node) --> bu(R), {rule(Mother,L,R)},
   b(node(Mother,L,R),Node).
By the same EGNF transformation and improvements, the resulting left-corner parser is only minimally different from the bottom-up parser:
% Improved EGNF left-corner
lc(Node) --> [Word], {word(PreTerm,Word)},
   b(node(PreTerm,lf(Word)),Node).
b(Node,Node) --> [].
b(L,Node) --> {rule(Mother,L,R)}, lc(R),
   b(node(Mother,L,R),Node).
For example, if we have a rule of the form: s(tree(s,NP,VP)) --> np(NP), vp(VP). For either parser, this will result in one b clause of the form: b(np(NP),Node) --> lc(vp(VP)), b(node(s(tree(s,NP,VP)),np(NP),vp(VP)),Node). This is essentially equivalent to the kind of rules produced by Matsumoto et al. ([6] [7]) in their "bottom-up" parser BUP. As seen here, Matsumoto et al. were not wrong to call their parser bottom-up, but they could have just as well called it left-corner.
†Wilhelmstr. 113, D-72074 Tübingen, Germany, [email protected].
While this approach reflects the logic of the top-down, bottom-up and left-corner parsers in a clear way, the resulting programs are not all usable in Prolog, since the bottom-up and the left-corner parsers are left-recursive. There exists, however, a general technique for removal of left-recursion, namely, conversion to Greibach normal form. The standard Greibach normal form conversion, however, does not allow for DCG type rules, but we can easily take care of the Prolog arguments by a technique suggested by Problem 3.118 of Pereira & Shieber [9] to produce what I will call Extended Greibach Normal Form (EGNF).³ Pereira & Shieber's idea has been more formally presented in the Generalized Greibach Normal Form of Dymetman ([1]
Marc Dymetman. A generalized Greibach normal form for definite clause grammars. In COLING-92, vol. I, pages 366-372, 1992.
Marc Dymetman. Transformations de Grammaires logiques. Applications au problème de la réversibilité en Traduction Automatique. PhD thesis, Université de Grenoble, Grenoble, France, 1992. Thèse d'Etat.
Gerald Gazdar and Chris Mellish. Natural Language Processing in Prolog. Addison-Wesley, Reading, Mass, 1989.
John Hopcroft and Jeffrey Ullman. Introduction to Automata Theory and Computation. Addison-Wesley, Reading, Mass, 1979.
Mark Johnson and Edward Stabler, 1993. Lecture notes for course taught at the LSA Summer School in Columbus, Ohio.
Y. Matsumoto, H. Hirakawa, H. Miyoshi, and H. Yasukawa. BUP: A bottom-up parser embedded in Prolog. New Generation Computing, 1(2):145-158, 1983.
Yuji Matsumoto. Natural Language Parsing Systems based on Logic Programming. PhD thesis, Kyoto University, 1989.
Richard O'Keefe. The Craft of Prolog. MIT Press, Cambridge, Mass, 1990.
Fernando C. N. Pereira and Stuart Shieber. Prolog and Natural Language Analysis. CSLI Lecture Notes No. 10. Chicago University Press, Chicago, 1987.
Fernando C. N. Pereira and David H. D. Warren. Definite clause grammars - a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13:231-278, 1980. Also in Grosz et al., 1986.
Mats Wiren. A comparison of rule-invocation strategies in context-free chart parsing. In EACL Proceedings, 3rd Annual Meeting, pages 226-233, 1987.
| []
|
[
"Nonlinear reversal of PT-symmetric phase transition in a system of coupled semiconductor micro-ring resonators",
"Nonlinear reversal of PT-symmetric phase transition in a system of coupled semiconductor micro-ring resonators"
]
| [
"Absar U Hassan \nCREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA\n",
"Hossein Hodaei \nCREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA\n",
"Mohammad-Ali Miri \nCREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA\n",
"Mercedeh Khajavikhan \nCREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA\n",
"Demetrios N Christodoulides \nCREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA\n"
]
| [
"CREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA",
"CREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA",
"CREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA",
"CREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA",
"CREOL/College of Optics and Photonics\nUniversity of Central Florida\n32816OrlandoFloridaUSA"
]
| []
| A system of two coupled semiconductor-based resonators is studied when lasing around an exceptional point. We show that the presence of nonlinear saturation effects can have important ramifications on the transition behavior of this system. In sharp contrast with linear PT-symmetric configurations, nonlinear processes are capable of reversing the order in which the symmetry breaking occurs. Yet, even in the nonlinear regime, the resulting non-Hermitian states still retain the structural form of the corresponding linear eigenvectors expected above and below the phase transition point. The conclusions of our analysis are in agreement with experimental data. | 10.1103/physreva.92.063807 | [
"https://arxiv.org/pdf/1510.03936v1.pdf"
]
| 26,360,845 | 1510.03936 | a5764e26dbbcde678a6861b71df30ad40dee81d3 |
Nonlinear reversal of PT-symmetric phase transition in a system of coupled semiconductor micro-ring resonators
14 Oct 2015
Absar U Hassan
CREOL/College of Optics and Photonics
University of Central Florida
32816OrlandoFloridaUSA
Hossein Hodaei
CREOL/College of Optics and Photonics
University of Central Florida
32816OrlandoFloridaUSA
Mohammad-Ali Miri
CREOL/College of Optics and Photonics
University of Central Florida
32816OrlandoFloridaUSA
Mercedeh Khajavikhan
CREOL/College of Optics and Photonics
University of Central Florida
32816OrlandoFloridaUSA
Demetrios N Christodoulides
CREOL/College of Optics and Photonics
University of Central Florida
32816OrlandoFloridaUSA
Nonlinear reversal of PT-symmetric phase transition in a system of coupled semiconductor micro-ring resonators
14 Oct 2015 (Dated: October 15, 2015). arXiv:1510.03936v1 [physics.optics]. PACS numbers: 05.45.Yv, 42.25.Bs, 11.30.Er
A system of two coupled semiconductor-based resonators is studied when lasing around an exceptional point. We show that the presence of nonlinear saturation effects can have important ramifications on the transition behavior of this system. In sharp contrast with linear PT-symmetric configurations, nonlinear processes are capable of reversing the order in which the symmetry breaking occurs. Yet, even in the nonlinear regime, the resulting non-Hermitian states still retain the structural form of the corresponding linear eigenvectors expected above and below the phase transition point. The conclusions of our analysis are in agreement with experimental data.
I. INTRODUCTION
In recent years there has been a growing interest in optical structures based on parity-time (PT) symmetry [1][2][3][4][5][6][7][8][9][10]. Along these lines, some intriguing possibilities have been predicted and experimentally demonstrated. These include solitons [11,12], Bloch oscillations and exceptional lines in PT-symmetric lattices [13][14][15], unidirectional invisibility [7,16,17], power oscillations [6,7,18], and mode management in laser structures [19][20][21], to mention a few. In optical realizations, PT-symmetry can be established by judiciously incorporating gain and loss in a given structure. In particular, a necessary condition for this symmetry to hold is that the complex refractive index distribution involved should obey n(r) = n * (−r). In other words, the real part of the refractive index profile must be an even function of position whereas its imaginary counterpart must be odd [1]. Under these latter conditions, an optical system (composed of cavities, waveguides etc), can behave in a pseudo-Hermitian fashion provided that the overall attenuation and amplification is appropriately balanced. On the other hand, if the gain/loss contrast exceeds a certain threshold, the PT-symmetry can be spontaneously broken and the spectrum is no longer entirely real [22][23][24]. This marks the presence of an exceptional point [25][26][27][28] or the emergence of a PT-symmetry breaking transition [29][30][31]. Moreover, a number of studies have suggested that PT-symmetric concepts can also be fruitfully utilized in other settings beyond optics [32][33][34][35].
Lately, the phase transitions associated with exceptional points, have been effectively utilized to enforce single-mode operation in micro-ring laser resonators [20,36] and pump-induced lasing turn-off [37,38]. In particular, in Ref. [20], it was shown that the selective breaking of PT-symmetry can be exploited to enhance the maximum output power in a desired longitudinal mode. This mode selection scheme is inherently self-adapting and can * [email protected] be used over a broad bandwidth without the need of any other intra-cavity components. While the mechanism of PT-symmetry breaking is inherently linear, of fundamental interest will be to understand how this process unfolds in the presence of nonlinearity. This is imperative given that lasers are by nature nonlinear devices. In Ref. [39] Lumer et al. already indicated that it is possible to reverse the PT phase transition sequence using the conservative component of a Kerr nonlinearity in a periodic structure. Instead, in this paper we consider the properties of PT-symmetric coupled micro-ring laser cavities in the presence of saturation effects associated with the imaginary part of the nonlinearity.
In what follows we provide a nonlinear model describing the field evolution in two coupled cavities in the presence of saturable gain and loss that is prevalent in semiconductor systems. A dual micro-ring arrangement is studied using a temporal coupled mode formalism [40] when one ring is subjected to optical pumping while its counterpart is kept un-pumped. The system is shown to move from the linear broken PT-symmetry domain directly into the nonlinear broken regime. It is further demonstrated that by increasing the pumping level, the eigenmodes of the system transition into an unbroken pair of PT-symmetric states that exhibit two real eigenvalues. Conversely, if the system starts lasing in an unbroken PT-like phase then it remains there in spite of nonlinear saturation effects. An experimental demonstration of this process, when starting from the broken phase, is presented which is in qualitative agreement with our theoretical results. Finally, we briefly discuss the behavior of this nonlinear system starting from having both micro-rings equally pumped to eventually blocking the pump from one of them.
II. THEORETICAL MODEL
A schematic of a dual micro-ring arrangement is shown in Fig. 1. Each micro-ring in our system involves a multiple quantum well InGaAsP-InP structure that is embedded in a silica substrate as in Ref. [20]. The top surface of the rings is exposed to air that serves as a cladding. At the operating wavelength of 1.55 µm of this quantum well system, the effective refractive index is n e ≃ 3 while the group index in the waveguide rings is n g ≃ 4. For demonstration purposes here we assume that each cavity supports only a single longitudinal and transverse mode. In general, the dynamics in each cavity in isolation are described by a corresponding set of modal field amplitude equations in conjunction with a carrier evolution equation. Yet, once the carriers attain a steady-state, the field equation can be further simplified according to [41]:
$$\frac{dE}{dt} = \frac{1}{2}\left[\frac{\sigma(p-1)}{1+\varepsilon|E|^2} - \gamma_p\right](1 - i\alpha_H)\,E \qquad (1)$$
Here $p = \tau_e R_p/N_0$ is a pump parameter, where the carrier generation rate is $R_p = \eta I/(\hbar\omega d)$, and $I$, $\eta$, $d$ are the pump intensity, the external quantum efficiency and the depth of each micro-ring respectively. In addition, $\tau_e$ represents the carrier lifetime, $N_0$ stands for the transparency carrier population density, $\omega$ is the frequency of the emitted light and $\hbar$ is the reduced Planck constant. The parameter $\varepsilon$ is inversely proportional to the saturation intensity, $\varepsilon = n_e c \epsilon_0 \Gamma a \tau_e/(2\hbar\omega)$, and $\sigma$ is proportional to the saturated loss in the absence of pumping ($\sigma = \Gamma v_g a N_0$). The linear loss $\gamma_p$ is the inverse of the photon lifetime ($\tau_p$) in the cavity and $\alpha_H$ is the linewidth enhancement factor. Finally, $c$ and $\epsilon_0$ are the speed of light and permittivity in vacuum respectively, $\Gamma$ is the confinement factor, $v_g$ represents the group velocity and $a$ the gain constant ($g = a(N - N_0)$). Note that in formulating the evolution equations the material response is assumed to be fast compared to carrier and photon lifetimes and hence is considered here to be instantaneous [41]. In our arrangement we assume that the coupling strength between the two rings is strong and hence any frequency detuning that could result from the $\alpha_H$-factor can be ignored. By adopting this latter assumption and denoting the unsaturated gain as $\tilde{g}_0 = \sigma(p - 1)$ and the unsaturated loss (at $p = 0$) as $\tilde{f}_0 = \sigma$, we obtain the following two equations describing the dynamics in the aforementioned coupled cavities,
$$\frac{dA_1}{dt} = -\tilde{\gamma}A_1 + \frac{\tilde{g}_0}{1+\varepsilon|A_1|^2}A_1 + i\kappa A_2 \qquad (2a)$$
$$\frac{dA_2}{dt} = -\tilde{\gamma}A_2 - \frac{\tilde{f}_0}{1+\varepsilon|A_2|^2}A_2 + i\kappa A_1. \qquad (2b)$$
In Eq. (2), the modal field amplitudes $A_1, A_2$ correspond to the pumped and un-pumped resonators respectively, $\tilde{\gamma}$ is the linear loss present in both cavities and $\kappa$ is the coupling strength between the resonators. A normalized version of these equations can be easily obtained by adopting the normalized quantities $a_{1,2} = \sqrt{\varepsilon}A_{1,2}$, $\tau = \kappa t$, $\gamma = \tilde{\gamma}/\kappa$, $f_0 = \tilde{f}_0/\kappa$ and $g_0 = \tilde{g}_0/\kappa$,
$$\dot{a}_1 = -\gamma a_1 + \frac{g_0}{1+|a_1|^2}a_1 + i a_2 \qquad (3a)$$
$$\dot{a}_2 = -\gamma a_2 - \frac{f_0}{1+|a_2|^2}a_2 + i a_1 \qquad (3b)$$
where $\dot{a} = da/d\tau$.
In what follows we will study the behavior associated with this system of nonlinear evolution equations.
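As an illustration of how Eqs. (3a)-(3b) can be explored numerically (this sketch is not part of the original analysis; the function name, the parameter values and the small seed amplitudes are assumed for demonstration only), a minimal Python integration using SciPy reads:

# Minimal sketch (assumed parameters): integrate the normalized rate
# equations (3a)-(3b) for the complex modal amplitudes a1 (pumped ring)
# and a2 (un-pumped ring).
import numpy as np
from scipy.integrate import solve_ivp

gamma, f0, g0 = 0.1, 2.0, 2.3   # illustrative normalized loss, absorption, gain

def rhs(tau, y):
    a1, a2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    da1 = -gamma * a1 + g0 / (1.0 + abs(a1) ** 2) * a1 + 1j * a2
    da2 = -gamma * a2 - f0 / (1.0 + abs(a2) ** 2) * a2 + 1j * a1
    return [da1.real, da1.imag, da2.real, da2.imag]

y0 = [1e-3, 0.0, 5e-4, 0.0]                        # small noise-like seed
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.05)
I1 = sol.y[0] ** 2 + sol.y[1] ** 2                 # intensity in the pumped ring
I2 = sol.y[2] ** 2 + sol.y[3] ** 2                 # intensity in the un-pumped ring
print(I1[-1], I2[-1], np.sqrt(I2[-1] / I1[-1]))    # steady intensities and modal ratio

Such an integration reproduces the qualitative trends discussed in Sec. IV, e.g. the saturation of the intensities and the modal ratio approaching unity at large pump levels.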
III. LINEAR DYNAMICAL ANALYSIS
To analyze the response of this arrangement under linear conditions, we assume that the modal field amplitudes are small, i.e. |a 1,2 | ∼ 0. Under these assumptions, saturation effects in both the gain and loss mechanisms can be ignored. Hence, this regime can be effectively described by a linearized version of Eq. (3), e.g.,
$$\dot{a}_1 = -\gamma a_1 + g_0 a_1 + i a_2 \qquad (4a)$$
$$\dot{a}_2 = -\gamma a_2 - f_0 a_2 + i a_1 \qquad (4b)$$
The eigenvalues of this system, $\lambda$, can be directly obtained by adopting the form $(a_1\ \ a_2)^T = (a_{01}\ \ a_{02})^T e^{-i\lambda\tau}$, where $a_{01,02}$ are complex constants and $T$ represents a transpose operation. In this respect, two regimes can be identified depending on whether $(g_0 + f_0) \lessgtr 2$. In the first case where $(g_0 + f_0) < 2$, the modal solutions of Eq. (4) are given by,
$$\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \pm e^{\pm i\theta} \end{pmatrix} e^{\left(\frac{g_0-f_0}{2}-\gamma\right)\tau}\, e^{\pm i(\cos\theta)\tau}, \qquad (5)$$
where sin θ = (g 0 + f 0 )/2. We note that the structure of the modal fields closely resembles that expected from an unbroken PT-symmetric coupled arrangement [19].
In particular, the two eigenvectors are by nature nonorthogonal with a phase factor θ that depends on the gain/loss contrast. In addition, a PT-like bifurcation is present around a threshold value given that,
$$\cos\theta = \sqrt{1 - \left(\frac{g_0+f_0}{2}\right)^2} \qquad (6)$$
If on the other hand $(g_0 + f_0) > 2$, the eigenvectors of Eq. (4) are,
$$\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 1 \\ i e^{\pm\theta} \end{pmatrix} e^{\left(\frac{g_0-f_0}{2}-\gamma\right)\tau}\, e^{\mp(\sinh\theta)\tau}, \qquad (7)$$
where $\cosh\theta = (g_0 + f_0)/2$. As opposed to those described by Eq. (5), these latter solutions exhibit features of a broken PT-symmetric configuration. In this regime the modal field amplitudes are phase shifted by $\pi/2$ and moreover, they are unequal.
If the system is operating in the first regime (unbroken PT-symmetry, given by Eq. (5)), then the fields will experience linear amplification as long as the gain is above the total loss in the system, i.e. $g_0 > (2\gamma + f_0) = g^{(U)}_{th}$. Conversely, in the broken PT-symmetric phase (described by Eq. (7)), growth will occur provided that,
$$\frac{g_0 - f_0}{2} - \gamma + \sqrt{\left(\frac{g_0+f_0}{2}\right)^2 - 1} > 0. \qquad (8)$$
Equation (8) implies that in this case, the threshold for lasing is dictated by the following condition,
$$g_0 > \frac{1}{\gamma + f_0} + \gamma, \qquad (9)$$
i.e. the gain threshold in this broken symmetry is $g^{(B)}_{th} = (\gamma + f_0)^{-1} + \gamma$.
In view of the above results, one can conclude that the lasing thresholds corresponding to these two regimes (above/below the PT-symmetry breaking point) are uniquely determined by the parameters $\gamma$ and $f_0$. To compare these thresholds, one has to consider whether $(\gamma+f_0) \gtrless 1$. If, for example, $(\gamma+f_0) > 1$, then the broken phase (Eq. (7)) has a lower threshold and therefore will lase ($g^{(B)}_{th} < g^{(U)}_{th}$). Interestingly however, if $(\gamma + f_0) < 1$, the situation is reversed and the unbroken PT eigenstate, as given by Eq. (5), will experience amplification. The behavior of the system in these two different domains is depicted in Figs. 2(a) and 2(b) for various values of the gain $g_0$. Figure 2(b) clearly suggests that for $(\gamma + f_0) > 1$ (i.e. $g^{(B)}_{th} < g^{(U)}_{th}$), the lasing threshold is in fact lower than the total loss in the system, $(f_0 + 2\gamma)$. This counter-intuitive result is attributed to the coupling process which is in this case relatively slow and therefore does not allow the photon energy to see the entire two-ring system. These two lasing thresholds can be summarized by the following inequality,
$$g_0 > \min\left\{\frac{1}{\gamma + f_0} + \gamma,\; 2\gamma + f_0\right\}. \qquad (10)$$
With these preliminary conditions for g 0 , needed for lasing, we can now consider the ensuing nonlinear response of this system.
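As a side note (not part of the original text), the linear regime discussed above lends itself to a few lines of code; the sketch below, with illustrative parameter values and a helper function of our own naming, evaluates the growth rates of Eq. (4) and the two thresholds entering Eq. (10):

# Sketch: growth rates of the linearized model, Eq. (4), written as
# d/dtau (a1, a2)^T = M (a1, a2)^T; a supermode grows whenever an
# eigenvalue of M has positive real part.  Parameters are illustrative.
import numpy as np

def linear_analysis(g0, f0, gamma):
    M = np.array([[g0 - gamma, 1j],
                  [1j, -(f0 + gamma)]])
    m = np.linalg.eigvals(M)                    # complex growth rates
    g_th_broken = 1.0 / (gamma + f0) + gamma    # broken-phase threshold, Eq. (9)
    g_th_unbroken = 2.0 * gamma + f0            # unbroken-phase threshold
    return m, g_th_broken, g_th_unbroken, bool(np.max(m.real) > 0)

# Example close to Fig. 2(b): lasing starts in the broken phase
print(linear_analysis(g0=1.0, f0=1.3, gamma=0.1))

For these Fig. 2(b)-type parameters the largest growth rate is already positive even though g0 lies below the total loss 2γ + f0, illustrating the counter-intuitive threshold reduction mentioned above.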
IV. NONLINEAR REGIMES
As the fields in the PT-coupled cavity configuration start to grow, nonlinear saturation effects come into play, as described by Eq. (3). Yet, as we will see, the properties of the linear system not only determine the lasing thresholds, but also provide valuable information as to how this arrangement will respond in the nonlinear regime. More specifically, if (γ + f 0 ) > 1, the system will start from a linear broken PT-symmetry and then enter a broken PT-like nonlinear domain. By further increasing the gain, this same arrangement will transition into a nonlinear unbroken PT phase and will remain there. If on the other hand, (γ + f 0 ) < 1, this structure will lase into an unbroken PT-like domain (whether linear or nonlinear) for all values of the gain g 0 above threshold. It is important to emphasize that in the first case of (γ + f 0 ) > 1, upon increasing the pump level, a reversal in the order in which symmetry breaking occurs is observed, i.e. the solutions transition from a broken to an unbroken state. The two possible nonlinear phases of lasing are described below along with their corresponding gain parameter ranges.
A. Broken PT
This section is pertinent to the case where lasing takes place in the linear PT-broken regime where $(\gamma + f_0) > 1$. In this scenario, the nonlinear broken-PT supermodes can be directly obtained by assuming stationary solutions for the field amplitudes of the form $(a_1\ \ a_2)^T = (a_{01}\ \ a_{02})^T$, where $a_{01,02}$ are complex constants. Equation (3) is then reduced to,
$$0 = -\gamma a_{01} + \frac{g_0}{1+|a_{01}|^2}a_{01} + i a_{02} \qquad (11a)$$
$$0 = -\gamma a_{02} - \frac{f_0}{1+|a_{02}|^2}a_{02} + i a_{01} \qquad (11b)$$
These equations clearly suggest that a 01 and a 02 are out of phase by π/2. This in turn allows one to write a 02 = iρa 01 , where ρ ∈ ℜ + (see Eq. (11b)) represents the modal ratio. From Eq. (11), we readily obtain the following quartic polynomial equation for ρ,
$$\rho^4 - \left(g_0 + \frac{1}{\gamma} - \gamma\right)\rho^3 + \left(\frac{g_0 - f_0}{\gamma} - 2\right)\rho^2 + \left(-f_0 + \frac{1}{\gamma} - \gamma\right)\rho + 1 = 0 \qquad (12)$$
In solving Eq. (12), we look for a real root in the interval [0, 1] since, from a physical perspective, one expects that under steady state conditions, the modal field in the lossy ring will be less than that with gain. In addition, one can show that among all four possible roots, that contained in [0, 1] happens to be the only stable one. It is important to note that similar to the broken symmetry modes in linear PT systems, the solution sets in this regime are characterized by an asymmetric distribution of modal fields in the two coupled resonators. For this specific reason, the point ρ = 1 is crucial since it marks a PT-breaking transition. The critical gain value (g c ) where this transition occurs is found to be,
$$g_c = \frac{f_0(1 + \gamma)}{(1 - \gamma)}. \qquad (13)$$
Note that this critical gain value is smaller than the lasing threshold needed for the linear unbroken phase, $g^{(U)}_{th}$, that is possible in the parameter range $(\gamma + f_0) < 1$. Hence this nonlinear broken PT phase only arises once lasing begins in the linear broken PT phase, which only occurs when $(\gamma + f_0) > 1$, where we also have $g_c > g^{(B)}_{th}$. To demonstrate the energy occupancy in the two cavities, we vary the value of $g_0$ in the range $g^{(B)}_{th} < g_0 < g_c$. Figure 3 depicts these results for $\gamma = 0.1$ and $f_0 = 2$.
As can be seen in Fig. 3, higher values of $g_0$ not only result in higher steady state intensities in the resonators but also lead to an increased ratio ($\rho$) that eventually becomes unity. As previously mentioned, an unequal distribution of the fields in the two rings, along with a phase difference of $\pi/2$, clearly indicates that the solution sets in this regime have broken PT-like forms as in Eq. (7). Moreover, there is no frequency shift associated with the resonance of the ring system - another indicator of a broken PT-symmetry.
Remarkably, if one considers linear PT-symmetric dimers, it is well known that the difference between the field intensities in the two components of the dimer becomes larger as we increase the gain-loss contrast beyond the spontaneous symmetry breaking point, indicated by the term $e^{\pm\theta}$ in Eq. (7). However, in the nonlinear case, higher pumping levels (larger values of gain) eventually lead the system to the symmetric phase, at $\rho = 1$. Figure 4 shows the time evolution of the modal intensities as a function of $\tau$ after solving Eq. (3) for $g_0 = 1$ and $g_0 = 2.3$. This latter figure demonstrates that higher pump levels eventually enforce a transition towards an unbroken phase where the modal ratio is unity. To summarize, the relevant gain range for solutions within this regime is $g^{(B)}_{th} < g_0 < g_c$, provided that $(\gamma + f_0) > 1$.
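For completeness, a short numerical sketch (illustrative only; the helper name is ours and the parameters follow Figs. 3 and 4) shows how the quartic Eq. (12) can be solved for the stable modal ratio ρ in [0, 1], and how the corresponding steady intensities then follow from Eq. (11a) with a02 = iρ a01:

# Sketch: stable modal ratio rho in [0, 1] from the quartic Eq. (12) and the
# resulting steady intensities of the nonlinear broken-PT supermode.
import numpy as np

def broken_pt_state(g0, f0, gamma):
    coeffs = [1.0,
              -(g0 + 1.0 / gamma - gamma),
              (g0 - f0) / gamma - 2.0,
              -f0 + 1.0 / gamma - gamma,
              1.0]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    rho = real[(real > 0) & (real <= 1)][0]      # physically relevant root
    I1 = g0 / (gamma + rho) - 1.0                # |a01|^2, from Eq. (11a) with a02 = i*rho*a01
    return rho, I1, rho ** 2 * I1                # modal ratio, |a01|^2, |a02|^2

print(broken_pt_state(g0=1.0, f0=2.0, gamma=0.1))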
B. Unbroken PT
Before we discuss in detail the properties associated with the nonlinear unbroken PT-symmetry, we note that the results of this section are applicable in both regimes, i.e. $(\gamma + f_0) \gtrless 1$. To obtain the nonlinear eigenmodes in the PT-symmetric phase, we now assume time harmonic solutions, $(a_1\ \ a_2)^T = (a_{01}\ \ a_{02})^T e^{i\lambda\tau}$, where $\lambda \in \mathbb{R}$. In this case, Eq. (3) leads to the following relations:
$$i\lambda a_{01} = -\gamma a_{01} + \frac{g_0}{1+|a_{01}|^2}a_{01} + i a_{02} \qquad (14a)$$
$$i\lambda a_{02} = -\gamma a_{02} - \frac{f_0}{1+|a_{02}|^2}a_{02} + i a_{01}. \qquad (14b)$$
Using the representation $g_s = g_0/(1 + |a_{01}|^2)$ for the saturated gain and $f_s = f_0/(1 + |a_{02}|^2)$ for the saturated loss, and assuming that $a_{01,02} \neq 0$, we get a quadratic equation for the eigenvalues,
$$\lambda^2 - i(2\gamma + f_s - g_s)\lambda - \left(\gamma^2 + \gamma(f_s - g_s) - g_s f_s + 1\right) = 0. \qquad (15)$$
Given that $\lambda$ is real, it is necessary that,
$$\frac{g_s}{2\gamma} - \frac{f_s}{2\gamma} = 1. \qquad (16)$$
This last relation is directly satisfied through the parametric representation g s = 2γ cosh 2 (η) and f s = 2γ sinh 2 (η) where η is a positive real quantity. In this respect we arrive at the following relations for the intensities,
$$|a_{01}|^2 = \frac{g_0}{2\gamma\cosh^2(\eta)} - 1 \qquad (17)$$
$$|a_{02}|^2 = \frac{f_0}{2\gamma\sinh^2(\eta)} - 1. \qquad (18)$$
The eigenvalue Eq. (15), now readily reduces to, λ 2 = 1 − γ 2 cosh 2 (2η), in which case, λ 1,2 = ± cos θ nl provided that γ cosh (2η) = sin θ nl . Here θ nl represents a nonlinear phase shift ranging between 0 and π/2. Moreover, after dividing Eq. (14a) by a 01 and Eq. (14b) by a 02 , upon subtraction we obtain,
$$2i\sin\theta_{nl} = (\rho - \rho^{-1})\cos\phi + i(\rho + \rho^{-1})\sin\phi \qquad (19)$$
where a 02 = ρe iφ a 01 . Equation (19) can be solved for the real and imaginary parts, from where one finds that, ρ = ±1 and φ = ±θ nl , which clearly suggests that |a 01 | 2 = |a 02 | 2 . This is only possible as long as (by considering Eqs. (17) and (18)),
$$\tanh(\eta) = \sqrt{\frac{f_0}{g_0}} \qquad (20)$$
i.e. g 0 > f 0 . The eigenvalue expression in Eq. (15) now simplifies to,
$$\lambda^2 = 1 - \gamma^2\left(\frac{g_0 + f_0}{g_0 - f_0}\right)^2. \qquad (21)$$
Equation (21) directly indicates that real eigenvalues are only possible if $g_0 \geq f_0(1+\gamma)/(1-\gamma)$, which is equivalent to $g_0 \geq g_c$, corroborating the earlier findings in Sec. IV A. In other words the gain level has to be above this critical value, a necessary condition for observing solution sets in this regime. The unfolding of the nonlinear eigenvalues as a function of the gain level is shown in Fig. 5. From these results, one can then determine the unbroken nonlinear PT-symmetric eigenvectors, e.g.,
$$\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \sqrt{\frac{g_0 - f_0}{2\gamma} - 1}\,\begin{pmatrix} 1 \\ \pm e^{\pm i\theta_{nl}} \end{pmatrix} e^{\pm i(\cos\theta_{nl})\tau}, \qquad (22)$$
where $\sin\theta_{nl} = \gamma(g_0 + f_0)/(g_0 - f_0)$. When Equations (17) and (18) are used in conjunction with Eq. (20), they provide another restriction on the value of $g_0$ since $|a_i|^2 > 0$. More specifically, the restriction is given by $g_0 \geq (2\gamma + f_0) = g^{(U)}_{th}$. Hence, the complete range of $g_0$ for this solution to exist is:
$$g_0 \geq g_c \;\cap\; g_0 \geq g^{(U)}_{th} \qquad (23)$$
It should be noted here that under the condition $(\gamma + f_0) > 1$, i.e. when lasing begins in the broken PT phase, once the gain level exceeds $g_c$, both conditions in (23) are satisfied and the steady state now assumes the nonlinear unbroken form of Eq. (22). This confirms the aforementioned reversal of PT-symmetric phase transition due to the nonlinearity. However, if $(\gamma + f_0) < 1$, where lasing begins in the linear unbroken PT phase, the lasing threshold $g^{(U)}_{th}$ is greater than $g_c$, which immediately implies that once lasing begins, the system will eventually attain the nonlinear unbroken PT-symmetric steady state solutions, described by Eq. (22).
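To make the parameter dependence concrete, the following sketch (illustrative only; the helper name is ours, and the numbers anticipate the example g0 = 2.25, f0 = 1, γ = 0.1 used later in this section) evaluates g_c, g_th^(U), the nonlinear eigenvalues of Eq. (21) and the common intensity implied by Eq. (22):

# Sketch: nonlinear-unbroken quantities of Sec. IV B for a given pump level.
import numpy as np

def unbroken_pt_state(g0, f0, gamma):
    g_c = f0 * (1 + gamma) / (1 - gamma)      # critical gain, Eq. (13)
    g_th_U = 2 * gamma + f0                   # unbroken-phase threshold
    if g0 < max(g_c, g_th_U):
        return None                           # conditions (23) not satisfied
    lam = np.sqrt(1 - gamma ** 2 * ((g0 + f0) / (g0 - f0)) ** 2)   # Eq. (21)
    intensity = (g0 - f0) / (2 * gamma) - 1                        # |a1|^2 = |a2|^2
    return g_c, g_th_U, (+lam, -lam), intensity

print(unbroken_pt_state(g0=2.25, f0=1.0, gamma=0.1))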
The time evolution of intensities in the two coupled resonators can be studied by numerically solving Eq. (3). These results are displayed in Fig. 6(a). Notice that in this domain, the modal amplitudes eventually become equal (irrespective of initial conditions), an indication of a nonlinear unbroken PT-symmetry. The analytical expressions in Eq. (22) suggest that the system admits two fixed point solutions, λ 1,2 = ± cos θ nl . The choice between the two depends upon the initial conditions provided. An effective method of deducing which initial conditions correspond to which of the two attractors, is to project the initial vector onto the two nonlinear eigenmodes.
The projection operation has to be carried out in a PT-symmetric sense [42], i.e. respecting bi-orthogonality, which implies that for two vectors $\Phi_1 = (\varphi_{1x}\ \ \varphi_{1y})^T$ and $\Phi_2 = (\varphi_{2x}\ \ \varphi_{2y})^T$,
$$\langle\Phi_1|\Phi_2\rangle = \begin{pmatrix}\varphi^*_{1y} & \varphi^*_{1x}\end{pmatrix}\begin{pmatrix}\varphi_{2x} \\ \varphi_{2y}\end{pmatrix}. \qquad (24)$$
By considering the eigenvectors in Eqs. (5) or (22), and by letting $v_1 = (1\ \ e^{i\theta})^T$ and $v_2 = (1\ \ -e^{-i\theta})^T$, any initial state $v_0 = (a_{01}\ \ a_{02})^T$ can then be projected. The absolute values of the complex coefficients $c_1 = \langle v_0|v_1\rangle$ and $c_2 = \langle v_0|v_2\rangle$ associated with these eigenvectors can then be obtained and are given by,
$$|c_1|^2 = |a_{01}|^2 + |a_{02}|^2 + 2|a_{01}||a_{02}|\cos(\Delta\phi_0 - \theta), \qquad (25)$$
$$|c_2|^2 = |a_{01}|^2 + |a_{02}|^2 - 2|a_{01}||a_{02}|\cos(\Delta\phi_0 + \theta), \qquad (26)$$
where ∆φ 0 = φ a01 − φ a02 is the initial phase difference between a 01 and a 02 . As we will see, the eigenvector with the larger initial amplitude will eventually dominate.
The eigenvalue for the vector v 1 is λ 1 = cos θ which implies a counter-clockwise rotation in the complex plane.
Note that this corresponds to the low frequency supermode since the fast variations in the field, leading to Eq. (1) were assumed to be of the form e −iω0t where ω 0 is the resonance frequency of each individual resonator. Similarly the other eigenvalue, λ 2 = − cos θ corresponds to its high frequency counterpart.
For counter-clockwise rotation, one requires that |c 1 | 2 > |c 2 | 2 and vice versa for clockwise rotation. Since cos θ > 0 for θ = [0, π/2], these conditions can be reduced to, (i)∆φ 0 ≤ π/2, where the low frequency supermode survives and (ii)∆φ 0 > π/2, favoring its high frequency counterpart. As an example, let g 0 = 2.25, f 0 = 1 and γ = 0.1, that satisfy the conditions in Eq. (23). We now consider two cases for the phase difference in the initial values a 01 and a 02 . Figure 6(b) shows the field evolution when the initial amplitudes in the two rings are equal, |a 01 | = |a 02 | = 0.2, but the phase difference is ∆φ 0 = π/2 + 0.1 and (c) shows the same case when ∆φ 0 = π/2 − 0.1. The intensity evolution with time for these two cases happens to be identical and is depicted in Fig. 6(a). The initial exponential growth is evident and the intensities finally saturate to a common value as given by Eq. (22). We note that in an actual experiment, both eigenmodes will be excited from noise and hence the spectrum will involve two lines at ±κ cos(θ nl ) around ω 0 .
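This mode-selection rule can be checked numerically with a compact sketch (illustrative; the helper function is ours, while the amplitudes, phase differences and theta follow the example just quoted) that evaluates the bi-orthogonal projections of Eqs. (24)-(26):

# Sketch: PT (bi-orthogonal) projections deciding which supermode survives.
import numpy as np

def mode_weights(a01, a02, theta):
    v1 = np.array([1.0, np.exp(1j * theta)])     # low-frequency supermode
    v2 = np.array([1.0, -np.exp(-1j * theta)])   # high-frequency supermode
    v0 = np.array([a01, a02])
    def pt_dot(u, v):
        # bi-orthogonal inner product of Eq. (24)
        return np.conj(u[1]) * v[0] + np.conj(u[0]) * v[1]
    return abs(pt_dot(v0, v1)) ** 2, abs(pt_dot(v0, v2)) ** 2

theta = np.arcsin(0.1 * (2.25 + 1.0) / (2.25 - 1.0))   # theta_nl for g0=2.25, f0=1, gamma=0.1
for dphi in (np.pi / 2 + 0.1, np.pi / 2 - 0.1):
    c1, c2 = mode_weights(0.2 * np.exp(1j * dphi), 0.2, theta)
    print(dphi, c1, c2, "low-frequency mode wins" if c1 > c2 else "high-frequency mode wins")

Running this reproduces the rule stated above: the phase difference slightly below π/2 favors the low-frequency supermode, while the one slightly above favors its high-frequency counterpart.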
V. GENERAL SYSTEM BEHAVIOR
Based on the results of the previous section, it can be established that in the steady state, the form of the nonlinear solutions is predetermined by the system parameters, specifically by the normalized values of unsaturated absorption (f 0 ) and linear loss (γ). In the coupled ring resonator arrangement, as the pumping level is increased (as g 0 increases), there are two possible scenarios for the system behavior. If (γ + f 0 ) < 1, lasing begins in the linear unbroken PT-symmetric domain (Eq. (5)) and then moves into the nonlinear unbroken PT-symmetric regime where the field intensities are equal in both rings albeit with a phase difference, according to Eq. (22). If on the other hand (γ + f 0 ) > 1, lasing starts in the linear broken PT-symmetric domain (Eq. (7)) and then transitions into the nonlinear broken PT-symmetric phase where the distribution of field strengths in the two coupled resonators is asymmetric and a phase difference of π/2 exists between them, as established in Sec. IV A. At even higher gain levels, interestingly, a phase transition occurs from the broken domain into the nonlinear unbroken PT domain when g 0 > g c . The two scenarios are summarized in Fig. 7, where the nonlinear reversal of a PT-symmetric phase transition (broken to unbroken) is displayed in the lower half. It should be noted that the lasing thresholds in these two cases are different and both paths eventually end up in the unbroken PT-like phase as the gain is increased.
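For illustration only, this decision tree of Fig. 7 can be summarized by a small classifier (a sketch; the function name is ours, and the threshold formulas are those of Eqs. (9), (10) and (13)):

# Sketch: classify the lasing state reached for given (g0, f0, gamma).
def classify(g0, f0, gamma):
    g_c = f0 * (1 + gamma) / (1 - gamma)               # Eq. (13)
    g_th_B = 1.0 / (gamma + f0) + gamma                # broken-phase threshold, Eq. (9)
    g_th_U = 2.0 * gamma + f0                          # unbroken-phase threshold
    if gamma + f0 > 1:                                 # lower branch of Fig. 7
        if g0 < g_th_B:
            return "below threshold"
        return "nonlinear broken PT" if g0 < g_c else "nonlinear unbroken PT"
    else:                                              # upper branch of Fig. 7
        return "below threshold" if g0 < g_th_U else "nonlinear unbroken PT"

for g0 in (0.5, 1.0, 2.5):
    print(g0, classify(g0, f0=2.0, gamma=0.1))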
VI. EXPERIMENTAL RESULTS
To experimentally verify our findings, we used lithographic techniques to fabricate sets of coupled microring resonators comprised of six InGaAsP (Indium-Gallium-Arsenide-Phosphide) quantum wells embedded in InP, capable of providing amplification in the wavelength range 1350-1600 nm. A detailed description of the fabrication process can be found in Ref. [20]. The rings in our experiments have an outer radius of 10 µm, a width of 500 nm, and a height of 210 nm. Such dimensions are deliberately chosen so that the rings support a single transverse mode and also favor the TE polarization. At first, the two coupled resonators were evenly illuminated using a circular pump beam with a diameter of 80 µm. The intensity distribution and spectrum of the modes in the microrings are monitored using a CCD camera and a spectrometer respectively. Figure 8(a) shows the spectrum of the two active rings when both are exposed to a peak pump power of 0.4 mW (15 ns pulses with a repetition rate of 290 kHz). Under these conditions, coupling-induced mode splitting can clearly be seen. Next, a knife edge is used to selectively withhold the pump power from one of the rings, hence establishing a PT-symmetric gain/loss microring arrangement. Figure 8(b) illustrates the lasing spectrum of this PT system. As expected, lasing occurs exclusively in the active cavity and the frequency of the resonance shifts to the center of the supermode peaks. In other words, the system starts lasing in the broken PT-phase as predicted in Sec. IV A. Next, the pump power illuminating the active ring is increased by a factor of two while keeping again the lossy ring in the dark. The emission spectrum of the PT arrangement subjected to such a high pump power is depicted in Fig. 8(c). In agreement with our theoretical predictions (Sec. IV B), the PT-symmetry of the combined structure is now restored due to a saturation of nonlinearities. In this regime, both resonators are again contributing equally to lasing and as a result two supermode wavelength peaks are now present in the measured spectrum. Our experimental results confirm the fact that nonlinear processes are indeed capable of reversing the order in which the symmetry breaking occurs.
The discussions in earlier sections are also applicable to the findings in Ref. [20]. In that work, lasing was observed when both microrings were at first equally illuminated (in a way similar to Fig. 8(a)), in which case the system was positioned in the unbroken PT-symmetry phase. This behavior is in agreement with our theoretical results presented in Sec. IV B provided that one sets f 0 = −g 0 . In this case, η is purely imaginary and θ nl = 0, and hence the normalized eigenvalues are λ 1,2 = ±1, i.e. the mode splitting is twice the coupling between the two cavities-resembling that in standard Hermitian systems. On the other hand, by removing the pump from one of the rings, saturable losses are introduced since now f 0 is positive. In this scenario, Eq. (23) is no longer satisfied and as a result the system enters the broken PT phase, in agreement with the observed behavior in Ref. [20].
VII. CONCLUSIONS
In conclusion, we have shown that a nonlinear dimer of two coupled microring laser resonators can transition from a nonlinear broken PT-symmetric phase into an unbroken PT domain provided that lasing was initiated in a broken mode. This surprising result is a byproduct of nonlinear saturation that is capable of re-establishing the PT-symmetric phase, something that is not possible in the linear domain. On the other hand, if lasing initially occurs in an unbroken mode, the system always remains in the unbroken phase even under nonlinear conditions. In all cases, in spite of the presence of nonlinearities, the eigenmodes of this arrangement retain features associated with linear eigenvectors in PT-symmetric configurations.
FIG. 1. A PT-symmetric arrangement of two coupled micro-ring resonators.
FIG. 2. Imaginary components of eigenvalues (blue curves) of the linear system are displayed as the gain level increases. In all cases amplification occurs if Im{λ} > 0, represented by the gray regions. The broken PT-symmetric phase appears after a bifurcation takes place. The graph in (a) shows that Im{λ} > 0 before branching occurs, i.e. when (γ + f0) < 1, whereas in (b) lasing begins in the broken phase, which takes place when (γ + f0) > 1. In both cases the dashed lines indicate the two possible thresholds, where the red line corresponds to the broken phase (g^(B)_th = γ + 1/(γ + f0)) and the green to the unbroken (g^(U)_th = 2γ + f0). The system parameters used here are γ = 0.1 and (a) f0 = 0.5, (b) f0 = 1.3.
FIG. 3. Light intensity in the pumped ring as a function of the modal ratio ρ, as obtained from Eqs. (11) and (12) when f0 = 2 and γ = 0.1. The linear gain g0 is varied between g^(B)_th = 0.58 and g_c = 2.44.
FIG. 4. The unequal distribution of steady state intensities (broken symmetry) in the rings with gain (red) and loss (black) is shown. The curves are obtained after numerically integrating Eq. (3) for γ = 0.1 and f0 = 2. Dashed lines represent the solution for g0 = 1 and solid lines are obtained for g0 = 2.3. A higher gain naturally results in higher intensities but at the same time, the intensity contrast between the two resonators decreases.
FIG. 5. The eigenvalues of the nonlinear system exhibit a square-root bifurcation when entering the unbroken symmetry regime. The region g0 < g_c represents broken symmetry where the eigenvalues are degenerate. The parameter values used here are γ = 0.1 and f0 = 2. The eigenvalues λ± approach the asymptotes ±√(1 − γ²) for large values of g0.
FIG. 6. (a) Intensity evolution in the two rings is plotted against time τ, when g0 = 2.25, f0 = 1 and γ = 0.1. Trajectory of the modal fields when (b) ∆φ = π/2 + 0.1 (clockwise rotation) and (c) ∆φ = π/2 − 0.1 (counter-clockwise rotation).
FIG. 7. System response as a function of gain in two different parameter regimes is schematically shown. In the upper half, where (γ + f0) < 1, the system is always in an unbroken PT phase. In the lower half, however, where (γ + f0) > 1, the configuration first transitions from a linear broken to a nonlinear broken phase and then eventually enters the nonlinear unbroken domain when g0 exceeds g_c.
FIG. 8. Emitted spectrum from (a) uniformly pumped coupled microrings with the pump power of 0.4 mW, (b) the PT-symmetric structure when 0.4 mW of pump power reaches the active ring, (c) the PT-symmetric structure when the active ring is subjected to 0.8 mW pump power. The insets depict mode profiles of the different scenarios, recorded by the scattering from the surface of the rings. Dashed vertical lines are used to compare the locations of resonances.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the financial support from NSF CAREER Award (ECCS-1454531), NSF (grant ECCS-1128520), AFOSR (grants FA9550-12-1-0148 and FA9550-14-1-0037), and ARO (grant W911NF-14-1-0543).
. K G Makris, R El-Ganainy, D N Christodoulides, Z H Musslimani, 10.1103/PhysRevLett.100.103904Phys. Rev. Lett. 100103904K. G. Makris, R. El-Ganainy, D. N. Christodoulides, and Z. H. Musslimani, Phys. Rev. Lett. 100, 103904 (2008).
. R El-Ganainy, K G Makris, D N Christodoulides, Z H Musslimani, 10.1364/OL.32.002632Opt. Lett. 322632R. El-Ganainy, K. G. Makris, D. N. Christodoulides, and Z. H. Musslimani, Opt. Lett. 32, 2632 (2007).
. A Guo, G J Salamo, D Duchesne, R Morandotti, M Volatier-Ravat, V Aimez, G A Siviloglou, D N Christodoulides, 10.1103/PhysRevLett.103.093902Phys. Rev. Lett. 10393902A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D. N. Christodoulides, Phys. Rev. Lett. 103, 093902 (2009).
. S Klaiman, U Günther, N Moiseyev, 10.1103/PhysRevLett.101.080402Phys. Rev. Lett. 10180402S. Klaiman, U. Günther, and N. Moiseyev, Phys. Rev. Lett. 101, 080402 (2008).
. S Longhi, 10.1103/PhysRevA.82.031801Phys. Rev. A. 8231801S. Longhi, Phys. Rev. A 82, 031801 (2010).
. C E Rüter, K G Makris, R El-Ganainy, D N Christodoulides, M Segev, D Kip, Nat. Phys. 6192C. E. Rüter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, Nat. Phys. 6, 192 (2010).
. A Regensburger, C Bersch, M.-A Miri, G Onishchukov, D N Christodoulides, U Peschel, Nature. 488167A. Regensburger, C. Bersch, M.-A. Miri, G. On- ishchukov, D. N. Christodoulides, and U. Peschel, Na- ture 488, 167 (2012).
. E.-M Graefe, H F Jones, 10.1103/PhysRevA.84.013818Phys. Rev. A. 8413818E.-M. Graefe and H. F. Jones, Phys. Rev. A 84, 013818 (2011).
. T Kottos, Nat. Phys. 6166T. Kottos, Nat. Phys. 6, 166 (2010).
. S V Suchkov, S V Dmitriev, B A Malomed, Y S Kivshar, 10.1103/PhysRevA.85.033825Phys. Rev. A. 8533825S. V. Suchkov, S. V. Dmitriev, B. A. Malomed, and Y. S. Kivshar, Phys. Rev. A 85, 033825 (2012).
. Z H Musslimani, K G Makris, R El-Ganainy, D N Christodoulides, 10.1103/PhysRevLett.100.030402Phys. Rev. Lett. 10030402Z. H. Musslimani, K. G. Makris, R. El- Ganainy, and D. N. Christodoulides, Phys. Rev. Lett. 100, 030402 (2008).
. M Wimmer, A Regensburger, M.-A Miri, C Bersch, D N Christodoulides, U Peschel, Nat. Commun. 6M. Wimmer, A. Regensburger, M.-A. Miri, C. Bersch, D. N. Christodoulides, and U. Peschel, Nat. Commun. 6 (2015).
. S Longhi, Phys. Rev. Lett. 103123601S. Longhi, Phys. Rev. Lett. 103, 123601 (2009).
. K G Makris, R El-Ganainy, D N Christodoulides, Z H Musslimani, 10.1103/PhysRevA.81.063807Phys. Rev. A. 8163807K. G. Makris, R. El-Ganainy, D. N. Christodoulides, and Z. H. Musslimani, Phys. Rev. A 81, 063807 (2010).
. B Zhen, C W Hsu, Y Igarashi, L Lu, I Kaminer, A Pick, S.-L Chua, J D Joannopoulos, M Soljacic, 10.1038/nature14889Nature. 525354B. Zhen, C. W. Hsu, Y. Igarashi, L. Lu, I. Kaminer, A. Pick, S.-L. Chua, J. D. Joannopoulos, and M. Soljacic, Nature 525, 354 (2015).
. Z Lin, H Ramezani, T Eichelkraut, T Kottos, H Cao, D N Christodoulides, 10.1103/PhysRevLett.106.213901Phys. Rev. Lett. 106213901Z. Lin, H. Ramezani, T. Eichelkraut, T. Kot- tos, H. Cao, and D. N. Christodoulides, Phys. Rev. Lett. 106, 213901 (2011).
. L Feng, Y.-L Xu, W S Fegadolli, M.-H Lu, J E Oliveira, V R Almeida, Y.-F Chen, A Scherer, Nat. Mater. 12108L. Feng, Y.-L. Xu, W. S. Fegadolli, M.-H. Lu, J. E. Oliveira, V. R. Almeida, Y.-F. Chen, and A. Scherer, Nat. Mater. 12, 108 (2013).
. K G Makris, L Ge, H E Türeci, 10.1103/PhysRevX.4.041044Phys. Rev. X. 441044K. G. Makris, L. Ge, and H. E. Türeci, Phys. Rev. X 4, 041044 (2014).
. M.-A Miri, P Likamwa, D N Christodoulides, Opt. Lett. 37764M.-A. Miri, P. LiKamWa, and D. N. Christodoulides, Opt. Lett. 37, 764 (2012).
. H Hodaei, M.-A Miri, M Heinrich, D N Christodoulides, M Khajavikhan, 10.1126/science.1258480Science. 346975H. Hodaei, M.-A. Miri, M. Heinrich, D. N. Christodoulides, and M. Khajavikhan, Science 346, 975 (2014).
. H Hodaei, M.-A Miri, A U Hassan, M Heinrich, D N Christodoulides, M Khajavikhan, Opt. Lett. to be publishedH. Hodaei, M.-A. Miri, A. U. Hassan, M. Heinrich, D. N. Christodoulides, and M. Khajavikhan, Opt. Lett. (to be published);
. H Hodaei, M.-A Miri, A U Hassan, W E Hayenga, M Heinrich, D N Christodoulides, M Khajavikhan, Optica. submittedH. Hodaei, M.-A. Miri, A. U. Hassan, W. E. Hayenga, M. Heinrich, D. N. Christodoulides and M. Khajavikhan, Optica (submitted).
. C M Bender, S Boettcher, 10.1103/PhysRevLett.80.5243Phys. Rev. Lett. 805243C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80, 5243 (1998).
. M , 10.1016/S0375-9601(01)00301-2Phys. Lett. A. 2857M. Znojil, Phys. Lett. A 285, 7 (2001).
. Z Ahmed, 10.1016/S0375-9601(01)00218-3Phys. Lett. A. 282343Z. Ahmed, Phys. Lett. A 282, 343 (2001).
T Kato, Perturbation theory for linear operators. BerlinSpringerT. Kato, Perturbation theory for linear operators (Springer, Berlin, 1966).
. M V Berry, Czech , J. Phys. 541039M. V. Berry, Czech. J. Phys. 54, 1039 (2004).
. W D Heiss, J. Phys. A. 45444016W. D. Heiss, J. Phys. A 45, 444016 (2012).
. R El-Ganainy, M Khajavikhan, L Ge, 10.1103/PhysRevA.90.013802Phys. Rev. A. 9013802R. El-Ganainy, M. Khajavikhan, and L. Ge, Phys. Rev. A 90, 013802 (2014).
. L Chang, X Jiang, S Hua, C Yang, J Wen, L Jiang, G Li, G Wang, M Xiao, Nat. Photon. 8524L. Chang, X. Jiang, S. Hua, C. Yang, J. Wen, L. Jiang, G. Li, G. Wang, and M. Xiao, Nat. Photon. 8, 524 (2014).
. B Peng, Ş K Özdemir, F Lei, F Monifi, M Gianfreda, G L Long, S Fan, F Nori, C M Bender, L Yang, Nat. Phys. 10394B. Peng, Ş. K.Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. L. Long, S. Fan, F. Nori, C. M. Bender, and L. Yang, Nat. Phys. 10, 394 (2014).
. Y D Chong, L Ge, A D Stone, 10.1103/PhysRevLett.106.093902Phys. Rev. Lett. 10693902Y. D. Chong, L. Ge, and A. D. Stone, Phys. Rev. Lett. 106, 093902 (2011).
. J Schindler, A Li, M C Zheng, F M Ellis, T Kottos, 10.1103/PhysRevA.84.040101Phys. Rev. A. 8440101J. Schindler, A. Li, M. C. Zheng, F. M. Ellis, and T. Kot- tos, Phys. Rev. A 84, 040101 (2011).
. R Fleury, D Sounas, A Alù, Nat. Commun. 6R. Fleury, D. Sounas, and A. Alù, Nat. Commun. 6 (2015).
. J M Lee, T Kottos, B Shapiro, 10.1103/PhysRevB.91.094416Phys. Rev. B. 9194416J. M. Lee, T. Kottos, and B. Shapiro, Phys. Rev. B 91, 094416 (2015).
. A Basiri, I Vitebskiy, T Kottos, 10.1103/PhysRevA.91.063843Phys. Rev. A. 9163843A. Basiri, I. Vitebskiy, and T. Kottos, Phys. Rev. A 91, 063843 (2015).
. L Feng, Z J Wong, R.-M Ma, Y Wang, X Zhang, Science. 346972L. Feng, Z. J. Wong, R.-M. Ma, Y. Wang, and X. Zhang, Science 346, 972 (2014).
. M Liertzer, L Ge, A Cerjan, A D Stone, H E Türeci, S Rotter, Phys. Rev. Lett. 108173901M. Liertzer, L. Ge, A. Cerjan, A. D. Stone, H. E. Türeci, and S. Rotter, Phys. Rev. Lett. 108, 173901 (2012).
. M Brandstetter, M Liertzer, C Deutsch, P Klang, J Schöberl, H Türeci, G Strasser, K Unterrainer, S Rotter, Nat. Commun. 5M. Brandstetter, M. Liertzer, C. Deutsch, P. Klang, J. Schöberl, H. Türeci, G. Strasser, K. Unterrainer, and S. Rotter, Nat. Commun. 5 (2014).
. Y Lumer, Y Plotnik, M C Rechtsman, M Segev, Phys. Rev. Lett. 111263901Y. Lumer, Y. Plotnik, M. C. Rechtsman, and M. Segev, Phys. Rev. Lett. 111, 263901 (2013).
. B E Little, S T Chu, H A Haus, J Foresi, J.-P Laine, J. Lightwave Technol. 15998B. E. Little, S. T. Chu, H. A. Haus, J. Foresi, and J.-P. Laine, J. Lightwave Technol. 15, 998 (1997).
G Agrawal, N Dutta, Long-wavelength Semiconductor Lasers. Van Nostrand ReinholdG. Agrawal and N. Dutta, Long-wavelength Semiconduc- tor Lasers (Van Nostrand Reinhold, 1986).
. C M Bender, D C Brody, H F Jones, Phys. Rev. Lett. 89270401C. M. Bender, D. C. Brody, and H. F. Jones, Phys. Rev. Lett. 89, 270401 (2002).
| []
|
[
"Spectrum of cosmological correlation from vacuum fluctuation of Stringy Axion in entangled de Sitter space",
"Spectrum of cosmological correlation from vacuum fluctuation of Stringy Axion in entangled de Sitter space"
]
| [
"Sayantan Choudhury [email protected] \nMax Planck Institute for Grav-itational Physics (Albert Einstein Institute)\nQuantum Gravity and Unified Theory and Theoretical Cosmology Group\nAm Mühlenberg 114476Potsdam-GolmGermany\n\nAlternative\n",
"Sudhakar Panda [email protected] \nNational Institute of Science Education and Research\n752050Jatni, BhubaneswarOdishaIndia\n\nHomi Bhabha National Institute\nTraining School Complex\nAnushakti Nagar, Mumbai-400085India\n"
]
| [
"Max Planck Institute for Grav-itational Physics (Albert Einstein Institute)\nQuantum Gravity and Unified Theory and Theoretical Cosmology Group\nAm Mühlenberg 114476Potsdam-GolmGermany",
"Alternative",
"National Institute of Science Education and Research\n752050Jatni, BhubaneswarOdishaIndia",
"Homi Bhabha National Institute\nTraining School Complex\nAnushakti Nagar, Mumbai-400085India"
]
| []
| In this work, we study the impact of quantum entanglement on the two-point correlation function and the associated primordial power spectrum of mean square vacuum fluctuation in a bipartite quantum field theoretic system. The field theory that we consider is the effective theory of axion field arising from Type IIB string theory compactified to four dimensions. We compute the expression for the power spectrum of vacuum fluctuation in three different approaches, namely (1) field operator expansion (FOE) technique with the quantum entangled state, (2) reduced density matrix (RDM) formalism with mixed quantum state and (3) the method of non-entangled state (NES). For massless axion field, in all these three formalism, we reproduce, at the leading order, the exact scale-invariant power spectrum which is well known in the literature. We observe that due to quantum entanglement, the sub-leading terms for these thee formalisms are different. Thus, such correction terms break the degeneracy among the analysis of the FOE, RDM and NES formalisms in the super-horizon limit. On the other hand, for massive axion field, we get a slight deviation from scale invariance and exactly quantify the spectral tilt of the power spectrum in small scales. Apart from that, for massless and massive axion field, we find distinguishable features of the power spectrum for the FOE, RDM, and NES on the large scales, which is the result of quantum entanglement. We also find that such large-scale effects are comparable to or greater than the curvature radius of the de Sitter space. Most importantly, in the near future, if experiments probe for early universe phenomena, one can detect such small quantum effects. In such a scenario, it is possible to test the implications of quantum entanglement in primordial cosmology.We express the D'Alembertian operator in this particular manifold and apply method of separation of variable to find out the total solution in terms of time, radial and angular coordinates Using Bunch Davies vacuum stateUsing generalised vacua stateBogoliubov transformationWe express the D'Alembertian operator in this particular manifold and apply method of separation of variable to find out the total solution in terms of time, radial and angular coordinatesBogoliubov transformationHere we express the solution in terms of the oscillator Here we express the solution in terms of the oscillator Bogoliubov transformation | 10.1140/epjc/s10052-019-7582-x | [
"https://arxiv.org/pdf/1809.02905v2.pdf"
]
| 119,368,167 | 1809.02905 | c6a5236883b45a21e5d855b72eeab77271626821 |
Spectrum of cosmological correlation from vacuum fluctuation of Stringy Axion in entangled de Sitter space
21 Sep 2018
Sayantan Choudhury [email protected]
Max Planck Institute for Grav-itational Physics (Albert Einstein Institute)
Quantum Gravity and Unified Theory and Theoretical Cosmology Group
Am Mühlenberg 114476Potsdam-GolmGermany
Alternative
Sudhakar Panda [email protected]
National Institute of Science Education and Research
752050Jatni, BhubaneswarOdishaIndia
Homi Bhabha National Institute
Training School Complex
Anushakti Nagar, Mumbai-400085India
Spectrum of cosmological correlation from vacuum fluctuation of Stringy Axion in entangled de Sitter space
21 Sep 2018. Keywords: De-Sitter space, Quantum Entanglement, Cosmology of Theories beyond the SM, Quantum correlation
In this work, we study the impact of quantum entanglement on the two-point correlation function and the associated primordial power spectrum of mean square vacuum fluctuation in a bipartite quantum field theoretic system. The field theory that we consider is the effective theory of axion field arising from Type IIB string theory compactified to four dimensions. We compute the expression for the power spectrum of vacuum fluctuation in three different approaches, namely (1) field operator expansion (FOE) technique with the quantum entangled state, (2) reduced density matrix (RDM) formalism with mixed quantum state and (3) the method of non-entangled state (NES). For massless axion field, in all these three formalism, we reproduce, at the leading order, the exact scale-invariant power spectrum which is well known in the literature. We observe that due to quantum entanglement, the sub-leading terms for these three formalisms are different. Thus, such correction terms break the degeneracy among the analysis of the FOE, RDM and NES formalisms in the super-horizon limit. On the other hand, for massive axion field, we get a slight deviation from scale invariance and exactly quantify the spectral tilt of the power spectrum in small scales. Apart from that, for massless and massive axion field, we find distinguishable features of the power spectrum for the FOE, RDM, and NES on the large scales, which is the result of quantum entanglement. We also find that such large-scale effects are comparable to or greater than the curvature radius of the de Sitter space. Most importantly, in the near future, if experiments probe for early universe phenomena, one can detect such small quantum effects. In such a scenario, it is possible to test the implications of quantum entanglement in primordial cosmology.
Contents
Introduction
The concept of quantum entanglement is one of the most interesting features that one can study in the context of quantum mechanics. Using such an idea one can study the instantaneous physical implication of local measurements [1][2][3]. There are several applications in the framework of quantum field theory in which quantum entanglement plays a significant role. For example, particle creation (EPR Bell pair [4]) through the bubble nucleation procedure has been explained using the idea of quantum entanglement, where the quantum system is strongly correlated [5][6][7][8]. Also, using the concept of quantum entanglement in QFT, one can successfully explain many phenomena such as entropy bounds, phase transitions, anomalies, confinement, thermalization, quantum critical quenches, localization in quantum gravity and the description of the interior of black holes. Apart from that, quantum entanglement has wide application in the context of quantum information theory, quantum cryptography and interferometry. The von-Neumann entropy and the Rényi entropy are the appropriate measures of quantum entanglement in the framework of condensed matter theory [9], in quantum information theory and in theoretical high energy physics. The idea of entanglement entropy in the context of quantum field theory is the best possible computational tool to quantify and study the nature of the long range effects of quantum correlation. However, the computation of entanglement entropy for a specific class of quantum field theories was not easy before the method proposed by Ryu and Takayanagi [10]. In this work, the authors computed the entanglement entropy for a strongly coupled field theory with a gravity dual using the techniques of holography, and the results are remarkable as they are in agreement with various expectations from the quantum field theory side [11].
Following this success, Maldacena and Pimentel in ref. [12] further proposed an explicit technique to compute the entanglement entropy in the framework of quantum field theory of de Sitter space with Bunch Davies quantum initial vacuum state. Here, the authors have studied the gravitational dual of the quantum field theory of de Sitter space using holographic techniques in detail. Further in ref. [13] the authors have extended this computation in the context of α vacua [14] in the same context. In ref. [15] and [16] the computation of quantum entanglement entropy and the formation of EPR Bell pair from stringy Axion were discussed with Bunch Davies and α vacua respectively.
Based on the physical set up used in our previous works [15] and [16], in this paper we have studied the cosmological implications of quantum entanglement by focussing on the long range effects of the two point correlation function computed from the mean square vacuum fluctuation of stringy Axion field with Bunch Davies and α quantum states as initial choice of vacua . We expect from this analysis that the signature and impact of quantum entanglement could be manifest in the correlation function even beyond the Hubble horizon scale. Our expectation is mainly due to the fact that de Sitter expansion of universe distinguish between a pair of Axions [17][18][19][20], known as EPR Bell pair which is created within causally connected Hubble region. For this purpose, we use three different techniques: 1. Field operator expansion (FOE) method with entangled state, 2. Reduced density matrix formalism (RDM) with mixed state and 3. Non-entangled state (NES) method. We implement the RDM formalism using the previous work done by Maldacena and Pimentel in ref. [12] in the context of de Sitter cosmology. In our computation we have explicitly included the effect of Stringy Axion in the small field regime and as a result we get perturbatively corrected contributions in the expression for the power spectrum derived using FOE, RDM and NES formalisms. Such correction terms can be interpreted as quantum effects which are appearing from the UV complete theory, such as a specific type of bipartite quantum field theory driven by axion. We note that the axion field which is being considered here, is actually originating from Type IIB string theory compactified on a Calabi-Yau three fold (CY 3 ), in presence of a NS5 brane sitting at the bottom of a long throat [21]. Most importantly, in the large wave number 1 limit (small scale or small wave length approximation [22]) we have shown the results for the power spectrum derived from these three formalism perfectly match with each other if we consider only the leading order contribution. However, the results are different for these three formalisms if we we include the contributions from next and next to next leading order. In a way one can say that such additional small perturbative correction terms play a pivotal role to distinguish between the FOE, RDM and NES formalisms. This is obviously an important information because using the present observational data on early universe cosmology [23,24] one can further constrain the present model and also test the appropriateness of these formalisms. Apart from this, for completeness, we have also analysed the behaviour of the power spectrum in the small wave number limit (large scale or large wave length approximation). We find that all these three formalisms yield distinctive results in terms of the momentum (quantum number) dependence of the power spectrum in order by order. But the lack of observational data on this particular regime does not allow us to test the appropriateness and correctness of the proposed methods. We hope that in near future when the observational data for this regime will be available, our results can further constrain the model and rule out two of the possibilities between the three formalisms discussed here. 
We would like to mention here that in our computation of the power spectrum of the mean square vacuum fluctuation we have not treated the pseudo scalar Axion field as a classical background field, which is the approach mostly used in the context of cosmological correlations from the early universe. Instead, we have taken the field operator of the Axion field itself as a quantum operator and computed its fluctuation with respect to a quantum mechanical vacuum state (Bunch Davies and α vacua). Thus, in this paper, we have followed: 1. A complete quantum approach to compute the primordial power spectrum of mean square vacuum fluctuation, which is not usually followed in the context of cosmology.
2. For the specific structure of the axion effective potential , we have computed the explicit form of the corrections which are due to quantum effects.
3. For our calculation, we have used three different approaches at the super horizon time scale, hoping that the quantum corrections, in the small and large wave number limits, when confronted with observations, can select the most effective approach and fix the nature of the quantum corrections. From the cosmological perspective we believe this is a very important step forward.
The plan of the paper is as follows: In section 2, we begin our discussion with the computation of the wave function of the Axion field in a de Sitter hyperbolic open chart. For this purpose we have discussed the details of the background de Sitter geometrical set up in subsection 2.1. Further, in subsections 2.2 and 2.3, we have solved the total wave function for the Axion for the Bunch Davies vacuum and the generalised α-vacua respectively. Using these solutions we have derived the cosmological power spectrum of mean square quantum vacuum fluctuation in section 3. In subsections 3.1.1 and 3.1.2 we have discussed the quantum vacuum fluctuation using the field operator expansion (FOE) formalism with entangled state for the Axion field. We have also derived the explicit form of the wave function in this formalism. This solution is used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation. In subsections 3.2.1 and 3.2.2 we have discussed the quantum vacuum fluctuation using the reduced density matrix (RDM) formalism with mixed state for the Axion field, and we have derived the explicit form of the reduced density matrix in the de Sitter hyperbolic open chart. Further, this result is used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation in the large and small wave number limits for both massless and massive Axion fields. In subsections 3.3.1 and 3.3.2 we have studied the quantum vacuum fluctuation using the non-entangled state (NES) formalism for the Axion field and have discussed the NES formalism in detail. This result has been used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation. Finally, section 4 has been devoted to the summary, conclusion and future prospects. In Figure (1), we have presented a schematic diagram for the computation algorithm of the long range effect of the cosmological correlation function from quantum entanglement of the axion in the de Sitter open hyperbolic chart.
Wave function of axion in open chart
We briefly review here, for the sake of completeness, the background geometry and the results for the wave function of the axion field. The details can be found in our earlier works [15, 16].
Background geometry
We consider a time preserving space-like hypersurface $S^2$ in the open hyperbolic chart of the de Sitter space. As a result $S^2$ is divided into two sub regions, interior and exterior, which are identified by RI ($\equiv$ L) and RII ($\equiv$ R) respectively. In terms of the Lorentzian signature an open chart in de Sitter space is described by three different subregions:
$$\text{R}(=\text{RII})/\text{L}(=\text{RI}): \quad \tau_E = \pm\frac{\pi}{2} \mp i t_{R/L} \;\; (t_{R} \geq 0 / t_{L} \geq 0), \qquad \rho_E = -i r_{R/L} \;\; (r_{R} \geq 0 / r_{L} \geq 0),$$
$$ds^2_{R/L} = \frac{1}{H^2}\left[-dt^2_{R/L} + \sinh^2 t_{R/L}\left(dr^2_{R/L} + \sinh^2 r_{R/L}\, d\Omega^2_2\right)\right] \qquad (2.1)$$
$$\text{C}: \quad \tau_E = t_C \;\; \left(-\frac{\pi}{2} \leq t_C \leq \frac{\pi}{2}\right), \qquad \rho_E = \frac{\pi}{2} - i r_C \;\; (-\infty < r_C < \infty),$$
$$ds^2_{C} = \frac{1}{H^2}\left[dt^2_{C} + \cos^2 t_{C}\left(-dr^2_{C} + \cosh^2 r_{C}\, d\Omega^2_2\right)\right] \qquad (2.2)$$
where $H = \dot{a}/a$ is the Hubble parameter and $d\Omega^2_2$ represents the angular part of the metric on $S^2$. Now let us assume that the total Hilbert space of the local quantum mechanical system is described by $\mathcal{H}$, which can be written using bipartite decomposition in a direct product space as $\mathcal{H} = \mathcal{H}_{\rm INT} \otimes \mathcal{H}_{\rm EXT}$. Here $\mathcal{H}_{\rm INT}$ and $\mathcal{H}_{\rm EXT}$ are the Hilbert spaces associated with the interior and exterior regions and describe the localised modes in RI and RII respectively.
In Figure (2) we have shown the schematic diagram for the geometrical construction and underlying symmetries of the bipartite quantum field theoretic system of de Sitter hyperbolic open chart. Corresponding Penrose diagrams are also drawn for completeness.
Wave function for Axion using Bunch Davies vacuum
Though our prime objective is to compute the cosmological correlation functions for the axion field in de Sitter space, we need the results for the wave function of the axion field in the just mentioned geometrical set up. Note that the axion field under consideration is coming from the RR sector of Type IIB string theory compactified on CY 3 in presence of an NS 5 brane [21,26]. The effective action for the axion field is given by [21]:
S_{axion} = \int d^4x\, \sqrt{-g}\left[-\frac{1}{2}(\partial\phi)^2 + \mu^3\phi + b f_a \cos\left(\frac{\phi}{f_a}\right)\right],  (2.3)
where µ³ is the mass scale, f_a is the axion decay constant and the parameter b is defined as, b = Λ⁴_G/µ³f_a. Here Λ_G depends on the string coupling g_s, the slope parameter α′ and the details of the SUSY breaking parameter. For φ << f_a, the effective potential for the axion can be expressed as:
V(\phi) \approx \mu^3\left(b f_a + \phi\right) - \frac{m^2_{axion}}{2}\,\phi^2,  (2.4)
where we introduce the effective mass of the axion as, m^2_{axion} = \mu^3 b/f_a = \Lambda^4_G/f^2_a.
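The quadratic form in Eqn (2.4) is just the small-field expansion cos(φ/f_a) ≈ 1 − (φ/f_a)²/2 of Eqn (2.3). A minimal numerical sketch of this approximation (the parameter values below are illustrative assumptions, not taken from the analysis; we work in units µ³ = f_a = 1):

```python
import numpy as np

b = 2.0                         # same illustrative value of b as used in Figure 3
x = np.linspace(-0.3, 0.3, 7)   # x = phi/f_a, small-field regime |phi| << f_a

V_full = x + b * np.cos(x)            # V/(mu^3 f_a) from the effective action, Eqn (2.3)
V_quad = x + b * (1.0 - 0.5 * x**2)   # quadratic approximation of Eqn (2.4)

for xi, vf, vq in zip(x, V_full, V_quad):
    print(f"phi/f_a = {xi:+.2f}   full = {vf:.5f}   quadratic = {vq:.5f}")
# The two columns agree up to O((phi/f_a)^4) corrections, which is the content of
# Eqn (2.4); the effective mass m_axion^2 = mu^3 b/f_a is read off from the quadratic term.
```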
Here the axion decay constant follows a (conformal) time dependent profile, which is explicitly mentioned in refs. [].
In Figure (3) we have explicitly presented the behaviour of the above axion potential with respect to the dimensionless field value φ/f a .
[Figure 3 plots V(φ)/µ³f_a against φ/f_a for the curves φ/f_a, b cos(φ/f_a), φ/f_a + b cos(φ/f_a) and φ/f_a + b[1 − (φ/f_a)²/2].]
Figure 3. Axion effective potential (for b = 2): behaviour of the axion effective potential obtained from Type IIB String Theory with respect to the dimensionless field value φ/f_a, where f_a is the axion decay constant.

Further, using Eqn (2.3) the field equation of motion for the axion can be written as:
\left[\frac{1}{a^3(t)}\,\partial_t\left(a^3(t)\,\partial_t\right) - \frac{1}{H^2 a^2(t)}\,\mathbf{L}^2_{\mathbf{H}^3} + m^2_{axion}\right]\phi = \mu^3,  (2.5)
where the scale factor a(t) in de Sitter open chart is given by, a(t) = sinh t/H. Here the Laplacian operator L 2 H 3 in H 3 can be written as:
\mathbf{L}^2_{\mathbf{H}^3} = \frac{1}{\sinh^2 r}\left[\partial_r\left(\sinh^2 r\,\partial_r\right) + \frac{1}{\sin\theta}\,\partial_\theta\left(\sin\theta\,\partial_\theta\right) + \frac{1}{\sin^2\theta}\,\partial^2_\phi\right],  (2.6)
which satisfy the following eigenvalue equation:
\mathbf{L}^2_{\mathbf{H}^3}\, Y_{plm}(r,\theta,\phi) = -(1+p^2)\, Y_{plm}(r,\theta,\phi).  (2.7)
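Before turning to the explicit form of these eigenfunctions, the eigenvalue equation (2.7) can be checked symbolically in the simplest case l = m = 0, where the radial part of Y_{p00} reduces (up to normalization) to the standard H³ radial mode sin(pr)/sinh r. A quick sketch of this check, assuming only that special case:

```python
import sympy as sp

r, p = sp.symbols('r p', positive=True)

# l = m = 0 radial eigenfunction on H^3 (up to normalization)
f = sp.sin(p * r) / sp.sinh(r)

# Radial part of the Laplacian L^2_{H^3} of Eqn (2.6) acting on an l = 0 mode
laplacian_f = sp.diff(sp.sinh(r)**2 * sp.diff(f, r), r) / sp.sinh(r)**2

# Eigenvalue equation (2.7): L^2 f = -(1 + p^2) f, so the residual below should vanish
residual = sp.simplify(laplacian_f + (1 + p**2) * f)
print(residual)   # expected output: 0
```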
Here Y plm (r, θ, φ) represents orthonormal eigenfunctions which can be written in terms of a radial and angular part as:
Y_{plm}(r,\theta,\phi) = \frac{\Gamma(ip+l+1)}{\Gamma(ip+1)}\,\frac{p}{\sqrt{\sinh r}}\, P^{-(l+\frac{1}{2})}_{(ip-\frac{1}{2})}(\cosh r)\, Y_{lm}(\theta,\phi),  (2.8)
where Y lm (θ, φ) is the spherical harmonics. Consequently, the total solution of the equations of motion can be written as:
\Phi(t,r,\theta,\phi) = \sum_{\sigma=\pm 1}\sum_{Q=p,l,m}\left[a_Q\, V_Q(t,r,\theta,\phi) + a^{\dagger}_Q\, V^{*}_Q(t,r,\theta,\phi)\right],  (2.9)
Here the total solution V Q (t, r, θ, φ) for Bunch Davies vacuum can be expressed as:
V_Q(t,r,\theta,\phi) = \frac{1}{a(t)}\,\chi_{p,\sigma}(t)\, Y_{plm}(r,\theta,\phi) = \frac{H}{\sinh t}\,\chi_{p,\sigma}(t)\, Y_{plm}(r,\theta,\phi),  (2.10)
where χ_{p,σ}(t) forms a complete set of positive frequency functions. This can be written as a sum of a complementary part (χ^{(c)}_{p,σ}(t)) and a particular integral part (χ^{(p)}_{p,σ}(t)), as given by:
\chi_{p,\sigma}(t) = \chi^{(c)}_{p,\sigma}(t) + \chi^{(p)}_{p,\sigma}(t).  (2.11)

Explicitly, the solution for the complementary part and the particular integral part can be expressed as:
\chi^{(c)}_{p,\sigma}(t) =
\begin{cases}
\dfrac{1}{2\sinh\pi p}\left[\dfrac{e^{\pi p} - i\sigma e^{-i\pi\nu}}{\Gamma\left(\nu + \frac{1}{2} + ip\right)}\, P^{ip}_{(\nu-\frac{1}{2})}(\cosh t_R) - \dfrac{e^{-\pi p} - i\sigma e^{-i\pi\nu}}{\Gamma\left(\nu + \frac{1}{2} - ip\right)}\, P^{-ip}_{(\nu-\frac{1}{2})}(\cosh t_R)\right] & \text{for } \mathbf{R}\\[10pt]
\dfrac{\sigma}{2\sinh\pi p}\left[\dfrac{e^{\pi p} - i\sigma e^{-i\pi\nu}}{\Gamma\left(\nu + \frac{1}{2} + ip\right)}\, P^{ip}_{(\nu-\frac{1}{2})}(\cosh t_L) - \dfrac{e^{-\pi p} - i\sigma e^{-i\pi\nu}}{\Gamma\left(\nu + \frac{1}{2} - ip\right)}\, P^{-ip}_{(\nu-\frac{1}{2})}(\cosh t_L)\right] & \text{for } \mathbf{L},
\end{cases}  (2.12)
\chi^{(p)}_{p,\sigma}(t) = \sinh^2 t \sum_{n=0}^{\infty}\frac{1}{(p^2-p^2_n)}\,\chi^{(c)}_{p_n,\sigma}(t)\int dt'\,\chi^{(c)}_{p_n,\sigma}(t')\,\mu^3.  (2.13)
where the parameter ν is defined as:
\nu = \sqrt{\frac{9}{4} - \frac{m^2_{axion}}{H^2}} = \sqrt{\frac{9}{4} - \frac{\mu^3 b}{f_a H^2}} = \sqrt{\frac{9}{4} - \frac{\Lambda^4_G}{f^2_a H^2}}.  (2.14)
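Since ν controls both the small-mass (ν² > 0) and the large-mass (ν² < 0, ν → −i|ν|) regimes used repeatedly below, a quick numerical illustration of Eqn (2.14) is useful. The sample values of m_axion/H are assumptions chosen only to display the two branches:

```python
import cmath

def nu(m_over_H):
    """Mass parameter of Eqn (2.14): nu = sqrt(9/4 - m_axion^2/H^2)."""
    return cmath.sqrt(9.0 / 4.0 - m_over_H**2)

for m_over_H in (0.0, 1.0, 1.5, 3.0):   # illustrative values only
    print("m/H =", m_over_H, "  nu =", nu(m_over_H))
# m/H = 0   -> nu = 3/2          (massless axion case used repeatedly below)
# m/H < 3/2 -> real nu           (small axion mass, nu^2 > 0)
# m/H > 3/2 -> imaginary nu      (large axion mass branch, written as nu -> -i|nu| in the text;
#                                 note cmath.sqrt returns the +i branch, so only |nu| is read off here)
```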
In Figure (4) we have given a schematic diagram for the computation algorithm of solving the wave function of our universe in de Sitter hyperbolic open chart for stringy axion.
Wave function for Axion using α vacua
Here we use two subspaces of the CPT invariant SO(1,4) isometric de Sitter space, which are identified as RI and RII respectively. Using the result obtained for the Bunch Davies vacuum and performing a Bogoliubov transformation, the mode functions for the α-vacua can be expressed as:

\Phi(r,t,\theta,\phi) = \int_{0}^{\infty} dp \sum_{\sigma=\pm 1}\sum_{l=0}^{\infty}\sum_{m=-l}^{+l}\left[d_{\sigma plm}\, F^{(\alpha)}_{\sigma plm}(r,t,\theta,\phi) + d^{\dagger}_{\sigma plm}\, \left(F^{(\alpha)}_{\sigma plm}\right)^{*}(r,t,\theta,\phi)\right],  (2.15)

where the α-vacua state is defined as:

d_{\sigma plm}|\alpha\rangle = 0 \quad \forall\, \sigma=(+1,-1);\ 0<p<\infty;\ l=0,\cdots,\infty;\ m=-l,\cdots,+l.  (2.16)

In this context, the α-vacua mode function F^{(\alpha)}_{\sigma plm} can be expressed in terms of the Bunch Davies mode function V_{\sigma plm}(r,t,\theta,\phi) using a Bogoliubov transformation as:

F^{(\alpha)}_{\sigma plm} = \cosh\alpha\; V_{\sigma plm}(r,t,\theta,\phi) + \sinh\alpha\; V^{*}_{\sigma plm}(r,t,\theta,\phi).  (2.17)
Here V_{σplm}(r,t,θ,φ) is the Bunch Davies mode function, which is defined as:
V_{\sigma plm}(r,t,\theta,\phi) = \frac{H}{\sinh t}\,\chi_{p,\sigma}(t)\, Y_{plm}(r,\theta,\phi).  (2.18)
After substituting Eq (2.17) and Eq (2.18) in Eq (2.15) we get the following expression for the wave function:

\Phi(r,t,\theta,\phi) = \frac{H}{\sinh t}\int_{0}^{\infty} dp \sum_{\sigma=\pm 1}\sum_{l=0}^{p-1}\sum_{m=-l}^{+l}\left[d_{\sigma plm}\cosh\alpha\;\chi_{p,\sigma}(t) + d^{\dagger}_{\sigma plm}\sinh\alpha\;\chi^{*}_{p,\sigma}(t)\right] Y_{plm}(r,\theta,\phi).  (2.19)

Finally, the solution of the time dependent part of the wave function can be recast as:

\chi_{p,\sigma}(t) = \sum_{q=R,L}\left[\underbrace{\frac{1}{N_p}\left(\alpha^{\sigma}_{q}\, P^{q} + \beta^{\sigma}_{q}\, P^{q*}\right)}_{\textbf{Complementary solution}} + \underbrace{\sum_{n=0}^{\infty}\frac{1}{N_{p_n}(p^2_n - p^2)}\left(\bar{\alpha}^{\sigma}_{q,n}\, \bar{P}^{q,n} + \bar{\beta}^{\sigma}_{q,n}\, \bar{P}^{q*,n}\right)}_{\textbf{Particular solution}}\right] \quad \forall\,\sigma=\pm 1,  (2.20)
where we use the following shorthand notation:
\bar{P}^{q,n} = \sinh^2 t\left[\int dt'\, \chi^{(c)}_{p_n,\sigma,q}(t')\, \mu^3\right] P^{q,n}.  (2.21)
Here we also use the shorthand notations P^q, P^{q,n} for the Legendre polynomials. The coefficient functions (α^σ_q, β^σ_q) and (ᾱ^σ_{q,n}, β̄^σ_{q,n}) and the normalization constants N_p, N_{p_n} for the complementary and particular parts of the solution are defined as:

N_p = \frac{4\sinh\pi p}{\sqrt{\pi}}\,\frac{\sqrt{\cosh\pi p - \sigma\sin\pi\nu}}{\left|\Gamma\left(\nu + ip + \frac{1}{2}\right)\right|},  (2.22)

N_{p,(n)} = \frac{4\sinh\pi p_n}{\sqrt{\pi}}\,\frac{\sqrt{\cosh\pi p_n - \sigma\sin\pi\nu}}{\left|\Gamma\left(\nu + ip_n + \frac{1}{2}\right)\right|}.  (2.23)

In this section, we present our computation of the spectrum of the Bunch Davies vacuum and α vacua fluctuations from the two point correlation function. We will discuss the computation of the two point correlation function and the associated cosmological spectra using three completely different formalisms:
Field operator expansion (FOE) method:
This method is useful for entangled quantum states with the wave function of the de Sitter universe for Bunch Davies and most generalised α vacua. Technically this formalism is based on the wave function χ I which we will explicitly derive . The cosmological spectrum is characterised by the two point correlation function and their associated power spectrum. Using such entangled state in this formalism one can construct the usual density matrix for Bunch Davies and most generalised α vacua.
Reduced density matrix (RDM) formalism:
This formalism is helpful for mixed quantum states and is useful for the construction of the reduced density matrix in a diagonalised representation of the Bunch Davies and α vacua by tracing over all possible degrees of freedom from the region R. Technically the formalism is based on the wave function ψ_I which we explicitly derive.
Non entangled state (NES) formalism:
This formalism applies in the presence of a non entangled quantum state and deals with the construction of the wave function in the region L, in terms of which the total universe is described. Here we also use the Bunch Davies and most generalised α vacua in the region L. Technically this formalism is based on the wave function φ_I which we explicitly derive in this paper.
We will now derive the expression for the mean square fluctuation considering both Bunch Davies vacuum and α vacua using the results presented in the previous section. For this computation we will follow the steps which are outlined below:
1. First of all, we trace out all contributions which belong to the R region. As a result the required field operator is only defined in the L region. We use this method in the FOE formalism, where the quantum states for the L and R regions are entangled with each other. On the other hand, performing a partial trace over region R one can construct the reduced density matrix, which leads to the RDM formalism. Instead, if we use the non entangled quantum state and compute the wave function solely in the L region we are led to the NES formalism. Note that all three methods are used to compute the mean square vacuum fluctuation, or more precisely the quantum mechanical two point correlation function for the axion and the associated power spectrum.
2. Instead of doing the computation in the |L⟩ basis we use a new basis obtained by applying a Bogoliubov transformation to |L⟩. Consequently the field operators will act on this transformed basis, and the FOE method is developed in it. On the other hand, as mentioned earlier, the transformed basis will appear in the expression for the reduced density matrix to be used in the RDM formalism. But in the NES formalism this transformation is not very useful, since in that case the total wave function is solely described by the quantum mechanical state appearing in the L region and the corresponding Hilbert space is spanned only by |L⟩, which forms a complete basis.
3. Further, we will compute the expressions for the mean square quantum vacuum fluctuation and the corresponding cosmological power spectrum after horizon exit using all the three formalisms i.e. FOE, RDM and NES. We will finally consider two limiting situations : long wave length and short wave length approximation for the computation of the power spectrum. Let us first compute the spectrum of vacuum fluctuation using field operator expansion (FOE). In figure (5) we have presented a schematic diagram for the computation algorithm of field operator expansion method for entangled state of axion in de Sitter hyperbolic open chart. To compute the vacuum fluctuation using FOE, we focus only with the left region L as it is completely symmetric to the right region R. We use the time dependent mode function for the left region L which we have presented in section 2. Thus instead of getting a (4 × 4) square matrix (when both sectors are considered) we have a (4 × 2) matrix which appears in the solution of the field equation as:
\chi^{I} = \frac{1}{N_p}\, M_{I}{}^{J}\, P_{J} + \sum_{n=0}^{\infty}\frac{1}{N_{p,(n)}}\, M^{(n)}_{I}{}^{J}\, P_{J(n)},  (3.1)
where the index J = 1, 2 is appearing for the contribution from region L. To write down the total solution in region L we define the following matrices:
M_{I}{}^{J} = \begin{pmatrix} \alpha^{\sigma}_{L} & \beta^{\sigma}_{L}\\ \beta^{\sigma *}_{L} & \alpha^{\sigma *}_{L}\end{pmatrix},\qquad
M^{(n)}_{I}{}^{J} = \begin{pmatrix} \bar{\alpha}^{\sigma}_{L,n} & \bar{\beta}^{\sigma}_{L,n}\\ \bar{\beta}^{\sigma *}_{L,n} & \bar{\alpha}^{\sigma *}_{L,n}\end{pmatrix},  (3.2)

\chi^{I} = \begin{pmatrix} \chi^{\sigma}(t)\\ \chi^{\sigma *}(t)\end{pmatrix},\qquad
P^{J} = \begin{pmatrix} P^{L}\\ P^{L*}\end{pmatrix},\qquad
P^{J}_{(n)} = \begin{pmatrix} P^{L,n}\\ P^{L*,n}\end{pmatrix},  (3.3)
where σ = ±1, I = 1, 2, 3, 4 and J = 1, 2. The Fourier mode of the field operator, which is also the total solution of the field equation for axion (in presence of source contribution) can be expressed as:
\hat{\Phi}(t_L) = \frac{H}{\sinh t_L}\, Q_I\, \chi^{I} = \frac{H}{\sinh t_L}\, Q_I\left[\frac{1}{N_p}\, M_{I}{}^{J}\, P_{J} + \sum_{n=0}^{\infty}\frac{1}{N_{p,(n)}}\, M^{(n)}_{I}{}^{J}\, P_{J(n)}\right],  (3.4)
where the operator Q_I represents a set of creation and annihilation operators which are defined (in section 2) for the Bunch Davies vacuum (α = 0) and the α vacua (α ≠ 0) as:
Q_I \equiv
\begin{cases}
a_I = (a_\sigma, a^{\dagger}_{\sigma}) = a^{(c)}_{I} + \displaystyle\sum_{n=0}^{\infty} a^{(p)}_{I(n)} & \text{for Bunch Davies vacuum,}\\[8pt]
d_I = (d_\sigma, d^{\dagger}_{\sigma}) = d^{(c)}_{I} + \displaystyle\sum_{n=0}^{\infty} d^{(p)}_{I(n)} & \text{for } \alpha \text{ vacua}.
\end{cases}  (3.5)
Here we have labeled the time coordinate t by t L since we are considering the left region L only.
To explicitly write down the expression for the amplitude of the normalized power spectrum, we start with the column matrix representation of the time dependent part of the solution of the wave function, given by:

\chi^{I} = \begin{pmatrix} \chi^{\sigma}(t)\\ \chi^{\sigma *}(t)\end{pmatrix}
= \begin{pmatrix} A^{\sigma}_{L}\, P^{L} + B^{\sigma}_{L}\, P^{L*}\\ B^{\sigma *}_{L}\, P^{L} + A^{\sigma *}_{L}\, P^{L*}\end{pmatrix}
+ \sum_{n=0}^{\infty}\begin{pmatrix} A^{\sigma}_{L,(n)}\, P^{L}_{(n)} + B^{\sigma}_{L,(n)}\, P^{L*}_{(n)}\\ B^{\sigma *}_{L,(n)}\, P^{L}_{(n)} + A^{\sigma *}_{L,(n)}\, P^{L*}_{(n)}\end{pmatrix},  (3.6)

where the entries of the column matrix for the complementary and particular integral part of the solution are given by the following expressions:

A^{\sigma}_{L} = \frac{\alpha^{\sigma}_{L}}{N_p} = \frac{\sigma\left(e^{\pi p} - i\sigma e^{-i\pi\nu}\right)}{N_p\,\Gamma\left(\nu + ip + \frac{1}{2}\right)},  (3.7)

B^{\sigma}_{L} = \frac{\beta^{\sigma}_{L}}{N_p} = -\frac{\sigma\left(e^{-\pi p} - i\sigma e^{-i\pi\nu}\right)}{N_p\,\Gamma\left(\nu - ip + \frac{1}{2}\right)},  (3.8)

A^{\sigma}_{L,(n)} = \frac{\alpha^{\sigma}_{L,(n)}}{N_{p,(n)}} = \frac{\sigma\left(e^{\pi p_n} - i\sigma e^{-i\pi\nu}\right)}{N_{p,(n)}\,\Gamma\left(\nu + ip_n + \frac{1}{2}\right)},  (3.9)

B^{\sigma}_{L,(n)} = \frac{\beta^{\sigma}_{L,(n)}}{N_{p,(n)}} = -\frac{\sigma\left(e^{-\pi p_n} - i\sigma e^{-i\pi\nu}\right)}{N_{p,(n)}\,\Gamma\left(\nu - ip_n + \frac{1}{2}\right)}.  (3.10)
N_p and N_{p,(n)} in the above equations are the normalization constants for the complementary part and the particular integral part of the solution, as defined in section 2.
Two point correlation function
To compute the expression for the two point correlation function for the vacuum fluctuation let us now concentrate on a single mode with fixed value of the SO(3, 1) quantum numbers p, l and m. As a result the mean square vacuum fluctuation of axion for any generalized arbitrary vacuum state (|Ω ) can be expressed as:
\langle\Omega|\, \hat{\Phi}_{plm}(t_L)\, \hat{\Phi}^{\dagger}_{p'l'm'}(t_L)\, |\Omega\rangle = \frac{H^2}{\sinh^2 t_L}\, \langle\Omega|\, Q_I\, \chi^{I}_{plm}\left(Q_I\, \chi^{I}_{p'l'm'}\right)^{\dagger} |\Omega\rangle.  (3.11)
Further explicitly writing the expression for the mean square vacuum fluctuation of axion for Bunch Davies vacuum we get the following simplified expressions:
\langle BD|\, \hat{\Phi}_{plm}(t_L)\, \hat{\Phi}^{\dagger}_{p'l'm'}(t_L)\, |BD\rangle = \frac{H^2}{\sinh^2 t_L}\, \langle BD|\, a_I\, \chi^{I}_{plm}\left(a_I\, \chi^{I}_{p'l'm'}\right)^{\dagger} |BD\rangle
= \frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2\, \delta(p-p')\,\delta_{ll'}\,\delta_{mm'}
\equiv P_{BD}(p, t_L)\, \delta(p-p')\,\delta_{ll'}\,\delta_{mm'},  (3.12)
where we define the amplitude of the normalized power spectrum of axion as:
\mathcal{P}_{BD}(p, t_L) = \frac{p^3}{2\pi^2}\, P_{BD}(p, t_L) = \frac{p^3}{2\pi^2}\,\frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2.  (3.13)
Further using Eq (3.6) we compute the following expression, which is appearing in the expression for the amplitude of the normalized power spectrum:
σ=±1 | χ σ | 2 = σ=±1 χ σ † χ σ = |A σ L | 2 + |B σ L | 2 P L P L * + A σ L B σ * L P L 2 + A σ * L B σ L P L * 2 + ∞ n=0 A σ L A σ * L,(n) + B σ L B σ * L,(n) P L P L * (n) + A σ L B σ * L,(n) + A σ L,(n) B σ * L P L P L (n) + A σ * L,(n) B σ L + A σ * L B σ L,(n) P L * (n) P L * + ∞ n=0 ∞ m=0 A σ L,(n) A σ * L,(m) + B σ L,(n) B σ * L,(m) P L (n) P L * (m) + A σ L,(n) B σ * L,(m) P L (n) P L (m) + A σ * L,(n) B σ L,(m) P L * (n) P L *(m)
. (3.14)
Using Eq (3.14), the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum can be expressed in all time scales of region L as:
P BD (p, t L ) = p 3 2π 2 H 2 sinh 2 t L σ=±1 | χ σ | 2 = p 3 2π 2 H 2 sinh 2 t L |A σ L | 2 + |B σ L | 2 P L P L * + A σ L B σ * L P L 2 + A σ * L B σ L P L * 2 + ∞ n=0 A σ L A σ * L,(n) + B σ L B σ * L,(n) P L P L * (n) + A σ L B σ * L,(n) + A σ L,(n) B σ * L P L P L (n) + A σ * L,(n) B σ L + A σ * L B σ L,(n) P L * (n) P L * + ∞ n=0 ∞ m=0 A σ L,(n) A σ * L,(m) + B σ L,(n) B σ * L,(m) P L (n) P L * (m) + A σ L,(n) B σ * L,(m) P L (n) P L (m) + A σ * L,(n) B σ L,(m) P L * (n) P L * (m)
. (3.15) However, it is not easy to extract any information from Eqn (3.15) for cosmological predictions. Hence, we consider the superhorizon time scales (t L >> 1) of region L. In such a case, the Legendre functions, appearing in the complementary part and the particular integral part of the time dependent solution, can be approximated as :
P^{L}, P^{L*} \equiv P^{\pm ip}_{\nu-\frac{1}{2}}(\cosh t_L) \xrightarrow{\ t_L \gg 1\ } \frac{2^{\nu-\frac{1}{2}}\,(\cosh t_L)^{\nu-\frac{1}{2}}\,\Gamma(\nu)}{\sqrt{\pi}\,\Gamma\left(\nu \mp ip + \frac{1}{2}\right)},  (3.16)

P^{L}_{(n)}, P^{L*}_{(n)} \equiv P^{\pm ip_n}_{\nu-\frac{1}{2}}(\cosh t_L) \xrightarrow{\ t_L \gg 1\ } \frac{2^{\nu-\frac{1}{2}}\,(\cosh t_L)^{\nu-\frac{1}{2}}\,\Gamma(\nu)}{\sqrt{\pi}\,\Gamma\left(\nu \mp ip_n + \frac{1}{2}\right)}.  (3.17)
Consequently, in the superhorizon time scales (t L >> 1) of region L eqn (3.14) can be further simplified as: (3.18) where the time independent function M(p, ν) is defined as:
σ=±1 | χ σ | 2 = σ=±1 χ σ † χ σ t L >> 1 − −−−− → M(p, ν) (cosh t L ) 2ν−1M(p, ν) = 2 2ν−1 (Γ(ν)) 2 π × σ=±1 |A σ L | 2 + |B σ L | 2 Γ ν + ip + 1 2 2 + A σ L B σ * L Γ ν − ip + 1 2 2 + A σ * L B σ L Γ ν + ip + 1 2 2 + ∞ n=0 A σ L A σ * L,(n) + B σ L B σ * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 + A σ L B σ * L,(n) + A σ L,(n) B σ * L Γ ν − ip + 1 2 Γ ν − ip n + 1 2 + A σ * L,(n) B σ L + A σ * L B σ L,(n) Γ ν + ip n + 1 2 Γ ν + ip + 1 2 + ∞ n=0 ∞ m=0 A σ L,(n) A σ * L,(m) + B σ L,(n) B σ * L,(m) Γ ν − ip n + 1 2 Γ ν + ip m + 1 2 + A σ L,(n) B σ * L,(m) Γ ν − ip n + 1 2 Γ ν − ip m + 1 2 + A σ * L,(n) B σ L,(m) Γ ν + ip n + 1 2 Γ ν + ip m + 1 2 . (3.19)
As a result, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum can be expressed as:
\mathcal{P}_{BD}(p, t_L) = \frac{p^3}{2\pi^2}\,\frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2 \xrightarrow{\ t_L \gg 1\ } \frac{p^3}{2\pi^2}\,(\cosh t_L)^{2\nu-3}\, H^2\, M(p,\nu).  (3.20)
Here, it is important to note that in the superhorizon time scales (t L >> 1) of region L if we consider the massless case where we fix the mass parameter to be ν = 3/2, then the time dependent contribution can be approximated as:
\frac{(\cosh t_L)^{2\nu-1}}{\sinh^2 t_L}\Bigg|_{\nu=3/2} \xrightarrow{\ t_L \gg 1\ } 1.  (3.21)
Consequently, in the superhorizon time scales of region L and for the massless axion case, the amplitude of the normalized power spectrum of axion from the Bunch Davies vacuum can be expressed as:

\mathcal{P}_{BD}(p, t_L) = \frac{p^3}{2\pi^2}\,\frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2 \xrightarrow{\ t_L \gg 1,\ \nu=3/2\ } \frac{p^3}{2\pi^2}\, H^2\, M(p,\nu=3/2).  (3.22)

This implies that in the massless case, the amplitude of the vacuum fluctuation gets frozen with respect to the time scale when the associated modes exit the horizon. Further, to infer the exact wave number dependence of the amplitude of the normalized power spectrum from the Bunch Davies vacuum we need to know the behaviour of the power spectrum at very short wavelengths (p, p_n >> 1). In this limit it is expected that the power spectrum should match the result obtained for the spatially flat universe. Note that in the short wave length approximation the time independent function M(p >> 1, ν) for any arbitrary mass parameter ν can be expressed as:

M(p \gg 1, \nu) = \frac{2^{2(\nu-1)}\left(\Gamma(\nu)\right)^2}{p^3\,\pi}\, G(p \gg 1),  (3.23)
where we have defined a new function G(p >> 1) in the short wave length limit as :
G(p) = 1 1 + 1 82944p 4 × 1 + e −2πp 2 + ∞ n=0 p p n 3 2 1 + 1 82944p 4 1 + 1 82944p 4 n 1 + 2 e −2πp + e −2πpn + e −2π(p+pn) + ∞ n=0 ∞ m=0 p 3 (p n p m ) 3/2 1 + 1 82944p 4 1 + 1 82944p 4 n 1 + 1 82944p 4 m 1 + e −π(pm+pn) 2 .
(3.24)
The above equation implies that for very large p, p_n >> 1 one can rewrite this as, G(p) ∼ 1 + · · · , where all the · · · terms can be considered as small corrections. Also, for the massless case (ν = 3/2) and in the short wave length approximation, the time independent function M(p, ν = 3/2) can be further simplified as:

M(p \gg 1, \nu = 3/2) = \frac{G(p \gg 1)}{2p^3}.  (3.25)

Finally, in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalized power spectrum of axion from the Bunch Davies vacuum in the short wave length limit can be expressed as:

\mathcal{P}_{BD}(p \gg 1, t_L \gg 1) = \frac{p^3}{2\pi^2}\,(\cosh t_L)^{2\nu-3}\, H^2\, M(p,\nu) = (2\cosh t_L)^{2\nu-3}\left(\frac{H}{2\pi}\right)^2\left(\frac{\Gamma(\nu)}{\Gamma\left(\frac{3}{2}\right)}\right)^2 G(p \gg 1).  (3.26)
Also for the massless case (ν = 3/2) in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum in the short wave length limit can be simplified as:
\mathcal{P}_{BD}(p \gg 1, t_L \gg 1) = \frac{p^3}{2\pi^2}\, H^2\, M(p \gg 1, \nu = 3/2) = \left(\frac{H}{2\pi}\right)^2 G(p \gg 1).  (3.27)
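A quick numerical cross-check of the two statements that lead to Eqn (3.27): the time-dependent factor of Eqn (3.20) freezes to unity for ν = 3/2 after horizon exit (Eqn (3.21)), and once G(p ≫ 1) → 1 the amplitude reduces to (H/2π)². The sketch below is purely illustrative; the value of H is an assumption and the p_n-dependent correction terms of G are dropped:

```python
import numpy as np

H = 1.0e-5          # assumed Hubble scale (Planck units), for illustration only
nu = 1.5            # massless axion case

# Time-dependent factor of Eqn (3.20); Eqn (3.21) says it -> 1 for nu = 3/2, t_L >> 1
for t_L in (2.0, 5.0, 10.0):
    factor = np.cosh(t_L)**(2 * nu - 1) / np.sinh(t_L)**2
    print(f"t_L = {t_L:>4}: (cosh t_L)^(2nu-1)/sinh^2 t_L = {factor:.6f}")

# With G(p >> 1) ~ 1 (the corrections in Eqn (3.24) are small), Eqn (3.27) gives
P_frozen = (H / (2 * np.pi))**2
print("frozen massless amplitude (H/2pi)^2 =", P_frozen)
```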
Now, we generalize the above results for the two point correlation function and the associated power spectrum for α vacua. For α vacua the mean square vacuum fluctuation of axion in the short wave length limit can be expressed as:
\langle\alpha|\, \hat{\Phi}_{plm}(t_L)\, \hat{\Phi}^{\dagger}_{p'l'm'}(t_L)\, |\alpha\rangle = \frac{H^2}{\sinh^2 t_L}\, \langle\alpha|\, d_I\, \chi^{I}_{plm}\left(d_I\, \chi^{I}_{p'l'm'}\right)^{\dagger} |\alpha\rangle
= \frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2\, \delta(p-p')\,\delta_{ll'}\,\delta_{mm'}
\equiv P(p \gg 1, \alpha, t_L)\, \delta(p-p')\,\delta_{ll'}\,\delta_{mm'},  (3.28)
where we have defined the amplitude of the normalized power spectrum of axion in the short wave length limit as:
\mathcal{P}(p \gg 1, \alpha, t_L) = \frac{p^3}{2\pi^2}\, P(p \gg 1, \alpha, t_L) = \mathcal{P}_{BD}(p \gg 1, t_L)\left(\cosh 2\alpha - \sinh 2\alpha\right) = \exp(-2\alpha)\, \mathcal{P}_{BD}(p \gg 1, t_L).  (3.29)
In the above equation, P BD (p, t L ) is defined as:
\mathcal{P}_{BD}(p \gg 1, t_L) = \frac{p^3}{2\pi^2}\,\frac{H^2}{\sinh^2 t_L}\sum_{\sigma=\pm 1}\left|\chi^{\sigma}\right|^2.  (3.30)
We carry out the same approximations as earlier and we note that in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion in the short wave length limit from α vacua can be expressed as:
\mathcal{P}(p \gg 1, \alpha, t_L \gg 1) = \mathcal{P}_{BD}(p \gg 1, t_L \gg 1)\left(\cosh 2\alpha - \sinh 2\alpha\right) = \exp(-2\alpha)\, \mathcal{P}_{BD}(p \gg 1, t_L \gg 1),  (3.31)
where the normalized power spectrum in the superhorizon scale for the Bunch Davies vacuum, P_BD(p >> 1, t_L >> 1), is defined in Equation (3.26). Here it is important to note that for α = 0 we reproduce the results obtained for the Bunch Davies vacuum. In figure (6(a)) and figure (6(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from the FOE formalism in the short wave length regime for α = 0 and α = 0.1 and for fixed values of the mass parameter ν(= 3/2, 2, 5/2, 3, 7/2) respectively. In both cases we have found almost similar behaviour. Additionally, in figure (6(c)) we have depicted the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α(= 0, 0.1, 0.2, 0.3, 0.4). It is clear from this figure that the power spectrum shows two distinct behaviours in the 1/2 < ν < 1 and ν > 1 regions. In the 1/2 < ν < 1 region, the amplitude of the normalized power spectrum decreases to a certain value, but just after ν = 1 it increases.
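The only difference between the α-vacua and the Bunch Davies results in Eqns (3.29) and (3.31) is the overall factor cosh 2α − sinh 2α = e^{−2α}. A one-line numerical illustration of this suppression, for the same values of α used in the figures:

```python
import math

# Overall rescaling of the FOE power spectrum for alpha vacua, Eqn (3.31):
# P_alpha = exp(-2*alpha) * P_BD, since cosh(2a) - sinh(2a) = exp(-2a).
for alpha in (0.0, 0.1, 0.2, 0.3, 0.4):
    factor = math.cosh(2 * alpha) - math.sinh(2 * alpha)
    print(f"alpha = {alpha}:  cosh2a - sinh2a = {factor:.6f},  exp(-2a) = {math.exp(-2 * alpha):.6f}")
```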
On the other hand, to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from Bunch Davies vacuum in the long wavelength limit we need to know the behaviour of the power spectrum at p, p n << 1. In this limit it is expected that the power spectrum of axion match with the result obtained for spatially flat universe. Here the time independent function M(p << 1, ν) for any arbitrary mass parameter ν can be expressed as:
M(p \ll 1, \nu) = \frac{2^{2(\nu-1)}\left(\Gamma(\nu)\right)^2}{\pi}\, G(p \ll 1),  (3.32)
where we have defined a new function G(p << 1) in the long wave length limit as :
G(p << 1) = π |Γ ν + 1 2 | 2 1 + |Γ ν + 1 2 | 2 Γ ν + 1 2 2 1 + 3e −πp ∞ n=0 e −πpn + 2 ∞ n=0 ∞ m=0 e −π(pn+pm)
. (3.33) This implies that for very small wave numbers p, p n << 1, one can write,
G(p << 1) ∼ π |Γ(ν+ 1 2 )| 2 [1 + · · · ],
where all the· · · terms are small correction terms.
Also for the massless case (ν = 3/2) and in the long wave length approximation, the time independent function M(p << 1, ν = 3/2) can further be simplified as:
M(p << 1, ν = 3/2) = G(p << 1) 2 . (3.34)
Finally, in the super horizon time scales (t_L >> 1) of region L, the amplitude of the normalized power spectrum of axion from the Bunch Davies vacuum, in the long wave length limit, can be expressed as:

\mathcal{P}_{BD}(p \ll 1, t_L \gg 1) = \frac{p^3}{2\pi^2}\,(\cosh t_L)^{2\nu-3}\, H^2\, M(p \ll 1, \nu) = (2\cosh t_L)^{2\nu-3}\left(\frac{H}{2\pi}\right)^2 p^3 \left(\frac{\Gamma(\nu)}{\Gamma\left(\frac{3}{2}\right)}\right)^2 G(p \ll 1),  (3.35)

and for the massless case (ν = 3/2) this simplifies to:

\mathcal{P}_{BD}(p \ll 1, t_L \gg 1) = \frac{p^3}{2\pi^2}\, H^2\, M(p \ll 1, \nu = 3/2) = \left(\frac{H}{2\pi}\right)^2 p^3\, G(p \ll 1).  (3.36)
Here it is important to note that both of Eq (3.35) and Eq (3.36) are valid after horizon exit. Next, we generalize the result for the two point correlation function and the associated power spectrum for α vacua. For α vacua the mean square vacuum fluctuation of axion in the long wave length limit can be expressed as:
α| Φ plm (t L ) Φ p l m (t L ) † |α = H 2 sinh 2 t L α| d I χ I plm d I χ I p l m † |α = H 2 sinh 2 t L σ=±1 | χ σ | 2 δ(p − p ) δ ll δ mm ≡ P (p << 1, α, t L ) δ(p − p ) δ ll δ mm ,(3.37)
where the amplitude of the normalized power spectrum of axion at long wave length limit is defined as: with P BD (p << 1, t L ) as defined earlier.
P(p << 1, α, t L ) = p 3 2π 2 P (p << 1, α, t L ) = P BD (p, t L ) (cosh 2α − sinh 2α) = exp(−2α) P BD (p << 1, t L ),(3.
In the super horizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion in the long wave length approximation from α vacua can be expressed as:
P(p << 1, α, t L >> 1) = P BD (p << 1, t L >> 1) (cosh 2α − sinh 2α) = exp(−2α) P BD (p << 1, t L >> 1),(3.39)
where P BD (p << 1, t L >> 1) is defined in Eq (3.35). It may be noted that, for α = 0 we get back the results obtained for Bunch Davies vacuum.
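Comparing Eqn (3.27) with Eqn (3.36) for the massless Bunch Davies case, the superhorizon amplitude is flat in p at short wavelengths but suppressed as p³ at long wavelengths. A rough numerical sketch of this behaviour (an illustration only: the Hubble scale is an assumed value and the slowly varying G factors are dropped, keeping only the leading p-dependence):

```python
import numpy as np

H = 1.0e-5                      # assumed Hubble scale, illustration only
P_flat = (H / (2 * np.pi))**2   # Eqn (3.27) with G(p >> 1) -> 1

# Eqn (3.36): P_BD(p << 1) = (H/2pi)^2 p^3 G(p << 1); keep only the leading p^3 scaling.
for p in (0.01, 0.1, 1.0):
    P_long = P_flat * p**3
    print(f"p = {p:>5}:  P_long/P_flat ~ p^3 = {P_long / P_flat:.6f}")
# The p^3 suppression at small p is the main qualitative difference from the
# scale-invariant short-wavelength result.
```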
In figure (7(a)), figure (7(b)) and figure (7(c)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from the FOE formalism in the small wave number regime. The values of α and of the mass parameter ν used here are the same as those taken for the large wave number regime. As expected, the behaviour in the two limiting cases is distinct. However, the characteristics observed for the α and ν dependences are almost similar in both cases.
3.2 Quantum vacuum fluctuation using reduced density matrix (RDM) formalism (with mixed state)
In this section, we study the features of the two point correlation function of the quantum vacuum fluctuations and the associated primordial power spectrum using the reduced density matrix formalism. In figure (8) we have presented a schematic diagram for the computation algorithm of the reduced density matrix formalism for the mixed state of axion in the de Sitter hyperbolic open chart.
Reduced density matrix (RDM) formalism
We first write down the Fourier mode of the field operator, which is also the total solution of the field equation for axion in presence of source contribution. We start directly from the solution obtained in Eqn (2.20) and rewrite it in terms of the following matrix equation:
χ I = 1 N p M I J P J + ∞ n=0 1 N p,(n) M (n) I J P J (n) (3.40)
where for the complementary part of the solution we have defined the following matrices:
M I J = α σ q β σ q β σ * q α σ * q , χ I = χ σ (t) χ * σ (t), , P J = P q P q * , . (3.41)
Similarly for the particular solution, we define the following matrices:
M (n) I J = ᾱ σ q,nβ σ q,n β σ * q,nᾱ σ * q,n , P J (n) = P q,n P q * ,n ,(3.42)
where σ = ±1, q = R, L and I, J = 1, 2, 3, 4. The redefined normalization constant for the particular part of the solution N p,(n) can be expressed as, N p,(n) = 2 sinh πp n N pnσ p 2 − p 2 n . Further using Eqn (3.40) the Bunch-Davies mode function can be written as:
H sinh t a I χ I = H sinh t a I 1 N p M I J P J + ∞ n=0 1 N p,(n) M (n) I J P J (n) ,(3.43)
where a I = (a σ , a † σ ) represents a set of creation and annihilation operators. We also define the following operators:
b J = a (c) I M I J , b J(n) = a (p) I(n) M (n) I J , (3.44)
where a (c)
I = (a (c) σ , a (c) † σ ) and a (p) I(n) = (a (p)
σ,n , a (p) † σ,n ) are the set of creation and annihilation operators which act on the complementary and particular part respectively. Thus, the operator contribution for the total solution is:
a I = a (c) I + ∞ n=0 a (p) I(n) ,(3.45)
where by inverting Eqn (3.44) we have expressed:
a (c) I = b J M −1 I J , a(p)I(n) = b J(n) M −1 (n) I J . (3.46)
The inverse matrices are defined as:
M −1 I J = γ σq δ σq δ * σq γ * σq , M −1 (n) I J = γ σq,nδσq,n δ * σq,nγ * σq,n ,(3.47)
where σ = ±1, q = R, L and I, J = 1, 2, 3, 4. For further computation, α-vacua are defined in terms of Bunch Davies vacuum state as:
|α = exp 1 2 tanh α σ=±1 a † σ a σ |BD . (3.48)
It is to be noted that for α = 0 we get, |α = 0 = |0 = |BD . Moreover, we can also write the R and L vacua as:
|R = |R (c) + ∞ n=0 |R (p),n , |L = |L (c) + ∞ n=0 |L (p),n ,(3.49)
with subscripts (c) and (p) representing the complementary and particular part respectively. Further assuming the bipartite Hilbert space (H α := H R ⊗ H L ) one can also write the α-vacua in terms of the R and L vacuum as:
|α = exp 1 2 tanh α σ=±1 a † σ a σ exp 1 2 i,j=R,L m ij b † i b † j + 1 2 i,j=R,L ∞ n=0m ij,nb † i,nb † j,n (|R ⊗ |L ) Bunch−Davies contribution ,(3.
50) where the matrices m ij andm ij,n are defined for the complementary and particular part of the solution obtained for Bunch Davies vacuum state. In other words by setting α = 0 we get the following expression for the Bunch Davies quantum state:
|BD = exp 1 2 i,j=R,L m ij b † i b † j + 1 2 i,j=R,L ∞ n=0m ij,nb † i,nb † j,n (|R ⊗ |L ). (3.51)
Also the creation and annihilation operators for the R and L vacuum are defined in terms of new b type of oscillators using Bogoliubov transformation as:
a σ = q=R,L γ qσ b q + δ * qσ b † q + ∞ n=0 γ qσ,nbq,n +δ * qσ,nb † q,n ∀σ = ±1, (3.52) a † σ = q=R,L γ * qσ b † q + δ qσ b q + ∞ n=0 γ * qσ,nb † q,n +δ qσ,nbq,n ∀σ = ±1. (3.53)
Here γ qσ , δ qσ ,γ qσ,n andδ qσ,n are the coefficient matrices. For our further computation we use the definition of α-vacuum state (and Bunch Davies vacuum state), which is very useful to compute long range cosmological correlation functions in de Sitter space. In the context of α-vacua the creation and annihilation operators are defined in terms of the constituents of R or L vacuum state as:
d σ = q=R,L (cosh α γ qσ − sinh α δ qσ ) b q + cosh α δ * qσ − sinh α γ * qσ b † q + cosh α ∞ n=0γ qσ,nbq,n − sinh α ∞ n=0δ qσ,nbq,n + cosh α ∞ n=0δ * qσ,nb † q,n − sinh α ∞ n=0γ * qσ,nb † q,n ∀σ = ±1, (3.54) d † σ = q=R,L cosh α γ * qσ − sinh α δ * qσ b † q + (cosh α δ qσ − sinh α γ qσ ) b q + cosh α ∞ n=0γ * qσ,nb † q,n − sinh α ∞ n=0δ * qσ,nb † q,n + cosh α ∞ n=0δ qσ,nbq,n − sinh α ∞ n=0γ qσ,nbq,n ∀σ = ±1, (3.55)
where we use the definition of creation and annihilation operators in Bunch Davies vacuum as mentioned in Eq (3.53) and Eq (3.52). In this computation it is important to note that, under Bogoliubov transformation the original matrix γ qσ , δ qσ ,γ qσ,n andδ qσ,n used for Bunch Davies vacuum transform ( for α-vacua) as:
γ qσ −→ (cosh α γ qσ − sinh α δ qσ ) , δ qσ −→ (cosh α δ qσ − sinh α γ qσ ) , (3.56) γ qσ,n −→ cosh αγ qσ,n − sinh αδ qσ,n ,δ qσ,n −→ cosh αδ qσ,n − sinh αγ qσ,n .
Thus, after the Bogoliubov transformation, α-vacua state can be written in terms of R and L vacua as:
|α = exp 1 2 i,j=R,Lm ij b † i b † j + 1 2 i,j=R,L ∞ n=0m ij,nb † i,nb † j,n (|R ⊗ |L ),(3.57)
Herem ij andm ij,n represent the entries of the matrices corresponding to the complementary and particular solution respectively and we will compute them by demanding d σ |α = 0, and keeping only linear terms of creation operators. This directly yields the following:
[m ij (cosh α γ jσ − sinh α δ jσ ) + (cosh α δ * iσ − sinh α γ * iσ )] = 0,(3.58)
cosh αm ij,nγjσ,n − sinh αm ij,nδjσ,n + cosh αδ * iσ,n − sinh αγ * iσ,n = 0∀ n. (3.59) From these two equations, the matrices corresponding to the complementary and particular part of the solution can be expressed as:
m ij = − (cosh α δ * iσ − sinh α γ * iσ ) (cosh α γ − sinh α δ) −1 σj = m RRmRL m LRmLL , (3.60) m ij,n = − cosh αδ * iσ,n − sinh αγ * iσ,n cosh αγ − sinh αδ −1 σj,n = m RR,nmRL,n m LR,nmLL,n . (3.61)
Substituting the expressions for γ, δ, γ n and δ n we finally obtain the entries of the mass matrices for i, j = R, L as:m
ij = e iθ √ 2 e −pπ T (ν) ij √ cosh 2πp + cos 2πν cosh 2 α + sinh 2 α e −2π(p+iν) (3.62) m ij,n = e iθ √ 2 e −pnπ T (ν,n) ij √ cosh 2πp n + cos 2πν cosh 2 α + sinh 2 α e −2π(pn+iν) (3.63)
where we defined the T matrices as:
T (ν) ij = T (ν) RR T (ν) RL T (ν) LR T (ν) LL , T (ν,n) ij = T (ν,n) RR T (ν,n) RL T (ν,n) LR T (ν,n) LL .
( 3.64) and the corresponding entries of the T matrices are given by:
T (ν) RR = T (ν) LL = cosh 2 α + sinh 2 α e −2iπν − sinh 2α sinh 2 πp e −iπν sec πν cos πν, (3.65) T (ν) RL = T (ν) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν sinh πp, (3.66) T (ν,n) RR = T (ν,n) LL = cosh 2 α + sinh 2 α e −2iπν − sinh 2α sinh 2 πp n e −iπν sec πν cos πν, (3.67) T (ν,n) RL = T (ν,n) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν sinh πp n . (3.68)
For the massless (ν = 3/2) axion case, we obtain the following simplified expressions:
m_{ij} = \frac{e^{i\theta}}{\sqrt{2}}\,\frac{e^{-p\pi}\; T^{(3/2)}_{ij}}{\sqrt{\cosh 2\pi p - 1}\,\left(\cosh^2\alpha - \sinh^2\alpha\, e^{-2\pi p}\right)},  (3.69)

m_{ij,n} = \frac{e^{i\theta}}{\sqrt{2}}\,\frac{e^{-p_n\pi}\; T^{(3/2,n)}_{ij}}{\sqrt{\cosh 2\pi p_n - 1}\,\left(\cosh^2\alpha - \sinh^2\alpha\, e^{-2\pi p_n}\right)},  (3.70)
where we have defined the T (3/2) matrices as:
T (3/2) ij = T (3/2) RR T (3/2) RL T (3/2) LR T (3/2) LL , T (3/2,n) ij = T (3/2,n) RR T (3/2,n) RL T (3/2,n) LR T (3/2,n) LL .
(3.71) and the corresponding entries of the T (3/2) matrices are given by:
T^{(3/2)}_{RR} = T^{(3/2)}_{LL} = 0,  (3.72)
T^{(3/2)}_{RL} = T^{(3/2)}_{LR} = i\sinh\pi p,  (3.73)
T^{(3/2,n)}_{RR} = T^{(3/2,n)}_{LL} = 0,  (3.74)
T^{(3/2,n)}_{RL} = T^{(3/2,n)}_{LR} = i\sinh\pi p_n.  (3.75)
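For the massless case the only non-vanishing entry is the off-diagonal one, m_RL = m_LR, and the coefficient coupling the R and L modes becomes exponentially small at large p. A small numerical sketch, assuming the grouping of factors used in the reconstruction of Eqn (3.69), with α = 0 (Bunch Davies) and the overall phase e^{iθ} dropped:

```python
import numpy as np

def m_RL_massless(p, alpha=0.0):
    """|m_RL| for nu = 3/2 from Eqn (3.69) with T_RL = i sinh(pi p); the phase e^{i theta}
    is omitted and the numerator/denominator grouping is an assumption of the reconstruction."""
    T_RL = np.sinh(np.pi * p)
    denom = np.sqrt(np.cosh(2 * np.pi * p) - 1.0) * (
        np.cosh(alpha)**2 - np.sinh(alpha)**2 * np.exp(-2 * np.pi * p))
    return (1.0 / np.sqrt(2.0)) * np.exp(-np.pi * p) * T_RL / denom

for p in (0.5, 1.0, 2.0, 3.0):
    print(f"p = {p}:  |m_RL| = {m_RL_massless(p):.3e},  e^(-pi p)/2 = {0.5 * np.exp(-np.pi * p):.3e}")
# Since cosh(2 pi p) - 1 = 2 sinh^2(pi p), the exact massless value at alpha = 0 is simply
# |m_RL| = e^{-pi p}/2: the R-L coupling decays exponentially with the wave number p.
```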
In the above analysis, we have considered small axion mass (ν 2 > 0) limiting situations with an arbitrary parameter α, which corresponds to Bunch Davies vacuum state with the choice α = 0. For completeness, we also consider the large axion mass (ν 2 < 0 where ν → −i|ν|) limiting situation which is very important to study the imprints of quantum entanglement in cosmological correlation functions. In this large axion mass limiting situation, we actually consider a specific window of SO(1, 3) principal quantum number, which is bounded within the range 0 < p < |ν|.
Consequently, the entries of the coefficient matrixm can be approximated as:
m RR = − cosh(|ν| − p) cosh(|ν| + p) 2 cosh 2α cosh 2 π|ν| − sinh 2α sinh 2 πp + 1 2 sinh 2π|ν| (e 2πp + e 2π|ν| ) cosh 2 α + (e 2πp + e 2π|ν| ) sinh 2 α , (3.76) m RL = − cosh(|ν| − p) cosh(|ν| + p)
which for α = 0 yield a simplified expression for them with the Bunch Davies vacuum state. We note that for a general value of α and for large axion mass (ν² < 0, where ν → −i|ν|), we always get a real value for m_RR and an imaginary value for m_RL. This is an important observation for our further analysis.
From the perspective of cosmological observation in the superhorizon time scale, we again consider two further limiting situations: (a) large wave number (p >> 1) or small wave length limit and (b)small wave number (p << 1) or large wave length limit.
Using these two limiting situations we can simplify the expression for the entries of the coefficient matrixm considering both small and large axion mass. We start with the expressions for small axion mass limit in large wave number (p >> 1) approximation:
m ij ≈ 2 e iθ e −2pπ T (ν) ij sech 2 α (3.78) m ij,n ≈ 2 e iθ e −2pnπ T (ν,n) ij sech 2 α (3.79)
where we have defined the T matrices for p >> 1 limit as:
T (ν) ij = T (ν) RR T (ν) RL T (ν) LR T (ν) LL , T (ν,n) ij = T (ν,n) RR T (ν,n) RL T (ν,n) LR T (ν,n) LL .
( 3.80) and the corresponding entries of the T matrices for p >> 1 limit are given by the following simplified expressions:
T (ν) RR = T (ν) LL = cosh 2 α + sinh 2 α e −2iπν − 1 4 sinh 2α e 2pπ e −iπν sec πν cos πν, (3.81) T (ν) RL = T (ν) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν 1 2 e πp , (3.82) T (ν,n) RR = T (ν,n) LL = cosh 2 α + sinh 2 α e −2iπν − 1 4 sinh 2α e 2pnπ e −iπν sec πν cos πν, (3.83) T (ν,n) RL = T (ν,n) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν 1 2 e πpn . (3.84)
For massless (ν = 3/2) axion, we get the following simplified expressions:
m ij ≈ 2 e iθ e −2pπ T (3/2) ij sech 2 α (3.85) m ij,n ≈ 2 e iθ e −2pnπ T (3/2,n) ij sech 2 α (3.86)
where the T (3/2) matrices (for p >> 1) are given by: 3.87) and the corresponding entries of the T (3/2) matrices are given by :
T (3/2) ij = T (3/2) RR T (3/2) RL T (3/2) LR T (3/2) LL , T (3/2,n) ij = T (3/2,n) RR T (3/2,n) RL T (3/2,n) LR T (3/2,n) LL .(T (3/2) RR = T (3/2) LL = 0, (3.88) T (3/2) RL = T (3/2) LR = i 2 e πp , (3.89) T (3/2,n) RR = T (3/2,n) LL = 0, (3.90) T (3/2,n) RL = T (3/2,n) LR = i 2 e πpn . (3.91)
On the other hand, for small axion mass and for small wave number (p << 1) we have:
m ij ≈ e iθ √ 2 e −pπT (ν) ij √ cos 2πν cosh 2 α + sinh 2 α e −2πiν (3.92) m ij,n ≈ e iθ √ 2 e −pnπT (ν,n) ij √ cos 2πν cosh 2 α + sinh 2 α e −2πiν (3.93)
where theT matrices are defined as: 3.94) and the corresponding entries of theT matrices (for p << 1 ) are given by :
T (ν) ij = T (ν) RRT (ν) RL T (ν) LRT (ν) LL ,T (ν,n) ij = T (ν,n) RRT (ν,n) RL T (ν,n) LRT (ν,n) LL (T (ν) RR =T (ν) LL = cosh 2 α + sinh 2 α e −2iπν − sinh 2α π 2 p 2 e −iπν sec πν cos πν, (3.95) T (ν) RL =T (ν) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν πp, (3.96) T (ν,n) RR =T (ν,n) LL = cosh 2 α + sinh 2 α e −2iπν − sinh 2α π 2 p 2 n e −iπν sec πν cos πν, (3.97) T (ν,n) RL =T (ν,n) LR = i cosh 2 α + sinh 2 α e −2iπν + sinh 2α cos πν e −iπν πp n . (3.98)
For the case of massless (ν = 3/2) axion, we get the following simplified expressions:
m ij ≈ e iθ √ 2 e −pπT (3/2) ij (3.99) m ij,n ≈ e iθ √ 2 e −pnπT (3/2,n) ij (3.100)
with theT matrices defined as:
T (3/2) ij = T (3/2) RRT (3/2) RL T (3/2) LRT (3/2) LL ,T (3/2,n) ij = T (3/2,n) RRT (3/2,n) RL T (3/2,n) LRT (3/2,n) LL (3.101)
and the corresponding entries of theT (3/2) matrices (for p << 1 ) are given by:
T (3/2) RR =T (3/2) LL = 0, (3.102) T (3/2) RL =T (3/2) LR = iπp, (3.103) T (3/2,n) RR =T (3/2,n) LL = 0, (3.104) T (3/2,n) RL =T (3/2,n) LR = iπp n .(3.105)
For further analysis, it is convenient to change over to a suitable basis by tracing over all possible contributions from R and L region. To achieve this we perform another Bogoliubov transformation by introducing new sets of operators :
c R =ũ b R +ṽ b † R ,c L =ū b L +v b † L ,C R,n =Ũ n b R,n +Ṽ n b † R,n ,C L,n =Ū n b L,n +V n b † L,n , (3.106)
satisfying the following conditions:
|ũ| 2 − |ṽ| 2 = 1, |ū| 2 − |v| 2 = 1, |Ũ n | 2 − |Ṽ n | 2 = 1, |Ū n | 2 − |V n | 2 = 1. (3.107)
Using these operators we write the α-vacuum state in terms of new basis represented by the direct product of R and L vacuum state as:
|α = 1 − |γ (α) p | 2 + ∞ n=0 |Γ (α) p,n | 2 1/2 exp γ (α) pc † Rc † L + ∞ n=0 Γ (α) p,nC † R,nC † L,n |R ⊗ |L (α) , (3.108) where γ (α) p and Γ (α)
p,n are to be determined shortly. We note that the the relationship between the new and the old basis is given by:
(|R ⊗ |L ) → |R ⊗ |L (α) = 1 − |γ (α) p | 2 + ∞ n=0 |Γ (α) p,n | 2 −1/2 exp −γ (α) pc † Rc † L − ∞ n=0 Γ (α) p,nC † R,nC † L,n exp 1 2 i,j=R,L m ij b † i b † j + 1 2 i,j=R,L ∞ n=0m ij,nb † i,nb † j,n (|R ⊗ |L ) . (3.109)
The commutation relations between the creation and annihilation operators corresponding to the new sets of oscillators is taken as:
c i ,c † j = δ ij , [c i ,c j ] = 0 = c † i ,c † j , C i,n ,C † j,m = δ ij δ nm , C i,n ,C j,m = 0 = C † i,mC † j,m .(3.110)
These operations act on the α vacuum state in the following way:
c R |α = γ (α) pc † L |α ,c R |α = γ (α) pc † L |α ,C R,n |α = Γ (α) p,nC † L,n |α ,C R,n |α = Γ (α) p,nC † L,n |α .(3.111)
Further, one can express the new c type annihilation operators in terms of the old b type annihilation operators as:
c J = b IG I J = b I Ũ qṼ * q V qŨ * q ,C J(n) =b J(n) G (n) I J =b J(n) Ū q,nV * σq,ñ V q,nŪ * q,n .
(3.112)
Note thatŨ q ≡ diag (ũ,ū),Ṽ q ≡ diag (ṽ,v) ,Ū q,n ≡ diag Ũ n ,Ū n ,V q,n ≡ diag Ṽ n ,V n . From Equations (3.106) and (3.111), we obtain the following sets of homogeneous equations:
For complementary solution :
m RRũ +ṽ − γ (α) pmRLv * = 0, (3.113) m RRū +v − γ (α) pmRLṽ * = 0, (3.114) m RLũ − γ (α) pū * − γ (α) pmRRv * = 0, (3.115) m RLū − γ (α) pũ * − γ (α) pmRRṽ * = 0,(3.
116)
For particular solution :
m RR,nŨn +Ṽ n − Γ (α) p,nmRL,nV * n = 0,m RR,nŪn +V n − Γ (α) p,nmRL,nṼ * n = 0, (3.117) m RL,nŨn − Γ (α) p,nŪ * n − Γ (α) p,nmRR,nV * n = 0,m RL,nŪn − Γ (α) p,nŨ * n − Γ (α)
p,nmRR,nṼ * n = 0, (3.118) Using the relationsṽ * =v,ũ * =ū,Ṽ * n =V n ,Ũ * n =Ū n , |ũ| 2 − |ṽ| 2 = 1 and |Ũ n | 2 − |Ṽ n | 2 = 1 the solutions of these equations can be written as:
γ (α) p = 1 √ 2|m RL | 1 + |m RL | 4 + |m RR | 4 −2|m RR | 2 −m 2 RR (m * RL ) 2 −m 2 RL (m * RR ) 2 ± −1 − |m RL | 4 − |m RR | 4 +2|m RR | 2 +m 2 RR (m * RL ) 2 +m 2 RL (m * RR ) 2 2 − 4|m RL | 4 1 2 1 2 ≈ i √ 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν √
cosh 2πp + cos 2πν ± √ cosh 2πp + cos 2πν + 2 cosh 2 α + sinh 2 α e −2π(p−iν)
α = 0 − −− → γ (0) p = 1 2m RL 1 + m 2 RL − m 2 RR ± 1 + m 2 RL − m 2 RR 2 − 4m 2 RL ≈ i √ 2 √ cosh 2πp + cos 2πν ± √ cosh 2πp + cos 2πν + 2 , (3.119) Γ (α) p,n = 1 √ 2|m RL,n | 1 + |m RL,n | 4 + |m RR,n | 4 −2|m RR,n | 2 −m 2 RR,n (m * RL,n ) 2 −m 2 RL,n (m * RR,n ) 2 ± −1 − |m RL,n | 4 − |m RR,n | 4 +2|m RR,n | 2 +m 2 RR,n (m * RL,n ) 2 +m 2 RL,n (m * RR,n ) 2 2 − 4|m RL,n | 4 1 2 1 2 ≈ i √ 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν √
cosh 2πp n + cos 2πν ± √ cosh 2πp n + cos 2πν + 2 cosh 2 α + sinh 2 α e −2π(pn−iν)
α = 0 − −− → Γ (0) p,n = 1 2m RL,n 1 +m 2 RL,n −m 2 RR,n ± 1 +m 2 RL,n −m 2 RR,n 2 − 4m 2 RL,n ≈ i √ 2 √ cosh 2πp n + cos 2πν ± √ cosh 2πp n + cos 2πν + 2 ,(3.120)
where the componentsm RR =m LL ,m RL =m LR andm RR,n =m LL,n ,m RL,n =m LR,n are defined in equations (3.62-68) for general α vacua. Also the components without tilde symbol represent the contribution from α = 0, which is the Bunch Davies vacuum state.
Further, for the massless (ν = 3/2) axion field we get the following simplified expressions:
\gamma^{(\alpha,3/2)}_{p} \approx \frac{i\sqrt{2}}{\left(\sqrt{\cosh 2\pi p - 1} \pm \sqrt{\cosh 2\pi p + 1}\right)\left(\cosh^2\alpha - \sinh^2\alpha\, e^{-2\pi p}\right)}
\ \xrightarrow{\ \alpha=0\ }\ \gamma^{(0,3/2)}_{p} \approx \frac{i\sqrt{2}}{\sqrt{\cosh 2\pi p - 1} \pm \sqrt{\cosh 2\pi p + 1}},  (3.121)

\Gamma^{(\alpha)}_{p,n} \approx \frac{i\sqrt{2}}{\left(\sqrt{\cosh 2\pi p_n - 1} \pm \sqrt{\cosh 2\pi p_n + 1}\right)\left(\cosh^2\alpha - \sinh^2\alpha\, e^{-2\pi p_n}\right)}
\ \xrightarrow{\ \alpha=0\ }\ \Gamma^{(0)}_{p,n} \approx \frac{i\sqrt{2}}{\sqrt{\cosh 2\pi p_n - 1} \pm \sqrt{\cosh 2\pi p_n + 1}},  (3.122)
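For the massless Bunch Davies case the upper-sign branch of Eqn (3.121) actually closes in elementary form: since cosh 2πp − 1 = 2 sinh²πp and cosh 2πp + 1 = 2 cosh²πp, one finds √2/(√(cosh 2πp − 1) + √(cosh 2πp + 1)) = 1/(sinh πp + cosh πp) = e^{−πp}, so |γ_p^{(0,3/2)}| = e^{−πp}. A quick numerical confirmation of this simplification (the '+' branch is the one giving |γ_p| < 1):

```python
import numpy as np

def gamma_massless_BD(p):
    """|gamma_p| for nu = 3/2, alpha = 0 from Eqn (3.121), taking the '+' branch."""
    c = np.cosh(2 * np.pi * p)
    return np.sqrt(2.0) / (np.sqrt(c - 1.0) + np.sqrt(c + 1.0))

for p in (0.2, 0.5, 1.0, 2.0):
    print(f"p = {p}:  |gamma_p| = {gamma_massless_BD(p):.6e},  e^(-pi p) = {np.exp(-np.pi * p):.6e}")
# The two columns coincide: the massless Bunch Davies entanglement coefficient is exactly
# |gamma_p| = e^{-pi p}, which makes the long range correlations exponentially small at large p.
```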
In the large axion mass (ν 2 < 0 where ν → −i|ν|) limit the two solutions for the γ (α) p and Γ (α) p,n for α vacuum are given by:
γ (α) p ≈ 1 2|m RL | 1 + |m RL | 2 −m 2 RR ± 1 + |m RL | 2 −m 2 RR 2 − 4|m RL | 2 . (3.123) Γ (α) p,n ≈ 1 2|m RL,n | 1 + |m RL,n | 2 −m 2 RR,n ± 1 + |m RL | 2 −m 2 RR 2 − 4|m RL,n | 2 (3.124)
In this limit, we divide the total window of p into two regions, given by 0 < p < |ν| and |ν| < p < Λ C . In these regions of interest, the two solutions for γ (α) p in presence of α vacuum can be approximately written as:
|γ (α) p | ≈ e ∓π|ν| (1 + tan α)
for 0 < p < |ν| e ∓πp (1 + tan α) 1 + tan α e 2π|ν| (1 + tan 2 α e −2πp )
for |ν| < p < Λ C /2π. (3.125) and
|Γ (α) p,n | = e ∓π|ν| (1 + tan α)
for 0 < p < |ν| e ∓πpn (1 + tan α) 1 + tan α e 2π|ν| (1 + tan 2 α e −2πpn )
for |ν| < p < Λ C /2π. (3.126) Further, in the limit p >> 1 we get the following simplified results:
γ (α) p ≈ i 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν sech 2 α | cosh 2πp| ± | cosh 2πp| + 4 α = 0 − −− → γ (0) p ≈ i 2 | cosh 2πp| ± | cosh 2πp| + 4 ,(3.127)
Γ (α) p,n ≈ i 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν sech 2 α | cosh 2πp n | ± | cosh 2πp n | + 4
α = 0 − −− → Γ (0) p,n ≈ i 2 | cosh 2πp n | ± | cosh 2πp n | + 4 ,(3.128)
For massless (ν = 3/2) axion field this simplifies to :
γ (α,3/2) p ≈ i 2sech 2 α | cosh 2πp| ± | cosh 2πp| + 4 α = 0 − −− → γ (0,3/2) p ≈ i 2 | cosh 2πp| ± | cosh 2πp| + 4 ,(3.129)Γ (α,3/2) p,n ≈ i 2sech 2 α | cosh 2πp n | ± | cosh 2πp n | + 4 α = 0 − −− → Γ (0,3/2) p,n ≈ i 2 | cosh 2πp n | ± | cosh 2πp n | + 4 ,(3.130)
On the other hand, in the limit p << 1 we get the following results:
γ (α) p ≈ i √ 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν √ cos 2πν + 1 ± √ cos 2πν + 3 cosh 2 α + sinh 2 α e 2πiν α = 0 − −− → γ (0) p ≈ i √ 2 √ cos 2πν + 1 ± √ cos 2πν + 3 , (3.131) Γ (α) p,n ≈ i √ 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν √ cos 2πν + 1 ± √ cos 2πν + 3 cosh 2 α + sinh 2 α e 2πiν α = 0 − −− → Γ (0) p,n ≈ i √ 2 √ cos 2πν + 1 ± √ cos 2πν + 3 ,(3.132)
which, for massless (ν = 3/2) axion field , simplifies to: 3.134) and are very useful information for the computation of spectrum of vacuum fluctuation. Further, the Fourier mode of the total compact solution in the region L in case of α vacua can be re-expressed in terms of the oscillators defined in the new basis (c,C) as well as the SO(1,3) quantum numbers (p, l, m) as:
γ (α,3/2) p ≈ ±i 1 √ 2 α = 0 − −− → γ (0,3/2) p ≈ ±i 1 √ 2 ,(3.133)Γ (α,3/2) p,n ≈ ±i 1 √ 2 α = 0 − −− → Γ (0,3/2) p,n ≈ ±i 1 √ 2 ,(φ L,plm (t L ) = H sinh t Lc T Iψ I T = H sinh t L 1 N p (G −1 ) I J P J + ∞ n=0 1 N p,(n) G −1 (n) I J P J (n) , (3.135)
where the total wave functionψ I T is a column matrix and for the complementary and particular part of the solution the inverse matrix (G −1 ) I J and G −1 (n) I J are defined as:
(G −1 ) I J = ũ * −ṽ * −ṽũ , G −1 (n) I J = Ũ * (n) −Ṽ * (n) −Ṽ (n)Ũ(n) , ψ I,T = ψ L,T (t L ) ψ L * ,T (t L ) . (3.136)
When we trace out the degrees of freedom over the right part of the Hilbert space, we obtain the following reduced density matrix for the left part of the Hilbert space :
(ρ L (α)) p,l,m = Tr R |α α|, (3.137) where the α vacuum state is written in terms ofc type of oscillators as:
|α ≈ 1 − |γ (α) p | 2 + ∞ n=0 |Γ (α) p,n | 2 1/2 exp γ (α) pc † Rc † L + ∞ n=0 Γ (α) p,nC † R,nC † L,n |R ⊗ |L (α)
, (3.138) Substituting Eq (3.138) in Eq (3.137), we get the expression for the reduced density matrix for the left part of the Hilbert space:
(ρ L (α)) p,l,m = 1 − |γ (α) p | 2 1 + f (α) p ∞ k=0 |γ (α) p | 2k |k; p, l, m k; p, l, m| Complementary part + (f (α) p ) 2 1 + f (α) p ∞ n=0 ∞ r=0 |Γ (α)
p,n | 2r |n, r; p, l, m n, r; p, l, m| 3.140) and the states |k; p, l, m and |n, r; p, l, m are expressed in terms of the new quantum state |L as:
Particular part . (3.139) where f (α) p is given by f (α) p = ∞ n=0 1 1 − |Γ (α) p,n | 2 −1 ,(|k; p, l, m = 1 √ k! (c † L ) k |L , |n, r; p, l, m = 1 √ r! (C † L,n ) r |L . (3.141)
Note that for α = 0, we get back the result obtained for Bunch Davies vacuum.
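The complementary part of Eqn (3.139) is a thermal-looking distribution in the occupation number k, with diagonal weights (1 − |γ_p^{(α)}|²)|γ_p^{(α)}|^{2k}. As a small sanity check restricted to that complementary piece only (the particular part with the f_p^{(α)} prefactor is not included here, and the value of |γ_p|² is an assumption for illustration), the weights are properly normalised and reproduce the expected mean occupation |γ_p|²/(1 − |γ_p|²):

```python
import numpy as np

gamma_sq = 0.3        # assumed value of |gamma_p^(alpha)|^2 < 1, for illustration
kmax = 200            # truncation of the sum over k

# Diagonal entries of rho_L (complementary part of Eqn (3.139))
weights = (1.0 - gamma_sq) * gamma_sq**np.arange(kmax)

print("Tr rho_L (complementary part)    =", weights.sum())                      # -> 1
print("mean occupation <k>              =", (np.arange(kmax) * weights).sum())
print("expected |gamma|^2/(1-|gamma|^2) =", gamma_sq / (1.0 - gamma_sq))
```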
Two point correlation function
In this subsection we explicitly compute the two point correlation function and its significant role to obtain long range effect in the cosmological correlation using the generalised α and Bunch Davies vacuum. For this purpose and using the expression for the reduced density matrix, derived in the previous subsection, we first compute the mean square quantum vacuum fluctuation, which is expressed for α vacua as:
Tr L ρ L (α)φ L (t L )φ † L (t L ) (α) = exp (−2α) 1 − |γ (α) p | 2 ∞ n=0 |γ (α) p | 2n n; p, l, m|φ L (t L )φ † L (t L ) |n; p, l, m
Complementary part
+ 1 f (α) p 2 ∞ r=0 ∞s=0
|Γ (α) p,r,s | 2r s, r; p, l, m|φ L (t L )φ † L (t L ) |s, r; p, l, m
Particular part . (3.142)
In the above, we have used the shorthand notation φ L (t L ) = φ Lplm (t) for the field. Note that, setting α = 0 in Eq (3.142) we get the result for the Bunch Davies vacuum which is given by: The contributions from the complementary and the particular part, as appearing in the right hand side of Eq (3.142) for each n-particle state are found to be:
Tr L ρ L (α)φ L (t L )φ † L (t L ) (BD) = 1 − |γ (0) p | 2 ∞ n=0 |γ (0) p | 2n n; p, l, m|φ L (t L )φ † L (t L )|n; p, l, m Complementary part + 1 f (0) p 2 ∞ r=0 ∞ s=0 |Γ (0) p,r,s | 2r s, r; p, l, m|φ L (t L )φ † L (t L )n; p, l, m|φ L (t L )φ † L (t L ) |n; p, l, m = H 2 sinh 2 t L 1 n! L |(c L ) n c T Iψ †I T c T Jψ †J T † (c † L ) n |L = H 2 sinh 2 t L (2n + 1) |ψ L T | 2 , (3.144) s, r; p, l, m|φ L (t L )φ † L (t L ) |s, r; p, l, m = H 2 sinh 2 t L 1 r! L |(C (s) L ) r c T Iψ †I T c T Jψ †J T † (C (s) † L ) r |L = H 2 sinh 2 t L (2r + 1) |ψ L T | 2 ,(3.145)
whereψ L T is given by :
ψ L T = ψ L T (t) ψ L * T (t) = E L P L + F L P L * F * L P L + E * L P L * + ∞ n=0 E L,(n) P L (n) + F L,(n) P L * (n) F * L,(n) P L (n) + E * L,(n) P L * (n) ,(3.146)
with the entries of the column matrix for the complementary and particular integral part of the solution being: The normalization constants N c and N c,(n) for the complementary part and particular integral part of the solution is defined as:
E L =ū N c , (3.147) F L = −v N c ,(3.N c = 2 π e − πp 2 cosh 2πp + cos2πν, (3.151) N c,(n) = 2 π e − πpn 2 cosh 2πp n + cos2πν. (3.152)
The expression for (ū,v) for complementary solution and (Ū n ,V n ) for particular solution are given by the following expressions:
For complementary part :
u = 1 − γ (α) pmLR |1 − γ (α) pmLR | 2 − |m RR | 2 α = 0 − −− →ū = 1 − γ (0) p m LR |1 − γ (0) p m LR | 2 − |m RR | 2 , (3.153) v =m RR |1 − γ (α) pmLR | 2 − |m RR | 2 α = 0 − −− →v = m RR |1 − γ (0) p m LR | 2 − |m RR | 2 ,(3.
154)
For particular part :
U n = 1 − Γ (α) p,nmLR |1 − Γ (α) p,nmLR | 2 − |m RR | 2 α = 0 − −− →Ū n = 1 − Γ (0) p,n m LR |1 − Γ (0) p,n m LR | 2 − |m RR | 2 , (3.155) V n =m LR |1 − Γ (α) p,nmLR | 2 − |m RR | 2 α = 0 − −− →V n = m LR |1 − Γ (0) p,n m LR | 2 − |m RR | 2 ,(3.
156) Here (m_LR, m_RR) and (γ_p^{(α)}, Γ_{p,n}^{(α)}) for the complementary and particular part of the solution are defined earlier in equations (3.62)–(3.68) and equations (3.119)–(3.120) respectively. We have used Eq (3.113), Eq (3.114), Eq (3.115) and Eq (3.116) and have also imposed the normalization conditions |ū|² − |v̄|² = 1 and |Ū_n|² − |V̄_n|² = 1. Note that the structural form of the equations for α = 0, corresponding to the Bunch Davies vacuum, is exactly the same as that for the α vacua. The only significant changes appear when we explicitly consider the entries of (m_LR, m_RR) and (γ_p, Γ_{p,n}) for the complementary and particular part of the solution. Now, substituting Eq. (3.144) and Eq. (3.145) in Eq (3.142) we get the following simplified expression for the mean square quantum vacuum fluctuation for the α vacua:
Tr L ρ L (α)φ L (t L )φ † L (t L ) (α) = exp (−2α) H 2 sinh 2 t L |ψ L T | 2 1 − |γ (α) p | 2 ∞ n=0 |γ (α) p | 2n (2n + 1) Complementary part + H 2 sinh 2 t L |ψ L T | 2 1 f (α) p 2 ∞ r=0 ∞ s=0 |Γ (α) p,r,s | 2r (2r + 1) Particular part . = H 2 sinh 2 t L |ψ L T | 2 exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 . (3.157)
Setting α = 0 we get the expression for the Bunch Davies vacuum as :
Tr L ρ L (α)φ L (t L )φ † L (t L ) (BD) = H 2 sinh 2 t L |ψ L T | 2 1 − |γ (0) p | 2 ∞ n=0 |γ (0) p | 2n (2n + 1) Complementary part + H 2 sinh 2 t L |ψ L T | 2 1 f (0) p 2 ∞ r=0 ∞ s=0 |Γ (0) p,r,s | 2r (2r + 1) Particular part . = H 2 sinh 2 t L |ψ L T | 2 1 + |γ (0) p | 2 1 − |γ (0) p | 2 + 1 f (0) p 2 ∞ s=0 1 + |Γ (0) p,s | 2 1 − |Γ (0) p,s | 2 2 . (3.158)
We note that, to derive this expression we have used the following identities:
\sum_{n=0}^{\infty}(2n+1)\left|\gamma^{(\alpha)}_{p}\right|^{2n} = \frac{1+\left|\gamma^{(\alpha)}_{p}\right|^{2}}{\left(1-\left|\gamma^{(\alpha)}_{p}\right|^{2}\right)^{2}}
\ \xrightarrow{\ \alpha=0\ }\ \sum_{n=0}^{\infty}(2n+1)\left|\gamma^{(0)}_{p}\right|^{2n} = \frac{1+\left|\gamma^{(0)}_{p}\right|^{2}}{\left(1-\left|\gamma^{(0)}_{p}\right|^{2}\right)^{2}},  (3.159)

\sum_{s=0}^{\infty}\sum_{r=0}^{\infty}(2r+1)\left|\Gamma^{(\alpha)}_{p,r,s}\right|^{2r} = \sum_{s=0}^{\infty}\frac{1+\left|\Gamma^{(\alpha)}_{p,s}\right|^{2}}{\left(1-\left|\Gamma^{(\alpha)}_{p,s}\right|^{2}\right)^{2}}
\ \xrightarrow{\ \alpha=0\ }\ \sum_{s=0}^{\infty}\sum_{r=0}^{\infty}(2r+1)\left|\Gamma^{(0)}_{p,r,s}\right|^{2r} = \sum_{s=0}^{\infty}\frac{1+\left|\Gamma^{(0)}_{p,s}\right|^{2}}{\left(1-\left|\Gamma^{(0)}_{p,s}\right|^{2}\right)^{2}}.  (3.160)
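The resummation in Eqn (3.159) is just the derivative of a geometric series. A two-line numerical check of the identity Σ_n (2n+1)y^n = (1+y)/(1−y)², with an assumed value of y = |γ_p^{(α)}|² < 1:

```python
import numpy as np

y = 0.4                                  # assumed |gamma_p^(alpha)|^2 < 1, for illustration
n = np.arange(0, 2000)
lhs = ((2 * n + 1) * y**n).sum()         # truncated sum on the left hand side of Eqn (3.159)
rhs = (1 + y) / (1 - y)**2               # closed form on the right hand side
print(lhs, rhs)                          # the two numbers agree to machine precision
```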
The expression for |ψ L T | 2 , now comes out to be:
|ψ L T | 2 = ψ L T †ψ L T = |E L | 2 + |F L | 2 P L P L * + E L F * L P L 2 + E * L F L P L * 2 + ∞ n=0 E L E * L,(n) + F L F * L,(n) P L P L * (n) + E L F * L,(n) + E L,(n) F * L P L P L (n) + E * L,(n) F L + E * L F L,(n) P L * (n) P L * + ∞ n=0 ∞ m=0 E L,(n) E * L,(m) + F L,(n) F * L,(m) P L (n) P L * (m) + E L,(n) F * L,(m) P L (n) P L (m) + E * L,(n) F L,(m) P L * (n) P L * (m) (3.161)
Here also by fixing the parameter α = 0 one can get the expression for the square of the magnitude of the wave function for Bunch Davies vacuum in the newly defined Bogliubov transformed basis. Using Eq (3.161), the amplitude of the normalised power spectrum of axion from the generalised α vacua can be expressed in all time scales of region L as:
P(p, α, t L ) = p 3 2π 2 Tr L ρ L (α)φ L (t L )φ † L (t L ) (α) = p 3 2π 2 H 2 sinh 2 t L |ψ L T | 2 exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 = p 3 2π 2 H 2 sinh 2 t L exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 |E L | 2 + |F L | 2 P L P L * + E L F * L P L 2 + E * L F L P L * 2 + ∞ n=0 E L E * L,(n) + F L F * L,(n) P L P L * (n) + E L F * L,(n) + E L,(n) F * L P L P L (n) + E * L,(n) F L + E * L F L,(n) P L * (n) P L * + ∞ n=0 ∞ m=0 E L,(n) E * L,(m) + F L,(n) F * L,(m) P L (n) P L * (m) + E L,(n) F * L,(m) P L (n) P L (m) + E * L,(n) F L,(m) P L * (n) P L * (m)
.
(3.162) However, the above equation is very complicated to extract any physical information for further cosmological predictions. For this reason we consider the superhorizon time scales (t L >> 1) of region L, in which the Legendre functions appearing in the complementary part and the particular integral part of the time dependent solution can be approximated as the following simplified form:
P L , P L * ≡ P ±ip ν− 1 2 (cosh t L ) t L >> 1 − −−−− → 2 ν− 1 2 (cosh t L ) ν− 1 2 Γ(ν) √ πΓ ν ∓ ip + 1 2 , (3.163) P L (n) , P L * (n) ≡ P ±ipn ν− 1 2 (cosh t L ) t L >> 1 − −−−− → 2 ν− 1 2 (cosh t L ) ν− 1 2 Γ(ν) √ πΓ ν ∓ ip n + 1 2 . (3.164)
Consequently, in the superhorizon time scales (t L >> 1) of region L eqn (3.162) can be simplified for as:
|ψ L T | 2 = ψ L T †ψ L T t L >> 1 − −−−− → Q(p, α, ν) (cosh t L ) 2ν−1 (3.165)
where the time independent function Q(p, α, ν) for generalised α vacua is defined as:
Q(p, α, ν) = 2 2ν−1 (Γ(ν)) 2 π × |E L | 2 + |F L | 2 |Γ ν + ip + 1 2 | 2 + E L F * L Γ ν − ip + 1 2 2 + E * L F L Γ ν + ip + 1 2 2 + ∞ n=0 E L E * L,(n) + F L F * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 + E L F * L,(n) + E L,(n) F * L Γ ν − ip + 1 2 Γ ν − ip n + 1 2 + E * L,(n) F L + E * L F L,(n) Γ ν + ip + 1 2 Γ ν + ip n + 1 2 + ∞ n=0 ∞ m=0 E L,(n) E * L,(m) + F L,(n) F * L,(m) Γ ν − ip n + 1 2 Γ ν + ip m + 1 2 + E L,(n) F * L,(m) Γ ν − ip n + 1 2 Γ ν − ip m + 1 2 + E * L,(n) F L,(m) Γ ν + ip n + 1 2 Γ ν + ip m + 1 2 . (3.166)
As a result, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalised power spectrum of axion from generalised α vacua can be expressed as:
P(p, α, t L ) = p 3 2π 2 H 2 sinh 2 t L |ψ L T | 2 exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 t L >> 1 − −−−− → p 3 2π 2 (cosh t L ) 2ν−1 sinh 2 t L H 2 Q(p, ν) exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 . (3.167)
We note that in the superhorizon time scales (t L >> 1) of region L if we consider the massless case by fixing the mass parameter ν = 3/2, then the time dependent contribution can be approximated as:
(cosh t L ) 2ν−1 sinh 2 t L ν=3/2 t L >> 1 − −−−− → 1. (3.168)
From this we infer that for an arbitrary value of the parameter ν we can write:
(cosh t L ) 2ν−1 sinh 2 t L t L >> 1 − −−−− → (cosh t L ) 2ν−3 . (3.169)
Consequently, in the super horizon time scales (t L >> 1) of region L considering the massless case (ν = 3/2) the amplitude of the normalised power spectrum of axion from generalised α vacua can be expressed as:
P(p, α, t L ) = p 3 2π 2 H 2 sinh 2 t L |ψ L T | 2 exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 t L >> 1, ν = 3/2 − −−−−−−−−−−− → p 3 2π 2 H 2 Q(p, ν = 3/2) exp (−2α) 1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 . (3.170)
Like the result in the case of field operator expansion method derived in the previous section, this result also implies that in the massless case (ν = 3/2) amplitude of the vacuum fluctuation gets frozen with respect to the time scale when the associated modes exit horizon. Further to know the exact wave number dependence of the amplitude of the normalised power spectrum from generalised α vacua we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). In this limit it is expected that the power spectrum of axion should match with the result obtained for spatially flat universe. In the short wave length approximation the time independent function Q(p >> 1, α, ν) for any arbitrary mass parameter ν can be expressed for generalised α vacua as:
Q(p >> 1, α, ν) = 2 2(ν−1) (Γ(ν)) 2 p 3 π G(p >> 1) = M(p, ν) ∀α,(3.171)
where we have already defined the function G(p >> 1) in the earlier section. Here for very large wave number p, p n >> 1 one can write, G(p >> 1) ∼ 1 + · · · , where all · · · are small correction terms. This also implies to the interesting fact that for large wavenumber limit and for any values of the parameter α, the time independent function Q(p >> 1, α, ν) computed for generalised α vacua exactly matches with the result obtained for Bunch Davies vacua in the earlier section i.e. M(p >> 1, ν). This means that the final result is independent of the choice of the parameter α.
For the massless case (ν = 3/2) in the short wave length approximation, the time independent function Q(p >> 1, α, ν = 3/2) can further be simplified to:
Q(p >> 1, α, ν = 3/2) = G(p >> 1) 2p 3 = M(p >> 1, ν = 3/2) ∀α. (3.172)
Additionally, we note that the following important contribution appearing in the normalised power spectrum for axion can be simplified, in the large wave number limit, as: Finally, in the super horizon time scales (t L >> 1) of region L, the amplitude of the normalised power spectrum of axion, in the short wave length approximation, can be expressed as: 1). (3.174) For the massless case (ν = 3/2), in the same scale and the same approximation, the above amplitude takes the form:
1 + |γ (α) p | 2 1 − |γ (α) p | 2 + 1 f (α) p 2 ∞ s=0 1 + |Γ (α) p,s | 2 1 − |Γ (α) p,s | 2 2 p>>1 = 1 + ∞ s=0 1 −1 =0 ∀α.(3.P(p >> 1, α, t L >> 1) = p 3 2π 2 (cosh t L ) 2ν−3 exp (−2α) H 2 Q(p >> 1, α, ν) = p 3 2π 2 (cosh t L ) 2ν−3 exp (−2α) H 2 M(p >> 1, ν) = (2 cosh t L ) 2ν−3 H 2π 2 Γ(ν) Γ 3 2 2 G(p >>P(p >> 1, α, t L >> 1) = p 3 2π 2 exp (−2α) H 2 Q(p >> 1, α, ν = 3/2) = p 3 2π 2 exp (−2α) H 2 M(p >> 1, ν = 3/2) = H 2π 2 exp (−2α) G(p >> 1). (3.175)
It is important to note that both of Eq (3.174) and Eq (3.175) are valid after horizon exit. From the same results , we also observe that the normalised power spectrum from generalised α vacua,in the leading order, computed from reduced density matrix formalism is exactly same as that obtained in the previous sub-section, computed using field operator expansion method. For completeness, we present the result for the two point correlation function and the associated power spectrum for Bunch Davies vacuum by fixing the parameter α = 0 in our previous equations and they can be expressed as:
P BD (p >> 1, t L >> 1) = p 3 2π 2 (cosh t L ) 2ν−3 H 2 Q(p >> 1, α = 0, ν) = p 3 2π 2 (cosh t L ) 2ν−3 H 2 M(p >> 1, ν) = (2 cosh t L ) 2ν−3 H 2π 2 Γ(ν) Γ 3 2 2 G(p >> 1). (3.176)
For for the massless case (ν = 3/2) this can be further simplified to:
P BD (p >> 1, t L >> 1) = p 3 2π 2 H 2 Q(p >> 1, α = 0, ν = 3/2) = p 3 2π 2 H 2 M(p >> 1, ν = 3/2) = H 2π 2 G(p >> 1). (3.177)
In figure (9(a)) and figure (9(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from RDM formalism in the large wave number regime. We have considered α = 0 and α = 0.1 and fixed values of the mass parameter ν respectively. Additionally, in figure (9(c)) we have depicted the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. From the figures, we observe that the power spectrum shows two distinctive behaviour in 1/2 < ν < 1 and ν > 1 region. For 1/2 < ν < 1 region the amplitude of the power spectrum decrease to a certain value and just after ν = 1 it increases. Also note that in large wave number regime, the power spectrum obtained from RDM formalism behaves in the same as way as that obtained from FOE formalism in the previous section.
On the other hand, to know the exact wave number dependence of the amplitude of the normalised power spectrum from the generalised α vacua in the long wave length approximation, we need to know the behaviour of the power spectrum for p, p_n << 1. In this regime we expect the power spectrum of the axion to match the result obtained for a spatially flat universe. The time independent function Q(p << 1, α, ν) for the mass parameter ν ≠ 3/2 can be expressed for the generalised α vacua as:
Q(p << 1, α, ν) = [2^{2(ν−1)} (Γ(ν))²/(p³ π)] G(p << 1)   ∀α, (3.178)

where the function G(p << 1) is defined for ν ≠ q/2 as:

G(p << 1) = [πp/(2|cos πν| Γ(ν + 1/2)²)] × [ |1 − γ^(α)_p m̃_LR|² / (|1 − γ^(α)_p m̃_LR|² − |m_RR|²) ]
  × { 1 + [ |m_RR|² + (1 − γ^(α)_p m̃_LR)* m_RR + (1 − γ^(α)_p m̃_LR) m*_RR ] / |1 − γ^(α)_p m̃_LR|²
  + Σ_{n=0}^∞ (p_n/p) [ (|1 − γ^(α)_p m̃_LR|² − |m_RR|²) / (|1 − Γ^(α)_{p,n} m̃_LR,n|² − |m_RR,n|²) ] (1/|1 − γ^(α)_p m̃_LR|²)
    × [ (1 − γ^(α)_p m̃_LR)(1 − Γ^(α)_{p,n} m̃_LR,n)* + m_RR m*_RR,n + (1 − γ^(α)_p m̃_LR) m*_RR,n + (1 − Γ^(α)_{p,n} m̃_LR,n) m*_RR + (1 − γ^(α)_p m̃_LR)* m_RR,n + (1 − Γ^(α)_{p,n} m̃_LR,n)* m_RR ]
  + Σ_{n=0}^∞ Σ_{m=0}^∞ (p_n p_m/p²) [ (|1 − γ^(α)_p m̃_LR|² − |m_RR|²)² / ( (|1 − Γ^(α)_{p,n} m̃_LR,n|² − |m_RR,n|²)(|1 − Γ^(α)_{p,m} m̃_LR,m|² − |m_RR,m|²) ) ] (1/|1 − γ^(α)_p m̃_LR|²)
    × [ (1 − Γ^(α)_{p,n} m̃_LR,n)(1 − Γ^(α)_{p,m} m̃_LR,m)* + m_RR,n m*_RR,m + (1 − Γ^(α)_{p,n} m̃_LR,n) m*_RR,m + (1 − Γ^(α)_{p,m} m̃_LR,m) m*_RR,n + (1 − Γ^(α)_{p,n} m̃_LR,n)* m_RR,m + (1 − Γ^(α)_{p,m} m̃_LR,m)* m_RR,n ] }. (3.179)

Here, for very small wave numbers p, p_n << 1, one can write:
G(p << 1) ∼ [πp/(2|cos πν| Γ(ν + 1/2)²)] × [ |1 − γ^(α)_p m̃_LR|² / (|1 − γ^(α)_p m̃_LR|² − |m_RR|²) ] [1 + · · · ],

where the · · · denote small correction terms. For the Bunch Davies vacuum, once we fix α = 0, we find that the function G(p << 1) depends only on the mass parameter ν for a massive axion field.
On the contrary, for the case where ν = q/2 (which also includes the massless situation ν = 3/2) the expression G(p << 1) diverges due to the overall factor 1/|cos πν|. But we can avoid such unwanted divergent contributions by rewriting all the expressions for p, p_n << 1 with ν = q/2 that we have mentioned earlier. In such a situation, for the massless case, the time independent function Q(p << 1, α, ν = 3/2) can be further simplified as:

Q(p << 1, α, ν = 3/2) = G(p << 1, ν = 3/2)/(2p³)   ∀α, (3.180)

where the function G(p << 1) is defined for ν = 3/2 as³:

G(p << 1, ν = 3/2) = (π/2) [ 1 + (1 ± e^{iθ} πp e^{−pπ})/|1 ± e^{iθ} πp e^{−pπ}| Σ_{n=0}^∞ (1 ± e^{−iθ} πp_n e^{−p_n π})/|1 ± e^{iθ} πp_n e^{−p_n π}| + Σ_{n=0}^∞ Σ_{m=0}^∞ (1 ± e^{iθ} πp_n e^{−p_n π})/|1 ± e^{iθ} πp_n e^{−p_n π}| (1 ± e^{−iθ} πp_m e^{−p_m π})/|1 ± e^{iθ} πp_m e^{−p_m π}| ]. (3.181)
Here, for very small wave numbers p, p_n << 1 with ν = q/2 (which includes ν = 3/2), one can write

G(p << 1) ∼ (π/2) [1 + · · · ],

where the · · · denote small correction terms. For the Bunch Davies vacuum we get the same result, as the function G(p << 1) for the massless axion field (ν = 3/2) is independent of the parameter α. Moreover, it is important to note that the following contribution appearing in the normalised power spectrum for the massive (ν ≠ 3/2) and massless (ν = 3/2) axion field can be simplified in the small wave number limit as:
[ (1 + |γ^(α)_p|²)/(1 − |γ^(α)_p|²) + (1/|f^(α)_p|²) Σ_{s=0}^∞ (1 + |Γ^(α)_{p,s}|²)/(1 − |Γ^(α)_{p,s}|²) ]_{p<<1}
≈ [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² + 2 ]
  / [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² − 2 ]   ∀α and ν ≠ 3/2, (3.182)

[ (1 + |γ^(α,3/2)_p|²)/(1 − |γ^(α,3/2)_p|²) + (1/|f^(α,3/2)_p|²) Σ_{s=0}^∞ (1 + |Γ^(α,3/2)_{p,s}|²)/(1 − |Γ^(α,3/2)_{p,s}|²) ]_{p<<1} ≈ 1 + (1/2) Σ_{s=0}^∞ (1 − 1) = 1   ∀α and ν = 3/2. (3.183)
Thus, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalised power spectrum of axion from generalised α vacua in the small wave number limit can be expressed as:
P(p << 1, α, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} exp(−2α) H² Q(p << 1, α, ν)
  × [ (√(cos 2πν+1) ± √(cos 2πν+3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² + 2 ]
  / [ (√(cos 2πν+1) ± √(cos 2πν+3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² − 2 ]
= (2 cosh t_L)^{2ν−3} (H/2π)² [Γ(ν)/Γ(3/2)]² exp(−2α) G(p << 1)
  × [ (√(cos 2πν+1) ± √(cos 2πν+3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² + 2 ]
  / [ (√(cos 2πν+1) ± √(cos 2πν+3))² |cosh²α + sinh²α e^{2πiν}|² / |cosh²α + sinh²α e^{2iπν} + sinh 2α cos πν e^{iπν}|² − 2 ]. (3.184)

For the massless case (ν = 3/2), in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion from the generalised α vacua in the small wave number limit simplifies, in the present context, to:
P(p << 1, α, t_L >> 1) = (p³/2π²) exp(−2α) H² Q(p << 1, α, ν = 3/2) = (H/2π)² exp(−2α) G(p << 1, ν = 3/2). (3.185)
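As a quick numerical cross-check (ours), inserting the leading small wave number behaviour G(p << 1, ν = 3/2) ≈ π/2 into Eq (3.185) reproduces the RDM small wave number entry quoted later in Table 1; the value of H below is an arbitrary test choice.

```python
import numpy as np

# Minimal check (ours): with G(p<<1, nu=3/2) ~ pi/2, Eq (3.185) gives
# P ~ (H/2pi)^2 * exp(-2*alpha) * (pi/2) = H^2 * exp(-2*alpha)/(8*pi),
# i.e. the RDM small wave number entry of Table 1. H is an arbitrary test value.
H, alpha = 1.0, 0.1
G_small_p = np.pi / 2.0
P_from_eq_3_185 = (H / (2.0 * np.pi))**2 * np.exp(-2.0 * alpha) * G_small_p
P_table_entry   = H**2 * np.exp(-2.0 * alpha) / (8.0 * np.pi)
print(P_from_eq_3_185, P_table_entry)      # the two numbers agree
assert np.isclose(P_from_eq_3_185, P_table_entry)
```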
For Bunch Davies vacuum state ( α = 0), the mean square vacuum fluctuation of axion can be expressed as:
P_BD(p << 1, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} H² Q(p << 1, α = 0, ν) × [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² + 2 ] / [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² − 2 ]
= (2 cosh t_L)^{2ν−3} (H/2π)² [Γ(ν)/Γ(3/2)]² G(p << 1) × [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² + 2 ] / [ (√(cos 2πν + 1) ± √(cos 2πν + 3))² − 2 ]. (3.186)
Also for the massless case (ν = 3/2) in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion from the Bunch Davies vacuum in the small wave number limit can be simplified as:

P_BD(p << 1, t_L >> 1) = (p³/2π²) H² Q(p << 1, α = 0, ν = 3/2) = (H/2π)² G(p << 1, ν = 3/2). (3.187)

In figure (10(a)) and figure (10(c)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from the RDM formalism in the small wave number regime for α = 0 and α = 0.1 and for fixed values of the mass parameter ν = 1, 2, 3, 4, 5 respectively. Moreover, in figure (10(e)) we have presented the behaviour of the power spectrum with respect to the mass parameter ν with fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. For the mass parameter dependence we here get a distinctive feature of the RDM formalism compared to the FOE formalism, which we discussed in the last subsection, and the NES formalism, which we discuss in the next subsection. From the plot it is observed that for ν = 1/2, 3/2, 5/2, 7/2 we get distinctive sharp peaks with constant but different magnitudes. On the other hand, in figure (10(b)) and figure (10(d)) we have shown the behaviour of the power spectrum in the small wave number regime for α = 0 and α = 0.1 with the fixed values of the mass parameter ν = 1/2, 3/2, 5/2, 7/2, 9/2. Here, as the power spectrum is independent of the wave number, we get a constant magnitude for each value of the mass parameter ν.
Quantum vacuum fluctuation with non entangled state (NES)
In this subsection, we describe the quantum vacuum fluctuation and its cosmological consequences using the non entangled state (NES) formalism. In this formalism we assume that the wave function of the full de Sitter universe is described in the region L, so we do not use any information from the region R. In figure (11) we have presented a schematic diagram for the computation algorithm of the NES formalism for the non entangled quantum state of the axion in the de Sitter hyperbolic open chart.
Non entangled state (NES) formalism
In the region L the total wave function of the universe is described by the non entangled state (NES), and for the generalised α vacua it is given by:

φ_I = { φ̃^L, φ̃^{L*} } = (1/Ñ_b) { P̃^L, P̃^{L*} } + Σ_{n=0}^∞ (1/Ñ_{b,(n)}) { P̃^L_{(n)}, P̃^{L*}_{(n)} }, (3.188)

where the normalisation factors Ñ_b and Ñ_{b,(n)} are:

Ñ_b = √(2p)/|Γ(1 + ip)|, (3.189)
Ñ_{b,(n)} = √(2p_n)/|Γ(1 + ip_n)|. (3.190)

We can also express the total wave function of the universe in terms of the oscillator mode expansion as:

φ̂^L(t_L) = (H/sinh t_L) [ b_I φ̃_I(t_L) + Σ_{n=0}^∞ b_{I,(n)} φ̃_I^{(n)}(t_L) ]. (3.191)
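The normalisation factors above can be evaluated in closed form using the standard identity |Γ(1 + ip)|² = πp/sinh(πp). The following minimal Python sketch (ours, for illustration only) evaluates Ñ_b of Eq (3.189) in this way.

```python
import numpy as np

# Minimal sketch (ours): NES normalisation factor of Eq (3.189),
# N_b = sqrt(2p)/|Gamma(1+ip)|, using |Gamma(1+ip)|^2 = pi*p/sinh(pi*p),
# so that N_b = sqrt(2*sinh(pi*p)/pi).
def abs_gamma_one_plus_ip(p):
    """|Gamma(1 + i p)| for real p > 0, via the standard identity."""
    return np.sqrt(np.pi * p / np.sinh(np.pi * p))

def N_b(p):
    return np.sqrt(2.0 * p) / abs_gamma_one_plus_ip(p)

for p in [0.1, 1.0, 5.0]:
    print(f"p = {p:4.1f}   N_b = {N_b(p):.6f}")
```

The same identity, with p replaced by p_n, gives Ñ_{b,(n)} of Eq (3.190).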
Two point correlation function
Using the above wave function we can further derive the mean square vacuum fluctuation through the following two point correlation function:

⟨L| φ̂^L_{plm} φ̂^{†L}_{p'l'm'} |L⟩ = (H²/sinh² t_L) |φ̃^L|² exp(−2α) δ(p − p') δ_{ll'} δ_{mm'} = P(p, α, t_L) δ(p − p') δ_{ll'} δ_{mm'}, (3.192)

where P(p, α, t_L) is the power spectrum for the non entangled state involving the generalised α vacua. We can also define the normalised power spectrum for the non entangled state as:

P(p, α, t_L) = (p³/2π²) P(p, α, t_L) = (p³/2π²) (H²/sinh² t_L) |φ̃^L|² exp(−2α). (3.193)

To quantify the normalised power spectrum for the non entangled state, it is crucial to derive the expression for the square of the magnitude of the total wave function of the universe in the region L, which is given by:

|φ̃^L|² = (1/|Ñ_b|²) P̃^{L*} P̃^L + Σ_{n=0}^∞ (1/(Ñ_b Ñ*_{b,(n)})) [ P̃^{L*}_{(n)} P̃^L + P̃^{L*} P̃^L_{(n)} ] + Σ_{n=0}^∞ (1/(Ñ*_b Ñ_{b,(n)})) [ P̃^{L*}_{(n)} P̃^L + P̃^{L*} P̃^L_{(n)} ] + Σ_{n=0}^∞ Σ_{m=0}^∞ (1/(Ñ*_{b,(n)} Ñ_{b,(m)})) [ P̃^{L*}_{(n)} P̃^L_{(m)} + P̃^{L*}_{(m)} P̃^L_{(n)} ]. (3.194)

Further substituting the expressions for the normalisation factors, the above equation can be recast as:

|φ̃^L|² = (|Γ(1 + ip)|²/2p) P̃^{L*} P̃^L + Σ_{n=0}^∞ (|Γ(1 + ip)||Γ(1 − ip_n)|/√(4 p p_n)) [ P̃^{L*}_{(n)} P̃^L + P̃^{L*} P̃^L_{(n)} ] + Σ_{n=0}^∞ (|Γ(1 − ip)||Γ(1 + ip_n)|/√(4 p p_n)) [ P̃^{L*}_{(n)} P̃^L + P̃^{L*} P̃^L_{(n)} ] + Σ_{n=0}^∞ Σ_{m=0}^∞ (|Γ(1 − ip_n)||Γ(1 + ip_m)|/√(4 p_n p_m)) [ P̃^{L*}_{(n)} P̃^L_{(m)} + P̃^{L*}_{(m)} P̃^L_{(n)} ]. (3.195)

Consequently, the normalised power spectrum for the non entangled state with the generalised α vacua can be written as:

P(p, α, t_L) = (p³/2π²) (H² exp(−2α)/sinh² t_L) |φ̃^L|², with |φ̃^L|² given by Eq (3.195). (3.196)
However, to extract further physical information from Eqn (3.162) for cosmological predictions, we consider the superhorizon time scales (t_L >> 1) of region L. In this limit, the Legendre functions appearing in the complementary part and the particular integral part of the time dependent solution can be approximated by the following simplified forms:
P̃^L, P̃^{L*} ≡ P^{±ip}_{ν−1/2}(cosh t_L)  --(t_L >> 1)-->  2^{ν−1/2} (cosh t_L)^{ν−1/2} Γ(ν) / [√π Γ(ν ∓ ip + 1/2)], (3.197)

P̃^L_{(n)}, P̃^{L*}_{(n)} ≡ P^{±ip_n}_{ν−1/2}(cosh t_L)  --(t_L >> 1)-->  2^{ν−1/2} (cosh t_L)^{ν−1/2} Γ(ν) / [√π Γ(ν ∓ ip_n + 1/2)]. (3.198)

Thus, in the superhorizon time scales (t_L >> 1) of region L, Eq (3.195) can be further simplified as:

|φ̃^L|²  --(t_L >> 1)-->  K(p, α, ν) (cosh t_L)^{2ν−1}, (3.199)

where the time independent function K(p, α, ν) for the generalised α vacua is defined as:

K(p, α, ν) = [2^{2ν−1} (Γ(ν))²/π] × { |Γ(1 + ip)|²/(2p |Γ(ν + ip + 1/2)|²)
+ Σ_{n=0}^∞ [ |Γ(1 − ip)||Γ(1 + ip_n)| + |Γ(1 + ip)||Γ(1 − ip_n)| ] / [ 4√(p p_n) Γ(ν − ip + 1/2) Γ(ν + ip_n + 1/2) ]
+ Σ_{n=0}^∞ Σ_{m=0}^∞ [ |Γ(1 − ip_n)||Γ(1 + ip_m)| + |Γ(1 + ip_n)||Γ(1 − ip_m)| ] / [ 4√(p_n p_m) Γ(ν − ip_n + 1/2) Γ(ν + ip_m + 1/2) ] }. (3.200)

Also, in the super horizon time scale (t_L >> 1) we get the following simplification in the normalised power spectrum for the non entangled state: (3.201) In this limit, for the massless case (ν = 3/2), the time dependent contribution can be approximated by the following simplified form: (3.202) This implies that for an arbitrary value of the parameter ν one can write: Consequently, in the superhorizon time scales (t_L >> 1) of region L and for the massless case (ν = 3/2), the amplitude of the normalised power spectrum can be expressed as:

P(p, α, t_L >> 1) = (p³/2π²) H² K(p, α, ν = 3/2) exp(−2α). (3.204)
Like the result derived in the previous section, this result also implies that for the massless case (ν = 3/2) the amplitude of the vacuum fluctuation gets frozen with respect to the time scale once the associated modes exit the horizon. Further, to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from the generalised α vacua, we need to know the behaviour of the power spectrum at very short wavelengths (p, p_n >> 1). In this limit, it is expected that the power spectrum of the axion in the non entangled case should match the result obtained for a spatially flat universe. The time independent function K(p, α, ν) in this limit, and for arbitrary mass parameter ν, can be expressed as:
K(p >> 1, α, ν) = [2^{2(ν−1)} (Γ(ν))²/(p³ π)] U(p >> 1)   ∀α, (3.205)
where the function U(p >> 1) is defined as:
U(p >> 1) = 1 + Σ_{n=0}^∞ (p/p_n)^{3/2} + Σ_{n=0}^∞ Σ_{m=0}^∞ p³/(p_n p_m)^{3/2}, (3.206)

where the sum terms represent the quantum correction factor for the axion in the short wave length limit.
Thus, for very large wave numbers (p, p_n >> 1), we can write U(p) ∼ 1 + · · · , where the · · · denote small correction terms. This also implies that, for large wavenumber and for any value of the parameter α, the time independent function U(p, α, ν), computed with the generalised α vacua, matches the result obtained for the Bunch Davies vacuum in the previous subsection at leading order in M(p, ν). Also, for the massless case (ν = 3/2), the time independent function K(p, α, ν = 3/2) in the short wave length limit can further be simplified as:
K(p >> 1, α, ν = 3/2) = U(p >> 1)/(2p³)   ∀α. (3.207)
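Two small numerical illustrations (ours) related to Eqs (3.205)-(3.207) are sketched below. The first evaluates the correction factor U(p >> 1) of Eq (3.206) for a finite, placeholder tower of quantised wave numbers p_n (the actual p_n spectrum is not specified here, so these values are assumptions for illustration only); the second checks that for ν = 3/2 the prefactor of Eq (3.205) reduces to the 1/(2p³) of Eq (3.207).

```python
import numpy as np
from scipy.special import gamma

# (i) Short wave length correction factor of Eq (3.206):
#     U(p>>1) = 1 + sum_n (p/p_n)^{3/2} + sum_{n,m} p^3/(p_n*p_m)^{3/2},
#     with a finite, placeholder tower of quantised wave numbers p_n.
def U_short_wavelength(p, p_n):
    p_n = np.asarray(p_n, dtype=float)
    single_sum = np.sum((p / p_n) ** 1.5)
    double_sum = p ** 3 * np.sum(1.0 / p_n ** 1.5) ** 2
    return 1.0 + single_sum + double_sum

print(U_short_wavelength(p=10.0, p_n=np.arange(1000.0, 11000.0, 1000.0)))  # stays close to 1 here

# (ii) For nu = 3/2 the prefactor of Eq (3.205), 2^{2(nu-1)}*Gamma(nu)^2/(p^3*pi),
#      equals 1/(2*p^3), reproducing the massless result K = U/(2 p^3) of Eq (3.207).
nu, p = 1.5, 10.0
prefactor_general  = 2.0 ** (2 * (nu - 1)) * gamma(nu) ** 2 / (p ** 3 * np.pi)
prefactor_massless = 1.0 / (2.0 * p ** 3)
assert np.isclose(prefactor_general, prefactor_massless)
print(prefactor_general, prefactor_massless)
```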
Finally, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalised power spectrum of axion from generalised α vacua for non entangled state in short wave length limit can be expressed as:
P(p >> 1, α, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} exp(−2α) H² K(p >> 1, α, ν) = (2 cosh t_L)^{2ν−3} (H/2π)² [Γ(ν)/Γ(3/2)]² exp(−2α) U(p >> 1). (3.208)
For the massless case (ν = 3/2) in the superhorizon time scales (t L >> 1) of region L, the amplitude of the normalised power spectrum in short wave length limit can be simplified to:
P(p >> 1, α, t_L >> 1) = (p³/2π²) exp(−2α) H² K(p >> 1, α, ν = 3/2) = (H/2π)² exp(−2α) U(p >> 1). (3.209)

Note that both Eq (3.208) and Eq (3.209) are valid after horizon exit. From these results we also observe that the power spectrum computed from the non entangled state formalism is the same, at leading order in the approximation, as that computed from the FOE and RDM formalisms in the earlier subsections. This is true in the large wavenumber limit on superhorizon time scales in region L.
The result for the two point correlation function and the associated power spectrum for Bunch Davies vacuum can be obtained by setting α = 0 in the above equation and is found to be:
P_BD(p >> 1, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} H² K(p >> 1, α = 0, ν) = (2 cosh t_L)^{2ν−3} (H/2π)² [Γ(ν)/Γ(3/2)]² U(p >> 1). (3.210)
For the massless case (ν = 3/2) it reduces to:
P_BD(p >> 1, t_L >> 1) = (p³/2π²) H² K(p >> 1, α = 0, ν = 3/2) = (H/2π)² U(p >> 1). (3.211)
In figure (12(a)) and figure (12(b)) we have presented the behaviour of the power spectrum of the mean square vacuum fluctuation computed in the NES formalism for the large wave number regime. This is shown for α = 0 and α = 0.1 and for fixed values of the mass parameter ν = 3/2, 2, 5/2, 3, 7/2 respectively. For both values of α we get almost similar behaviour. In figure (12(c)) we have shown the behaviour of the power spectrum with respect to the mass parameter ν with fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. Here the mass parameter dependence shows two distinct features, in the 1/2 < ν < 1 region and in the ν > 1 region. In the 1/2 < ν < 1 region the amplitude of the normalised power spectrum initially decreases and then, just after ν = 1, the amplitude of the power spectrum increases.
However, to examine the behaviour of the power spectrum in the long wavelength region and in the superhorizon time scale (t L >> 1), we take the limit p << 1. In the long wave length limit, the time independent function K(p, α, ν) for any arbitrary mass parameter ν can be expressed (for α vacua) as:
K(p << 1, α, ν) = [2^{2(ν−1)} (Γ(ν))²/(p π)] U(p << 1)   ∀α, (3.212)
where the function U(p << 1) is given by:
U(p << 1) = 1 + [ |Γ(ν + 1/2)| / Γ(ν + 1/2)² ] Σ_{n=0}^∞ (p/p_n) + Σ_{n=0}^∞ Σ_{m=0}^∞ p/√(p_n p_m), (3.213)

where the sum terms represent the quantum correction factor for the axion in the long wave length limit.
For the massless case (ν = 3/2), this can be further simplified to:
K(p << 1, α, ν = 3/2) = U(p << 1)/(2p)   ∀α. (3.214)
Moreover, in the superhorizon time scales (t L >> 1) of region L, the amplitude of the normalised power spectrum ( for α vacua ) for non entangled state (in the long wave length limit) can be expressed as:
P(p << 1, α, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} exp(−2α) H² K(p << 1, α, ν) = (2 cosh t_L)^{2ν−3} (H/2π)² p² exp(−2α) [Γ(ν)/Γ(3/2)]² U(p << 1). (3.215)
Also, for the massless case (ν = 3/2), this reduces to:
P(p << 1, α, t_L >> 1) = (p³/2π²) exp(−2α) H² K(p << 1, α, ν = 3/2) = (H/2π)² p² exp(−2α) U(p << 1). (3.216)
The result for Bunch Davies vacuum is obtained by fixing α = 0 in above equation and is expressed as:
P_BD(p << 1, t_L >> 1) = (p³/2π²) (cosh t_L)^{2ν−3} H² K(p << 1, α = 0, ν) = (2 cosh t_L)^{2ν−3} (H/2π)² p² [Γ(ν)/Γ(3/2)]² U(p << 1), (3.217)
which for the massless case (ν = 3/2) reduces to:

P_BD(p << 1, t_L >> 1) = (p³/2π²) H² K(p << 1, α = 0, ν = 3/2) = (H/2π)² p² U(p << 1). (3.218)

In figure (13(a)) and figure (13(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation in the NES formalism in the small wave number regime for α = 0 and α = 0.1 with fixed values of the mass parameter ν = 3/2, 2, 5/2, 3, 7/2 respectively. Note that in both cases we find almost similar behaviour. Also, in figure (13(c)) we have shown the behaviour of the power spectrum with respect to the mass parameter ν with fixed values of α = 0, 0.1, 0.2, 0.3, 0.4. In this case we again observe two distinct regions of mass parameter dependence.
We have explicitly presented the comparison among the FOE, RDM and NES formalisms for α vacua in table (1). The same table is valid for the Bunch Davies vacuum when α = 0. We have quoted the differences among the findings of these formalisms for the primordial power spectrum from the mean square vacuum fluctuation at large and small scales.
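To make the small wave number entries of Table 1 concrete, the following minimal Python sketch (ours) tabulates the leading-order amplitudes quoted there for a massless axion, making the different p dependences of the three formalisms explicit; H and α are arbitrary illustrative values.

```python
import numpy as np

# Minimal sketch (ours): leading-order small wave number amplitudes quoted in Table 1
# for a massless axion (nu = 3/2). H and alpha are arbitrary illustrative values.
H, alpha = 1.0, 0.1
suppression = np.exp(-2.0 * alpha)

def P_FOE(p):  return (H / (2 * np.pi))**2 * p**3 * suppression   # FOE: grows as p^3
def P_RDM(p):  return H**2 / (8 * np.pi) * suppression            # RDM: independent of p
def P_NES(p):  return (H / (2 * np.pi))**2 * p**2 * suppression   # NES: grows as p^2

for p in [0.01, 0.05, 0.1]:
    print(f"p = {p:5.2f}   FOE = {P_FOE(p):.3e}   RDM = {P_RDM(p):.3e}   NES = {P_NES(p):.3e}")
```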
Summary
To summarize, in this work, we have addressed the following issues:
• We have explicitly studied the power spectrum of the mean squared vacuum fluctuation for the axion field using the concept of quantum entanglement in de Sitter space. The effective action for the axion field used here has its origin in Type IIB string theory compactified to four dimensions. For our analysis, we have chosen two initial vacuum states, i.e. Bunch Davies and a generalised class of α vacua. The power spectrum of the mean squared vacuum fluctuation is computed using three distinct formalisms: (1) field operator expansion (FOE), (2) reduced density matrix (RDM) and (3) non entangled state (NES). In all three cases the computation starts from two open charts in the hyperbolic manifold of de Sitter space consisting of two regions, L and R. Though the starting point is the same, the constructions of these three formalisms differ from each other and have their own physical significance. Each formalism has been discussed in the text of the paper and some details of the approximations used are presented in the appendix. Their similarities and differences are summarised in a table.
• In the FOE formalism we solve for the wave function in the region L and, using this solution, compute the general expression for the mean square vacuum fluctuation and its quantum correction in terms of the two point correlation function. The result is evaluated at all momentum scales. We considered two limiting approximations in the characteristic momentum scales: the large wave number (small wave length, in which the corresponding scale is smaller than the curvature radius of the de Sitter hyperbolic open chart) regime and the small wave number (long wave length, in which the corresponding scale is larger than the curvature radius of the de Sitter hyperbolic open chart) regime. We have observed distinctive features in the power spectrum of the mean squared vacuum fluctuation in these two regimes. In the large wave number (small wave length) regime we found that the leading order result for the power spectrum is consistent with the known result for the observed cosmological correlation function on super horizon time scales. The correction to the leading order result that we computed for the power spectrum can be interpreted as a sub-leading effect in the observed cosmological power spectrum. This is significant from the perspective of cosmological observation, since such effects, possibly due to quantum entanglement of states, can play an important role in breaking the degeneracy of the observed cosmological power spectrum in the small wave length regime. On the other hand, in the long wave length regime we found that the power spectrum follows a completely different momentum dependence on super horizon time scales. Since in this regime and on this time scale we at present lack adequate observational data on the power spectrum, we are unable to confront our result with observation. But our result for the power spectrum in the long wave length limit and on super horizon time scales can be used as a theoretical probe to study the physical implications and observational cosmological consequences in the near future. Our result also implies that the mean square vacuum fluctuation for the axion field, on super horizon time scales, gets enhanced in the long wave length regime and freezes in the small wave length regime. We also observe that for a massive axion the power spectrum is nearly scale invariant at all momentum scales. On the other hand, for a massless axion we observe exact scale invariance only in the large wave number (small wave length) regime and for the Bunch Davies initial quantum state. For the generalised α initial state we find a slight modification of the corresponding power spectrum of the mean square vacuum fluctuation. The modification factor is proportional to exp(−2α), which is valid for all values of the parameter α. It also implies that for large values of the parameter α we get an additional exponential suppression of the power spectrum. This information can be used to distinguish between the role of the Bunch Davies vacuum (α = 0) and any α vacua quantum initial state in the analysis of observational data.
• In the RDM formalism, the wave function for the axion field is solved in the L and R regions of the de Sitter open chart. This solution has been used to compute the mean square vacuum fluctuation and its quantum correction for both the Bunch Davies and α vacuum states.
Corresponding results are evaluated at all momentum scales by partially tracing out all the information from the region R. As in the case of FOE, we considered the small and large wavelength approximations in the characteristic momentum scales and found distinct features in the corresponding power spectrum. In the small wave length regime the leading order result on super horizon time scales again matches the known result (the same as FOE). However, the sub-leading order result for the power spectrum is different from the result obtained from the FOE formalism, which distinguishes the two approaches. Moreover, in the long wave length regime the power spectrum has a completely different momentum dependence compared to the FOE formalism. We also notice that the enhancement of the mean square vacuum fluctuation for the axion field in the long wave length regime is different (slower) in nature compared to the FOE formalism, but the freezing in the short wavelength regime is of the same nature. The observation about scale invariance of the power spectrum in this formalism remains similar to that in the FOE formalism.
• In the last formalism, i.e. NES, the wave function of the axion field is solved in the region L of the de Sitter hyperbolic open chart. With the help of this solution we computed the mean square vacuum fluctuation using the Bunch Davies and α vacuum state configurations. The corresponding result is evaluated at all momentum scales. Like the previous two cases, here also we resorted to two limiting approximations, i.e. the large wave number (small wave length) regime and the small wave number (long wave length) regime. We again observed distinctive behaviour of the power spectrum in these two regimes. In the large wave number (small wave length) regime, the leading order result for the power spectrum matches the known result for the observed cosmological correlation function, just as in the cases of the FOE and RDM formalisms. However, the sub-leading order result is completely different from the FOE as well as the RDM formalism. Thus, it is the sub-leading terms which distinguish these formalisms from each other, and they can be confronted with future observational data. On the other hand, in the small wave number (long wave length) regime, even the leading order result for the power spectrum differs in momentum dependence from the result obtained from the FOE and RDM formalisms. Also, the nature of the enhancement of the mean square vacuum fluctuation in the NES formalism is found to be different from that in the FOE and RDM formalisms, but the nature of the freezing and the observation about scale invariance of the power spectrum remain the same in all three cases.
• For completeness, we discuss the actual reason for the results obtained for the power spectra from the quantum entangled state appearing in the FOE formalism and from the mixed state which is used to construct the RDM formalism. To do so, we consider two subsystems, L and R, using which one can construct the quantum mechanical state vector of the axion field |Ψ_axion⟩. In our computation these subsystems are defined in the regions L and R respectively of the de Sitter hyperbolic open chart. Now, using this state vector of the axion field, we can define the density matrix as:
ρ_axion = |Ψ_axion⟩⟨Ψ_axion|, (4.1)

in both the subsystems L and R for the FOE and RDM formalisms, and only in the system L for the NES formalism. Using this density matrix we can express the expectation value (for the total system) of a quantum mechanical operator O_axion, applicable for the FOE and RDM formalisms, as:

Tr[ρ_axion O_axion] = Σ_L Σ_R ⟨L, R|Ψ_axion⟩⟨Ψ_axion| O_axion |L, R⟩ ≡ ⟨Ψ_axion| O_axion |Ψ_axion⟩ ≡ ⟨O_axion⟩. (4.2)
This is an important observation as it is related to the measurement and quantification of any physical cosmological observable in the quantum regime. But in the case of NES formalism one can rewrite Eq (4.2) as :
Tr[ρ_axion O_axion] = Σ_L Σ_R ⟨L, R|Ψ_axion⟩⟨Ψ_axion| O_axion |L, R⟩
= Σ_L Σ_R Σ_{L'} Σ_{R'} ⟨L, R|Ψ_axion⟩⟨Ψ_axion|L', R'⟩⟨L', R'| O^L_axion |L, R⟩
= Σ_L Σ_R Σ_{L'} Σ_{R'} ⟨L, R|Ψ_axion⟩⟨Ψ_axion|L', R'⟩⟨L'| O^L_axion |L⟩ δ_{RR'}
= Σ_L Σ_R Σ_{L'} ⟨L, R|Ψ_axion⟩⟨Ψ_axion|L', R⟩⟨L'| O^L_axion |L⟩
= Tr[ρ^L_axion O^L_axion], (4.3)
where the operator O^L_axion, defined solely in the region L, is given for the NES formalism by the following expression:

⟨L', R'| O^L_axion |L, R⟩ = ⟨L'| O^L_axion |L⟩ ⟨R'|R⟩ = ⟨L'| O^L_axion |L⟩ δ_{RR'}. (4.4)

Also, in the NES formalism the density matrix ρ^L_axion for the region L is described by the following expression:

ρ^L_axion = Tr_R[ρ_axion] = Σ_L Σ_{L'} |L⟩ Σ_R ⟨L, R|Ψ_axion⟩⟨Ψ_axion|L', R⟩ ⟨L'| = Σ_L Σ_{L'} |L⟩ Σ_R Ψ_axion(L, R) Ψ*_axion(L', R) ⟨L'|. (4.5)

This implies that in the NES formalism the physical operator is solely described by the information from the region L, and consequently the expectation value of such an operator satisfies the following condition:

⟨O_axion⟩ = Tr[ρ_axion O_axion] = Tr[ρ^L_axion O^L_axion] = ⟨O^L_axion⟩. (4.6)
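The chain of identities in Eqs (4.2)-(4.6) is easy to verify numerically for a finite-dimensional toy system. The following minimal Python sketch (ours, with arbitrary dimensions and a random pure state chosen only for the demonstration) checks that Tr[ρ (O_L ⊗ I_R)] = Tr[ρ_L O_L], where ρ_L is obtained by partially tracing over R.

```python
import numpy as np

# Minimal numerical illustration (ours) of Eqs (4.2)-(4.6): for an operator acting only on
# the L factor of a bipartite system, Tr[rho (O_L x I_R)] = Tr[rho_L O_L], with rho_L the
# reduced density matrix from a partial trace over R.
dim_L, dim_R = 3, 4
rng = np.random.default_rng(0)

psi = rng.normal(size=dim_L * dim_R) + 1j * rng.normal(size=dim_L * dim_R)
psi /= np.linalg.norm(psi)                      # pure state |Psi> on the full L x R space
rho = np.outer(psi, psi.conj())                 # rho = |Psi><Psi|

O_L = rng.normal(size=(dim_L, dim_L))
O_L = O_L + O_L.T                               # Hermitian operator acting on L only
O_full = np.kron(O_L, np.eye(dim_R))            # O_L x I_R on the full Hilbert space

# partial trace over R: reshape to (L, R, L', R') and trace over the R indices
rho_L = np.trace(rho.reshape(dim_L, dim_R, dim_L, dim_R), axis1=1, axis2=3)

lhs = np.trace(rho @ O_full).real
rhs = np.trace(rho_L @ O_L).real
print(lhs, rhs)
assert np.isclose(lhs, rhs)
```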
The above analysis helps us to explain the differences between the power spectra of the mean square vacuum fluctuation obtained from the FOE, RDM and NES formalisms on large scales (i.e. in the small wave number or long wave length regime). It clearly points towards the fact that in the FOE and RDM formalisms the creation and annihilation operators for the axion field include a new set of creation and annihilation operators coming from the Bogoliubov transformation from one quantum basis to the other. This means that the field operator in the FOE formalism also involves these extra creation and annihilation operators, even if the computation is performed on a particular temporal slice defined in the region L of the Hilbert space. On the other hand, after applying the partial trace over the degrees of freedom from the region R, the mixed quantum state, using which we formulate the RDM formalism, is prepared by the creation and annihilation operators in the region L of the Hilbert space. Thus, in the RDM formalism the field operator is only defined in the region L and not in the region R of the Hilbert space. This implies that the field operator defined before partially tracing over the degrees of freedom from region R in the FOE formalism is different from the field operator in region L used in the RDM formalism, since in the latter case we have performed the partial trace over the degrees of freedom in region R. Thus, any general quantum mechanical operator defined in the framework of FOE is not the same as that of the RDM formalism.
Before we conclude, we point out that, apart from the quantification of the mean square vacuum fluctuation in the formalisms discussed here, we have also computed the entanglement entropy using the von Neumann measure and the Renyi entropy in our previous works [15,16].
A Quantum correction to the power spectrum in FOE formalism

At the superhorizon time scales (t_L >> 1) of region L one can write the amplitude of the FOE power spectrum as: where the time independent function M(p, ν) is defined as:
M(p, ν) = [2^{2ν−1} (Γ(ν))²/π] × Σ_{σ=±1} { (|A^σ_L|² + |B^σ_L|²)/|Γ(ν + ip + 1/2)|² + A^σ_L B^{σ*}_L/Γ(ν − ip + 1/2)² + A^{σ*}_L B^σ_L/Γ(ν + ip + 1/2)²
+ Σ_{n=0}^∞ [ (A^σ_L A^{σ*}_{L,(n)} + B^σ_L B^{σ*}_{L,(n)})/(Γ(ν − ip + 1/2) Γ(ν + ip_n + 1/2)) + (A^σ_L B^{σ*}_{L,(n)} + A^σ_{L,(n)} B^{σ*}_L)/(Γ(ν − ip + 1/2) Γ(ν − ip_n + 1/2)) + (A^{σ*}_{L,(n)} B^σ_L + A^{σ*}_L B^σ_{L,(n)})/(Γ(ν + ip_n + 1/2) Γ(ν + ip + 1/2)) ]
+ Σ_{n=0}^∞ Σ_{m=0}^∞ [ (A^σ_{L,(n)} A^{σ*}_{L,(m)} + B^σ_{L,(n)} B^{σ*}_{L,(m)})/(Γ(ν − ip_n + 1/2) Γ(ν + ip_m + 1/2)) + A^σ_{L,(n)} B^{σ*}_{L,(m)}/(Γ(ν − ip_n + 1/2) Γ(ν − ip_m + 1/2)) + A^{σ*}_{L,(n)} B^σ_{L,(m)}/(Γ(ν + ip_n + 1/2) Γ(ν + ip_m + 1/2)) ] }. (A.2)
A.1 For large wave number
Further to know the exact wave number dependence of the amplitude of the normalized power spectrum from Bunch Davies vacuum we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). After taking this limit it is expected that the power spectrum of axion match with the result obtained for spatially flat universe. In general for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the short wavelength limit (p, p n >> 1), which are explicitly appearing in the expression for the amplitude of the normalized power spectrum from Bunch Davies vacuum:
Σ_{σ=±1} |A^σ_L|²/|Γ(ν + ip + 1/2)|²  ≈  π e^{−πp}/(2p⁴ |Γ(ip)|²)   for p >> 1, (A.3)
Σ_{σ=±1} |B^σ_L|²/|Γ(ν + ip + 1/2)|²  ≈  π e^{−5πp}/(2p⁴ |Γ(ip)|²)   for p >> 1, (A.4)
Σ_{σ=±1} A^σ_L B^{σ*}_L/Γ(ν − ip + 1/2)²  ≈  π e^{−3πp}/(2p⁴ |Γ(ip)|²)   for p >> 1, (A.5)
Σ_{σ=±1} A^{σ*}_L B^σ_L/Γ(ν + ip + 1/2)²  ≈  π e^{−3πp}/(2p⁴ |Γ(ip)|²)   for p >> 1, (A.6)
Σ_{σ=±1} A^σ_L A^{σ*}_{L,(n)}/(Γ(ν − ip + 1/2) Γ(ν + ip_n + 1/2))  ≈  π e^{−π(p+p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|)   for p, p_n >> 1, (A.7)
Σ_{σ=±1} B^σ_L B^{σ*}_{L,(n)}/(Γ(ν − ip + 1/2) Γ(ν + ip_n + 1/2))  ≈  π e^{−5π(p+p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|), (A.8)
Σ_{σ=±1} A^σ_L B^{σ*}_{L,(n)}/(Γ(ν − ip + 1/2) Γ(ν − ip_n + 1/2))  ≈  π e^{−π(p+5p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|), (A.9)
Σ_{σ=±1} A^σ_{L,(n)} B^{σ*}_L/(Γ(ν − ip + 1/2) Γ(ν − ip_n + 1/2))  ≈  π e^{−π(5p+p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|), (A.10)
Σ_{σ=±1} A^{σ*}_{L,(n)} B^σ_L/(Γ(ν + ip + 1/2) Γ(ν + ip_n + 1/2))  ≈  π e^{−π(5p+p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|), (A.11)
Σ_{σ=±1} A^{σ*}_L B^σ_{L,(n)}/(Γ(ν + ip + 1/2) Γ(ν + ip_n + 1/2))  ≈  π e^{−π(p+5p_n)/2}/(2p² p_n² |Γ(ip)||Γ(ip_n)|), (A.12)
Σ_{σ=±1} A^σ_{L,(n)} A^{σ*}_{L,(m)}/(Γ(ν − ip_n + 1/2) Γ(ν + ip_m + 1/2))  ≈  π e^{−π(p_n+p_m)/2}/(2p_n² p_m² |Γ(ip_m)||Γ(ip_n)|)   for p_n, p_m >> 1, (A.13)
Σ_{σ=±1} B^σ_{L,(n)} B^{σ*}_{L,(m)}/(Γ(ν − ip_n + 1/2) Γ(ν + ip_m + 1/2))  ≈  π e^{−5π(p_n+p_m)/2}/(2p_n² p_m² |Γ(ip_m)||Γ(ip_n)|), (A.14)
Σ_{σ=±1} A^σ_{L,(n)} B^{σ*}_{L,(m)}/(Γ(ν − ip_n + 1/2) Γ(ν − ip_m + 1/2))  ≈  π e^{−3π(p_n+p_m)/2}/(2p_n² p_m² |Γ(ip_m)||Γ(ip_n)|), (A.15)
Σ_{σ=±1} A^{σ*}_{L,(n)} B^σ_{L,(m)}/(Γ(ν + ip_n + 1/2) Γ(ν + ip_m + 1/2))  ≈  π e^{−3π(p_n+p_m)/2}/(2p_n² p_m² |Γ(ip_m)||Γ(ip_n)|). (A.16)
Further, we apply Stirling's formula to approximate Gamma functions for large wavenumbers p, p n >> 1 to simplify the expression for the power spectrum:
Γ(ip) ∼ √(2π) (ip)^{ip − 1/2} e^{−ip} [ 1 + 1/(12ip) − 1/(288p²) + · · · ], (A.17)
Γ(ip_n) ∼ √(2π) (ip_n)^{ip_n − 1/2} e^{−ip_n} [ 1 + 1/(12ip_n) − 1/(288p_n²) + · · · ]. (A.18)
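A quick numerical sanity check (ours) of the truncated Stirling expansion (A.17) can be made by comparing it against the exact modulus |Γ(ip)|² = π/(p sinh πp); the sketch below does this for a few large values of p.

```python
import numpy as np

# Minimal check (ours) of the Stirling approximation (A.17) for purely imaginary argument:
# exact |Gamma(ip)|^2 = pi/(p*sinh(pi*p)) versus the squared modulus of the truncated series,
# |Gamma(ip)|^2 ~ (2*pi/p)*exp(-pi*p)*|1 + 1/(12*i*p) - 1/(288*p^2)|^2.
for p in [2.0, 5.0, 10.0]:
    exact    = np.pi / (p * np.sinh(np.pi * p))
    stirling = (2.0 * np.pi / p) * np.exp(-np.pi * p) \
               * abs(1.0 + 1.0/(12.0j * p) - 1.0/(288.0 * p**2))**2
    print(f"p = {p:5.1f}   exact = {exact:.6e}   Stirling = {stirling:.6e}   ratio = {stirling/exact:.8f}")
```

For large p the two expressions agree to high accuracy, which is what justifies using the truncated series in the simplifications that follow.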
Consequently, we get the following simplified expressions in large wavenumber (p, p n >> 1) limit:
σ=±1 |A σ L | 2 Γ ν + ip + 1 2 2 ∼ 1 2p 3 1 + 1 82944p 4 , (A.19) σ=±1 |B σ L | 2 Γ ν + ip + 1 2 2 ∼ e −4πp 2p 3 1 + 1 82944p 4 , (A.20) σ=±1 A σ L B σ * L Γ ν − ip + 1 2 2 ∼ e −2πp 2p 3 1 + 1 82944p 4 , (A.21) σ=±1 A σ * L B σ L Γ ν + ip + 1 2 2 ∼ e −2πp 2p 3 1 + 1 82944p 4 (A.22) σ=±1 A σ L A σ * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 ∼ 1 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.23) σ=±1 B σ L B σ * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 ∼ e −2π(p+pn) 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.24) σ=±1 A σ L B σ * L,(n) Γ ν − ip + 1 2 Γ ν − ip n + 1 2 ∼ e −2πpn 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.25) σ=±1 A σ L,(n) B σ * L Γ ν − ip + 1 2 Γ ν − ip n + 1 2 ∼ e −2πp 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.26) σ=±1 A σ * L,(n) B σ L Γ ν + ip + 1 2 Γ ν + ip n + 1 2 ∼ e −2πp 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.27) σ=±1 A σ * L B σ L,(n) Γ ν + ip + 1 2 Γ ν + ip n + 1 2 ∼ e −2πpn 2p 3/2 p 3/2 n 1 + 1 82944p 4 1 + 1 82944p 4 n (A.28) σ=±1 A σ L,(n) A σ * L,(m) Γ ν − ip n + 1 2 Γ ν + ip m + 1 2 ∼ 1 2p 3/2 m p 3/2 n 1 + 1 82944p 4 m 1 + 1 82944p 4 n (A.29) σ=±1 B σ L,(n) B σ * L,(m) Γ ν − ip n + 1 2 Γ ν + ip m + 1 2 ∼ e −2π(pn+pm) 2p 3/2 m p 3/2 n 1 + 1 82944p 4 m 1 + 1 82944p 4 n (A.30) σ=±1 A σ L,(n) B σ * L,(m) Γ ν − ip n + 1 2 Γ ν − ip m + 1 2 ∼ e −π(pn+pm) 2p 3/2 m p 3/2 n 1 + 1 82944p 4 m 1 + 1 82944p 4 n (A.31) σ=±1 A σ * L,(n) B σ L,(m) Γ ν + ip n + 1 2 Γ ν + ip m + 1 2 ∼ e −π(pn+pm) 2p 3/2 m p 3/2 n 1 + 1 82944p 4 m 1 + 1 82944p 4 n (A.32)
As a result, in the short wave length approximation the time independent function M(p >> 1, ν) for any arbitrary mass parameter ν can be expressed as:
M(p >> 1, ν) = [2^{2(ν−1)} (Γ(ν))²/(p³ π)] G(p >> 1), (A.33)
where we define a new function G(p >> 1) in the short wave length limit as given by:
G(p) = 1 1 + 1 82944p 4 × 1 + e −2πp 2 + ∞ n=0 p p n 3 2 1 + 1 82944p 4 1 + 1 82944p 4 n 1 + 2 e −2πp + e −2πpn + e −2π(p+pn) + ∞ n=0 ∞ m=0 p 3 (p n p m ) 3/2 1 + 1 82944p 4 1 + 1 82944p 4 n 1 + 1 82944p 4 m 1 + e −π(pm+pn) 2 , (A.34)
A.2 For small wave number
Similarly to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from Bunch Davies vacuum in the long wavelength limit we need to know the behaviour of the power spectrum for p, p n << 1. In this limit it is expected that the power spectrum of axion should match with the result obtained for spatially flat universe. In general for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the long wavelength limit (p, p n << 1), which are explicitly appearing in the expression for the amplitude of the normalised power spectrum from Bunch Davies vacuum: As a result, the time independent function M(p << 1, ν) for any arbitrary mass parameter ν can be expressed as:
σ=±1 |A σ L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ π 4 Γ ν + 1 2 2 , (A.35) σ=±1 |B σ L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ π 4 Γ ν + 1 2 2 , (A.36) σ=±1 A σ L B σ * L Γ ν − ip + 1 2 2 p<<1 ≈ π 4 Γ ν + 1 2 2 , (A.37) σ=±1 A σ * L B σ L Γ ν + ip + 1 2 2 p<<1 ≈ π 4 Γ ν + 1 2 2 (A.38) σ=±1 A σ L A σ * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 p,pn<<1 ≈ πe −π(p+pn) 2 Γ ν + 1 2 2 (A.39) σ=±1 B σ L B σ * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 p,pn<<1 ≈ πe −π(p+pn) 2 Γ ν + 1 2 2 (A.40) σ=±1 A σ L B σ * L,(n) Γ ν − ip + 1 2 Γ ν − ip n + 1 2 p,pn<<1 ≈ πe −π(p+pn) 2 Γ ν + 1 2 2 (A.41) σ=±1 A σ L,(n) B σ * L Γ ν − ip + 1 2 Γ ν − ip n + 1 2 p,pn<<1
M(p << 1, ν) = 2 2(ν−1) (Γ(ν)) 2 π G(p << 1), (A.49)
where we define a new function G(p << 1) in the long wave length limit as given by:
G(p << 1) = π |Γ ν + 1 2 | 2 1 + |Γ ν + 1 2 | 2 Γ ν + 1
B Quantum correction to the power spectrum in RDM formalism
At the super horizon time scales (t L >> 1) of region L one can write the amplitude of the RDM power spectrum as:
|ψ^L_T|² = ψ^{L†}_T ψ^L_T  --(t_L >> 1)-->  Q(p, α, ν) (cosh t_L)^{2ν−1}, (B.1)
where the time independent function Q(p, α, ν) for generalised α vacua is defined as:
Q(p, α, ν) = 2 2ν−1 (Γ(ν)) 2 π × |E L | 2 + |F L | 2 |Γ ν + ip + 1 2 | 2 + E L F * L Γ ν − ip + 1 2 2 + E * L F L Γ ν + ip + 1 2 2 + ∞ n=0 E L E * L,(n) + F L F * L,(n) Γ ν − ip + 1 2 Γ ν + ip n + 1 2 + E L F * L,(n) + E L,(n) F * L Γ ν − ip + 1 2 Γ ν − ip n + 1 2 + E * L,(n) F L + E * L F L,(n) Γ ν + ip + 1 2 Γ ν + ip n + 1 2 + ∞ n=0 ∞ m=0 E L,(n) E * L,(m) + F L,(n) F * L,(m) Γ ν − ip n + 1 2 Γ ν + ip m + 1 2 + E L,(n) F * L,(m) Γ ν − ip n + 1 2 Γ ν − ip m + 1 2 + E * L,(n) F L,(m) Γ ν + ip n + 1 2 Γ ν + ip m + 1 2 . (B.2)
B.1 For large wave number
Further to know the exact wave number dependence of the amplitude of the normalised power spectrum from generalised α vacua we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). After taking this limit it is expected that the power spectrum of axion should match with the result obtained for spatially flat universe.
In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the short wavelength limit (p, p_n >> 1), which appear explicitly in the expression for the amplitude of the normalised power spectrum from the generalised α vacua. Consequently, we get the following simplified expressions in the large wavenumber (p, p_n >> 1) limit for the generalised α vacua. As a result, in the short wave length approximation the time independent function Q(p >> 1, α, ν) for any arbitrary mass parameter ν can be expressed for the generalised α vacua as:

Q(p >> 1, α, ν) = [2^{2(ν−1)} (Γ(ν))²/(p³ π)] G(p >> 1) = M(p, ν)   ∀α, (B.33)
|E L | 2 Γ ν + ip + 1 2 2 p>>1 ≈ πe −πp 2p 4 |Γ (ip)| 2 , (B.3) |F L | 2 Γ ν + ip + 1 2 2 p>>1 ≈ πe −5πp 2p 4 |Γ (ip)| 2 , (B.4) E L F * L Γ ν − ip + 1 2 2 p>>1 ≈ πe −3πp 2p 4 |Γ (ip)| 2 , (B.5) E * L F L Γ ν + ip + 1 2 2 p>>1 ≈ πe −3πp|E L | 2 Γ ν + ip + 1 2 2 ∼ 1 2p 3 1 + 1 82944p 4 , (B.19) |F L | 2 Γ ν + ip + 1 2 2 ∼ e −4πp
where we have already defined the function G(p >> 1) in the earlier section of the Appendix.
B.2 For small wave number
Similarly, to know the exact wave number dependence of the amplitude of the normalised power spectrum from the generalised α vacua in the long wave length approximation, we need to know the behaviour of the power spectrum at p, p_n << 1. After taking this limit it is expected that the power spectrum of the axion should match the result obtained for a spatially flat universe. In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the long wave length approximation, which appear explicitly in the expression for the amplitude of the normalised power spectrum from the generalised α vacua. On the other hand, if we set ν = q/2 (including the massless case ν = 3/2) in the previous expressions obtained for general ν, then due to the presence of the overall factor 1/|cos πν| the final expression for the power spectrum in the small wave number limit diverges. This is obvious from the obtained expressions, but one can avoid such unwanted divergent contributions very easily. To serve this purpose let us rewrite all the expressions for p, p_n << 1 with ν = q/2 that we have mentioned earlier:
|E L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ πp 2| cos πν| Γ ν + 1 2 2 |1 − γ (α) pmLR | 2 |1 − γ (α) pmLR | 2 − |m RR | 2 (B.34) |F L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ πp 2| cos πν| Γ ν + 1 2 2 |m RR | 2 |1 − γ (α) pmLR | 2 − |m RR | 2 (B.35) E L F * L Γ ν − ip + 1 2 2 p<<1 × 1 + |m RR | 2 + 1 − γ (α) pmLR * m RR + 1 − γ (α) pmLR m * RR |1 − γ (α) pmLR | 2 + ∞ n=0 p n p |1 − γ (α) pmLR | 2 − |m RR | 2 |1 − Γ (α) p,nmLR,n | 2 − |m RR,n | 2 1 |1 − γ (α) pmLR | 2 1 − γ (α) pmLR 1 − Γ (α) p,|E L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ π 2 (B.50) |F L | 2 Γ ν + ip + 1 2 2 p<<1 ≈ 0 (B.51) E L F * L Γ ν − ip + 1 2 2 p<<1 ≈ 0 (B.52) E * L F L Γ ν + ip
Figure 1. Schematic diagram for the computation algorithm of the long range effect of the cosmological correlation function from quantum entanglement of the axion in the de Sitter open hyperbolic chart.

Figure 4. Schematic diagram for the computation algorithm of solving the wave function of our universe in the de Sitter hyperbolic open chart for the stringy axion.

Figure 5. Schematic diagram for the computation algorithm of the field operator expansion method for the entangled state of the axion in the de Sitter hyperbolic open chart.

Figure 6. Features of the FOE power spectrum in the large wave number region (panels: large wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p >> 1).

Figure 7. Features of the FOE power spectrum in the small wave number region (panels: small wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p << 1).

Figure 8. Schematic diagram for the computation algorithm of the reduced density matrix formalism for the mixed quantum state of the axion in the de Sitter hyperbolic open chart.

… , r; p, l, m⟩ is the Bunch Davies counterpart of the quantum state in the newly Bogoliubov transformed basis and is obtained by simply setting α = 0 in the definition of the quantum state introduced in terms of the new oscillators.

F_{L,(n)} = −V/N_{c,(n)}. (3.150)

Figure 9. Features of the RDM power spectrum in the large wave number region (panels: large wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p >> 1).

Figure 10. Features of the RDM power spectrum in the small wave number region (panels: small wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p << 1).

Figure 11. Schematic diagram for the computation algorithm of the NES formalism for the non entangled quantum state of the axion in the de Sitter hyperbolic open chart.

Figure 12. Features of the NES power spectrum in the large wave number region (panels: large wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p >> 1).

Figure 13. Features of the NES power spectrum in the small wave number region (panels: small wave number dependence for α = 0 and α = 0.1; mass parameter dependence for p << 1).
Table 1. Comparison between the FOE, RDM and NES formalisms for α vacua.

Wave function: FOE — we solve the wave function in the L region of dS space; RDM — we solve the wave function in the L and R regions of dS space; NES — we only solve the wave function in the L region of dS space.

Quantum state: FOE — we deal with an entangled quantum state; RDM — we deal with a mixed quantum state; NES — we deal with a non-entangled quantum state.

Quantum number dependence: in all three formalisms the power spectrum depends only on the SO(1,3) quantum number p and is independent of l and m.

Time scale for computation: in all three formalisms the analysis is performed on the superhorizon time scale.

Power spectrum at large wave number: in all three formalisms the leading order term is (H/2π)² exp(−2α); the next order effects differ between FOE, RDM and NES for the massless axion (ν = 3/2).

Power spectrum at small wave number: the leading order term is (H/2π)² p³ exp(−2α) for FOE, H²/(8π) exp(−2α) for RDM and (H/2π)² p² exp(−2α) for NES; the next order effects again differ between the three formalisms for the massless axion (ν = 3/2).
Here the wave number p mimics the role of SO(3,1) principal quantum number in the de Sitter hyperbolic open chart which is continuous and lying within the range 0 < p < ∞. The other SO(3, 1) quantum numbers m (azimuthal) and l (orbital) play no significant role in the final result as the expression for the power spectrum for mean square vacuum fluctuation only depends on the quantum number p.
Here q is any positive odd integer.
Here it is important to note that the expression for the time independent function G(p << 1) for ν = q/2 (where q is any positive odd integer) is the same in all cases. The only difference appears in the expression for the power spectrum. For the ν = 3/2 case the power spectrum is exactly scale invariant. But for the other values ν = 1/2, 5/2, 7/2, · · · the power spectrum is not scale invariant, and a small deviation from scale invariance can easily be observed.
Acknowledgments

SC would like to thank the Quantum Gravity and Unified Theory and Theoretical Cosmology Group, Max Planck Institute for Gravitational Physics, Albert Einstein Institute, for providing the Post-Doctoral Research Fellowship. SC takes this opportunity to sincerely thank Jean-Luc Lehners, Shiraz Minwalla and Varun Sahni for their constant support and inspiration. SC also thanks the organisers of the Advanced String School 2017, ST4 2017, the Kavli Asian Winter School on Strings, Particles and Cosmology 2018, the Summer School on Cosmology 2018, ICTP, Trieste, and the 15th Marcel Grossman Meeting, Rome, for providing local hospitality during the work. SC also thanks ICTP, Trieste, La Sapienza University, Rome, DTP, TIFR, Mumbai, ICTS, TIFR, Bengaluru, IOP, CMI, SINP and IACS for academic visits during the work. SP acknowledges the J. C. Bose National Fellowship for support of his research. Last but not the least, we would like to acknowledge our debt to the people of India for their generous and steady support for research in the natural sciences, especially in theoretical high energy physics, string theory and cosmology.
In general for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the short wavelength limit (p, p n >> 1), which are explicitly appearing in the expression for the amplitude of the normalised power spectrum from generalised α vacua:As a result, the time independent function K(p, α, ν) in the short wave length limit for any arbitrary mass parameter ν can be expressed for generalised α vacua as:where the function U(p >> 1) is defined as:Here for very large wave number p, p n >> 1 one can write, U(p) ∼ 1 + · · · , where all · · · are small correction terms. This also implies to the nice fact that for large wave number limit for any values of the parameter α the time independent function U(p, α, ν) computed for generalised α vacua is exactly matches with the result obtained for Bunch Davies vacua in the earlier section at the leading order in M(p, ν).Also for the massless case (ν = 3/2) the time independent function K(p, α, ν = 3/2) in the short wave length limit can be further simplified as:C.2 For small wave numberSimilarly to see the behaviour of the power spectrum in the long wavelength region in the super horizon time scale (t L >> 1) we take the limit p << 1 and further expand the expression for the power spectrum in p. In general for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the long wavelength limit (p, p n << 1), which are explicitly appearing in the expression for the amplitude of the normalised power spectrum from generalised α vacua:As a result, in the long wave length limit the time independent function K(p, α, ν) for any arbitrary mass parameter ν can be expressed for generalised α vacua as:K(p << 1, α, ν) = 2 2(ν−1) (Γ(ν)) 2 pπ U(p << 1) ∀α,(C.12)where the function U(p << 1) is defined in the long wave length limit as:(C.13)Also for the massless case (ν = 3/2) the time independent function K(p, α, ν = 3/2) can be further simplified as:K(p << 1, α, ν = 3/2) = U(p << 1) 2p ∀α.(C.14)
Quantum entanglement. R Horodecki, P Horodecki, M Horodecki, K Horodecki, quant-ph/0702225Rev. Mod. Phys. 81865R. Horodecki, P. Horodecki, M. Horodecki and K. Horodecki , Quantum entanglement, Rev. Mod. Phys. 81 (2009) 865 [quant-ph/0702225].
Cosmological quantum entanglement. E Martin-Martinez, N C Menicucci, arXiv:1204.4918Class. Quant. Grav. 29224003gr-qcE. Martin-Martinez and N. C. Menicucci, Cosmological quantum entanglement, Class. Quant. Grav. 29 (2012) 224003 [arXiv:1204.4918 [gr-qc]].
Entanglement of Quantum Fluctuations in the Inflationary Universe. Y Nambu, arXiv:0805.1471Phys. Rev. D. 7844023gr-qcY. Nambu, Entanglement of Quantum Fluctuations in the Inflationary Universe, Phys. Rev. D 78 (2008) 044023 [arXiv:0805.1471 [gr-qc]].
On the Einstein-Podolsky-Rosen paradox. J S Bell, Physics. 1195J. S. Bell, On the Einstein-Podolsky-Rosen paradox, Physics 1 (1964) 195.
Gravitational Effects on and of Vacuum Decay. S R Coleman, F De Luccia, Phys. Rev. D. 213305S. R. Coleman and F. De Luccia, "Gravitational Effects on and of Vacuum Decay," Phys. Rev. D 21 (1980) 3305.
Observer dependence of bubble nucleation and Schwinger pair production. J Garriga, S Kanno, M Sasaki, J Soda, A Vilenkin, arXiv:1208.1335JCAP. 12126hep-thJ. Garriga, S. Kanno, M. Sasaki, J. Soda and A. Vilenkin, "Observer dependence of bubble nucleation and Schwinger pair production," JCAP 1212 (2012) 006 [arXiv:1208.1335 [hep-th]].
Rest frame of bubble nucleation. J Garriga, S Kanno, T Tanaka, arXiv:1304.6681JCAP. 130634hep-thJ. Garriga, S. Kanno and T. Tanaka, "Rest frame of bubble nucleation," JCAP 1306 (2013) 034 [arXiv:1304.6681 [hep-th]].
Schwinger effect in de Sitter space. M B Frb, J Garriga, S Kanno, M Sasaki, J Soda, T Tanaka, A Vilenkin, arXiv:1401.4137JCAP. 14049hep-thM. B. Frb, J. Garriga, S. Kanno, M. Sasaki, J. Soda, T. Tanaka and A. Vilenkin, "Schwinger effect in de Sitter space," JCAP 1404 (2014) 009 [arXiv:1401.4137 [hep-th]];
Holographic Schwinger effect in de Sitter space. W Fischler, P H Nguyen, J F Pedraza, W Tangarife, arXiv:1411.1787Phys. Rev. D. 91886015hep-thW. Fischler, P. H. Nguyen, J. F. Pedraza and W. Tangarife, Holographic Schwinger effect in de Sitter space, Phys. Rev. D 91 (2015) no.8, 086015 [arXiv:1411.1787 [hep-th]].
Quantum entanglement in condensed matter systems. N Laflorencie, arXiv:1512.03388Phys. Rept. 6461cond-mat.str-elN. Laflorencie, Quantum entanglement in condensed matter systems, Phys. Rept. 646 (2016) 1 [arXiv:1512.03388 [cond-mat.str-el]].
Holographic derivation of entanglement entropy from AdS/CFT. S Ryu, T Takayanagi, hep-th/0603001Phys. Rev. Lett. 96181602S. Ryu and T. Takayanagi, "Holographic derivation of entanglement entropy from AdS/CFT," Phys. Rev. Lett. 96 (2006) 181602 [hep-th/0603001].
Entanglement Entropy from a Holographic Viewpoint. T Takayanagi, arXiv:1204.2450Class. Quant. Grav. 29153001gr-qcT. Takayanagi, "Entanglement Entropy from a Holographic Viewpoint," Class. Quant. Grav. 29 (2012) 153001 [arXiv:1204.2450 [gr-qc]];
Holographic derivation of entanglement entropy from AdS/CFT. S Ryu, T Takayanagi, ; S Ryu, T Takayanagi ; T. Nishioka, S Ryu, T Takayanagi ; M. Rangamani, T E Takayanagi ; V, M Hubeny, T Rangamani, Takayanagi, arXiv:0905.0932arXiv:0705.0016Lect. Notes Phys. 96060862Holographic Entanglement EntropyJHEP. hep-thS. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, Phys. Rev. Lett. 96 (2006) 181602 [hep-th/0603001]. ; S. Ryu and T. Takayanagi, Aspects of Holographic Entanglement Entropy, JHEP 0608 (2006) 045 [hep-th/0605073]. ; T. Nishioka, S. Ryu and T. Takayanagi, Holographic Entanglement Entropy: An Overview, J. Phys. A 42 (2009) 504008 [arXiv:0905.0932 [hep-th]]. ; M. Rangamani and T. Takayanagi, Holographic Entanglement Entropy, Lect. Notes Phys. 931 (2017) [arXiv:1609.01287 [hep-th]]. ; V. E. Hubeny, M. Rangamani and T. Takayanagi, A Covariant holographic entanglement entropy proposal, JHEP 0707 (2007) 062 [arXiv:0705.0016 [hep-th]].
Entanglement entropy in de Sitter space. J Maldacena, G L Pimentel, arXiv:1210.7244JHEP. 130238hep-thJ. Maldacena and G. L. Pimentel, "Entanglement entropy in de Sitter space," JHEP 1302 (2013) 038 [arXiv:1210.7244 [hep-th]].
Entanglement entropy of α-vacua in de Sitter space. S Kanno, J Murugan, J P Shock, J Soda, arXiv:1404.6815JHEP. 140772hep-thS. Kanno, J. Murugan, J. P. Shock and J. Soda, "Entanglement entropy of α-vacua in de Sitter space," JHEP 1407 (2014) 072 [arXiv:1404.6815 [hep-th]].
Vacuum States in de Sitter Space. B Allen, Phys. Rev. D. 323136B. Allen, Vacuum States in de Sitter Space, Phys. Rev. D 32 (1985) 3136;
A Note on alpha vacua and interacting field theory in de Sitter space. K Goldstein, D A Lowe, hep-th/0302050Nucl. Phys. B. 669325K. Goldstein and D. A. Lowe, A Note on alpha vacua and interacting field theory in de Sitter space, Nucl. Phys. B 669 (2003) 325 [hep-th/0302050];
Alpha-states in de Sitter space. J Boer, V Jejjala, D Minic, hep-th/0406217Phys. Rev. D. 7144013J. de Boer, V. Jejjala and D. Minic, Alpha-states in de Sitter space, Phys. Rev. D 71 (2005) 044013 [hep-th/0406217];
A Remark on alpha vacua for quantum field theories on de Sitter space. R Brunetti, K Fredenhagen, S Hollands, hep-th/0503022JHEP. 050563R. Brunetti, K. Fredenhagen and S. Hollands, A Remark on alpha vacua for quantum field theories on de Sitter space, JHEP 0505 (2005) 063 [hep-th/0503022].
Entangled de Sitter from stringy axionic Bell pair I: an analysis using BunchDavies vacuum. S Choudhury, S Panda, arXiv:1708.02265Eur. Phys. J. C. 78152hep-thS. Choudhury and S. Panda, "Entangled de Sitter from stringy axionic Bell pair I: an analysis using BunchDavies vacuum," Eur. Phys. J. C 78 (2018) no.1, 52 [arXiv:1708.02265 [hep-th]].
Quantum entanglement in de Sitter space from Stringy Axion: An analysis using α vacua. S Choudhury, S Panda, arXiv:1712.08299hep-thS. Choudhury and S. Panda, "Quantum entanglement in de Sitter space from Stringy Axion: An analysis using α vacua," arXiv:1712.08299 [hep-th].
A model with cosmological Bell inequalities. J Maldacena, arXiv:1508.01082Fortsch. Phys. 6410hep-thJ. Maldacena, A model with cosmological Bell inequalities, Fortsch. Phys. 64 (2016) 10 [arXiv:1508.01082 [hep-th]].
Bell violation in the Sky. S Choudhury, S Panda, R Singh, arXiv:1607.00237Eur. Phys. J. C. 77260hep-thS. Choudhury, S. Panda and R. Singh, Bell violation in the Sky, Eur. Phys. J. C 77 (2017) no.2, 60 [arXiv:1607.00237 [hep-th]].
Bell violation in primordial cosmology. S Choudhury, S Panda, R Singh, arXiv:1612.09445Universe. 3113hep-thS. Choudhury, S. Panda and R. Singh, Bell violation in primordial cosmology, Universe 3 (2017) no.1, 13 [arXiv:1612.09445 [hep-th]].
S Kanno, J Soda, arXiv:1705.06199Infinite violation of Bell inequalities in inflation. hep-thS. Kanno and J. Soda, Infinite violation of Bell inequalities in inflation, arXiv:1705.06199 [hep-th].
Axions as Quintessence in String Theory. S Panda, Y Sumitomo, S P Trivedi, arXiv:1011.5877Phys. Rev. D. 8383506hep-thS. Panda, Y. Sumitomo and S. P. Trivedi, "Axions as Quintessence in String Theory," Phys. Rev. D 83 (2011) 083506 [arXiv:1011.5877 [hep-th]];
Gravity Waves and Linear Inflation from Axion Monodromy. L Mcallister, E Silverstein, A Westphal, arXiv:0808.0706Phys. Rev. D. 8246003hep-thL. McAllister, E. Silverstein and A. Westphal, Gravity Waves and Linear Inflation from Axion Monodromy, Phys. Rev. D 82 (2010) 046003 [arXiv:0808.0706 [hep-th]] ;
Monodromy in the CMB: Gravity Waves and String Inflation. E Silverstein, A Westphal, arXiv:0803.3085Phys. Rev. D. 78106003hep-thE. Silverstein and A. Westphal, Monodromy in the CMB: Gravity Waves and String Inflation, Phys. Rev. D 78 (2008) 106003 [arXiv:0803.3085 [hep-th]];
The Powers of Monodromy. L Mcallister, E Silverstein, A Westphal, T Wrase, arXiv:1405.3652JHEP. 1409123hep-thL. McAllister, E. Silverstein, A. Westphal and T. Wrase, The Powers of Monodromy, JHEP 1409 (2014) 123 [arXiv:1405.3652 [hep-th]].
Impact of quantum entanglement on spectrum of cosmological fluctuations. S Kanno, arXiv:1405.7793JCAP. 140729hep-thS. Kanno, Impact of quantum entanglement on spectrum of cosmological fluctuations, JCAP 1407 (2014) 029 [arXiv:1405.7793 [hep-th]].
COSMOS-e -GTachyon from string theory. S Choudhury, S Panda, arXiv:1511.05734Eur. Phys. J. C. 765278hep-thS. Choudhury and S. Panda, COSMOS-e -GTachyon from string theory, Eur. Phys. J. C 76 (2016) no.5, 278 [arXiv:1511.05734 [hep-th]] ;
COSMOS-e -soft Higgsotic attractors. S Choudhury, arXiv:1703.01750Eur. Phys. J. C. 777469hep-thS. Choudhury, COSMOS-e -soft Higgsotic attractors, Eur. Phys. J. C 77 (2017) no.7, 469 [arXiv:1703.01750 [hep-th]] ;
Primordial non-Gaussian features from DBI Galileon inflation. S Choudhury, S , arXiv:1210.4478Eur. Phys. J. C. 756241hep-thS. Choudhury and S. Pal, Primordial non-Gaussian features from DBI Galileon inflation, Eur. Phys. J. C 75 (2015) no.6, 241 [arXiv:1210.4478 [hep-th]] ;
DBI Galileon inflation in background SUGRA. S Choudhury, S , arXiv:1208.4433Nucl. Phys. B. 87485hep-thS. Choudhury and S. Pal, DBI Galileon inflation in background SUGRA, Nucl. Phys. B 874 (2013) 85 [arXiv:1208.4433 [hep-th]] ;
Fourth level MSSM inflation from new flat directions. S Choudhury, S , arXiv:1111.3441JCAP. 120418hep-phS. Choudhury and S. Pal, Fourth level MSSM inflation from new flat directions, JCAP 1204 (2012) 018 [arXiv:1111.3441 [hep-ph]] ;
Brane inflation in background supergravity. S Choudhury, S , arXiv:1102.4206Phys. Rev. D. 8543529hep-thS. Choudhury and S. Pal, Brane inflation in background supergravity, Phys. Rev. D 85 (2012) 043529 [arXiv:1102.4206 [hep-th]] ;
Can Effective Field Theory of inflation generate large tensor-to-scalar ratio within RandallSundrum single braneworld?. S Choudhury, arXiv:1406.7618Nucl. Phys. B. 89429hep-thS. Choudhury, Can Effective Field Theory of inflation generate large tensor-to-scalar ratio within RandallSundrum single braneworld?, Nucl. Phys. B 894 (2015) 29 [arXiv:1406.7618 [hep-th]] ;
Quantum Gravity Effect in Torsion Driven Inflation and CP violation. S Choudhury, B K Pal, B Basu, P Bandyopadhyay, arXiv:1409.6036JHEP. 1510194hep-thS. Choudhury, B. K. Pal, B. Basu and P. Bandyopadhyay, Quantum Gravity Effect in Torsion Driven Inflation and CP violation, JHEP 1510 (2015) 194 [arXiv:1409.6036 [hep-th]] ;
Reconstructing inflationary paradigm within Effective Field Theory framework. S Choudhury, arXiv:1508.00269Phys. Dark Univ. 1116astro-ph.COS. Choudhury, Reconstructing inflationary paradigm within Effective Field Theory framework, Phys. Dark Univ. 11 (2016) 16 [arXiv:1508.00269 [astro-ph.CO]] ;
An accurate bound on tensor-to-scalar ratio and the scale of inflation. S Choudhury, A Mazumdar, arXiv:1306.4496Nucl. Phys. B. 882386hep-phS. Choudhury and A. Mazumdar, An accurate bound on tensor-to-scalar ratio and the scale of inflation, Nucl. Phys. B 882 (2014) 386 [arXiv:1306.4496 [hep-ph]] ;
Primordial blackholes and gravitational waves for an inflection-point model of inflation. S Choudhury, A Mazumdar, arXiv:1307.5119Phys. Lett. B. 733270astro-ph.COS. Choudhury and A. Mazumdar, Primordial blackholes and gravitational waves for an inflection-point model of inflation, Phys. Lett. B 733 (2014) 270 [arXiv:1307.5119 [astro-ph.CO]] ;
S Choudhury, A Mazumdar, arXiv:1403.5549Reconstructing inflationary potential from BICEP2 and running of tensor modes. hep-thS. Choudhury and A. Mazumdar, Reconstructing inflationary potential from BICEP2 and running of tensor modes, arXiv:1403.5549 [hep-th] ;
Constraining N = 1 supergravity inflationary framework with non-minimal Khler operators. S Choudhury, A Mazumdar, E Pukartas, arXiv:1402.1227JHEP. 140477hep-thS. Choudhury, A. Mazumdar and E. Pukartas, Constraining N = 1 supergravity inflationary framework with non-minimal Khler operators, JHEP 1404 (2014) 077 [arXiv:1402.1227 [hep-th]] ;
Constraining N = 1 supergravity inflation with non-minimal Kaehler operators using δN formalism. S Choudhury, arXiv:1402.1251JHEP. 1404105hep-thS. Choudhury, Constraining N = 1 supergravity inflation with non-minimal Kaehler operators using δN formalism, JHEP 1404 (2014) 105 [arXiv:1402.1251 [hep-th]] ;
Low & High scale MSSM inflation, gravitational waves and constraints from Planck. S Choudhury, A Mazumdar, S , arXiv:1305.6398JCAP. 130741hep-phS. Choudhury, A. Mazumdar and S. Pal, Low & High scale MSSM inflation, gravitational waves and constraints from Planck, JCAP 1307 (2013) 041 [arXiv:1305.6398 [hep-ph]].
Notes on axion, inflation and graceful exit in stringy cosmology. J Maharana, S Mukherji, S Panda, hep-th/9701115Mod. Phys. Lett. A. 12447J. Maharana, S. Mukherji and S. Panda, Notes on axion, inflation and graceful exit in stringy cosmology, Mod. Phys. Lett. A 12 (1997) 447 [hep-th/9701115] ;
Assisted inflation via tachyon condensation. A Mazumdar, S Panda, A Perez-Lorenzana, hep-ph/0107058Nucl. Phys. B. 614101A. Mazumdar, S. Panda and A. Perez-Lorenzana, Assisted inflation via tachyon condensation, Nucl. Phys. B 614 (2001) 101 [hep-ph/0107058] ;
Hybrid inflation and brane -anti-brane system. D Choudhury, D Ghoshal, D P Jatkar, S Panda, hep-th/0305104JCAP. 03079D. Choudhury, D. Ghoshal, D. P. Jatkar and S. Panda, Hybrid inflation and brane -anti-brane system, JCAP 0307 (2003) 009 [hep-th/0305104] ;
On the cosmological relevance of the tachyon. D Choudhury, D Ghoshal, D P Jatkar, S Panda, hep-th/0204204Phys. Lett. B. 544231D. Choudhury, D. Ghoshal, D. P. Jatkar and S. Panda, On the cosmological relevance of the tachyon, Phys. Lett. B 544 (2002) 231 [hep-th/0204204] ;
Non-minimally coupled tachyonic inflation in warped string background. P Chingangbam, S Panda, A Deshamukhya, hep-th/0411210JHEP. 050252P. Chingangbam, S. Panda and A. Deshamukhya, Non-minimally coupled tachyonic inflation in warped string background, JHEP 0502 (2005) 052 [hep-th/0411210] ;
Warm tachyonic inflation in warped background. A Deshamukhya, S Panda, arXiv:0901.0471Int. J. Mod. Phys. D. 182093hep-thA. Deshamukhya and S. Panda, Warm tachyonic inflation in warped background, Int. J. Mod. Phys. D 18 (2009) 2093 [arXiv:0901.0471 [hep-th]] ;
P Vargas Moniz, S Panda, J Ward, arXiv:0907.0711Higher order corrections to Heterotic M-theory inflation. 26245003astro-ph.COP. Vargas Moniz, S. Panda and J. Ward, Higher order corrections to Heterotic M-theory inflation, Class. Quant. Grav. 26 (2009) 245003 [arXiv:0907.0711 [astro-ph.CO]] ;
Inflation with improved D3-brane potential and the fine tunings associated with the model. A Ali, A Deshamukhya, S Panda, M Sami, arXiv:1010.1407Eur. Phys. J. C. 711672hep-thA. Ali, A. Deshamukhya, S. Panda and M. Sami, Inflation with improved D3-brane potential and the fine tunings associated with the model, Eur. Phys. J. C 71 (2011) 1672 [arXiv:1010.1407 [hep-th]] ;
A note on low energy effective theory of chromo-natural inflation in the light of BICEP2 results. A Bhattacharjee, A Deshamukhya, S Panda, arXiv:1406.5858Mod. Phys. Lett. A. 30111550040astro-ph.COA. Bhattacharjee, A. Deshamukhya and S. Panda, A note on low energy effective theory of chromo-natural inflation in the light of BICEP2 results, Mod. Phys. Lett. A 30 (2015) no.11, 1550040 [arXiv:1406.5858 [astro-ph.CO]] ;
Inflation and dark energy arising from geometrical tachyons. S Panda, M Sami, S Tsujikawa, hep-th/0510112Phys. Rev. D. 7323515S. Panda, M. Sami and S. Tsujikawa, Inflation and dark energy arising from geometrical tachyons, Phys. Rev. D 73 (2006) 023515 [hep-th/0510112] ;
Inflation from D3-brane motion in the background of D5-branes. S Panda, M Sami, S Tsujikawa, J Ward, hep-th/0601037Phys. Rev. D. 7383512S. Panda, M. Sami, S. Tsujikawa and J. Ward, Inflation from D3-brane motion in the background of D5-branes, Phys. Rev. D 73 (2006) 083512 [hep-th/0601037] ;
Prospects of inflation in delicate D-brane cosmology. S Panda, M Sami, S Tsujikawa, arXiv:0707.2848Phys. Rev. D. 76103512hep-thS. Panda, M. Sami and S. Tsujikawa, Prospects of inflation in delicate D-brane cosmology, Phys. Rev. D 76 (2007) 103512 [arXiv:0707.2848 [hep-th]].
D Baumann, arXiv:0907.5424TASI lectures on Inflation. hep-thD. Baumann, TASI lectures on Inflation 2009, arXiv:0907.5424 [hep-th];
Towards an Explicit Model of D-brane Inflation. D Baumann, A Dymarsky, I R Klebanov, L Mcallister, arXiv:0706.0360JCAP. 080124hep-thD. Baumann, A. Dymarsky, I. R. Klebanov and L. McAllister, Towards an Explicit Model of D-brane Inflation, JCAP 0801 (2008) 024 [arXiv:0706.0360 [hep-th]];
Advances in Inflation in String Theory. D Baumann, L Mcallister, arXiv:0901.0265Ann. Rev. Nucl. Part. Sci. 5967hep-thD. Baumann and L. McAllister, Advances in Inflation in String Theory, Ann. Rev. Nucl. Part. Sci. 59 (2009) 67 [arXiv:0901.0265 [hep-th]];
Symmetries and Loops in Inflation. V Assassi, D Baumann, D Green, arXiv:1210.7792JHEP. 1302151hep-thV. Assassi, D. Baumann and D. Green, Symmetries and Loops in Inflation, JHEP 1302 (2013) 151 [arXiv:1210.7792 [hep-th]];
D Baumann, L Mcallister, arXiv:1404.2601Inflation and String Theory. hep-thD. Baumann and L. McAllister, Inflation and String Theory, arXiv:1404.2601 [hep-th];
Holographic Systematics of D-brane Inflation. D Baumann, A Dymarsky, S Kachru, I R Klebanov, L Mcallister, arXiv:0808.2811JHEP. 090393hep-thD. Baumann, A. Dymarsky, S. Kachru, I. R. Klebanov and L. McAllister, Holographic Systematics of D-brane Inflation, JHEP 0903 (2009) 093 [arXiv:0808.2811 [hep-th]];
Phenomenology of D-Brane Inflation with General Speed of Sound. H V Peiris, D Baumann, B Friedman, A Cooray, arXiv:0706.1240Phys. Rev. D. 76103517astro-ph, H. V. Peiris, D. Baumann, B. Friedman and A. Cooray, Phenomenology of D-Brane Inflation with General Speed of Sound, Phys. Rev. D 76 (2007) 103517 [arXiv:0706.1240 [astro-ph]].
P Svrcek, E Witten, hep-th/0605206Axions In String Theory. 51P. Svrcek and E. Witten, Axions In String Theory, JHEP 0606 (2006) 051 [hep-th/0605206].
| []
|
[
"RATIONAL CURVES ON SMOOTH CUBIC HYPERSURFACES OVER FINITE FIELDS",
"RATIONAL CURVES ON SMOOTH CUBIC HYPERSURFACES OVER FINITE FIELDS"
]
| [
"T D Browning ",
"P Vishe "
]
| []
| []
| Let k be a finite field with characteristic exceeding 3. We prove that the space of rational curves of fixed degree on any smooth cubic hypersurface over k with dimension at least 11 is irreducible and of the expected dimension. | null | [
"https://arxiv.org/pdf/1502.05028v2.pdf"
]
| 5,805,793 | 1502.05028 | 9baef0de6547844d84470c8dfcd324c2c4675c9b |
RATIONAL CURVES ON SMOOTH CUBIC HYPERSURFACES OVER FINITE FIELDS
17 Feb 2015
T D Browning
P Vishe
RATIONAL CURVES ON SMOOTH CUBIC HYPERSURFACES OVER FINITE FIELDS
17 Feb 2015
Let k be a finite field with characteristic exceeding 3. We prove that the space of rational curves of fixed degree on any smooth cubic hypersurface over k with dimension at least 11 is irreducible and of the expected dimension.
Introduction
The geometry of a variety is intimately linked to the geometry of the space of rational curves on it. Given a field k and a projective variety X defined over k, a natural object to study is the moduli space of k-rational curves on X. There are many results in the literature establishing the irreducibility of such mapping spaces, but most such statements are only proved for generic X, there being relatively few results which are valid for all X in a family. The aim of this paper is to prove such a result for all smooth cubic hypersurfaces of large enough dimension which are defined over a finite field of characteristic exceeding 3.
Suppose that k = C and X ⊂ P^{n−1}_C is a smooth cubic hypersurface with n ≥ 6. Let Mor_d(P^1_C, X) be the Kontsevich moduli space of rational curves of degree d on X. Then it has been shown by Coskun and Starr [2] that Mor_d(P^1_C, X) is irreducible and of the expected dimension d(n − 3) + n − 5. We would like to prove a similar result when k = F_q is a finite field with q elements and X ⊂ P^{n−1}_{F_q} is a smooth cubic hypersurface defined over it. Rather than working with Mor_d(P^1_{F_q}, X), which corresponds to "unparametrized" maps, we will study the moduli space Mor_d(P^1_{F_q}, X) of actual maps (see §2 for its construction). The expected dimension of Mor_d(P^1_{F_q}, X) is

D(d, n) = d(n − 3) + n − 2,   (1.1)

since P^1_{F_q} has automorphism group of dimension 3. For a smooth cubic hypersurface X ⊂ P^{n−1}_{F_q}, the Lang-Tsen theorem (see [3, Thm. 3.6]) ensures that X(F_q(t)) ≠ ∅ as soon as n ≥ 10, in which case X contains a rational curve defined over F_q. One can go further if one enlarges the size of the finite field. Let n ≥ 4. Then, according to Kollár [6, Example 7.6], there exists a constant c_n depending only on n such that for any q > c_n and any point x ∈ X(F_q), the cubic hypersurface X contains a rational curve (of degree at most 216) which is defined over F_q and passes through x.
Following a suggestion of Ellenberg and Venkatesh, Pugin developed an "algebraic circle method" in his 2011 Ph.D. thesis [7] to study the spaces Mor_d(P^1_{F_q}, X). Thus, when n ≥ 13 and X ⊂ P^{n−1}_{F_q} is the diagonal cubic hypersurface a_1 x_1^3 + · · · + a_n x_n^3 = 0 (for a_1, . . . , a_n ∈ F_q^*), he succeeds in showing that the associated moduli space Mor_d(P^1_{F_q}, X) is irreducible and of the expected dimension D(d, n), provided that char(F_q) ≠ 3. Our main result extends Pugin's result to non-diagonal hypersurfaces, as follows.
Theorem 1.1. Let char(F_q) > 3 and let X ⊂ P^{n−1}_{F_q} be a smooth cubic hypersurface defined over F_q, with n ≥ 13. Then for each d ≥ 1 the moduli space Mor_d(P^1_{F_q}, X) is irreducible and of dimension D(d, n).

Inspired by Pugin's approach, our proof of this result rests on an estimate for # Mor_d(P^1_{F_q}, X)(F_q), as q → ∞. The cardinality of F_q-points on Mor_d(P^1_{F_q}, X) is roughly equal to the number of F_q(t)-points on X of degree d. We shall access the latter quantity through a function field version of the Hardy-Littlewood circle method. The traditional setting for this is a fixed finite field F_q, with the goal being to understand the F_q(t)-points on X of degree d, as d → ∞. In contrast to this, Theorem 1.1 requires us to handle any fixed d ≥ 1, as q → ∞.
The key ingredients will be drawn from work of Lee [4] on an F_q(t) version of Birch's work on systems of forms in many variables, and our own recent contribution to the subject [1], which is specific to cubic forms. Perhaps the chief interest of Theorem 1.1 lies in the fact that a result in algebraic geometry can be proved using methods of analytic number theory.
Acknowledgement. While working on this paper the first author was supported by ERC grant 306457 and the second author by EPSRC programme grant EP/J018260/1.
From moduli spaces to counting
Let k be a field and let X ⊂ P^{n−1}_k be a hypersurface cut out by an equation F = 0, where F ∈ k[x_1, . . . , x_n] is a homogeneous cubic polynomial. Let G_d(k) be the set of all homogeneous polynomials in u, v of degree d ≥ 1, with coefficients in k. A rational curve on X is a non-constant morphism f : P^1_k → X. A morphism of degree d is given by f = (f_1(u, v), . . . , f_n(u, v)), with f_1, . . . , f_n ∈ G_d(k), with no non-constant common factor in k[u, v], such that F(f_1(u, v), . . . , f_n(u, v)) is identically zero. Using the coefficients of f_1, . . . , f_n we can regard f as a point in P^{n(d+1)−1}_k. The morphisms of degree d on X are parameterised by Mor_d(P^1_k, X), which is an open subvariety of P^{n(d+1)−1}_k cut out by a system of 3d + 1 equations of degree 3. This directly leads to the naive expectation that Mor_d(P^1_k, X) should have dimension

n(d + 1) − 1 − (3d + 1) = D(d, n),

in the notation of (1.1). The complement to Mor_d(P^1_k, X) in its closure is the set of (f_1, . . . , f_n) with a common zero. We can obtain explicit equations by noting that f_1, . . . , f_n have a common zero if and only if the resultant Res(Σ_i λ_i f_i, Σ_j µ_j f_j) is identically zero as a polynomial in λ_i, µ_j. This gives a system of equations of degree 2d in the coefficients of f_1, . . . , f_n.
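For completeness, the arithmetic behind the naive dimension count above is immediate:

n(d + 1) − 1 − (3d + 1) = nd + n − 3d − 2 = d(n − 3) + n − 2 = D(d, n),

in agreement with (1.1).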
Now let k = F_q with char(F_q) > 3 in the above discussion. Assuming that d ≥ 1 and n ≥ 13 we need to show that Mor_d(P^1_{F_q}, X) is irreducible and of dimension D(d, n). We note that Mor_d(P^1_{F_q}, X) is also defined over any finite extension F_{q^ℓ} of F_q. Following Pugin's approach [7], our proof of Theorem 1.1 relies on estimating # Mor_d(P^1_{F_q}, X)(F_{q^ℓ}), as ℓ → ∞. According to Kollár [5, Thm. II.1.2/3], all irreducible components of Mor_d(P^1_{F_q}, X) have dimension at least D(d, n). Hence, in view of the Lang-Weil estimate, Theorem 1.1 is a direct consequence of the following result.
Theorem 2.1. Let char(F_q) > 3 and let X ⊂ P^{n−1}_{F_q} be a smooth cubic hypersurface defined over F_q, with n ≥ 13. Then for each d ≥ 1 we have

lim_{ℓ→∞} q^{−ℓD(d,n)} # Mor_d(P^1_{F_q}, X)(F_{q^ℓ}) ≤ 1.
We henceforth redefine q^ℓ to be q. Our proof of Theorem 2.1 is based on the Hardy-Littlewood circle method over the function field F_q(t), always under the assumption that char(F_q) > 3. The main input comes from our previous work [1] and a straightforward adaptation of work due to Lee [4]. We will adhere to the notation described in [1, §2.1 and §2.2] without further comment.
Assume that F(x) = Σ_i a_i x^i, with variables x = (x_1, . . . , x_n) and coefficients a_i ∈ F_q. In particular the height H_F and discriminant Δ_F of F satisfy

H_F = max_i |a_i| = 1 and |Δ_F| = 1.

We will make frequent use of these facts in what follows. To establish Theorem 2.1 we work with the naive space

M_d = {x = (x_1, . . . , x_n) ∈ G_d(F_q)^n \ {0} : F(x) = 0},

which corresponds to the F_q-points on the affine cone of Mor_d(P^1_{F_q}, X). Let us set

E(d, n) = D(d, n) + 1 = (n − 3)(d + 1) + 2.

It will clearly suffice to show that

lim_{q→∞} q^{−E(d,n)} #M_d ≤ 1,   (2.1)

for n ≥ 13. We proceed by relating the counting function #M_d to the counting function that lies at the heart of our earlier investigation [1].
Let w : K_∞^n → {0, 1} be given by w(x) = ∏_{1 ≤ i ≤ n} w_∞(x_i), where w_∞(x) = 1 if |x| < 1, and w_∞(x) = 0 otherwise. Putting P = t^{d+1}, we then have #M_d ≤ N(P), where

N(P) = Σ_{x ∈ O^n, F(x) = 0} w(x/P).   (2.2)
It follows from [1, Eq. (4.1)] that for any Q ≥ 1 we have

N(P) = Σ_{r ∈ O, |r| ≤ Q̂, r monic} Σ*_{|a| < |r|} ∫_{|θ| < |r|^{−1} Q̂^{−1}} S(a/r + θ) dθ,   (2.3)

where Σ* means that the sum is taken over residue classes |a| < |r| for which (a, r) = 1, and where

S(α) = Σ_{x ∈ O^n} ψ(αF(x)) w(x/P),   (2.4)

for any α ∈ T. We will work with the choice Q = 3(d+1)/2, so that Q̂ = |P|^{3/2}. We henceforth set

δ = 3/(d + 1).
Let A(P) denote the contribution to N(P) in (2.3) from values of r, θ such that either |θ| < Q̂^{−4}, or else r = 1 and |θ| < |P|^{−3+δ}.

Lemma 2.2. We have lim_{q→∞} q^{−E(d,n)} A(P) = 1.

Proof. Let us put A_1(P) for the contribution from r = 1 and |θ| < |P|^{−3+δ}, and A_2(P) for the remaining contribution. Taking the trivial bound |S(α)| ≤ |P|^n, it is easy to check that lim_{q→∞} q^{−E(d,n)} A_2(P) = 0 and so our attention shifts to A_1(P). For this we invoke [1, Lemma 2.2], which gives

A_1(P) = ∫_{|θ| < |P|^{−3+δ}} S(θ) dθ = |P|^{−3+δ} #{x ∈ O^n : |x| < |P|, |F(x)| < |P|^{3−δ}}.

Note that our choice of δ implies that |P|^{3−δ} = q^{3(d+1)−3} = q^{3d} and so this result is applicable since 3d is an integer. Any x to be counted is an n-tuple of polynomials with jth component x_j = a_{0,j} t^d + · · · + a_{d,j} for coefficients a_{i,j} ∈ F_q. The condition |F(x)| < |P|^{3−δ} is therefore equivalent to the condition F(a_{0,1}, . . . , a_{0,n}) = 0. Since F is non-singular it is certainly absolutely irreducible over F_q. Thus the Lang-Weil estimate implies that the total number of available x is q^{dn+n−1}(1 + O_n(q^{−1/2})), where the implied constant depends only on n. Thus

A_1(P) = q^{−3d+dn+n−1} (1 + O_n(q^{−1/2})),

from which the statement of the lemma follows.
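Spelling out the exponent comparison behind this last step:

−3d + dn + n − 1 = d(n − 3) + n − 1 = E(d, n),

so q^{−E(d,n)} A_1(P) = 1 + O_n(q^{−1/2}), which tends to 1 as q → ∞.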
Let us put B(P) for the contribution to N(P) in (2.3) from values of r, θ with |θ| ≥ Q̂^{−4}, such that either |r| > 1, or else r = 1 and |θ| ≥ |P|^{−3+δ}. The remainder of this paper is devoted to a proof of the following result.

Lemma 2.3. We have lim_{q→∞} q^{−E(d,n)} B(P) = 0 for n ≥ 13.

Recalling that #M_d ≤ A(P) + B(P), we see that (2.1) follows from Lemmas 2.2 and 2.3. Thus it remains to prove Lemma 2.3 in order to complete the proof of Theorem 2.1.
In our analysis of B(P) it will be convenient to sort the sum according to the size of |r| and |θ|. Consequently, we let S(d) denote the set of (Y, Θ) ∈ Z^2 such that 0 ≤ Y ≤ Q and −4Q ≤ Θ < −(Y + Q), with either Y ≥ 1, or else Y = 0 and Θ̂ ≥ |P|^{−3+δ}. In particular it is clear that #S(d) ≤ 7(d + 1) = c_d, say. We then have

B(P) ≤ Σ_{(Y,Θ) ∈ S(d)} |N(P, Y, Θ)| ≤ c_d max_{(Y,Θ) ∈ S(d)} |N(P, Y, Θ)|,

where

N(P, Y, Θ) = Σ_{r ∈ O, |r| = Ŷ, r monic} Σ*_{|a| < |r|} ∫_{|θ| = Θ̂} S(a/r + θ) dθ.   (2.5)
We will use two basic methods for analysing N(P, Y, Θ). Let

S_1(d) = {(Y, Θ) ∈ S(d) : Y ≥ 1 and Θ ≤ (n/6 − 4/3)Y − 2Q}.

For (Y, Θ) belonging to this set we will apply our previous work [1], which is founded on Poisson summation. This is the object of §3. Alternatively, in §4, we will use a function field version of Weyl differencing to handle (Y, Θ) belonging to the set

S_2(d) = {(Y, Θ) ∈ S(d) : if Y ≥ 1 then Θ > (n/6 − 4/3)Y − 2Q}.

This part of the argument is essentially due to Lee [4]. It will be convenient to set

B_i(P) = max_{(Y,Θ) ∈ S_i(d)} |N(P, Y, Θ)|, for i = 1, 2,

so that B(P) ≤ c_d {B_1(P) + B_2(P)}. Assuming that n ≥ 13, it now suffices to show that lim_{q→∞} q^{−E(d,n)} B_i(P) = 0 for i = 1, 2.

Poisson summation

The counting function (2.2) is equal to the counting function N(P) considered in [1, §4] with M = 1 and b = 0. (Equivalently this is [1, Eq. (7.4)] with M = 1, b = 0, L = 0 and x_0 = 0.) Throughout this section we shall assume that the cubic form F has n ≥ 13 variables. The main part of [1] is actually concerned with non-singular cubic forms in only n ≥ 8 variables. Intrinsic to the success of this endeavour is the choice of counting function, in which F_q(t)-solutions are singled out for consideration if they are sufficiently close to a conveniently chosen solution over K_∞. The fact that we must consider all F_q(t)-solutions in (2.2) directly accounts for this loss of precision. Let J(Θ̂) = max{1, Θ̂|P|^3}. Appealing to [1, Lemma 7.2], we find that

N(P, Y, Θ) = |P|^n Σ_{r ∈ O, |r| = Ŷ, r monic} |r|^{−n} ∫_{|θ| = Θ̂} Σ_{c ∈ O^n, |c| ≤ Ĉ} S_r(c) I_r(θ; c) dθ,

where Ĉ = Ŷ |P|^{−1} J(Θ̂) and

S_r(c) = Σ*_{|a| < |r|} Σ_{y ∈ O^n, |y| < |r|} ψ((aF(y) − c·y)/r),

I_r(θ; c) = ∫_{K_∞^n} w(x) ψ(θP^3 F(x) + P c·x / r) dx.
It will be convenient to put γ = θP 3 in I r (θ; c). The definition of w implies that the integral is over T n , whence an application of [1,Lemma 2.7] shows that
|I r (θ; c)| meas{x ∈ T n : |γ∇F (x) + r −1 P c| max{1, |γ| 1/2 } = J(Θ) 1/2 .
The exponential sum S r (c) is a multiplicative function of r by [1,Lemma 4.5]. We will adopt the notation conceived in [1,Definition 4.6], so that associated to any r ∈ O and i ∈ Z >0 are the elements
b i = ̟ i r ̟ i and r i = ̟ e r e i ̟ e .
Applying [1, Lemma 5.1], we therefore find that there exists a constant A n > 0 depending only on n such that
c∈O n |c| C |S r (c)I r (θ; c)| A ω(b 1 b 2 ) n |b 1 b 2 | n/2+1 T n c∈C (x) |S r 3 (c)|dx, where C (x) = c ∈ O n : |c + rθP 2 ∇F (x)| |P | −1 Y J(Θ) 1/2 .
It now follows from [1, Lemma 6.4] that for any ε > 0 there is a constant c n,ε > 0, depending only on n and ε, such that
c∈C (x)
|S r 3 (c)| c n,ε |r 3 | n/2+1+ε |r 3 | n/3 + Y n J(Θ) n/2 |P | n .
According to [1, Lemma 2.2] we have
|θ|= Θ dθ = Θ + 1 − Θ Θ + 1.
Hence, on integrating trivially over x and then over θ, we deduce the existence of a constant c n,ε > 0 such that
|P | n |r| n |θ|= Θ c∈O n |c| C |S r (c)I r (θ; c)|dθ c n,ε Y n/2+1+ε Θ + 1 |r 3 | n/3 |P | n Y n + J(Θ) n/2 .
It remains to sum this over all monic r ∈ O such that |r| = Y , of which there are precisely Y . For this we note that r∈O |r|= Y r monic
|r 3 | n/3 Y n/3 r=b 1 b 2 r 3 ∈O |r|= Y r monic 1 |b 1 b 2 | n/3 c n Y n/3+1/3 ,
for an appropriate constant c n > 0 such that there are at most c n Y 1/3 values of |r 3 | Y . Recalling that Y Q and Θ < −(Y + Q), we easily deduce that
J(Θ) n/2 max 1, |P | 3 Y Q n/2 = Q n/2 Y n/2 .
Hence there is a constant c n,ε > 0 such that
|N(P, Y, Θ)| c n,ε Y n/2+1+ε Θ + 1 Y n/3+1/3 |P | n Y n + Q n/2 Y n/2−1 , whence in fact |N(P, Y, Θ)| c n,ε Θ + 1 |P | n Y n/6−4/3−ε + Y 2 Q n/2+ε .
Taking Θ + 1 Y −1 Q −1 we see that the second term is at most c n,ε Y Q n/2−1+ε c n,ε Q n/2+ε c n,ε |P | 3n/4+2ε .
But we also have Θ + 1 q Y n/6−4/3 / Q 2 for any (Y, Θ) ∈ S 1 (d), whence B 1 (P ) c n,ε q|P | n−3+2ε + |P | 3n/4+2ε .
Assuming that ε > 0 is taken to be sufficiently small in term of d, it easily follows that lim q→∞ q −E(d,n) B 1 (P ) = 0 for n 13.
Weyl differencing
The goal of this section is to show that lim_{q→∞} q^{−E(d,n)} B_2(P) = 0 for n ≥ 13. Our starting point is an analysis of the exponential sum (2.4), for which we will use the function field version of Birch's Weyl differencing that was worked out by Lee [4]. Our task is to make the dependence on q completely explicit, but the argument is very standard and so we shall be brief where possible. Since we are only concerned with cubic forms one needs to take R = 1 and d = 3 in Lee's work [4, §3]. As usual we will assume that char(F_q) > 3.

Define the Hessian matrix

H(x) = ( ∂^2 F / ∂x_i ∂x_j )_{1 ≤ i,j ≤ n}

that is associated to our cubic form F. For any β = Σ_{−∞ < i ≤ N} b_i t^i ∈ K_∞, we let ∥β∥ = |Σ_{−∞ < i < 0} b_i t^i|. Beginning with an application of [4, Cor. 3.3], it follows that

|S(α)|^4 ≤ |P|^{2n} #{u, v ∈ O^n : |u|, |v| < |P|, ∥αH(u)v∥ < |P|^{−1}},

for any α ∈ T. We are only interested in values of α with rational approximation α = a/r + θ, where |r| = Ŷ and |θ| = Θ̂ for (Y, Θ) ∈ S_2(d). We recall here, for the sake of convenience, that this means

1 ≤ Ŷ ≤ Q̂ and Θ̂ < 1/(ŶQ̂),

with either Ŷ ≥ q and Θ̂ > Ŷ^{n/6−4/3}/Q̂^2, or else Ŷ = 1 and Θ̂ ≥ |P|^{−3+δ}. In either case we therefore have Θ̂ > Ŷ^{n/6−4/3}/Q̂^2. We note that S_2(d) is non-empty only when Ŷ < |P|^{9/(n−2)}, which we now assume.
The next stage in the analysis of S(α) is a double application of the function field analogue of Davenport's "shrinking lemma", as proved in [4,Lemma 3.4]. Let Γ = (γ ij ) be a symmetric n × n matrix with entries in K ∞ . For 1 i n we introduce the linear forms
L i (u 1 , . . . , u n ) = n j=1 γ ij u j .
(4.1)
Next, for given real numbers a, Z, we let N(a, Z) denote the number of vectors (u 1 , . . . , u 2n ) ∈ O 2n such that |u j | < a Z and |L j (u 1 , . . . , u n ) + u j+n | < Z a for 1 j n.
In due course we will adapt the argument of [4,Lemma 3.4] to show that for any a, Z 1 ,
Z 2 ∈ R with Z 1 Z 2 0, we have N(a, Z 1 ) N(a, Z 2 ) K n ,(4.2)
where K = ⌈Z 1 − {a}⌉ − ⌈Z 2 + {a}⌉ and {a} denotes the fractional part of a.
Taking this on faith for the moment, let Z be such that
Z = Y Θ|P |.
Our assumptions on Y, Θ easily imply that Z 1 and Z ∈ 1 2 Z. We may therefore apply the shrinking lemma first with ( a, Z 1 , Z 2 ) = (|P |, Z, 1). This allows us to take K Z 1 in (4.2). Next we apply the lemma a second time with ( a, Z 1 , Z 2 ) = ( Z −1/2 |P |, Z 3/2 , Z 1/2 ). We may write Z/2 = N + k/4 for some integer N and k ∈ {0, 1, 2, 3}. Thus
⌈Z 1 − {a}⌉ − ⌈Z 2 + {a}⌉ = (3N + k) − N = 2N + k Z 1 − Z 2 .
This therefore implies
|S(α)| 4 |P | 2n Z 2n # u, v ∈ O n : |u|, |v| < Z|P |, αH(u)v < Z 2 |P | −1 .
The next step is an application of the function field analogue of Heath-Brown's Diophantine approximation lemma, as worked out in [4,Lemma 3.6]. Noting that |H(u)v| |u||v|, we shall apply this with M = ( Z|P |) 2 and Y 0 = Z −2 |P |. (In order to avoid a clash of notation we let Y 0 denote the parameter Y that features in [4,Lemma 3.6].) This result allows us to conclude that H(u)v = 0 provided that Y 0 > |r| and M −1 > |rθ| Y −1 0 . Since |r| = Y and |θ| = Θ for (Y, Θ) ∈ S 2 (d) it is easy to check that our choice of Z ensures that all of these inequalities are satisfied. Hence
|S(α)| 4 |P | 2n Z 2n # u, v ∈ O n : |u|, |v| < Z|P |, H(u)v = 0 .
The proof of [1, Lemma 6.5] directly yields the existence of a constant c n > 0 such that the remaining cardinality is bounded by c n ( Z|P |) n . In conclusion we have shown that
|S(α)| ≤ c_n |P|^n / (ŶΘ̂|P|^3)^{n/8}.

Turning now to the estimation of N(P, Y, Θ), it follows from (2.5) that

B_2(P) ≤ c_n max_{(Y,Θ) ∈ S_2(d)} Ŷ^2 qΘ̂ |P|^n / (ŶΘ̂|P|^3)^{n/8} = c_n q max_{(Y,Θ) ∈ S_2(d)} Ŷ^{2−n/8} Θ̂^{1−n/8} |P|^{5n/8}.

Note that the exponent of Θ̂ is negative for n ≥ 13. Let (Y, Θ) ∈ S_2(d). Taking Θ̂ > Ŷ^{n/6−4/3}/Q̂^2, we get

Ŷ^{2−n/8} Θ̂^{1−n/8} |P|^{5n/8} < Ŷ^{2−n/8} |P|^{n−3} / Ŷ^{(n/8−1)(n/6−4/3)} ≤ |P|^{n−3},

since Ŷ ≥ 1 and (2 − n/8) − (n/8 − 1)(n/6 − 4/3) ≤ 0 for n ≥ 13. Hence lim_{q→∞} q^{−E(d,n)} B_2(P) = 0 for n ≥ 13.
Our final task is to show that (4.2) holds with K = ⌈Z_1 − {a}⌉ − ⌈Z_2 + {a}⌉. The argument is based on the geometry of numbers. Every matrix corresponds to an O-lattice spanned by its columns. We will abuse notation and identify a matrix with its corresponding lattice. Given a lattice M, the adjoint lattice Λ is defined to satisfy Λ^T M = I. Let Γ = (γ_ij) be a symmetric n × n matrix with entries in K_∞. Given any integer m, we define the special lattice

M_m = ( t^{−m} I_n    0
        t^{m} Γ       t^{m} I_n ),

with corresponding adjoint lattice

Λ_m = ( t^{m} I_n    −t^{m} Γ
        0            t^{−m} I_n ).
Let R 1 , ..., R 2n denote the successive minima of the lattice corresponding to M m and note that the lattices M m and Λ m can be identified with one another. It follows from [4, Lemma B.6] that R ν + R 2n−ν+1 = 0 for each 1 ν 2n. Let L i (u 1 , . . . , u n ) be the linear forms (4.1) for 1 i n. Then for any real number Z, it is easy to see that N(m, Z) = {x ∈ M m : |x| < Z}, in the notation of (4.2). We denote the right hand side by M m (Z) and proceed to establish the following inequality. Proof. Let 1 µ, ν 2n be such that R µ < Z 1 R µ+1 and R ν < Z 2 R ν+1 . Since R j is a non-decreasing sequence which satisfies R j + R 2n−j+1 = 0, we must have 0 R n+1 , whence in fact µ ν n. It follows from [4,Lemma B.5] that
M m (Z 1 ) M m (Z 2 ) = 1 if Z 1 , Z 2 < R 1 , ν j=1 R j / Z 1 ( Z 1 / Z 2 ) ν if Z 1 < R 1 Z 2 , ν j=µ+1 R j / Z 1 ( Z 1 / Z 2 ) ν if R 1 Z 1 Z 2 ,
The statement of the lemma is now obvious.
Now let a ∈ R and put m = ⌊a⌋. For any real number Z it is clear that
Date: February 18, 2015. 2010 Mathematics Subject Classification. 14H10 (11P55, 14G05).
Lemma 4 . 1 .
41Let m, Z 1 , Z 2 ∈ Z such that Z 1 Z 2 0. Then we have M m (Z 1 ) M m (Z 2 )
M
m (Z − {a}) N(a, Z) M m (Z + {a}).
Lemma 4.1 therefore yields (4.2) with K = ⌈Z 1 −{a}⌉−⌈Z 2 +{a}⌉, as required.
Cubic hypersurfaces over F q (t). T D Browning, P Vishe, arXiv:1502.00772SubmittedT.D. Browning and P. Vishe, Cubic hypersurfaces over F q (t). Submitted, 2015. (arXiv:1502.00772)
Rational curves on smooth cubic hypersurfaces. I Coskun, J Starr, Int. Math. Res. Not. 24I. Coskun and J. Starr, Rational curves on smooth cubic hypersurfaces. Int. Math. Res. Not. 24 (2009), 4626-4641.
M Greenberg, Lectures on forms in many variables. Benjamin. New YorkM. Greenberg, Lectures on forms in many variables. Benjamin, New York, 1969.
Birch's theorem in function fields. S A Lee, arXiv:1109.4953SubmittedS.A. Lee, Birch's theorem in function fields. Submitted, 2011. (arXiv:1109.4953)
Rational curves on algebraic varieties. J Kollár, Springer-VerlagJ. Kollár, Rational curves on algebraic varieties. Springer-Verlag, 1996.
Looking for rational curves on cubic hypersurfaces. J Kollár, NATO Sci. Peace Secur. Ser. D Inf. Commun. Secur. 16Higher-dimensional geometry over finite fieldsJ. Kollár, Looking for rational curves on cubic hypersurfaces. Higher-dimensional ge- ometry over finite fields, 92-122, NATO Sci. Peace Secur. Ser. D Inf. Commun. Secur. 16, IOS, Amsterdam, 2008.
An algebraic circle method. T Pugin, Columbia UniversityPhD thesisT. Pugin, An algebraic circle method. PhD thesis, Columbia University, 2011.
| []
|
[
"On Fair Virtual Conference Scheduling: Achieving Equitable Participant and Speaker Satisfaction",
"On Fair Virtual Conference Scheduling: Achieving Equitable Participant and Speaker Satisfaction"
]
| [
"K Gourab ",
"Patro ",
"Abhijnan Chakraborty ",
"Niloy Ganguly ",
"Krishna P Gummadi ",
"\nMax-Planck Institute for Software Systems\nIndian Institute of Technology Kharagpur\nIndia, Germany\n",
"\nMax-Planck Institute for Software Systems\nIndian Institute of Technology Kharagpur\nIndia, Germany\n"
]
| [
"Max-Planck Institute for Software Systems\nIndian Institute of Technology Kharagpur\nIndia, Germany",
"Max-Planck Institute for Software Systems\nIndian Institute of Technology Kharagpur\nIndia, Germany"
]
| []
| The (COVID-19) pandemic-induced restrictions on travel and social gatherings have prompted most conference organizers to move their events online. However, in contrast to physical conferences, virtual conferences face a challenge in efficiently scheduling talks, accounting for the availability of participants from different timezones as well as their interests in attending different talks. In such settings, a natural objective for the conference organizers would be to maximize some global welfare measure, such as the total expected audience participation across all talks. However, we show that optimizing for global welfare could result in a schedule that is unfair to the stakeholders, i.e., the individual utilities for participants and speakers can be highly unequal. To address the fairness concerns, we formally define fairness notions for participants and speakers, and subsequently derive suitable fairness objectives for them. We show that the welfare and fairness objectives can be in conflict with each other, and there is a need to maintain a balance between these objective while caring for them simultaneously. Thus, we propose a joint optimization framework that allows conference organizers to design talk schedules that balance (i.e., allow tradeoffs) between global welfare, participant fairness and the speaker fairness objectives. We show that the optimization problem can be solved using integer linear programming, and empirically evaluate the necessity and benefits of such joint optimization approach in virtual conference scheduling. | null | [
"https://arxiv.org/pdf/2010.14624v1.pdf"
]
| 225,094,655 | 2010.14624 | 326fc1e2e4c0fc5434470e52ed2313664b442781 |
On Fair Virtual Conference Scheduling: Achieving Equitable Participant and Speaker Satisfaction
K Gourab
Patro
Abhijnan Chakraborty
Niloy Ganguly
Krishna P Gummadi
Max-Planck Institute for Software Systems
Indian Institute of Technology Kharagpur
India, Germany
Max-Planck Institute for Software Systems
Indian Institute of Technology Kharagpur
India, Germany
On Fair Virtual Conference Scheduling: Achieving Equitable Participant and Speaker Satisfaction
The (COVID-19) pandemic-induced restrictions on travel and social gatherings have prompted most conference organizers to move their events online. However, in contrast to physical conferences, virtual conferences face a challenge in efficiently scheduling talks, accounting for the availability of participants from different timezones as well as their interests in attending different talks. In such settings, a natural objective for the conference organizers would be to maximize some global welfare measure, such as the total expected audience participation across all talks. However, we show that optimizing for global welfare could result in a schedule that is unfair to the stakeholders, i.e., the individual utilities for participants and speakers can be highly unequal. To address the fairness concerns, we formally define fairness notions for participants and speakers, and subsequently derive suitable fairness objectives for them. We show that the welfare and fairness objectives can be in conflict with each other, and there is a need to maintain a balance between these objective while caring for them simultaneously. Thus, we propose a joint optimization framework that allows conference organizers to design talk schedules that balance (i.e., allow tradeoffs) between global welfare, participant fairness and the speaker fairness objectives. We show that the optimization problem can be solved using integer linear programming, and empirically evaluate the necessity and benefits of such joint optimization approach in virtual conference scheduling.
INTRODUCTION
Due to the restrictions on travel and social gatherings to tackle the COVID-19 pandemic, most of the conferences have moved online and some of them may remain online in the years to come. Online conferences are not only hugely economical-due to the reduction in organization and travel costs, thus enabling participation from resource/budget constrained regions-they are also more environmentally sustainable than their physical in-person counterparts (by reducing the carbon footprint from long-distance air travels, huge power and non-renewable consumption at event venues, etc.). In addition, they also provide a unique opportunity to significantly improve the scale and outreach of the conferences, along with focused discussions through interactive tools like live messaging [16].
However, online conferences come with their own set of challenges. For example, scaling up participation is subject to the availability of stable and high-speed Internet in different regions; time consuming training of participants and speakers on different interactive conferencing tools is essential for efficient participation [17]. Another big challenge in organizing online conferences is optimal scheduling of the conference talks; this is because, online conferences usually have participants from different timezones all around the globe unlike the physical conference setup where participants assemble at a single place to participate in the conference. Thus, traditional timezone-specific conference schedules-usually followed in physical conferences based on the timezone of the venue-is no longer convenient for conferences being held online, as the participants from the other distant parts of the globe will find it hard to attend. This demands for conference schedules which are not timezone-specific but timezone-aware, and may stretch beyond the usual 7-8 hours of a day, to cater to the participants from different timezones. In this paper, we focus on this conference scheduling problem and relevant concerns of efficiency and fairness.
A natural objective for conference scheduling would be to maximize some social welfare measure like the total expected audience participation across all talks (formally defined in §2.3). However, optimizing for such a social welfare objective could result in a schedule that is unfair to the stakeholders (as illustrated in §4.1), i.e., the level of satisfaction enjoyed by individual participants (formally defined in §2.1) can be very different, and the expected exposure (audience size) at different talks can be disproportionately skewed -leading to disparity in speaker satisfactions (formally defined in §2.2). Intuitively, a participant would be less satisfied if her favorite talks are scheduled in timeslots that are unfavorable for her, and similarly a speaker would be less satisfied if her talk is scheduled in a timeslot which adversely affects her deserved exposure (expected audience or crowd). Thus, in conference scheduling, fairness for participants and speakers are also desired along with social welfare. We formally define the problem setup alongside suitable measures of participant satisfaction, speaker satisfaction, and social welfare in §2. Intuitively, stakeholder fairness would be in bringing parity to their normalized satisfactions (parity in individual participant satisfactions and parity in individual speaker satisfactions). However, as absolute parity is often hard to achieve in discrete real-world problems, we propose suitable relaxed fairness notions for participants and speakers in sections 3.1 and 3.2 respectively. Subsequently, we reduce these fairness notions to fairness objectives through suitable unfairness measures. Both fairness objectives along with welfare objective are important in conference scheduling. However, it may be impossible to optimize all three simultaneously as there are some fundamental tensions among these objectives (more details in §4.1); optimizing for one objective could cause losses in other objectives.
We propose a joint optimization framework ( §4.2) that allows conference organizers to design schedules that balance (i.e., allow trade-offs) between the global welfare, the participant fairness and the speaker fairness objectives. We show that the joint optimization problem can be solved using integer linear programming, and we also empirically evaluate the necessity and benefits of such joint optimization approach in virtual conference scheduling along with the analysis on the pitfalls of baseline approaches ( §5). Our focus, in this paper, is more towards bringing out the fundamental tensions, trade-offs, and difficulties involved in online conference scheduling. With this work, we begin to lay the foundation towards more focused research into-very timely and important problem-efficient and fair online conference scheduling, and hope to motivate a new line of work-both of theoretical and empirical interests-on such multi-stakeholder scheduling settings. In this regard, we also provide a detailed description on possible future works ( §7) to consider many new nuances observed in online conferences.
In summary, we make the following contributions.
• We formally define the problem of online conference scheduling ( §2), and the notions of social welfare ( §2.3), participant fairness ( §3.1) and speaker fairness ( §3.2). Through suitable unfairness measures, we reduce them to fairness objectives. To our knowledge, we are the first to do so.
• We illustrate some fundamental tensions, trade-offs, and possibilities of conflict among the two fairness objectives and the welfare objective ( §4.1).
• We propose a joint optimization problem to suitably balance these objectives, and empirically illustrate the difficulties involved in the problem and the benefits of our approach (sections 4.2 and 5). We also detail other nuances involved in online conferences and possible future works ( §7).
PRELIMINARIES
Problem Setup: In a conference, let P, T, and S represent the sets of participants, planned talks, and available slots (non-overlapping) respectively, and let p ∈ P, t ∈ T, and s ∈ S denote a generic participant, talk, and slot respectively. Assuming a talk to be scheduled only once, a conference schedule Γ is a mapping Γ : T → S. Note that, in this paper, we limit ourselves to the case with no parallel or overlapping time slots; this implies that each slot refers to a unique time interval. Thus, the conference schedule Γ is a one-to-one mapping with |T| ≤ |S|. The goal of a conference scheduling problem is to find a schedule Γ which satisfies some specified constraint(s) or optimizes some specified objective(s).
Interest Scores [I_p(·)]: The participants may have different preference levels over the set of talks. We model this phenomenon using participant-specific interest scores. Let I(t | p) = I_p(t) represent p's interest score for talk t. Note that the interest score represents the probability of satisfaction of the participant on attending the corresponding talk; i.e., I_p(t) ∈ [0, 1], ∀p ∈ P, t ∈ T.
Ease of Availability [A_p(·)]: In a virtual conference setting, the participants are located in different parts of the world, which makes it convenient for them to attend talks only at specific times of the day (usually during the daytime of their timezone). Note that participants belonging to the same timezone can still have different ease of availability throughout the 24-hour period. We model this phenomenon using participant-specific availability scores. Let A(s | p) = A_p(s) represent p's ease of availability score, i.e., the probability of her making herself available in slot s; A_p(s) ∈ [0, 1], ∀p ∈ P, s ∈ S.
Participant Satisfaction (NG_p)

In virtual conference settings, a participant's satisfaction depends on both her interest for the talks and her ease of availability for the time slots when the talks are scheduled. For simplicity, we assume I_p(·) and A_p(·) to be independent of each other, i.e., the interest score of a talk does not affect the ease of availability of a participant in a slot and vice-versa. However, a participant may still attend a talk scheduled in a slot with a lower availability score if she has a very high interest score for the talk. This means that the expected gain of a participant p from a talk t in slot s will depend on the joint probability of p making herself available in s and p getting satisfied after attending t, i.e., I_p(t) × A_p(s). Note that here I_p(t) × A_p(s) also represents the probability of the participant p attending talk t in slot s. Thus, given a conference schedule Γ, we define the cumulative gain G_p(Γ) of a participant p as below.

G(p | Γ) = G_p(Γ) = Σ_{t∈T} I_p(t) × A_p(Γ(t))   (1)
Now, let's imagine a situation wherein the participant p is asked to choose the conference schedule Γ. Assuming p to be a selfish and rational agent, she would choose the schedule which benefits her the most, i.e., the one which gives her the highest cumulative gain. Here, the best conference schedule for p would be the one in which the talk with the highest I_p is scheduled in the slot with the highest A_p, the talk with the second highest I_p is scheduled in the slot with the second highest A_p, and so on; let Γ*_p be that best conference schedule for p. We call the cumulative gain of p given Γ*_p her ideal cumulative gain IG_p, as defined below.

IG(p) = IG_p = max_Γ G_p(Γ) = G_p(Γ*_p)   (2)
IG_p represents the maximum possible cumulative gain depending on the interest scores of the participant p and her availability scores. Thus, a participant with higher overall interests or higher overall availability will naturally have a higher IG_p. We now define the overall satisfaction of a participant p as her normalized cumulative gain NG_p, as below.

NG(p | Γ) = NG_p(Γ) = G_p(Γ) / IG_p   (3)

The denominator IG_p is the maximum possible cumulative gain for the participant p. Thus, NG_p(Γ) ∈ [0, 1], ∀p ∈ P, ∀Γ.
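To make these three quantities concrete, here is a minimal Python sketch (ours, not from the paper; the dictionaries I, A and schedule are illustrative) that computes G_p, IG_p and NG_p for a single participant. The ideal gain simply pairs the sorted interest scores with the sorted availability scores, exactly as described above.

def participant_satisfaction(I, A, schedule):
    # I: dict talk -> interest score in [0, 1] of one participant p.
    # A: dict slot -> availability score in [0, 1] of the same participant.
    # schedule: dict talk -> slot (a one-to-one mapping).
    gain = sum(I[t] * A[schedule[t]] for t in I)          # G_p(schedule)

    # Ideal cumulative gain IG_p: highest interests matched with highest availabilities.
    top_interests = sorted(I.values(), reverse=True)
    top_avail = sorted(A.values(), reverse=True)
    ideal_gain = sum(i * a for i, a in zip(top_interests, top_avail))

    return gain / ideal_gain if ideal_gain > 0 else 0.0   # NG_p(schedule) in [0, 1]

# Tiny example with two talks and three slots.
I = {"t1": 0.9, "t2": 0.4}
A = {"s1": 1.0, "s2": 0.5, "s3": 0.1}
print(participant_satisfaction(I, A, {"t1": "s3", "t2": "s1"}))  # ~0.45: favourite talk in a bad slot
print(participant_satisfaction(I, A, {"t1": "s1", "t2": "s2"}))  # 1.0: this is p's ideal schedule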
Speaker Satisfaction (NEC_t)

The speaker of a talk gets satisfied if her talk gets participation; the more the participation, the more the speaker satisfaction. To have higher participation, the talk needs to be scheduled in a slot with high ease of availability of the participants who are highly interested in the talk. Thus, given a schedule Γ, we define the expected crowd EC_t at talk t as below.

EC(t | Γ) = EC_t(Γ) = Σ_{p∈P} I_p(t) × A_p(Γ(t))   (4)
Now, if the speaker of the talk t is given the task to design the conference schedule, she would try to maximize her expected crowd, assuming that each speaker is a selfish and rational agent; let that best schedule for t be denoted as Γ*_t. We call the expected crowd at talk t with schedule Γ*_t the ideal expected crowd IEC_t of t.

IEC(t) = IEC_t = max_Γ EC_t(Γ) = EC_t(Γ*_t)   (5)

IEC_t represents the maximum value of the expected crowd depending on the overall interest scores of the participants and their availability scores. Thus, a talk with higher overall interest from the participants will have a higher IEC_t. We now define the overall satisfaction of a speaker as the normalized expected crowd NEC_t at her talk, as below.

NEC(t | Γ) = NEC_t(Γ) = EC_t(Γ) / IEC_t   (6)

Note that the denominator IEC_t is the maximum possible value of the expected crowd at talk t. Thus, NEC_t(Γ) ∈ [0, 1], ∀t ∈ T, ∀Γ.
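An analogous sketch (again illustrative, with assumed variable names) for the speaker-side quantities: since EC_t(Γ) depends on Γ only through the slot assigned to talk t itself, the ideal expected crowd IEC_t is simply a maximum over single slots.

def speaker_satisfaction(I, A, schedule, talk):
    # I[p][t]: interest of participant p in talk t; A[p][s]: availability of p in slot s.
    # schedule: dict talk -> slot.
    slots = {s for avail in A.values() for s in avail}

    def expected_crowd(slot):
        return sum(I[p][talk] * A[p][slot] for p in I)    # EC_t if `talk` sits in `slot`

    crowd = expected_crowd(schedule[talk])                # EC_t(schedule)
    ideal = max(expected_crowd(s) for s in slots)         # IEC_t
    return crowd / ideal if ideal > 0 else 0.0            # NEC_t(schedule) in [0, 1]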
Social Welfare Objective (TEP)

A natural objective of the conference organizers is maximizing the social welfare, i.e., the total participation in the conference. Given a talk t scheduled in slot s, the expectation of a participant p attending it can be written as I_p(t) × A_p(s). Thus, the total expected participation TEP in the conference with schedule Γ can be written as below.

TEP(Γ) = Σ_{p∈P} Σ_{t∈T} I_p(t) × A_p(Γ(t))   (7)
It is worth noting that the social welfare, here, is the same as: (i) the sum of the cumulative gains of all the participants; (ii) the sum of the expected crowds at all the talks; i.e.,

TEP(Γ) = Σ_{p∈P} Σ_{t∈T} I_p(t) × A_p(Γ(t)) = Σ_{p∈P} G(p | Γ) = Σ_{t∈T} Σ_{p∈P} I_p(t) × A_p(Γ(t)) = Σ_{t∈T} EC(t | Γ)   (8)

Now the natural social welfare objective for conference scheduling can be written as argmax_Γ TEP(Γ). We use Γ_SW to represent the schedule which maximizes the social welfare; i.e., Γ_SW = argmax_Γ TEP(Γ).
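Since TEP(Γ) = Σ_t EC_t(Γ) and each term depends only on the slot of its own talk, maximizing social welfare over one-to-one schedules is exactly a linear assignment problem on the talk-slot weight matrix W[t][s] = Σ_p I_p(t) × A_p(s). A short sketch (ours, not the paper's implementation) using SciPy's assignment solver:

import numpy as np
from scipy.optimize import linear_sum_assignment

def welfare_maximising_schedule(I, A):
    # I: (participants x talks) interest matrix, A: (participants x slots) availability matrix.
    W = I.T @ A                                   # W[t, s] = sum_p I[p, t] * A[p, s]
    talks, slots = linear_sum_assignment(W, maximize=True)
    schedule = dict(zip(talks, slots))            # this is Gamma_SW
    return schedule, W[talks, slots].sum()        # (schedule, TEP of the schedule)

# Example with random scores: 5 participants, 3 talks, 4 slots.
rng = np.random.default_rng(0)
print(welfare_maximising_schedule(rng.random((5, 3)), rng.random((5, 4))))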
FAIRNESS IN CONFERENCE SCHEDULING
Optimizing for just the participation-based objective could result in the schedule being unfair to the stakeholders involved, i.e., participants and speakers. Optimum participation could result in very high satisfaction for some participants while other participants end up far less satisfied; the same could happen for the speakers too. Thus, we first define the fairness notions for the participants and the speakers in §3.1 and §3.2 respectively.
Participant Fairness
Disparity in normalized satisfactions of participants is the cause of participant unfairness. Thus, to ensure fairness for participants, the conference schedule should equally satisfy all the participants. However, such hard constraint can be infeasible in real-world cases. Thus, we define a relaxed fairness notion for participants below.
Definition 1. ε-Fairness for Participants: For a non-negative real number ε, a schedule Γ is said to be ε-fair for participants iff the following condition is satisfied.

|NG(p | Γ) − NG(p′ | Γ)| ≤ ε, ∀p, p′ ∈ P   (9)
Note that, smaller the value of , lesser is the disparity between participant satisfactions, and fairer is the conference schedule for the participants. Thus, to relate participant fairness notions with different values, we can write the following lemma. Lemma 1. If a schedule Γ is ′ -fair for the participants, then it is also -fair for participants ∀ ≥ ′ .
Proof. If pairwise disparities in participant satisfaction are less than ′ , then they are also less than , as ′ ≤ . □
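As a small illustration of Definition 1 (a sketch with our own naming, not the authors' code), a schedule is ε-fair for the participants exactly when every pairwise disparity of normalized satisfactions stays within ε:

```python
from itertools import combinations

def is_eps_fair(satisfactions, eps):
    # satisfactions: list/array of normalized satisfaction values under a fixed schedule
    return all(abs(a - b) <= eps for a, b in combinations(satisfactions, 2))
```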
In our setting, the value of can also be thought of as the tolerance level for disparity, or unfairness, in participant satisfactions. Thus, given a schedule Γ, we can find the smallest possible and use it to represent how unfair Γ is to the participants. We formally define the metric to measure participant unfairness below.
Definition 2. Participant Unfairness Ψ P (Γ) :
The participant unfairness caused by a schedule Γ, is the smallest non-negative value of such that Γ is -fair for the participants.
Ψ P (Γ) = inf : ≥ 0 & Γ is -fair for participants(10)
(Here, the notation inf {·} represents infimum of a set.) Proposition 3.1. Participant unfairness metric from eq. (10) can be reduced as below.
Ψ P (Γ) = max ∈ P ( |Γ) − min ∈ P ( |Γ)(11)
Proof. For a schedule Γ, let's assume Ψ P (Γ) = ′ . Let ∈ [0, ′ ), then Γ is not -fair for participants (from definition 2). This implies that, there is a pair of participants , ∈ P such that
< ( |Γ) − ( |Γ) ≤ ′ (from definition 1). Let's increase till = ( |Γ) − ( |Γ) .
Thus, now the satisfaction disparity between and is at most , however may still be less than ′ . If we keep on increasing like this, there will not be any such opportunity to increase further when we reach = max , ∈ P ( |Γ) − ( |Γ) . Now with the current value of , Γ can be called -fair for the first time. Therefore, Ψ P (Γ) = ′ = max , ∈ P ( |Γ) − ( |Γ) , which is the maximum pairwise disparity in participant satisfaction.
As ∀ ∈ P, max
∈ P ( |Γ) ≥ ( |Γ) ≥ min ∈ P ( |Γ) ≥ 0,
the maximum pairwise disparity will be in between the most satisfied participant(s) and the least satisfied participant(s).
∴ Ψ P (Γ) = max ∈ P ( |Γ) − min ∈ P ( |Γ) □
Using the metric for unfairness above, we can define the following fairness objective for participants. Definition 3. Fairness objective for participants can be defined as finding the schedule Γ which minimizes Ψ P (Γ).
argmin Γ Ψ P (Γ) ≡ argmin Γ max ∈ P ( |Γ) − min ∈ P ( |Γ)(12)
Speaker Fairness
Similar to participant fairness, we follow a parity-based notion for speaker fairness and define the relaxed speaker fairness below. Definition 4. -Fairness for Speakers: For a non-negative real number , a schedule Γ is said to be -fair for speakers iff the following condition is satisfied.
( |Γ) − ( |Γ) ≤ , ∀ , ∈ T(13)
Lemma 1 can be restated for speakers as follows.
Lemma 2. If a schedule Γ is ′ -fair for the speakers, then it is also -fair for speakers ∀ ≥ ′ .
Following arguments similar to participant unfairness, we define the metric to measure speaker unfairness below.
Definition 5. Speaker Unfairness Ψ S (Γ) :
The speaker unfairness caused by a schedule Γ, is the smallest non-negative value of such that Γ is -fair for the speakers.
Ψ S (Γ) = inf : ≥ 0 & Γ is -fair for speakers(14)
Proposition 3.2. The speaker unfairness metric from eq. (14) can be reduced as below.
Ψ S (Γ) = max ∈ T ( |Γ) − min ∈ T ( |Γ)(15)
We skip the proofs of lemma 2 and proposition 3.2, as they follow arguments similar to those of lemma 1 and proposition 3.1 respectively. Using the speaker unfairness metric from eq. (15), we define the following fairness objective for speakers. Definition 6. The fairness objective for speakers is to find the schedule Γ which minimizes Ψ S (Γ).
argmin Γ Ψ S (Γ) ≡ argmin Γ max ∈ T ( |Γ) − min ∈ T ( |Γ)(16)
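A minimal sketch (illustrative helper names, not the authors' code) of the two unfairness metrics of eq. (11) and eq. (15), given the vectors of normalized participant and speaker satisfactions for a schedule:

```python
import numpy as np

def participant_unfairness(nps):
    # Psi_P(Gamma): gap between the most and the least satisfied participant, eq. (11)
    return float(np.max(nps) - np.min(nps))

def speaker_unfairness(nss):
    # Psi_S(Gamma): gap between the most and the least satisfied speaker, eq. (15)
    return float(np.max(nss) - np.min(nss))
```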
BALANCING WELFARE AND FAIRNESS
Given the conference scheduling problem (as defined in §2), the ultimate goal is to find a schedule Γ which optimizes social welfare while minimizing participant unfairness and speaker unfairness (as defined in eq. (7), eq. (12) and eq. (16) respectively). However, simultaneously optimizing all of them can be challenging, as the objectives may conflict with one another. Thus, first, in §4.1, we bring out certain difficulties in simultaneously ensuring fairness and social welfare, highlighting the potential conflicts between them. Then, in §4.2, we propose a joint optimization framework for the problem.
Tension between Fairness and Welfare
In this section, through a set of claims, we illustrate some fundamental tensions between social welfare and fairness. Claim 1. In the virtual conference scheduling problem (as defined in §2), it is not always possible to gain participant fairness without losing social welfare.
Table 1: Example Problem 1
Proof. Let's assume the opposite to be true; i.e., that we can always gain participant fairness with no loss to social welfare. We disprove this using a counterexample with two participants, one talk and three slots, as given in table 1. Both participants have an interest score of 1 for the talk. Looking at the availability scores, while 1 and 2 have full ease of availability in 1 and 3 respectively, they both can make themselves available in 2 with probability 0.49. If we consider a social welfare objective (participation maximization) here, we would end up scheduling the talk either in 1 or in 3 ; if Γ( ) = 1 or Γ( ) = 3 , then (Γ) = 1; if Γ( ) = 2 , then (Γ) = 0.98, which is less. However, maximizing participation will either end up with [
( 1 ) = 1, ( 2 ) = 0] if Γ( ) = 1 or [ ( 1 ) = 0, ( 2 ) = 1] if Γ( ) = 3 ;
As both of these results from social welfare optimization provide disparate satisfaction to the participants, they both are unfair. On the other hand, if we schedule the talk in 2 (Γ( ) = 2 ), then it becomes fair to both the participants as they will get similar satisfaction [
( 1 ) = 0.49, ( 2 ) = 0.49]-they both get a chance to make themselves available in 2 to attend the talk. Even though scheduling the talk in 2 ensures participant fairness, it comes at a loss in social welfare; i.e., (Γ) is reduced from 1 to 0.98. □ Note that, in the example given in table 1, introducing participant fairness has caused a loss in social welfare and also a loss in speaker satisfaction ( ) (also reduced from 1 to 0.98). Thus, it is important to ask: Q1: to what extent is the conference organizer ready to lose social welfare and speaker satisfaction while bringing in participant fairness? Claim 2. In the virtual conference scheduling problem (as defined in §2), it is not always possible to gain speaker fairness without losing social welfare.
Proof. Let's assume the opposite argument to be true; i.e., we can always gain speaker fairness with no loss to social welfare. We disprove this using a counter example with just one participant, two talks, and three available slots as given in table 2. Now, to maximize social welfare, we can just match talks in decreasing order of overall interest scores to slots in decreasing order of availability scores; i.e., Γ SW ( 1 ) = 1 and Γ SW ( 2 ) = 3 , which will yield (Γ SW ) = 1.4. The speaker satisfactions for the talks with this schedule will be:
( 1 |Γ SW ) = 1 (as ( 1 |Γ SW ) = 1 and ( 1 ) = 1), and ( 2 |Γ SW ) = 0.8 (as ( 2 |Γ SW ) = 0.4 and ( 2 ) = 0.5).
Such disparity in speaker satisfactions can be attributed to speaker unfairness. In order to reduce the speaker-side disparity, we can use a different schedule: Γ( 1 ) = 3 and Γ( 2 ) = 2 ; this yields speaker satisfactions ( 1 |Γ) = 0.8 and ( 2 |Γ) = 0.75 (as ( 1 |Γ) = 0.8 and ( 2 |Γ) = 0.375). This schedule has, in fact, the lowest possible disparity in speaker satisfactions, i.e., the highest possible speaker fairness. Even though this schedule {( 1 , 3 ), ( 2 , 2 )} is fairer to the speakers than the earlier {( 1 , 1 ), ( 2 , 3 )}, the gain in speaker fairness has come at a loss in social welfare; (Γ) is reduced from 1.4 to 1.175. □ There are two important points to note from the example in table 2: (i) the fairest solution for speakers leaves a very valuable slot 1 -the one with the highest overall availability score for the participants-unused, thereby losing a huge opportunity for larger participation; (ii) speaker fairness has introduced a loss in social welfare (from (Γ SW ) = 1.4 to (Γ) = 1.175) and also a loss in participant satisfaction (
( |Γ SW ) = 1 to ( |Γ) ≈ 0.84).
Thus, it is important to ask: Q2: to what extent is the conference organizer ready to lose social welfare and participant satisfaction while bringing in speaker fairness? Claim 3. In the virtual conference scheduling problem (as defined in §2), it is not always possible to get speaker fairness without losing participant fairness, and vice-versa.
Proof. Let's assume the opposite argument to be true; we can always get participant fairness and speaker fairness simultaneously. We disprove this using a counter example with two participants, two talks and four available slots as given in table 3. In this example, the schedule Γ = {( 1 , 2 ), ( 2 , 3 )} achieves speaker fairness-
( 1 |Γ) = ( 2 |Γ) = 0.5 (as ( 1 |Γ) = 1, ( 2 |Γ) = 0.7, while ( 1 ) = 2, ( 2 ) = 1.4).
However, Γ is unfair for the participants- ( 1 |Γ) = 1/1.7 < 0.7/1.7 = ( 2 |Γ) (as ( 1 |Γ) = 1, ( 2 |Γ) = 0.7, while ( 1 ) = ( 2 ) = 1.7). On the other hand, schedule Γ ′ = {( 1 , 1 ), ( 2 , 4 )} is fair for the participants- ( 1 |Γ ′ ) = ( 2 |Γ ′ ) = 1.14/1.7, while being unfair for the speakers as ( 1 |Γ ′ ) = 1 > 0.2 = ( 2 |Γ ′ ). □ As seen in the example in table 3, participant fairness and speaker fairness cannot always be achieved simultaneously. Thus, an important question is: Q3: to what extent is the conference organizer ready to lose participant fairness while bringing in speaker fairness and vice-versa? It is very evident from the given examples that both the fairness and the satisfaction of participants and speakers can often come in conflict with each other, and also with the overall social welfare, in such conference scheduling problems. Thus, there is a need to maintain a suitable balance between both fairness notions and social welfare while still simultaneously caring for them in conference scheduling.
Joint Optimization for Welfare and Fairness
We combine participant and speaker fairness objectives with our natural objective of social welfare maximization, and design the following joint optimization problem.
argmax Γ (Γ) + 1 × min ∈ P ( |Γ) −max ∈ P ( |Γ) + 2 × min ∈ T ( |Γ) − max ∈ T ( |Γ)(17)
Here we normalize the social welfare objective to bring all three components to similar scales; i.e., is divided by |P | · |T | = (the maximum possible value of , attained when ( ) = ( ) = 1, ∀ , , ). We also reverse the fairness objective functions from eq. (12) and eq. (16) while inserting them into eq. (17), since it features argmax instead of argmin, and we use 1 , 2 as the weights for participant fairness and speaker fairness respectively.
We take a matrix of dimensions |T | × |S|. The elements of : , is a binary indicator variable for talk ∈ T being scheduled in slot ∈ S, i.e., , = 1 if is scheduled in and 0 otherwise. Now to operationalize the joint optimization objective in eq. (17), we express it as the following integer linear program.
argmax 1 ∑︁ ∈T ∑︁ ∈ P ∑︁ ∈S ( ) · ( ) · , + 1 min ∈ P ∑︁ ∈T ∑︁ ∈S ( ) · ( ) ( ) · , − max ∈ P ∑︁ ∈T ∑︁ ∈S ( ) · ( ) ( ) · , + 2 min ∈T ∑︁ ∈ P ∑︁ ∈S ( ) · ( ) ( ) · , − max ∈T ∑︁ ∈ P ∑︁ ∈S ( ) · ( ) ( ) · , s.t. , ∈ {0, 1} ∀ ∈ T , ∈ S ∑︁ ∈S , = 1, ∀ ∈ T ∑︁ ∈T , ≤ 1, ∀ ∈ S(18)
Here, the first constraint is the integrality constraint of the integer program. The second constraint ensures that each talk is scheduled exactly once. The third constraint ensures that a slot can be allocated to at most one talk. We use cvxpy (https://www.cvxpy.org/) paired with the Gurobi solver (https://www.gurobi.com/) for the integer linear programs. 1
1 Even though we solve the joint optimization problem by converting it into an integer linear program, which works well for problems of small size, more research is needed in developing approaches that are more scalable and computationally efficient. In this paper, we use the integer program approach and focus more on empirically bringing out the fundamental tensions, trade-offs and difficulties involved in the fair conference scheduling problem.
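The integer linear program in eq. (18) can be prototyped with cvxpy roughly as follows. This is a sketch under assumed variable names (V: interest matrix, A: availability matrix, icg/iec: ideal cumulative gains and ideal expected crowds used as normalizers), not the authors' implementation, and it requires a mixed-integer-capable solver such as Gurobi (or GLPK_MI) to be installed.

```python
import numpy as np
import cvxpy as cp

def fair_schedule(V, A, icg, iec, lam1=0.5, lam2=0.5):
    n_p, n_t = V.shape
    n_s = A.shape[1]
    X = cp.Variable((n_t, n_s), boolean=True)            # X[t, s] = 1 iff talk t is scheduled in slot s

    # Normalized satisfactions are affine in X, so their min/max keep the problem a MILP.
    nps = cp.hstack([cp.sum(cp.multiply(np.outer(V[p], A[p]), X)) / icg[p] for p in range(n_p)])
    nss = cp.hstack([cp.sum(cp.multiply(V[:, t] @ A, X[t])) / iec[t] for t in range(n_t)])
    sw = cp.sum(cp.multiply(V.T @ A, X))                 # total expected participation

    objective = (sw / (n_p * n_t)                        # normalized social welfare
                 + lam1 * (cp.min(nps) - cp.max(nps))    # reversed participant unfairness
                 + lam2 * (cp.min(nss) - cp.max(nss)))   # reversed speaker unfairness
    constraints = [cp.sum(X, axis=1) == 1,               # every talk scheduled exactly once
                   cp.sum(X, axis=0) <= 1]               # at most one talk per slot
    cp.Problem(cp.Maximize(objective), constraints).solve(solver=cp.GUROBI)
    return np.argmax(X.value, axis=1)                    # slot index assigned to each talk
```

Setting lam1 = lam2 = 0 recovers a pure social welfare maximization, while large weights push the solution towards the fairness-only extremes.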
EXPERIMENTAL EVALUATION
In this section, we present the baselines, introduce the metrics for evaluations and then show how our proposal compares with the baselines along the presented metrics.
Baselines
We use the following baselines and empirically compare them with our approach from §4.2 (further on referred to as FairConf).
(1) Social Welfare Maximization (SWM): In this baseline, we just optimize the schedule for social welfare, i.e., Γ SW or argmax Γ (Γ), without any fairness consideration. (2) Participant Fairness Maximization (PFair): Here, we just optimize for participant fairness; i.e., we minimize participant unfairness [argmin Γ Ψ P (Γ)] as defined in eq. (12). (3) Speaker Fairness Maximization (SFair): Here, we just optimize for speaker fairness; i.e., we minimize speaker unfairness [argmin Γ Ψ S (Γ)] as defined in eq. (16). (4) Interest-Availability Matching (IAM): Here, we sort the talks in descending order of the overall interest scores received by them, i.e., ∈ P ( ), and the slots in descending order of the overall availability scores received by them, i.e., ∈ P ( ). Now, we assign the talk with the highest overall interest score to the slot with the highest overall availability score, the talk with the second highest overall interest score to the slot with the second highest overall availability score, and so on (with random tie-breaks). IAM is one of the naive alternatives when scheduling is done manually (as natural objectives like SWM usually need computing resources). It is also worth noting that, in the usual physical conference settings, both SWM and IAM give results which maximize social welfare, as we prove in claim 4. Claim 4. In physical conference settings (i.e., when all participants have identical ease of availability over all available slots, ( ) = ( ), ∀ ∈ S, ∈ P), IAM yields a conference schedule which maximizes social welfare.
Proof. As this is a special case of the scenario covered in lemma 3, refer to case (a) of lemma 3 for the proof. □ Lemma 3. IAM maximizes social welfare, if the participants are identical either in terms of their interests in the talks or in terms of their ease of availability over the available slots, or both.
Proof. There are three cases where we need to prove that IAM maximizes social welfare; (a) if all participants have identical ease of availability over all available slots ( ) = ( ), ∀ ∈ S, ∈ P (this case is similar to physical conference settings where all participants gather at the same place, thus, have identical ease of availability); (b) if all participants have identical interests over all talks ( ) = ( ), ∀ ∈ T , ∈ P; (c) if both (a) and (b) are true.
We first reduce the SWM objective in each of these cases and observe that it takes a particular form for which IAM gives a solution. Case (a): given ( ) = ( ), ∀ ∈ S, ∈ P, we reduce from eq. (7). Case (b): given ( ) = ( ), ∀ ∈ T , ∈ P, we reduce from eq. (7). Case (c): given ( ) = ( ) and ( ) = ( ), ∀ ∈ S, ∀ ∈ T , ∈ P, we again reduce from eq. (7). In all three cases, SWM reduces to a form where the terms are independent of individual participant interests and availability, depending only on the overall interest levels (V in case (a), and participant-independent in cases (b) and (c)) and the overall availability (A in case (b), and participant-independent in cases (a) and (c)). Thus, to maximize the reduced objectives in all the cases, the top values of overall availability or A need to be matched with the top values of overall interests V or , making the result identical to IAM. □ Note that, when there are ties in the scenarios mentioned in lemma 3, the IAM approach could also give multiple solutions which maximize social welfare; however, we use random tie-breaks.
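The IAM baseline amounts to a greedy matching of sorted overall interests to sorted overall availabilities; a short sketch with illustrative names follows (the jitter-based random tie-breaking is our own simplification):

```python
import numpy as np

def iam_schedule(V, A, seed=None):
    rng = np.random.default_rng(seed)
    talk_scores = V.sum(axis=0)                      # overall interest received by each talk
    slot_scores = A.sum(axis=0)                      # overall availability of each slot
    # A tiny random jitter emulates random tie-breaks between equal scores.
    talks = np.argsort(-(talk_scores + 1e-9 * rng.random(talk_scores.size)))
    slots = np.argsort(-(slot_scores + 1e-9 * rng.random(slot_scores.size)))
    schedule = np.empty(V.shape[1], dtype=int)
    schedule[talks] = slots[:talks.size]             # k-th most wanted talk -> k-th most available slot
    return schedule                                  # schedule[t] = slot index for talk t
```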
Evaluation Metrics
We use the following metrics to capture the performances from participant, speaker, and social welfare perspectives.
Participant-Side Metrics.
We measure the mean satisfaction of participants ( mean = ∈P | P | ) as it is an indicator of how efficient the schedule is for the participants. We also measure the participant unfairness (as defined in eq. (11): Ψ P = max − min ). Lower values of Ψ P represent better participant fairness. 5.2.2 Speaker-Side Metrics. Similar to the participant-side metrics, on the speaker side we measure the mean satisfaction of speakers ( mean = ∈T | T | ) as an indicator of the efficiency of the schedule for speakers, and also the speaker unfairness (as defined in eq. (15): Ψ S = max − min ). Lower values of Ψ S represent better speaker fairness.
Social Welfare Metric.
While taking into account the participant fairness and speaker fairness, there could be some loss in social welfare, i.e., the total expected participation, as illustrated by the examples in §3. Thus, we also measure the social welfare achieved by the conference schedules obtained by our approach and the baseline approaches, to gauge the loss of welfare in comparison to the maximum possible social welfare. We use the metric (as defined in eq. (7)) for this.
Experimental Results
First, in §5.3.1, we show results on a synthetic dataset with random interest and availability scores. Then, in sections 5.3.2 to 5.3.5, we experiment on some special cases of synthetic datasets-with specific patterns in the data-to illustrate interesting nuances and difficulties involved in the scheduling problem. Note that, we vary the hyperparameters 1 and 2 in between 0 and 1 in separate trials.
Random Interests and Availability:
We take |P | = 10, |T | = 10, |S| = 10, and generate a synthetic dataset where the slots represent non-overlapping, equal-sized time intervals-not required to be in any particular sequence-available for scheduling. For this dataset, the interest scores and availability scores are sampled from a uniform random distribution in [0, 1]; i.e., ( ) ∼ Uniform([0, 1]) and ( ) ∼ Uniform([0, 1]), ∀ , , . We plot the results for this dataset in fig. 1. Note that, unlike FairConf, the baseline approaches do not have hyperparameters 1 , 2 ; thus, the baseline results are just horizontal straight lines while FairConf's results vary with the hyperparameter settings. Baseline Results: SWM achieves the highest expected participation (by definition it should), the highest mean participant satisfaction and the highest mean speaker satisfaction (refer to figs. 1a, 1c and 1e) while performing poorly on participant and speaker fairness (figs. 1b and 1d). On the other hand, the naive IAM performs poorly on all the metrics. As PFair optimizes only for participant fairness, it has the highest participant fairness (least unfairness in fig. 1b) while losing in all the other metrics. The opposite happens with SFair; it performs best in speaker fairness (least unfairness in fig. 1d) while losing in all other metrics, as SFair optimizes only for speaker fairness. FairConf Results: Note that, for the plots in the first row (fig. 1a to fig. 1e), we fix 2 = 0.5 and vary 1 from 0 to 1; for the plots in the second row (fig. 1f to fig. 1j), we fix 1 = 0.5 and vary 2 from 0 to 1. The general trends observed in FairConf's results are: with an increase in the weight for participant fairness ( 1 ), FairConf achieves better participant fairness (decrease in participant unfairness in fig. 1b) but worse speaker fairness (increase in speaker unfairness in fig. 1d); with an increase in the weight for speaker fairness ( 2 ), FairConf achieves better speaker fairness (decrease in speaker unfairness in fig. 1i) but worse participant fairness (increase in participant unfairness in fig. 1g). We find that FairConf with the setting 1 = 2 = 0.5 gives a balanced performance across all the metrics; it performs well in both participant fairness (very small unfairness in fig. 1b-also close to PFair) and speaker fairness (very small unfairness in fig. 1d-only a little higher than SFair) while causing only marginal losses in mean participant satisfaction (fig. 1a), mean speaker satisfaction (fig. 1c), and the social welfare (fig. 1e).
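The random instance of this subsection can be generated in a few lines; a sketch with our own names (the slot count of 10 is our reading of the setup above):

```python
import numpy as np

rng = np.random.default_rng(0)                     # fixed seed chosen for reproducibility (our choice)
n_participants, n_talks, n_slots = 10, 10, 10
V = rng.uniform(size=(n_participants, n_talks))    # interest scores ~ Uniform([0, 1])
A = rng.uniform(size=(n_participants, n_slots))    # availability scores ~ Uniform([0, 1])
```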
Balanced Participant Groups (Identical Interests, Segregated Availability):
Here, we take a dataset with |P | = 10, |T | = 10, |S| = 15. While all the participants have identical interest scores, the same as 1 in fig. 2a, the first 5 participants have availability scores as 1 in fig. 2b, and the next 5 have 2 in fig. 2b. We plot the results for this dataset in fig. 3. Baseline Results: Both SWM and IAM yield the same social welfare (fig. 3e), as also follows from lemma 3 under the special condition of identical interest scores of the participants; however, they could result in different optimal-welfare schedules; here too we see different schedules given by SWM and IAM (difference in fig. 3b). As all participants have identical interests but segregated availability across two equal-sized groups, there is huge scope for bringing in participant fairness by balancing the talks across the favorable slots of both participant groups; thus, PFair brings a huge improvement in participant fairness (fig. 3b) in comparison to SWM. However, there is no such scope for improvement in speaker fairness, which is why SFair achieves only a marginal improvement (fig. 3d). FairConf Results: Again, FairConf with 1 = 2 = 0.5 performs very well in all the metrics here too. While FairConf shows improvement in participant fairness (fig. 3b) with increase in 1 , it does not improve speaker fairness (fig. 3i) with increase in 2 , due to the very limited scope for improving speaker fairness in this case. However, it is worth noting that, just because there is less scope for improving speaker fairness, we should not simply remove it from the joint optimization by setting 2 = 0; setting 2 = 0 could adversely impact speaker fairness (see the 2 = 0 point in fig. 3i). Setting 2 = 0 can give the joint optimization an opportunity to further improve participant fairness (see the 2 = 0 point in fig. 3g) at the cost of losing speaker fairness. Thus, it is important to keep a reasonable non-zero weight 2 for speaker fairness-even in the absence of any scope for improvement-as it can work both as an optimizer and as a defender/preserver of speaker fairness.
Imbalanced Participant Groups (Identical Interests, Segregated Availability):
We use the same dataset as the previous one, but with just one change: here the first 7 participants have availability scores as 1 in fig. 2b, and the next 3 have 2 in fig. 2b. We plot the results for this dataset in fig. 4. Baseline Results: In comparison to the case in §5.3.2, here SWM achieves higher participant satisfaction (compare SWM in fig. 3a and fig. 4a) and higher participant unfairness (compare SWM in fig. 3b and fig. 4b) too. This is because SWM can just assign the high-interest talks to the favorable slots of the majority participant group in order to maximize social welfare. Thus, there is huge scope for improvement in participant fairness, as the participant groups have segregated availability too. That is why PFair brings a huge improvement in participant fairness (fig. 4b). However, just like the case in §5.3.2, there is no such scope to improve speaker fairness; thus, we see almost no improvement with SFair (fig. 4d). FairConf Results: Here also, FairConf brings significant improvements in participant fairness (fig. 4b) with increase in 1 . However, as there is no scope to improve speaker fairness, we see no improvement in speaker fairness (fig. 4i) with increase in 2 . Here also we see that it is not wise to set 2 = 0 just because there is no scope to improve speaker fairness; here too, setting 2 = 0 adversely impacts speaker fairness and satisfaction (see the 2 = 0 point in figs. 4h and 4i). Looking at the FairConf results with 1 = 2 = 0.5, we can say that FairConf has handled the case of imbalanced availability segregation very well, and has produced very good results across all metrics.
5.3.4 Balanced Participant Groups (Segregated Interests, Identical Availability): Here, we take a dataset with |P | = 10, |T | = 10, |S| = 15. While all the participants have identical availability scores, the same as 1 in fig. 2b, the first 5 participants have interest scores as 1 in fig. 2a, and the next 5 have 2 in fig. 2a. We plot the results for this dataset in fig. 5. As the participant groups have segregated interest scores for the talks and identical availability scores over all slots, one would expect SWM to show large unfairness in the results. However, the slopes of the chosen interest score pattern and availability score pattern also play a role here. Looking at the slopes of 1 , 2 in fig. 2a (power-law slope), one can easily see that they decrease significantly faster than 1 in fig. 2b (cosine slope). Thus, two talks with the same overall interest scores can be assigned two consecutive slots without causing too high a disparity in individual participant satisfactions and individual speaker satisfactions, as the difference between the overall availability scores of two consecutive slots is not too large. This is why, here, we see that SWM not only optimizes social welfare but also achieves high participant and speaker fairness (fig. 5)-close to FairConf.
It is also worth noting that even though SWM provides a solution with the best speaker fairness (fig. 5d), SFair gives a different solution with the same speaker fairness but with significantly poorer performance on the other metrics. Similarly, PFair also gives a different solution which improves participant fairness by a very small amount (fig. 5b), but causes significant losses in the other metrics. This happens because both PFair and SFair are agnostic to the other fairness objective and to the welfare objective. Thus, they may or may not result in the same schedule as SWM even if it is optimal.
5.3.5 Imbalanced Participant Groups (Segregated Interests, Identical Availability): We use the same dataset as the previous one, but with just one change: here the first 7 participants have interest scores as 1 in fig. 2a, and the next 3 have 2 in fig. 2a. We plot the results for this dataset in fig. 6. In contrast to the case in §5.3.4, here there is an imbalance in the interest segregation. Thus, it provides SWM an opportunity to be biased towards the majority participant group and improve social welfare. Hence, we see a higher participant unfairness (fig. 6b). FairConf significantly improves participant fairness while causing only a marginal loss in social welfare (fig. 6e).
Figure 6: Results on data with imbalanced participant groups (segregated interests, identical availability). For the plots in the first row, 2 is fixed at 0.5, and 1 is varied. For the plots in the second row, 1 is fixed at 0.5, and 2 is varied.
5.3.6
Results Summary: While SWM maximizes social welfare, it often results in high participant unfairness and speaker unfairness. On the other hand, the naive approach IAM also optimizes social welfare under special conditions (lemma 3), but due to random tie-breaks both SWM and IAM may not differentiate between two optimal-welfare schedules in terms of fairness. Moreover, in the absence of any explicit fairness consideration, both SWM and IAM often perform poorly in terms of fairness. While PFair achieves maximum participant fairness, it often becomes unfair to the speakers and also causes a loss in mean speaker satisfaction; the opposite happens in the case of SFair. Our joint optimization approach FairConf with equal weights for participant and speaker fairness, i.e., 1 = 2 = 0.5, is found to perform very well across all the metrics in all the tested cases (it achieves good participant and speaker fairness with only marginal losses in social welfare, overall participant satisfaction and overall speaker satisfaction).
RELATED WORK
We briefly review related works in the following two directions.
Job and Network Scheduling: The most commonly studied scheduling problem in computing research is job or network scheduling. In this problem, there are multiple agents (e.g., system processes, computing jobs, data packets, networked users or machines) who have shared access to common resource(s) (e.g., a fixed number of processors, limited internet bandwidth), and the agents raise requests for the use of the common resource(s) from time to time; the goal is to allocate the resource(s) to the agents in a fair and optimal manner. For example: fair-share scheduling for system processes [9,11,12]; scheduling of packet transfers to ensure fair sharing of network channels [19]; fair scheduling of computing jobs on computing clusters [8,13]; fair scheduling for devices in shared wireless charging systems [4]; fair scheduling of retrieval queries for databases [7]. Our problem setup for fair conference scheduling is of a very different form than the typical job scheduling setup. While conference scheduling has two types of stakeholders-participants and speakers-whose functions and fairness requirements differ from each other, job scheduling problems are usually modelled for one type of stakeholder-the agents who use the shared resource.
Meeting/Event Scheduling: The problem most closely related to our conference scheduling problem is meeting or event scheduling, where there are multiple agents with different availability in different time intervals, and the goal is to find an optimal schedule for the meeting(s). These problems have been explored for different types of optimality; for example: finalizing schedules with minimal negotiations with agents [18]; schedules with optimal availability of the agents [2,6]. To solve these problems, methods like distributed constraint optimization [14] have been proposed. Even though there is a similarity between meeting scheduling and our conference scheduling, in meeting scheduling problems utility/satisfaction is modelled only for the agents attending the event(s), and there is no consideration of satisfaction from the side of the event (i.e., no concept of speakers as individuals with self-interests). In addition, most of these works consider only binary availability of the agents and are often focused more on optimizing efficiency. While there has been work on strategy-proof scheduling [3] and privacy-aware scheduling [5,10], fairness has not been considered in such settings (except for Baum et al. [1], who deal with a very different setting). In contrast, we allow non-binary availability of the agents, and care for both efficiency and fairness in our conference scheduling problem.
DISCUSSION
In this work, we modeled a very timely and important problem of online conference scheduling with welfare and fairness concerns. Apart from formally defining the fairness notions and objectives, we brought out several fundamental tensions among participant fairness, speaker fairness, and social welfare, and showed the benefits of our proposed joint optimization framework. We believe that this work will lay the ground for further research (both theoretical and empirical) into such multi-stakeholder scheduling problems (going beyond the virtual conference scheduling). Next, we present a set of open questions and possible future works.
Future Work
(i) Even though, in this paper, we solved the proposed joint optimization problem by converting it into an integer linear program which works well for problems of small size, more research is needed in developing approaches which are more scalable and computationally efficient with provable guarantees.
(ii) We also limited ourselves to non-overlapping time slots; however, larger conferences often have overlapping and parallel sessions to accommodate their higher number of talks. Accounting for such overlapping slots would require changes in the formulation of participant and speaker satisfactions, as participants would have to choose which of the parallel sessions to attend, thereby also changing the expected audience at a particular talk. (iii) We considered speaker satisfaction in terms of the expected crowd at their talks; however, in virtual conferences, the ease of availability of the speakers in the assigned time slot also plays a role in their satisfaction, so one needs to factor in both the expected crowd and the ease of availability of the speaker to define an encompassing measure of speaker satisfaction. (iv) Although we considered the participants and speakers to be separate agents in our model, a single agent could play the roles of both a participant and a speaker; thus, for such agents with dual roles, the participant satisfaction measure can be modified to exclude the time slot(s) in which they play the role of a speaker.
(v) Another aspect which may be of great importance in conference scheduling is group fairness: that is, ensuring fair satisfaction for timezone-specific groups of participants, as well as for domain-specific groups of speakers, etc. Group fairness notions form a long line of work in machine learning [15]; these works can be explored and extended for this purpose.
Figure 1: Results on the synthetic dataset with random interests and availability. For the plots in the first row, 2 is fixed at 0.5, and 1 is varied. For the plots in the second row, 1 is fixed at 0.5, and 2 is varied.
Figure 2: Interest and Availability Patterns.
Figure 3: Results on data with balanced participant groups (identical interests, segregated availability). For the plots in the first row, 2 is fixed at 0.5, and 1 is varied. For the plots in the second row, 1 is fixed at 0.5, and 2 is varied.
Figure 4: Results on data with imbalanced participant groups (identical interests, segregated availability). For the plots in the first row, 2 is fixed at 0.5, and 1 is varied. For the plots in the second row, 1 is fixed at 0.5, and 2 is varied.
Figure 5: Results on data with balanced participant groups (segregated interests, identical availability). For the plots in the first row, 2 is fixed at 0.5, and 1 is varied. For the plots in the second row, 1 is fixed at 0.5, and 2 is varied.
Table 2: Example Problem 2

Table 3: Example Problem 3
Participants | Interest: talk 1, talk 2 | Availability: slot 1, slot 2, slot 3, slot 4
1            | 1, 0.7                   | 1, 1, 0, 0.2
2            | 1, 0.7                   | 1, 0, 1, 0.2
Acknowledgements: G. K Patro is supported by a fellowship from Tata Consultancy Services. This research was supported in part
[1] Richard Baum, Dimitris Bertsimas, and Nathan Kallus. 2014. Scheduling, revenue management, and fairness in an academic-hospital radiology division. Academic Radiology 21, 10 (2014), 1322-1330.
[2] Peter George Capek, William Grey, Paul Andrew Moskowitz, Clifford A Pickover, and Dailun Shi. 2008. Event scheduling with optimization. US Patent 7,343,312.
[3] Eithan Ephrati, Gilad Zlotkin, and Jeffrey S Rosenschein. 1994. A non-manipulable meeting scheduling system. In Proceedings of the 13th International Workshop on Distributed Artificial Intelligence. 105-125.
[4] Wen Fang, Qingqing Zhang, Qingwen Liu, Jun Wu, and Pengfei Xia. 2018. Fair scheduling in resonant beam charging for IoT devices. IEEE Internet of Things Journal 6, 1 (2018), 641-653.
[5] Eugene C Freuder, Marius Minca, and Richard J Wallace. 2001. Privacy/efficiency tradeoffs in distributed meeting scheduling by constraint-based agents. In Proc. IJCAI DCR. Citeseer, 63-72.
[6] Leonardo Garrido and Katia Sycara. 1996. Multi-agent meeting scheduling: Preliminary experimental results. In Proceedings of the Second International Conference on Multiagent Systems. 95-102.
[7] Michael Harris, John Carrino, and Eric Wong. 2015. Fair scheduling for mixed-query loads. US Patent 9,092,482.
[8] Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar, and Andrew Goldberg. 2009. Quincy: fair scheduling for distributed computing clusters. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. 261-276.
[9] Judy Kay and Piers Lauder. 1988. A fair share scheduler. Commun. ACM 31, 1 (1988), 44-55.
[10] Hooyeon Lee and Ashish Goel. 2016. Probabilistic Matrix Inspection and Group Scheduling. In IJCAI. 322-328.
[11] Tong Li, Dan Baumberger, and Scott Hahn. 2009. Efficient and scalable multiprocessor fair scheduling using distributed weighted round-robin. ACM Sigplan Notices 44, 4 (2009), 65-74.
[12] Jean-Pierre Lozi, Baptiste Lepers, Justin Funston, Fabien Gaud, Vivien Quéma, and Alexandra Fedorova. 2016. The Linux scheduler: a decade of wasted cores. In Proceedings of the Eleventh European Conference on Computer Systems. 1-16.
[13] Kshiteej Mahajan, Arjun Singhvi, Arjun Balasubramanian, Varun Batra, Surya Teja Chavali, Shivaram Venkataraman, Aditya Akella, Amar Phanishayee, and Shuchi Chawla. 2019. Themis: Fair and efficient GPU cluster scheduling for machine learning workloads. arXiv preprint arXiv:1907.01484 (2019).
[14] Rajiv Maheswaran, Milind Tambe, Emma Bowring, Jonathan Pearce, and Pradeep Varakantham. 2004. Taking DCOP to the real world: Efficient complete solutions for distributed event scheduling. (2004).
[15] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635 (2019).
[16] OxfordAbstracts. 2020. Conference management: 5 benefits of hosting a virtual conference. https://oxfordabstracts.com/blog/2020-03-30-5-benefits-of-hosting-a-virtual-conference/
[17] Michael Saliba. 2020. Getting to grips with online conferences. Nature Energy 5, 7 (2020), 488-490.
[18] Sandip Sen and Edmund H Durfee. 1998. A formal study of distributed meeting scheduling. Group Decision and Negotiation 7, 3 (1998), 265-289.
[19] Nitin Vaidya, Anurag Dugar, Seema Gupta, and Paramvir Bahl. 2005. Distributed fair scheduling in a wireless LAN. IEEE Transactions on Mobile Computing 4, 6 (2005), 616-629.
|
[
"**FULL TITLE** ASP Conference Series, Vol. **VOLUME**, **YEAR OF PUBLICATION** **NAMES OF EDITORS** 2 types of spicules \"observed\" in 3D realistic models",
"**FULL TITLE** ASP Conference Series, Vol. **VOLUME**, **YEAR OF PUBLICATION** **NAMES OF EDITORS** 2 types of spicules \"observed\" in 3D realistic models"
]
| [
"Juan Martínez-Sykora \nInstitute of Theoretical Astrophysics\nLockheed Martin Solar & Astrophysics Lab\nUniversity of Oslo\nPalo AltoNorway, USA\n"
]
| [
"Institute of Theoretical Astrophysics\nLockheed Martin Solar & Astrophysics Lab\nUniversity of Oslo\nPalo AltoNorway, USA"
]
| []
| Realistic numerical 3D models of the outer solar atmosphere show two different kind of spicule-like phenomena, as also observed on the solar limb. The numerical models are calculated using the Oslo Staggered Code (OSC) to solve the full MHD equations with non-grey and NLTE radiative transfer and thermal conduction along the magnetic field lines. The two types of spicules arise as a natural result of the dynamical evolution in the models. We discuss the different properties of these two types of spicules, their differences from observed spicules and what needs to be improved in the models. | null | [
"https://arxiv.org/pdf/1001.1256v1.pdf"
]
| 118,141,968 | 1001.1256 | aaa729863628d206704f69dcf3b50f4aa46b4681 |
**FULL TITLE** ASP Conference Series, Vol. **VOLUME**, **YEAR OF PUBLICATION** **NAMES OF EDITORS** 2 types of spicules "observed" in 3D realistic models
8 Jan 2010
Juan Martínez-Sykora
Institute of Theoretical Astrophysics
Lockheed Martin Solar & Astrophysics Lab
University of Oslo
Palo AltoNorway, USA
**FULL TITLE** ASP Conference Series, Vol. **VOLUME**, **YEAR OF PUBLICATION** **NAMES OF EDITORS** 2 types of spicules "observed" in 3D realistic models
8 Jan 2010
Realistic numerical 3D models of the outer solar atmosphere show two different kind of spicule-like phenomena, as also observed on the solar limb. The numerical models are calculated using the Oslo Staggered Code (OSC) to solve the full MHD equations with non-grey and NLTE radiative transfer and thermal conduction along the magnetic field lines. The two types of spicules arise as a natural result of the dynamical evolution in the models. We discuss the different properties of these two types of spicules, their differences from observed spicules and what needs to be improved in the models.
Numerical Methods and description of the model
The nature of spicules observed at the solar limb has long been a mystery; in this paper we discuss two types of jets that occur naturally in 3D numerical models of the solar atmosphere. The MHD equations are solved in a model spanning the upper convection zone and the corona using the Oslo Stagger Code (OSC). In addition, this code solves a rather realistic NLTE radiative transfer, including scattering, and thermal conduction along the field lines, as explained in Martínez-Sykora et al. (2008).
The models described below have a grid size of 256 × 128 × 160 points spanning 16 × 8 × 16 Mm 3 . The grid is uniform in the horizontal direction with a grid spacing of 65 km. In the vertical direction the grid is non-uniform, ensuring that the vertical resolution is good enough to resolve the photosphere and transition region with a grid spacing of 32.5 km, while becoming larger at coronal heights. At these resolutions the models have been run for roughly 1.5 hours of solar time. We have created two models with different average unsigned field strengths in the photosphere: one with 16 G (A2) and the other with 160 G (B1). In addition to this ambient field, we introduce a magnetic flux tube at the bottom boundary (Martínez-Sykora et al. 2008).
Results and discussions
In the models we found two types of spicule-like structures, i.e. the so-called type i (Martínez-Sykora et al. 2009) and type ii (McIntosh et al. 2007). A synthetic image of Ca ii at the limb is shown in fig. 1, which shows the two types of spicules. These structures look rather similar to what is observed at the limb of the Sun in Ca ii. Table 1 shows the differences between the two types of spicules in our models, compared to observations. The reader is referred to work that has recently been completed related to spicules: Rouppe van der Voort et al. (2007); De Pontieu et al. (2007); Hansteen et al. (2006); Martínez-Sykora et al. (2009). Most likely, the differences between the two types seem to agree with the observations. However, a deeper study needs to be done on the type ii spicules found in the model (work in progress). Moreover, a closer comparison with the observations is required. It is interesting to note that the appearance of type i does not show a clear preference between models with or without flux emergence, while type ii only appears in the model with the largest ambient field (B1), and only after emerging flux crosses the photosphere. Spicules of both types in the models are located at the footpoints of the coronal loops, where the field lines are open or at least penetrate into the corona. Moreover, the footpoint that is closer to the emerging flux tube is the one that shows the most jets. The type ii spicules show a corresponding nearby hot loop, which also seems to be observed on the Sun (De Pontieu et al. 2009). The hot loop (> 10^6 K) can be observed with coronal emission lines.

Figure 2. Histograms, normalized to the total number of spicules, of decelerations, maximal velocity, maximum length, and duration, from left to right, respectively, measured from the two models (B1: dashed line; A2: dash-dotted line) and their sum (solid line). The vertical line is the median value of the distribution. The two models show different distributions of deceleration, maximum length, and duration, as well as some differences in maximum velocity. Model B1 shows on average slightly lower decelerations, shorter lengths, and lower velocities than A2.
In brief, we can summarize the differences between observations and models for type i spicules by noting that the upper limits of the deceleration, length, duration, and maximal velocity are smaller in the models (Martínez-Sykora et al. 2009). Histograms of deceleration, maximum length, maximal velocity and duration for the type i spicules from the models are shown in fig. 2. These can be compared with the histograms from the observations by De Pontieu et al. (2007). They show agreement in the lower part of the histograms, and the differences between B1 and A2 seem similar to the differences between the two regions observed by De Pontieu et al. (2007). However, the models do not fit the upper part of the observed histograms.
In order to improve the models, we consider that the resolution of the computational box is important. The chromosphere is poorly resolved numerically, and this affects the size of the spicule structures. In addition, low resolution might bring other effects, like the diffusion of the shocks (type i) or of the magnetic discontinuity (type ii). With higher resolution we expect sharper shocks, a larger range of velocities, and better resolved and more frequent spicules.
In the models, it is also important to take time-dependent hydrogen ionization into account in the upper chromosphere. The ionization of hydrogen in the solar chromosphere and transition region does not obey LTE, or instantaneous statistical equilibrium, as the timescales of ionization and recombination are long compared with HD timescales, especially for magneto-acoustic shocks. The shock temperatures are higher, and the intershock temperatures are lower, in models where time-dependent ionization is considered. This effect will likely change the range of parameters of the type i spicules (Leenaarts et al. 2007). Properly modeling the chromosphere is very important in order to study the radiative loss approximations, NLTE with scattering, as has been discussed by Carlsson (2010) and Leenaarts (2010). Partial ionization might have other effects on both types of spicules as well. When considering partial ionization we find that ambipolar diffusion, Hall diffusion and ohmic diffusion contribute at differing rates throughout the chromosphere. The ratio between these three diffusion terms changes from the photosphere up to the transition region (see fig. 3). This will possibly have important effects in the chromosphere, as the parameters controlling reconnection and the damping of waves change.
Finally, we note that the range of ambient magnetic field structures that have been modeled only forms a small subset of those expected when considering supergranulation, plage and the chromospheric network. In addition, continuous weak magnetic flux emergence may need to be added, since it has been observed in the models that the chromosphere and transition region heights are considerably increased with flux emergence.
Figure 1. Synthetic image of Ca ii H from the limb of the model. Observe the two types of spicules, type i located at x = 14 Mm and type ii at x = 7 Mm. The synthetic image is done with MULTI 3D.
Figure 3. Temperature, ion fraction, magnetic field intensity, and Ohm, Hall and ambipolar diffusion calculated as post-processing in a 2D cut of the model, from left to right and top to bottom. Observe that Ohm and Hall diffusion are rather important in the lower chromosphere and the ambipolar diffusion in the upper chromosphere.
Table 1. Properties of the two types of spicules "observed" in the models and compared with observations.

Type i                                   | Type ii                                                              | Observations
150 examples in both models              | 2 examples only in B1 model                                          | Type ii ubiquitous
Length ≈ [0.4, 1.5] Mm                   | Length ≈ 5 Mm                                                        | Type i are longer
Duration ≈ [2, 5] min                    | Duration ≈ 1 min                                                     | Type i have longer durations
Velocities ≈ [5, 35] km/s                | Velocities ≈ 150 km/s                                                | Type i reach larger velocities
Parabolic profile in time (deceleration) | Complex velocity profiles due to acceleration at different heights   | Seems to agree (see bibliography)
Up-downflow profile                      | Only upflow                                                          | Seems to agree
Driven by magneto-acoustic shocks        | Reconnection                                                         | Similar drivers suggested
Observed in Ca ii                        | Counterpart in transition region emission lines                      | Seems to agree
De Pontieu, B., et al. 2007, ApJ, 655, 624
McIntosh, S., et al. 2007, PASJ, 38, 219
Rouppe van der Voort, et al. 2007, ApJ, 660, L169
Hansteen, V. H., et al. 2006, ApJ, 647, L73
Leenaarts, J., Carlsson, M., Hansteen, V., & Rutten, R. J. 2007, A&A, 473, 625
De Pontieu, B., et al. 2009, ApJ, 701, L1
Martínez-Sykora, J., Hansteen, V., & Carlsson, M. 2008, ApJ, 679, 871
Martínez-Sykora, J., Hansteen, V., DePontieu, B., & Carlsson, M. 2009, ApJ, 701, 1569
Carlsson, M. 2010, MmSAI, 80, 606
Leenaarts, J. 2010, in preparation
| []
|
[
"Compact and Noncompact Gauged Maximal Supergravities in Three Dimensions",
"Compact and Noncompact Gauged Maximal Supergravities in Three Dimensions"
]
| [
"H Nicolai [email protected] \nMax-Planck-Institut für Gravitationsphysik\nAlbert-Einstein-Institut\nLaboratoire de Physique Théorique de l'École Normale Supérieure, * , † 24 Rue Lhomond\n* Mühlenberg 1D-14476, F-75231Potsdam, Paris Cedex 05Germany, France\n",
"H Samtleben \nUMR 8549: Unité Mixte du Centre National de la Recherche Scientifique, et de l'École Normale Supérieure\n\n"
]
| [
"Max-Planck-Institut für Gravitationsphysik\nAlbert-Einstein-Institut\nLaboratoire de Physique Théorique de l'École Normale Supérieure, * , † 24 Rue Lhomond\n* Mühlenberg 1D-14476, F-75231Potsdam, Paris Cedex 05Germany, France",
"UMR 8549: Unité Mixte du Centre National de la Recherche Scientifique, et de l'École Normale Supérieure\n"
]
| []
| We present the maximally supersymmetric three-dimensional gauged supergravities. Owing to the special properties of three dimensions -especially the on-shell duality between vector and scalar fields, and the purely topological character of (super)gravity -they exhibit an even richer structure than the gauged supergravities in higher dimensions. The allowed gauge groups are subgroups of the global E 8(8) symmetry of ungauged N = 16 supergravity. They include the regular series SO(p, 8 − p) × SO(p, 8 − p) for all p = 0, 1, . . . , 4, the group E 8(8) itself, as well as various noncompact forms of the exceptional groups E 7 , E 6 and F 4 ×G 2 . We show that all these theories admit maximally supersymmetric ground states, and determine their background isometries, which are superextensions of the anti-de Sitter group SO(2, 2). The very existence of these theories is argued to point to a new supergravity beyond the standard D = 11 supergravity. | 10.1088/1126-6708/2001/04/022 | [
"https://arxiv.org/pdf/hep-th/0103032v2.pdf"
]
| 6,044,100 | hep-th/0103032 | 541776ed326bffa1e540dd421059380f2171dff6 |
Compact and Noncompact Gauged Maximal Supergravities in Three Dimensions
Apr 2001 March 2001
H Nicolai [email protected]
Max-Planck-Institut für Gravitationsphysik
Albert-Einstein-Institut
Laboratoire de Physique Théorique de l'École Normale Supérieure, * , † 24 Rue Lhomond
* Mühlenberg 1D-14476, F-75231Potsdam, Paris Cedex 05Germany, France
H Samtleben
UMR 8549: Unité Mixte du Centre National de la Recherche Scientifique, et de l'École Normale Supérieure
Compact and Noncompact Gauged Maximal Supergravities in Three Dimensions
Apr 2001 March 2001arXiv:hep-th/0103032v2 21 * Supported in part by the European Union under Contracts No.
We present the maximally supersymmetric three-dimensional gauged supergravities. Owing to the special properties of three dimensions -especially the on-shell duality between vector and scalar fields, and the purely topological character of (super)gravity -they exhibit an even richer structure than the gauged supergravities in higher dimensions. The allowed gauge groups are subgroups of the global E 8(8) symmetry of ungauged N = 16 supergravity. They include the regular series SO(p, 8 − p) × SO(p, 8 − p) for all p = 0, 1, . . . , 4, the group E 8(8) itself, as well as various noncompact forms of the exceptional groups E 7 , E 6 and F 4 ×G 2 . We show that all these theories admit maximally supersymmetric ground states, and determine their background isometries, which are superextensions of the anti-de Sitter group SO(2, 2). The very existence of these theories is argued to point to a new supergravity beyond the standard D = 11 supergravity.
Introduction
In this article we explain in detail the construction of maximal gauged supergravities in three dimensions, recently announced in [1]. While maximal gauged supergravities in higher dimensions have been known for a long time, starting with the gauged N = 8 theory in four dimensions [2], and subsequently for dimensions 5 ≤ D ≤ 8 [3,4,5,6,7], the results on gauged supergravities in three dimensions and below have remained somewhat fragmentary until now. The results presented in this paper close this gap. In addition they open up new perspectives: unlike maximal gauged supergravities in higher dimensions, the maximal AdS 3 supergravities, which we obtain here, are neither contained in nor derivable by any known mechanism from the known maximal supergravities in higher dimensions. The new, and purely field theoretic, evidence for a theory beyond D = 11 supergravity [8] and type IIB supergravity [9,10] that we have thus obtained is perhaps the most important consequence of the present work.
Topological gauged supergravities in three dimensions were first constructed in [11]; these theories are supersymmetric extensions of Chern-Simons (CS) theories with (n L , n R ) supersymmetry and gauge group SO(n L ) × SO(n R ), but have no propagating matter degrees of freedom (see also [12] for earlier work on D = 3 supergravity). Matter coupled gauged supergravities can, of course, be obtained by direct dimensional reduction of gauged supergravities in D ≥ 4 to three dimensions and below, but these do not preserve the maximal supersymmetry [13]. Another matter coupled theory with half maximal supersymmetry, obtained by compactifying the ten-dimensional N = 1 supergravity on a seven-sphere, has been discussed in [14] (however, [14] deals only with the bosonic part of the Lagrangian). In a different vein, [15] constructs an abelian gauged supergravity by deforming the D = 3, N = 2 supergravity whose matter sector is described by an SO(n, 2)/SO(n) × SO(2) coset space sigma model. This model bears some resemblance to the present work in that the vector fields appear via a CS term rather than a Yang-Mills term, unlike the matter-coupled theories mentioned before. However, the construction is limited to the abelian case, whereas the present construction yields non-abelian CS theories, thereby providing the first examples of a non-abelian duality between scalars and vector fields in three space-time dimensions.
Gauged supergravities have attracted strong interest again recently in the context of the conjectured duality between AdS supergravities and superconformal quantum field theories on the AdS boundary [16]. For instance, classical supergravity domain wall solutions are claimed to encode the information on the renormalization group flow of the strongly coupled gauge theory [17]. The theories admitting AdS 3 ground states are expected to be of particular interest for the AdS/CFT duality due to the rich and rather well understood structure of two-dimensional superconformal field theories. However, a large part of the recent work dealing with the conjectured AdS/CFT correspondence in AdS 3 has been based on the BTZ black hole solution of [18], which has no propagating matter degrees of freedom in the bulk. We will see that the gauged N = 16 theories yield a rich variety of supersymmetric groundstates, virtually exhausting all the possible vacuum symmetries of AdS type listed in [19], and thus an equally rich variety of superconformal theories on the boundary.
As is well known [20], the scalar fields in the toroidal compactification of D = 11 supergravity [8] on a d-torus form a coset space sigma model manifold G/H with the exceptional group G = E d(d) and H its maximally compact subgroup; in particular, for d = 8 one obtains a theory with global E 8 (8) symmetry and local SO (16) [21,22]. The complete list of ungauged matter coupled supergravities in three dimensions (which unlike topological supergravities only exist with N ≤ 16 supersymmetries) has been presented in [23]. Gauging any of these theories corresponds to promoting a subgroup G 0 of the rigid G symmetry group to a local symmetry in such a way that the full local supersymmetry is preserved. The latter requirement engenders additional Yukawa-like couplings between the scalars and fermions, as well as a rather complicated potential for the scalar fields. As we will demonstrate by explicit construction, the possible compact and non-compact nonabelian gauge groups, all of which are subgroups of the global E 8 (8) symmetry of the ungauged maximal supergravity theory and preserve the full local N = 16 supersymmetry, are more numerous in three dimensions than in higher dimensions.
There are essentially two properties which distinguish the three dimensional models from all their higher dimensional relatives. First, the gravitational sector does not contain any propagating degrees of freedom such that the theories without matter coupling may be formulated as CS theories of AdS supergroups [11]; see also the classic article [24] for a description of the peculiarities of gravity in three space-time dimensions. In fact, pure quantum gravity [25,26] and quantum supergravity [27] are exactly solvable in three space-time dimensions. Second, in three dimensions scalar fields are on-shell equivalent to vector fields. At the linearized level, this duality is encapsulated in the relation
ǫ µνρ ∂ ρ ϕ m = ∂ [µ B ν] m .(1.1)
This relation plays a special role in the derivation of maximal N = 16 supergravity in three dimensions [21,22,28,29]: in order to expose its rigid E 8 (8) symmetry, all vector fields obtained by dimensional reduction of D = 11 supergravity [8] on an 8-torus must be dualized into scalar fields. Vice versa, the duality (1.1) allows us to redualize part of the scalar fields into vector fields, such that the ungauged theory possesses different equivalent formulations which are related by duality [28]. As explained there, the replacement of scalar fields by vector fields breaks the exceptional E 8 (8) symmetry; when attempting to gauge this theory while maintaining its E 8 (8) structure and thus keeping all the scalars, it is therefore a priori not clear how to re-incorporate the vector fields necessary for the gauging without introducing new and unwanted propagating degrees of freedom. We will circumvent this apparent problem by interpreting (1.1) as defining up to 248 vector fields as (nonlocal) functions of the scalar fields. This freedom in the choice of the number of vector fields is at the origin of the large number of possible gauge groups that we encounter in three dimensions. In higher dimensions, the gauge group is to a large extent determined by the number and transformation behavior of the vector fields under the rigid G symmetry of the ungauged theory. As a necessary condition for gauging a subgroup G 0 ⊂ G, the vector fields or at least a maximal subset thereof must transform in the adjoint representation of G 0 . In the latter case there may remain additional vector fields which transform nontrivially under the gauge group. Upon gauging, these charged vector fields would acquire mass terms and thereby spoil the matching of bosonic and fermionic degrees of freedom; to avoid such inconsistencies one needs some additional mechanism to accommodate these degrees of freedom. Altogether, this does not leave much freedom for the choice of the gauge group. In D = 4 and D = 7 one must make use of the full set of vector fields transforming in the adjoint representation of the gauge groups SO(8) and SO(5), respectively. The situation is more subtle in dimensions D = 5, 6 where only a subset of the vector fields transforms in the adjoint representation of the gauge groups SO (6) and SO(5), respectively. The problem of coupling charged vector fields is circumvented in D = 5 by dualizing the additional vector fields into massive self-dual two forms [4,6]; in D = 6 they are absorbed by massive gauge transformations of the two forms [7].
By contrast the proper choice of gauge group is much less obvious in three dimensions. With (1.1), we may introduce for any subgroup G 0 ⊂ E 8(8) a set of ν = dim G 0 vector fields transforming in the adjoint representation of G 0 . A priori, there is no restriction on the choice of G 0 ; however, demanding maximal supersymmetry of the gauged theory strongly restricts the possible choices for G 0 . It is one of our main results that the entire set of consistency conditions for the three-dimensional gauged theory may be encoded into a single algebraic condition
P 27000 Θ = 0 , (1.2)
where Θ is the embedding tensor characterizing the subgroup G 0 , and P a projector in the E 8(8) tensor product decomposition (248 × 248) sym = 1+3875+27000. Solutions to (1.2) may be found by purely group theoretical considerations. Having formulated the consistency conditions of the gauged theory as a projector condition for the embedding tensor of the gauge group allows us to construct a variety of models with maximal local supersymmetry. As a result, we identify a "regular" series of gauged theories with gauge group SO(p, 8−p)×SO(p, 8−p), including the maximal compact gauge group SO(8)×SO(8) as a special case. In addition, we find several theories with exceptional noncompact gauge groups, among them an extremal theory which gauges the full E 8 (8) symmetry. These theories have no analog in higher dimensions. This collection of maximal admissible gauge groups is presented in Table I; all the gauge groups -apart from the theory with local E 8(8) -have two simple factors with a fixed ratio of coupling constants. As a by-product of our construction we can understand and re-state the corresponding consistency conditions for the higher dimensional gauged supergravities of [2,6] in very simple terms; in particular, the derivation of the T -identities for the D = 4, 5 theories can now be simplified considerably by reducing it to purely group theoretical condition analogous to (1.2). Remarkably, and even though the rigid G = E d(d) symmetry of the ungauged theory is broken, the construction and proof of consistency of the gauged theory makes essential use of the properties of the maximal symmetry group E d(d) in all cases.
Table I: admissible gauge groups G 0 ⊂ E 8(8) and the ratios of their coupling constants.

gauge group G 0                                        ratio of coupling constants
SO(p, 8−p)×SO(p, 8−p)                                  g 1 /g 2 = −1
G 2(2) ×F 4(4) ,  G 2 ×F 4(−20)                        g G 2 /g F 4 = −3/2
E 6(6) ×SL(3) ,  E 6(2) ×SU(2, 1) ,  E 6(−14) ×SU(3)   g A 2 /g E 6 = −2
E 7(7) ×SL(2) ,  E 7(−5) ×SU(2)                        g A 1 /g E 7 = −3
E 8(8)                                                 g E 8

This paper is organized as follows. In Chapter 2 we review the ungauged N = 16 theory and in particular discuss the full nonlinear version of the duality (1.1) between scalar and vector fields. In Chapter 3 we present the Lagrangian of the gauged theory. It is characterized by a set of tensors A 1,2,3 which are nonlinear functions of the scalar fields and describe the Yukawa-type couplings between fermions and scalars as well as the scalar potential. We derive the consistency conditions that these tensors must satisfy in order for the full N = 16 supersymmetry to be preserved, and show that A 1,2,3 combine into a "T -tensor" analogous to the one introduced in [2], but now transforming as the 1 + 3875 of E 8(8) . In Chapter 4 we show that these consistency conditions imply and may entirely be encoded into the algebraic equation (1.2) for the embedding tensor of the gauge group, which selects the admissible gauge groups G 0 ⊂ E 8(8) . In turn, every solution to (1.2) yields a nontrivial solution for A 1,2,3 in terms of the scalar fields which satisfies the full set of consistency conditions. Maximal supersymmetry of the gauged theory thus translates into a simple projector equation for the gauge group G 0 .
In Chapter 5 we analyze equation (1.2) and its solutions among the maximal subgroups of SO(16) and E 8(8) , respectively. We find the maximal compact admissible gauge group G 0 = SO(8)×SO(8) as well as its noncompact real forms SO(p, 8−p)×SO(p, 8−p) for p = 1, ..., 4. In addition, we identify the exceptional noncompact gauge groups given in Table I. Each of these groups gives rise to a maximally supersymmetric gauged supergravity. Chapter 6 is devoted to an analysis of stationary points of the scalar potential which preserve the maximal number of 16 supersymmetries. We show that all our theories admit a maximally symmetric ground state and determine their background isometries. Finally we speculate on a possible higher dimensional origin of these theories.
The ungauged N = 16 theory
We first summarize the pertinent results about (ungauged) maximal N = 16 supergravity in three dimensions. The complete Lagrangian and supersymmetry transformations were presented in [22], whose conventions and notation we follow throughout this paper. 1 The physical fields of N = 16 supergravity constitute an irreducible supermultiplet with 128 bosons and 128 fermions transforming as inequivalent fundamental spinors of SO (16). In addition, the theory contains the dreibein e µ α and 16 gravitino fields ψ I µ , which do not carry propagating degrees of freedom in three dimensions. As first shown in [21], it possesses a "hidden" invariance under rigid E 8 (8) and local SO(16) transformations. Consequently, the scalar fields are described by an element V of the non-compact coset space E 8(8) /SO (16) in the fundamental 248-dimensional representation of E 8 (8) , which transforms as
V(x) −→ g V(x) h −1 (x) , g ∈ E 8(8) , h(x) ∈ SO(16) . (2.1)

Equivalently, the scalar fields can be characterized in terms of the left-invariant currents

V −1 ∂ µ V = 1 2 Q IJ µ X IJ + P A µ Y A , (2.2)

where X IJ and Y A denote the 120 compact and the 128 noncompact generators of E 8(8) , respectively.
The composite SO(16) connection Q IJ µ enters the covariant derivative D µ in
D µ ψ I ν := ∂ µ ψ I ν + 1 4 ω µ ab γ ab ψ I ν + Q IJ µ ψ J ν , D µ χȦ := ∂ µ χȦ + 1 4 ω µ ab γ ab χȦ + 1 4 Q IJ µ Γ IJ AḂ χḂ . (2.3)
Definition (2.2) implies the integrability relations:
Q IJ µν + 1 2 Γ IJ AB P A µ P B ν = 0 , D [µ P A ν] = 0 , (2.4)
where the SO(16) field strength is defined as
Q IJ µν := ∂ µ Q IJ ν − ∂ ν Q IJ µ + 2 Q K[I µ Q J]K ν .
The full supersymmetry variations read [22] δe µ α = iǫ I γ α ψ I µ , δ ψ I µ = D µ ǫ I − 1 4 iγ ν ǫ J χ Γ IJ γ µν χ ,
V −1 δV = Γ I AȦ χȦǫ I Y A , δ χȦ = i 2 γ µ ǫ I Γ I AȦ P A µ ,(2.5)
with the supercovariant current
P A µ := P A µ − ψ I µ χȦΓ I AȦ .
As shown in [22], they leave invariant the Lagrangian 2
L = − 1 4 eR + 1 4 eP µA P A µ + 1 2 ǫ λµν ψ I λ D µ ψ I ν − i 2 eχȦγ µ D µ χȦ − 1 2 e χȦγ µ γ ν ψ I µ Γ I AȦ P A ν − 1 8 e χγ ρ Γ IJ χ ψ I µ γ µνρ ψ J ν − ψ I µ γ ρ ψ µJ + χχ ψ I µ γ ν γ µ ψ I ν +e 1 8 (χχ)(χχ) − 1 96 χγ µ Γ IJ χ χγ µ Γ IJ χ . (2.6)
The invariance is most conveniently checked in 1.5 order formalism, with the torsion
T µν ρ = 1 2 iψ K µ γ ρ ψ K ν + 1 4 i χȦ γ µν ρ χȦ . (2.7)
A central role in our construction is played by the on-shell duality between scalar fields and vector fields in three dimensions, which we shall now discuss. The scalar field equation induced by (2.6) is given by
D µ e (P µA − ψ I ν γ µ γ ν χȦΓ I AȦ ) = = 1 2 ǫ µνρ ψ I µ ψ J ν Γ IJ AB P B ρ + 1 8 ie χγ µ Γ IJ χ Γ IJ AB P B µ ,(2.8)
Upon use of the Rarita-Schwinger and Dirac equations for ψ I µ and χȦ, respectively, this equation may be rewritten in the form
∂ µ (e J µ M ) = 0 , (2.9)
where J µ M is the conserved Noether current associated with the rigid E 8(8) symmetry [31]:
eJ µM = 2V M B P µB − i 2 V M IJ χγ µ Γ IJ χ − 2e −1 ǫ µνρ V M IJ ψ I ν ψ J ρ − i Γ I AȦ V M A ψ I ν γ ρ χȦ . (2.10)
In writing this expression we have made use of the equivalence of the fundamental and adjoint representations of E 8(8) which yields the relation (see also App. A)
V M A := 1 60 Tr (t M V t A V −1 ) .
The existence of the conserved current (2.10) allows us to introduce 248 abelian vector fields B µ M (with index M = 1, . . . , 248), via
ǫ µνρ B νρ M = eJ µM , (2.11)

where B µν M := ∂ µ B ν M − ∂ ν B µ M is the corresponding abelian field strength, which is invariant under the gauge transformations

B µ M → B µ M + ∂ µ Λ M . (2.12)
In accordance with (2.1) these vector fields transform in the adjoint representation of rigid E 8 (8) and are singlets under local SO (16). The supersymmetry transformations of the vector fields have not been given previously; they follow by "E 8 (8) covariantization" of the supersymmetry variations of the 36 vector fields obtained by direct dimensional reduction of D = 11 supergravity to three dimensions [32]
δB µ M = − 2 V M IJ ǫ I ψ J µ + iΓ I AȦ V M A ǫ I γ µ χȦ . (2.13)
For consistency, this transformation must be compatible with the duality relation (2.11). To check this, it is convenient to rewrite the latter in terms of the supercovariant field strength
B µν M := B µν M + 2 V M IJ ψ I µ ψ J ν − 2i Γ I AȦ V M A ψ I [µ γ ν] χȦ ,
whose supercovariance is straightforwardly verified from (2.13). The duality relation (2.11) then takes the following supercovariant form
ǫ µνρ B νρ M = 2e V M A P µA − i 2 eV M IJ χγ µ Γ IJ χ .
(2.14)
Equation (2.14) consistently defines the dual vector fields as nonlocal and nonlinear functions of the original 248 scalar fields (including the 120 gauge degrees of freedom associated with local SO(16)), provided the latter obey their equations of motion. We emphasize that in this way we can actually introduce as many vector fields as there are scalar fields, whereas the direct dimensional reduction of D = 11 supergravity to three dimensions produces only 36 vector fields. The "E 8 (8) covariantization" alluded to above simply consists in extending the relevant formulas from these 36 vectors to the full set of dim G 0 ≤ 248 vector fields in a way that respects the E 8(8) structure of the theory. In the ungauged theory the vector fields have been introduced merely on-shell; there is no Lagrangian formulation that would comprise the scalar fields as well as their dual vector fields. However, we shall see that the gauged theory provides a natural off-shell framework which accommodates both the scalars and their dual vectors.
From (2.14) we can also extract the equation of motion of the dual vectors: acting on both sides with ǫ ρµν ∂ ν and making use of the integrability relations (2.4), we obtain
∂ ν B µν M = − 1 2 e −1 ǫ µνρ V M IJ Q IJ νρ + fermionic terms . (2.15)
Also the fermionic terms still depend on the original scalar fields. This is obvious from the fact that we need the scalar field matrix V to convert the SO(16) indices on the fermions into the E 8 (8) indices appropriate for the l.h.s. of this equation.
(Let us note already here that in the gauged theory, the r.h.s. of this equation will acquire additional contributions containing B µν M in order of the coupling constant). We recognize an important difference between the "dual formulations" of the theory: whereas the vectors disappear completely in the standard formulation of the theory, the vector equations of motion in general still depend on the dual scalar fields. It is only under very special circumstances, and for special subsets of the 248 vector fields, that one can completely eliminate the associated dual scalars. This is obviously the case for the version obtained by direct reduction of D = 11 supergravity to three dimensions where only 92 bosonic degrees of freedom appear as scalar fields while 36 physical degrees of freedom appear as vector fields. As shown in [28], the latter are associated with the 36-dimensional maximal nilpotent commuting subalgebra of E 8 (8) , but there are further intermediate possibilities.
To conclude this section, we recall that the three dimensional Einstein-Hilbert term can be rewritten in Chern-Simons form as
− 1 4 eR = 1 4 ǫ µνρ e µ a F νρ a ,(2.16)
by means of the dual spin connection
A a µ = − 1 2 ǫ abc ω µ bc , with field strength F a µν = 2∂ [µ A a ν] + ǫ a bc A b µ A c ν .
When gauging the theory the Minkowski background space-time will be deformed to an AdS 3 spacetime characterized by
R µν = 2m 2 g µν ,(2.17)
with (negative) cosmological constant Λ = −2m 2 . The Lorentz-covariant derivative is accordingly modified to an AdS 3 covariant derivative
D ± µ := ∂ µ + 1 2 iγ a (A µ a ± me µ a ) , (2.18) with commutator [D ± µ , D ± ν ] = 1 2 iγ a (F µν a + m 2 ǫ abc e µb e νc ) .
We will return to these formulas when discussing the conditions for (n L , n R ) supersymmetry in AdS 3 in Chapter 6.
Gauged N = 16 supergravity
The Lagrangian (2.6) is invariant under rigid E 8 (8) and local SO (16). To gauge the theory, we now select a subgroup G 0 ⊂ E 8(8) which will be promoted to a local symmetry. The resulting theory will then be invariant under local G 0 × SO(16), such that (2.1) is replaced by
V(x) −→ g 0 (x) V(x) h −1 (x) , g 0 (x) ∈ G 0 , h(x) ∈ SO(16) , (3.1)
However, it should be kept in mind that the local symmetries are realized in different ways: as before, the local SO(16) is realized in terms of "composite" gauge connections, whereas the gauge fields associated with the local G 0 symmetry are independent fields to begin with. Restricting to semisimple subgroups, G 0 is properly characterized by means of its embedding tensor Θ MN which is the restriction of the Cartan-Killing form η MN onto the associated algebra g 0 . The embedding tensor will have the form
Θ MN = Σ j ε j η (j) MN , (3.2)

where the matrices η MN η (j) NK project onto the simple subfactors of G 0 , and the numbers ε j correspond to the relative coupling strengths. It will turn out that these coefficients are completely fixed by group theory, so there is only one overall gauge coupling constant g. Owing to the symmetry of the projectors η (j) the embedding tensor is always symmetric:
Θ MN = Θ N M . (3.3)
As discussed in the introduction we introduce a subset of ν = dim G 0 vector fields, obtained from (2.14) by projection with Θ MN . For these we introduce special labels m, n, . . ., with the short hand notation
B µ m t m ≡ B µ M Θ MN t N , etc. (3.4)
Note that we do not make any assumption about G 0 at this point; in particular, our ansatz allows for compact as well as noncompact gauge groups. The possible choices for G 0 will be determined in Chapter 5. The first step is the covariantization of derivatives in (2.2) according to
V −1 D µ V ≡ V −1 ∂ µ V + g B µ m V −1 t m V ≡ P A µ Y A + 1 2 Q IJ µ X IJ , (3.5)
with gauge coupling constant g. The non-abelian field strength reads
B µν m := ∂ µ B ν m − ∂ ν B µ m + g f m np B µ n B ν p . (3.6)
The integrability relations (2.4) are modified to
Q IJ µν + 1 2 Γ IJ AB P A µ P B ν = g B µν m Θ mn V n IJ , 2D [µ P A ν] = g B µν m Θ mn V n A . (3.7)
With the hidden g dependent extra terms in the definition of the currents in (3.5), their supersymmetry variations become
δQ IJ µ = 1 2 (Γ IJ Γ K ) AȦ P A µ χȦǫ K + g(δB µ m ) Θ mn V n IJ , δP A µ = Γ I AȦ D µ (χȦǫ I ) + g(δB µ m ) Θ mn V n A , (3.8)
with the variation of the vector fields given in (2.13). Both modifications violate the supersymmetry of the original Lagrangian. In order to restore local supersymmetry we follow the standard Noether procedure as in [2], modifying both the original Lagrangian as well as the transformation rules by g-dependent terms. We will first state the results, and then explain their derivation and comment on the special and novel features of our construction.
The full Lagrangian can be represented in the form
L = L (0) + L (1) + L (2) + L (3) ,(3.9)
where L (0) is just the original Lagrangian (2.6), but with the modified currents defined in (3.5); thus L (0) and L differ by terms of order O(g). The contributions L (1) and L (2) are likewise of order g and describe the Chern-Simons coupling of the vector fields and the Yukawa type couplings between scalars and fermions, respectively:
L (1) = − 1 4 g ǫ µνρ B µ m ∂ ν B ρ m + 1 3 gf mnp B ν n B ρ p , (3.10) L (2) = 1 2 ge A IJ 1 ψ I µ γ µν ψ J ν + ige A IȦ 2 χȦ γ µ ψ I µ + 1 2 ge AȦḂ 3 χȦ χḂ , (3.11)
where the tensors A 1,2,3 are functions of the scalar matrix V which remain to be determined. At order O(g 2 ), there is the scalar field potential W (V):
L (3) = eW ≡ 1 8 g 2 e A IJ 1 A IJ 1 − 1 2 A IȦ 2 A IȦ 2 . (3.12)
Besides the extra g dependent terms induced by the modified currents, the supersymmetry variations must be amended by the following O(g) terms:
δ g ψ I µ = ig A IJ 1 γ µ ǫ J , δ g χȦ = g A IȦ 2 ǫ I . (3.13)
Of course, the above modifications of the Lagrangian and the supersymmetry transformation rules have not been guessed "out of the blue", but at this point simply constitute an ansatz that has been written down in analogy with known gauged supergravities, in particular the N = 8 theory of [2]. The consistency of this ansatz must now be established by explicit computation.
The SO(16) tensors A 1,2,3 depending on the scalar fields V introduce Yukawa-type couplings between the scalars and the fermions beyond the derivative couplings generated by (2.2), as well as a potential for the scalar fields. As is evident from their definition, the tensors A IJ 1 and AȦḂ 3 are symmetric in their respective indices. Therefore, A IJ 1 decomposes as 1 + 135 under SO(16), viz.
A IJ 1 = A (0) 1 δ IJ +Ã IJ 1 ,(3.14)
with Ã JJ 1 = 0, while for AȦḂ 3 we have the decomposition

AȦḂ 3 = A (0) 3 δȦḂ + ÃȦḂ 3 , (3.15)

where ÃȦḂ 3 = 1 4! A (4) 3 IJKL Γ IJKL ȦḂ + 1 2·8! A (8) 3 I 1 ...I 8 Γ I 1 ...I 8 ȦḂ .
Therefore A 3 can contain the representations 1+1820+6435. However, we will see that the 6435 drops out. Due to the occurrence of the 1820 in this decomposition, the tensor A 3 cannot be expressed in terms of A 1,2 unlike for D = 4 and D = 5. The independence of A 3 is a new feature of the D = 3 gauged theory. Several restrictions on the tensors A 1,2,3 can already be derived by imposing closure of the supersymmetry algebra on various fields at order O(g). Computing the commutator on the dreibein field we obtain an extra Lorentz rotation with parameter
Λ αβ = 2gA IJ 1 ǫ I 1 γ αβ ǫ J 2 ,(3.16)
while evaluation of the commutator on the vector fields and the scalar field matrix V yields an extra gauge transformation with parameter
Λ m = 2 V m IJ ǫ I 1 ǫ J 2 + iB µ m ǫ I 1 γ µ ǫ I 2 . (3.17)
The latter induces a further SO(16) rotation with parameter ω IJ = gΛ m V m IJ on V (as well as the fermions which transform under SO (16)). For the derivation of this result we need the relations
V m A Γ (I AȦ A J)Ȧ 2 = V m IK A JK 1 + V m JK A IK 1 , (3.18) Γ [I AȦ A J]Ȧ 2 = V C IJ Θ CD V D A ,(3.19)
which give the first restrictions on the tensors A 1,2,3 . A peculiarity is that the closure of the superalgebra on B µ m requires use of the duality equation, whereas the equations of motion are not needed to check closure on the remaining bosonic fields. Tracing (3.18) over the indices I and J and using the symmetry of A IJ 1 we immediately obtain
Γ I AȦ A IȦ 2 = 0 . (3.20)
The tensor A IȦ 2 thus transforms as the 1920 (traceless vector spinor) representation of SO (16).
To state the restrictions imposed on these tensors by the requirement of local supersymmetry more concisely, we now define the T -tensor
T A|B := V M A V N B Θ MN . (3.21)
Clearly T A|B = T B|A by the symmetry of Θ. Unlike the cubic expressions in [2] and [6], however, the T -tensor is quadratic in V due to the equivalence of the fundamental and adjoint representations for E 8(8) , see (A.4). The tensors A 1,2,3 must be expressible in terms of T if the theory can be consistently gauged. The detailed properties of the T -tensor will be the subject of the following chapter. Let us next consider the consistency conditions for local supersymmetry of (3.9) step by step. All cancellations that are G 0 -covariantizations of the corresponding terms in the ungauged theory will work as before, and for this reason we need only discuss those variations which have no counterpart in the ungauged theory. Variation of L (1) produces only the contribution
δL (1) = − 1 4 gǫ µνρ δB µ m B νρm ,
because the CS term depends on no other fields but B µ m . Inserting (2.13), the above variation can be seen to cancel against the extra terms in the variation of L (0) arising in the integrability conditions, cf. (3.7).
A second set of g-dependent terms is obtained by varying B µ m in Q µ and P µ , cf. (3.8). Expressing the result by means of the T -tensor, we obtain
g 2 T IJ|KL ǫ I ψ J µ − i T KL|A Γ I AḂ ǫ I γ µ χḂ ψ K ν γ µνρ ψ L ρ + i 4 χγ µ Γ KL χ − g T A|KL ǫ K ψ L µ − 1 2 i T A|B Γ K BḂ ǫ K γ µ χḂ P µA − χȦγ ν γ µ ψ I ν Γ I AȦ .
These terms combine with the variations of the fermionic fields from L (2) and the new variations (3.13) in L (0) . Consideration of the ǫψP and ǫχP terms now reproduces (3.19), but in addition requires the differential relations
D µ A IJ 1 = P µ A Γ (I AȦ A J)Ȧ 2 , D µ A IȦ 2 = 1 2 P µ A Γ I AḂ AȦḂ 3 + Γ J AȦ A IJ 1 − 1 2 P µ A Γ I BȦ T A|B . (3.22)
Multiplying the second relation by Γ I AȦ and invoking (3.20) yields
T A|B = (A (0) 1 + A (0) 3 ) δ AB + 1 16 Γ I AȦÃȦḂ 3 Γ IḂ B . (3.23) Since Γ I Γ (8) Γ I = 0 there is no 6435 of SO(16) in T A|B .
However, the argument does not yet suffice to rule out such a contribution in A 3 . As in [2], the supersymmetry variation of the tensors A 1,2 is obtained from (3.22) by replacing P A µ by Γ I AȦ ǫ I χȦ:
δÃ IJ 1 = Γ K AḂ ǫ K χḂ Γ (I AȦ A J)Ȧ 2 , δA IȦ 2 = 1 2 Γ K AḂ ǫ K χḂ Γ I AĊ AȦĊ 3 + Γ J AȦ A IJ 1 − Γ I BȦ T A|B . (3.24)

The tracelessness of A IȦ 2 in (3.20) in conjunction with (3.22) also implies that A (0) 1 and A (0) 3 are constant. This is consistent with the fact that the trace parts drop out from the above variations. Observe that the supersymmetry variation of A 3 does not yet enter at this point as it appears only at cubic order in the fermions.
At O(g 2 ) we get two quadratic identities. The first multiplies the g 2 ψǫ variations and is straightforwardly obtained
A IK 1 A KJ 1 − 1 2 A IȦ 2 A JȦ 2 = 1 16 δ IJ A KL 1 A KL 1 − 1 2 A KȦ 2 A KȦ 2 . (3.25)
The second comes from the g 2 χǫ variations: performing the O(g) variations in L (2) we obtain
δ g L (2) = g 2 eχȦǫ I ( − 3A IJ 1 A IȦ 2 + AȦḂ 3 A IḂ 2 ) .
Varying A 1,2 in the potential, on the other hand, and making use of the above formulas (3.24) together with (3.20), we arrive at:
χȦǫ K (Γ K Γ I )ȦḂ 3 16Ã IJ 1 A JḂ 2 − 1 16ÃḂĊ 3 A IĊ 2 .
By the tracelessness of A IȦ 2 we can drop the tildes in this expression, and thus obtain the second relation
3A IJ 1 A JȦ 2 −A IḂ 2 AȦḂ 3 = 1 16 (Γ I Γ J )ȦḂ 3A JK 1 A KḂ 2 −A JĊ 2 AḂĊ 3 , (3.26)
which must be satisfied for local supersymmetry to hold. Thus, at linear order in the fermions, supersymmetry requires the tensors A 1,2,3 to satisfy the identities (3.18), (3.19), and (3.22)-(3.26). However, these do not yet constitute a complete set of restrictions. In marked contrast to the D ≥ 4 gauged supergravities, we get further and independent conditions at cubic order in the fermions. This special feature is again related to the algebraic independence of the third tensor A 3 . Although the necessary calculations are quite tedious, we here refrain from giving details and simply state the results, as the relevant Fierz technology is (or should be) standard by now. Interested readers may find many relevant formulas in [22].
The analysis of the (ψψ)(ψǫ) terms gives
T IJ|KL = 2δ I[K A L]J 1 +T [IJ|KL] . (3.27)
The structure of the r.h.s. of this equation thus restricts T IJ|KL to the SO(16) components 1, 135 and 1820. Demanding the cancellation of (χχ)(ψǫ) terms yields three more constraints:
A (0) 3 + 2A (0) 1 = 0 , A (8) 3 I 1 ...I 8 = 0 , T [IJ|KL] = 2Ã (4) 3 IJKL . (3.28)

Combining these constraints with (3.27) and the decompositions (3.14) and (3.15), we can summarize the components of the T -tensor as

T IJ|KL = 2 δ IJ KL A (0) 1 + 2 δ I[K Ã L]J 1 + 2Ã (4) 3 IJKL ,
T IJ|A = Γ [I AȦ A J]Ȧ 2 ,
T A|B = −A (0) 1 δ AB + 1 2·4! Γ IJKL AB Ã (4) 3 IJKL . (3.29)
In particular, the two singlets and the two 1820 representations in T IJ|KL and T A|B coincide. Finally, the analysis of the (χχ)(χǫ) terms yields

δA (4) 3 IJKL = − 1 2 ǫ M χȦ (Γ M Γ [IJK )ȦḂ A L]Ḃ 2 . (3.30)
In order to derive this condition and to prove the vanishing of the (χχ)(χǫ) terms, one needs the additional Fierz identity, which cannot be derived from the relations given in the Appendix of [22]:
(χΓ KLM N χ) (χȦǫ I ) (Γ I Γ KLM )ȦḂA NḂ 2 = = 36 (χγ µ Γ IJ χ) (χȦγ µ ǫ I ) A JȦ 2 − 4 (χγ µ Γ KL χ) (χȦγ µ ǫ I ) Γ KL AḂ A IḂ 2 + 48 (χχ) (χȦǫ I ) A IȦ 2 − 12 (χγ µ Γ KL χ) (χȦγ µ ǫ I ) Γ IK AḂ A LḂ 2 ,
The tracelessness of A IȦ 2 is again crucial in obtaining this result.
Let us summarize our findings. The complete set of consistency conditions ensuring supersymmetry of the gauged Lagrangian (3.9) is given by the linear relations (3.29), the differential identities (3.22), (3.30), the relation (3.18), and the quadratic identities (3.25), (3.26). The tensors A 1,2,3 can contain only the SO(16) representations 1, 135, 1820 and 1920. Equations (3.29) show that likewise the T -tensor may contain only these representations. The remarkable fact - which eventually allows the resolution of all identities - is that these SO(16) representations combine into representations of E 8(8) . More specifically, we have [33]

135+1820+1920 = 3875 , (3.31)

so that the T -tensor (and hence the embedding tensor) is contained in the 1 + 3875 of E 8(8) , just as the T -tensors of the maximal gauged theories in D = 4 and D = 5 are contained in the 912 of E 7(7) [2] and in the 351 of E 6(6) [6], respectively. We shall come back to this point in the next chapter.

Perhaps the most unexpected feature of our construction is the fact that the vector fields appear via a CS term (3.10) at order g, rather than the standard Yang-Mills term. This has no analog in higher dimensions, where the vector fields appear already in the ungauged theory via an abelian kinetic term. In hindsight this coupling of the vector fields turns out to be the only consistent way to bring in the dual vector fields without introducing new propagating degrees of freedom, and thereby to preserve the balance of bosonic and fermionic physical degrees of freedom.
The emergence of non-abelian CS terms in the maximally supersymmetric theories naturally leads to a non-abelian extension of the duality relation (2.14)
ǫ µνρ B µν m = 2 eV m A P ρA − i 2 eV m IJ χγ ρ Γ IJ χ ,(3.32)
which consistently reduces to (2.14) in the limit g → 0. However, in this limit, the vector fields drop from the Lagrangian such that the duality relation (2.14) no longer follows from a variational principle in the ungauged theory but rather must be imposed by hand. This can be viewed as a very mild form of the gauge discontinuity encountered for gauged supergravities in odd dimensions [3,4,6]. In contrast to those models however, the Lagrangian (3.9) has a perfectly smooth limit as g → 0.
Because of the explicit appearance of the gauge fields on the r.h.s. of the nonabelian duality relation it is no longer possible to trade the vector fields for scalar fields and thereby eliminate them, unlike in [28]. Vice versa, the explicit appearance of the scalar fields in the potential of (3.9) also excludes the possibility to eliminate some of these fields by replacing them by vector fields. In contrast to the ungauged theory which allows for different equivalent formulations related by duality, the gauged theory apparently comes in a unique form which requires the maximal number of scalar fields together with the dual vectors corresponding to the gauge group G 0 .
Note that unlike in (2.14), the nonabelian duality relation (3.32) may be imposed only for those vector fields which belong to the gauge group G 0 . Having gauged the theory, we can no longer introduce additional vector fields as was the case for the ungauged theory. This is because additional vector fields transforming nontrivially under the gauge group G 0 would acquire mass terms in the gauged theory, entailing a mismatch between bosonic and fermionic degrees of freedom. As a consequence, (3.32) does not imply the full set of bosonic equations of motion, but just their projection onto the subgroup G 0 . However, just as in (2.15) we may deduce the equations of motion for the vector fields from (3.32) by acting on both sides with ǫ ρµν D ν and making use of (3.7):
D ν B µν m = 1 2 ge −1 ǫ µνρ V m A V n A + V m IJ V n IJ Θ nk B νρ k − 1 2 e −1 ǫ µνρ V m IJ Q IJ νρ + fermionic terms = g V m B T B|A + V m IJ T IJ|A P µA − 1 2 e −1 ǫ µνρ V m IJ Q IJ νρ + fermionic terms .
T -identities
In the foregoing chapter we have derived the consistency conditions which must be satisfied by the tensors A 1,2,3 and the T -tensor in order to ensure the full supersymmetry of the gauged action (3.9). It remains to show that these conditions admit nontrivial solutions A 1,2,3 (V). This will single out the possible gauge groups G 0 ⊂ E 8 (8) . Recall that in the three dimensional model the choice of gauge group is less restricted than in higher dimensions where the gauge group G 0 ⊂ G is essentially determined by the fact that a maximal subset of the vector fields of the theory must transform in its adjoint representation. Up to this point, we have made no assumptions on the gauge group G 0 ⊂ E 8 (8) , which is characterized by its embedding tensor Θ AB , cf. (3.2). We will now show that all the consistency conditions derived in the previous section may be encoded into a single algebraic equation for the embedding tensor.
According to (3.3), Θ AB transforms in the symmetric tensor product

(248 × 248) sym = 1 + 3875 + 27000 . (4.1)

The explicit projectors of this decomposition have been computed in [34]:

(P 1 ) MN KL = 1 248 η MN η KL ,
(P 3875 ) MN KL = 1 7 δ K (M δ L N ) − 1 56 η MN η KL − 1 14 f P M (K f PN L) ,
(P 27000 ) MN KL = 6 7 δ K (M δ L N ) + 3 217 η MN η KL + 1 14 f P M (K f PN L) . (4.2)
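As a quick plausibility check on (4.1) and (4.2) (a bookkeeping exercise, not part of the derivation), one may verify that the three representations saturate the dimension of the symmetric product and that the coefficients of the η MN η KL and f f structures in the three projectors cancel, so that their sum reduces to the symmetrizer δ K (M δ L N ) . A minimal Python sketch:

```python
# Arithmetic bookkeeping for the projectors (4.2); a sanity check, not part of the construction.
from fractions import Fraction as F

# Dimension of the symmetric tensor product (248 x 248)_sym and of its irreducible parts.
assert 248 * 249 // 2 == 1 + 3875 + 27000

# Coefficients of the three tensor structures (delta-delta, eta-eta, f-f) quoted in (4.2).
coeffs = {
    "P_1":     (F(0),      F(1, 248),  F(0)),
    "P_3875":  (F(1, 7),  -F(1, 56),  -F(1, 14)),
    "P_27000": (F(6, 7),   F(3, 217),  F(1, 14)),
}

# Summing the three projectors must give the symmetrizer delta^K_(M delta^L_N).
total = tuple(sum(c[i] for c in coeffs.values()) for i in range(3))
assert total == (F(1), F(0), F(0))
print("dimensions and projector coefficients are consistent")
```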
Accordingly, Θ MN may be decomposed as
Θ MN = θ η MN + Θ 3875 MN + Θ 27000 MN , (4.3) with Θ 3875 MN = (P 3875 ) MN KL Θ KL , Θ 27000 MN = (P 27000 ) MN KL Θ KL .
The T -tensor as it has been defined in (3.21) is given by a rotation of Θ MN by the matrix V. It may likewise be decomposed
T A|B = T 1 A|B + T 3875 A|B + T 27000 A|B , (4.4) with T 3875 A|B = (P 3875 ) AB CD T C|D = V M A V N B Θ 3875 MN , etc.
where the second equality is due to invariance of the projectors under E 8 (8) . Analogous tensors have been defined in [2] and [6] for the maximally gauged models in D = 4 and D = 5, respectively. Unlike those T -tensors, however, the T -tensor here is quadratic in V, as already emphasized before.
The constraint for the embedding tensor
We have seen that supersymmetry of the gauged Lagrangian in particular implies the set of relations (3.29) for the T -tensor. As discussed above, these relations show that T may only contain the SO(16) representations contained in the 1+3875 of E 8 (8) . It follows that equations (3.29) can be solved for A 1,2,3 if and only if
T 27000 A|B = 0 ⇐⇒ Θ 27000 AB = 0 . (4.5)
This is a set of linear algebraic equations for the embedding tensor Θ AB . We stress once more the remarkable fact that the equations (3.29) combine into an E 8(8) covariant condition for the T -tensor which makes it possible to translate these equations into a condition for the constant tensor Θ. In particular, each single equation from (3.29) yields an SO(16) covariant restriction on the T -tensor (3.21) which already implies the full set of relations (3.29), if it is to be satisfied for all E 8 (8) valued matrices V.
We shall show in the following sections that (4.5) not only reproduces the linear equations (3.29) but indeed implies the complete set of consistency conditions (including the differential and quadratic ones) identified in the last chapter 4 .
Linear identities
Making use of the explicit form of the projectors (4.2), the condition (4.5) is equivalent to
Θ IJ,A = 1 7 (Γ I Γ L ) AB Θ B,LJ , Θ A,B = 1 96 Γ IJKL AB Θ IJ,KL + θ δ AB ,(4.6)
and likewise for T . These equations contain the complete set of linear identities among different components of the T -tensor. Once they are satisfied, the T -tensor may entirely be expressed in terms of the tensors A 1,2,3 as found in (3.29) above:
T IJ|KL = 2 δ I[K A L]J 1 + 1 64 Γ IJKL AḂ AȦḂ 3 , T IJ|A = Γ [I AȦ A J]Ȧ 2 , T A|B = 1 6144 Γ IJKL AB Γ IJKL AḂ AȦḂ 3 + θ δ AB . (4.7)
These equations may be inverted and give the solution for the tensors A 1,2,3 in terms of the T -tensor:
A IJ 1 = 8 7 θ δ IJ + 1 7 T IK|JK , A IȦ 2 = − 1 7 Γ J AȦ T IJ|A , AȦḂ 3 = 2θ δȦḂ + 1 48 Γ IJKL AḂ T IJ|KL . (4.8)
Differential identities
With the linear identities derived in the last section we may now compute the variation of the tensors A 1,2,3 when V is varied. Since the matrix V lives in the adjoint representation, its variation along an invariant vector field Σ A is given by
δV M B δΣ A = f B CA V M C =⇒ δV M IJ δΣ A = − 1 2 Γ IJ AB V M B , δV M B δΣ A = − 1 4 Γ IJ AB V M IJ . (4.9)
From (4.8) we then obtain
δA IJ 1 δΣ A = 1 14 Γ IK AB T KJ|B + Γ JK AB T KI|B , δA IȦ 2 δΣ A = 1 14 Γ J BȦ Γ IJ AC T B|C + 1 2 Γ M N AB T IJ|M N , δAȦḂ 3 δΣ A = − 1 48 Γ IJKL AḂ Γ KL AB T IJ|B .
Rewriting the expressions on the r.h.s. in terms of the tensors A 1,2,3 by means of (4.7) we get
δA IJ 1 δΣ A = Γ (I AȦ A J)Ȧ 2 , δA IȦ 2 δΣ A = 1 2 Γ M AȦ A IM 1 + Γ I AḂ AȦḂ 3 − Γ I BȦ T A|B , δAȦḂ 3 δΣ A = 1 48 Γ IKM Ṅ AḂ Γ KM N AĊ A IĊ 2 . (4.10)
This reproduces equations (3.24) and (3.30) from the last chapter. In particular, we obtain the covariant derivatives of the tensors A 1,2
D µ A IJ 1 = Γ (I AȦ A J)Ȧ 2 P A µ , D µ A IȦ 2 = 1 2 Γ M AȦ A IM 1 + Γ I AḂ AȦḂ 3 − Γ I BȦ T A|B P A µ ,(4.11)
which coincide with equations (3.22) found before. The variation (4.10) further allows to compute the variation of the scalar potential (3.12)
δ δΣ A A IJ 1 A IJ 1 − 1 2 A IȦ 2 A IȦ 2 = 1 2 Γ M AȦ 3A M N 1 A NȦ 2 − AȦḂ 3 A MḂ 2 ,
which has also been used in the last chapter. Together with the quadratic identity (4.20) to be derived below, this yields the condition for stationary points of the potential
δW δΣ A = 0 ⇐⇒ 3 A IM 1 A MȦ 2 = AȦḂ 3 A IḂ 2 . (4.12)
Obviously, a sufficient condition for stationarity is A IȦ 2 = 0 .
Quadratic identities
So far, we have exploited the projector condition (4.5) to derive linear identities in T A|B . However, additional information stems from the fact that the tensor Θ MN is built from projectors onto subgroups, cf. (3.2). This can be used to derive further identities quadratic in the tensors A 1,2,3 . As we have seen in the previous chapter, identities of this type are also needed to ensure supersymmetry of the gauged theory.
Since Θ MN projects onto a subgroup G 0 ⊂ G, it satisfies:
Θ K(M f N ) KL Θ LP = 0 ,(4.13)
which follow from closure of G 0 and the antisymmetry of the structure constants. Invariance of the structure constants then implies
Θ mn V n C f CD (A T B)|D = 0 . (4.14)

Evaluating this identity along the compact directions and contracting indices gives

4 V m N (I T K)M |M N + Γ IM AB V m A T KM |B + Γ KM AB V m A T IM |B = 0 ,

where the index m is projected onto the subalgebra g 0 . Inserting (4.7) yields

V m A Γ (I AȦ A K)Ȧ 2 = V m IM A M K 1 + V m KM A M I 1 , (4.15)
and thus the identity (3.18), required above for closure of the supersymmetry algebra in the gauged theory. If we contract this equation with V n JK Θ mn , symmetrize in (IJ) and once more insert (4.7), we obtain
A IK 1 A KJ 1 − 1 2 A IȦ 2 A JȦ 2 = 1 16 δ IJ A KL 1 A KL 1 − 1 2 A KȦ 2 A KȦ 2 .
(4.16)

This gives already the quadratic identity (3.25). If on the other hand we contract (4.15) with Γ K BȦ V n B Θ mn , we obtain after inserting (4.7)
1 64 Γ IKM Ṅ CḊ Γ M Ṅ AḂ A KḂ 2 AĊḊ 3 = −32 A IN 1 A NȦ 2 + 2 (Γ I Γ K )ȦḂ A KN 1 A NḂ 2 + 10 A IḂ 2 AȦḂ 3 − (Γ I Γ K )ȦḂ A KĊ 2 AḂĊ 3 − 16 θ A IȦ 2 . (4.17)
Evaluating (4.14) for (A, B) = ([IJ], A) and contracting with Γ J AȦ leads to

1 6 (Γ J Γ M N KL ) AȦ V m A T M N |KL − 1 12 (Γ M N KL Γ J ) AȦ V m A T M N |KL =
= 4 7 (Γ K Γ M N ) AȦ V m M N T JK|A − 16 7 Γ K AȦ V m JM T M K|A + 8 7 Γ K AȦ V m A T JM |M K + 1 14 Γ J AȦ V m A T M N |M N , (4.18)

again, if the index m is projected onto the subalgebra g 0 . To obtain the desired identity, we contract this equation with V n IJ Θ mn and insert (4.7). After some calculation we arrive at

1 64 Γ IKM Ṅ CḊ Γ M Ṅ AḂ A KḂ 2 AĊḊ 3 = 64 A IN 1 A NȦ 2 − 4 (Γ I Γ K )ȦḂ A KN 1 A NḂ 2 − 22 A IḂ 2 AȦḂ 3 + (Γ I Γ K )ȦḂ A KĊ 2 AḂĊ 3 − 16 θ A IȦ 2 . (4.19)
Equating (4.17) and (4.19), we finally obtain
3A IJ 1 A JȦ 2 − A IḂ 2 AȦḂ 3 = 1 16 (Γ I Γ J )ȦḂ 3A JK 1 A KḂ 2 − A JĊ 2 AḂĊ 3 . (4.20)
We have thus shown that the condition (4.5) together with the fact that Θ AB projects onto a subalgebra implies the quadratic identities (4.16) and (4.20) which coincide with (3.25), (3.26) found above. Altogether, we recover in this fashion all the identities required in Chapter 3 from the single projector condition (4.5) for the embedding tensor Θ AB .
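The way these quadratic identities descend from closure and invariance alone can be illustrated on a toy model. The following sketch is not from the paper: it replaces G 0 ⊂ E 8(8) by so(3) ⊂ so(4), builds the invariant metric and structure constants from explicit matrices, takes Θ to be the restriction of the metric to the subalgebra, and checks numerically that the analogue of the closure identity (4.13) vanishes.

```python
# Toy numerical check of the closure identity (4.13) for an embedding tensor:
# so(3) inside so(4) plays the role of G_0 inside E_8(8). Illustration only.
import itertools
import numpy as np

# Basis of so(4): antisymmetric matrices labelled by index pairs (a,b), a<b.
pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
gens = []
for a, b in pairs:
    m = np.zeros((4, 4))
    m[a, b], m[b, a] = 1.0, -1.0
    gens.append(m)
dim = len(gens)

# Invariant metric eta_MN = Tr(t_M t_N) and its inverse (normalization irrelevant).
eta = np.array([[np.trace(x @ y) for y in gens] for x in gens])
eta_inv = np.linalg.inv(eta)

# Structure constants from [t_M, t_N] = f_MN^P t_P.
f = np.zeros((dim, dim, dim))
for M, N in itertools.product(range(dim), repeat=2):
    comm = gens[M] @ gens[N] - gens[N] @ gens[M]
    f[M, N, :] = eta_inv @ np.array([np.trace(comm @ t) for t in gens])

# Embedding tensor: restriction of eta to the so(3) subalgebra spanned by
# rotations among the first three directions.
sub = [i for i, (a, b) in enumerate(pairs) if b < 3]
theta = np.zeros((dim, dim))
for i, j in itertools.product(sub, repeat=2):
    theta[i, j] = eta[i, j]
theta_up = eta_inv @ theta @ eta_inv                 # Theta with both indices raised

f_low = np.einsum('mnq,qp->mnp', f, eta)             # f_{MNP}
f_mix = np.einsum('kq,nql->nkl', eta_inv, f_low)     # f_N^K_L

# expr[M,N,P] = Theta_{KM} f_N^K_L Theta^{LP}; its (M,N)-symmetric part must vanish.
expr = np.einsum('km,nkl,lp->mnp', theta, f_mix, theta_up)
sym = expr + expr.transpose(1, 0, 2)
print("max |Theta f Theta|_sym =", np.max(np.abs(sym)))   # numerically zero
```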
Admissible gauge groups G 0
Having reduced the consistency conditions required by local supersymmetry to a set of algebraic conditions (4.5) for the embedding tensor of the gauge group G 0 ⊂ G, we must now ascertain that this condition admits non-trivial solutions and classify them. This is the objective of the present section. As we will see the variety of solutions of (4.5), each of which gives rise to a maximally supersymmetric gauged supergravity, is far richer than in dimensions D ≥ 4.
The power of equation (4.5) is based on its formulation as a single projector condition in the tensor product decomposition (4.1). This permits the construction of solutions by purely group theoretical means. To demonstrate that these methods also clarify the structure of the T -identities in D ≥ 4, we derive the analog of (4.5) to re-obtain the results of [2] and [6]. Group theoretical arguments then show immediately that the gauge groups SO(8) and SO(6), respectively, solve the relevant equations. In particular, this provides a unifying argument for the consistency of all the noncompact gaugings found subsequently in [35,36,6].
The analysis for three dimensions turns out to be more involved, but extending the above arguments we arrive at a variety of admissible gauge groups. There is a regular series of gauge groups SO(p, 8− p)× SO(p, 8− p) including the maximal compact SO(8)×SO (8), and several exceptional noncompact gauge groups, summarized in Table II below. Still this is not a complete classification of admissible gauge groups, as we restrict the analysis of compact and noncompact gauge groups to the maximal subgroups of SO(16) and E 8(8) , respectively. We leave the exploration of smaller rank gauge groups to future work.
T -identities and gauge groups in higher dimensions
As a "warm-up" let us first apply our techniques to the gauged maximal supergravities in D = 4, 5. This will allow us to shortcut the derivation of the (linear) T -identities given in the original work.
D = 4
Like (4.4), the D = 4 T -tensor is obtained from a constant G 0 -invariant tensor Θ by a field dependent rotation with the matrix V ∈ E 7(7) in the fundamental representation. The constant tensor Θ there transforms in the product of the adjoint and the fundamental representation of E 7(7) ,

56 × 133 = 56 + 912 + 6480 , (5.1)

such that T is cubic rather than quadratic in the matrix entries of V.
Computations similar to those presented in the last chapter then show that full supersymmetry of the gauged Lagrangian is equivalent to
T = T 912 ⇐⇒ Θ = Θ 912 , (5.2)
providing the analogue of (4.5). It is now straightforward to see that G 0 = SO (8) indeed gives a solution to (5.2): consider the decomposition of (5.1) under SO (8) As the singlets appear only in the 912, any SO(8) invariant tensor in (5.1) automatically satisfies (5.2). The same argument proves the consistency of the noncompact SO(p, 8 − p) gaugings found in [36]. As shown in [38] equation (5.2) indeed contains no other solutions than those found in [2,36].
D = 5
For D = 5, the constant tensor Θ transforms in the product of the adjoint and the fundamental representation of E 6(6) ,

27 × 78 = 27 + 351 + 1728 , (5.4)

and full supersymmetry of the gauged Lagrangian requires

T = T 351 ⇐⇒ Θ = Θ 351 . (5.5)

Under SO(6), the relevant representations decompose as

351 → 1 + 2 · 6 + 2 · 10 + 2 · 10 + 4 · 15 + . . . ,
1728 → 10 · 6 + 2 · 10 + 2 · 10 + 9 · 15 + . . . . (5.6)

Now the singlet appears only in the 351, hence there is just one SO(6) invariant tensor in (5.4) which automatically satisfies (5.5). As before, this argument generalizes to all the noncompact gauge groups found in [6].
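As a trivial cross-check of the representation content quoted in (5.1) and (5.4) (simple arithmetic, not part of the argument), the dimensions of the irreducible pieces add up to the dimensions of the tensor products:

```python
# Dimension bookkeeping for the D=4 and D=5 tensor products quoted in (5.1) and (5.4).
assert 56 * 133 == 56 + 912 + 6480      # E7(7): 56 x 133
assert 27 * 78 == 27 + 351 + 1728       # E6(6): 27 x 78
print("D=4 and D=5 tensor product dimensions check out")
```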
Compact gauge groups
Let us now come back to (4.5). We will first consider compact gauge groups G 0 ⊂ SO(16). Their embedding tensors satisfy

Θ IJ,A = 0 = Θ A,B . (5.7)

The remaining components Θ IJ,KL then decompose under SO(16) into the representations 1 + 135 + 1820 + 5304. According to (3.31), the 5304 is part of the 27000 and must vanish for (4.5) to be satisfied. From (4.6) it further follows that the 1 and the 1820 coincide with the corresponding parts in Θ A,B and thus must vanish due to (5.7). Hence, for compact G 0 , only the 135 representation survives, and the condition (4.5) reduces to

Θ IJ,KL = δ I[K Ξ L]J , with Ξ IJ = 7 2 Θ IK,JK , Ξ II = 0 . (5.9)
The tracelessness of Θ in particular rules out any simple compact gauge group.
In principle, the elementary form of the constraint (5.9) should allow a complete classification of the possible compact gauge groups; however, in the following, we restrict attention to the maximal subgroups of SO(16). For a subgroup of the form SO(p)×SO(16−p), the invariant tensor Ξ IJ is given by

Ξ ij = (16−p) δ ij , Ξ i'j' = −p δ i'j' , (5.11)

where i, j = 1, . . . , p and i', j' = p + 1, . . . , 16 denote the splitting of the SO(16) vector indices I, and the relative factor between Ξ ij and Ξ i'j' is determined from tracelessness. By (5.9), the tensor Θ IJ,KL satisfying (4.5) is

Θ ij,kl = (16−p) δ ij kl , Θ i'j',k'l' = −p δ i'j' k'l' , Θ ij',kl' = 1 2 (8−p) δ ik δ j'l' .

However, due to the nonvanishing mixed components Θ ij',kl' , this tensor coincides with the embedding tensor of SO(p)×SO(16−p) if and only if p = 8. Hence we have shown that the only maximal subgroup of SO(16) whose embedding tensor satisfies the condition (4.5) is
G 0 = SO(8)×SO(8) ⊂ SO(16) ,(5.12)
where the ratio of coupling constants of the two factors is g 1 /g 2 = −1; in particular the trace part θ of Θ AB vanishes. Combining this with the results of the previous chapters, we have thus shown the existence of a maximally supersymmetric gauged supergravity with compact gauge group G 0 = SO(8) × SO(8). Under G 0 , the scalar degrees of freedom decompose as

120 → (1, 28)+(28, 1)+(8 s , 8 c ) , 128 → (8 v , 8 v )+(8 c , 8 s ) . (5.14)

Amongst other things we here recognize the standard decomposition of the on-shell IIA supergravity multiplets in terms of left and right moving string states.
Regular noncompact gauge groups
In order to identify the allowed noncompact gauge groups, we first recall that for the maximal gauged supergravity in D = 4, several noncompact gaugings were found by analytic continuation [35,36]. The noncompact gauge groups are thus alternative real forms of the complexified gauge group SO(8, C), and the consistency of the noncompact gaugings was basically a consequence of the consistency of the original theory [2] with compact gauge group. The results of the last section suggest that analogous gaugings should exist for the different real forms of (5.12). The complexification of (5.12) is SO(8, C)×SO(8, C). Its real forms which are also contained in E 8(8) are given by

G 0 = SO(p, 8−p) (1) ×SO(p, 8−p) (2) , for p = 1, . . . , 4 . (5.15)

They are embedded in E 8(8) via the maximal noncompact subgroup SO(8,8).
Therefore the latter group is the analogue of the subgroups SL(8, R) ⊂ E 7 (7) in D = 4 and SL(6, R)×SL(2, R) ⊂ E 6(6) in D = 5. To further illustrate the embedding, we have denoted the two factors of G 0 by superscripts (1), (2) whereas we denote the two factors of (5.12) by subscripts L,R. The maximal compact subgroup of (5.15) is given by
H 0 = H (1) × H (2) ≡ SO(p) (1) L ×SO(8−p) (1) R × SO(p) (2) R ×SO(8−p) (2) L , (5.16) with H (1) ⊂ SO(p, 8−p) (1) , SO(p) (1) L ×SO(8−p) (2) L ⊂ SO(8) L , H (2) ⊂ SO(p, 8−p) (2) , SO(p) (1) R ×SO(8−p) (2) R ⊂ SO(8) R .
The embedding of H 0 into SO(8) L × SO(8) R is the standard one, without any triality rotation. In other words, the 8 v of SO(8) L decomposes into (p, 1)+(1, 8−p) under SO(p) (1) L ×SO(8−p) (2) L , etc. Consistency of the gauged theories with noncompact gauge groups (5.15) could in principle be shown in analogy with [36,39] by the method of analytic continuation. Alternatively, their consistency follows from an algebraic argument along the lines of the last section by use of our form of the consistency condition (4.5). This gives the analogue of the noncompact gaugings found in higher dimensions [36,6].
Exceptional noncompact gauge groups
Next, we discuss noncompact gauge groups, which unlike the groups identified in (5.15) do not share the complexification with any compact subgroup contained in E 8(8) . Their existence is again a consequence of the absence of any a priori restriction on the number of vector fields in three dimensions.
These noncompact solutions to (4.5) may be found by a purely group theoretical argument. As an example, consider the maximal subgroup G 0 = G 2(2) ×F 4(4) . Under G 0 the adjoint representation of E 8(8) decomposes as 248 → (14, 1) + (1, 52) + (7, 26). Since the E 8(8) representations 3875 and 27000 each contain precisely one singlet in this decomposition, there is (up to normalization) exactly one combination of the Cartan-Killing forms of the two simple factors,

Θ MN = g 1 η (1) MN + g 2 η (2) MN with a fixed ratio g 1 /g 2 , (5.19)

whose 27000 part vanishes, i.e. which satisfies (4.5). This is the embedding tensor of G 0 = G 2(2) ×F 4(4) with a fixed ratio of coupling constants between the two factors, which solves (4.5) and (4.13). The results of the last chapter then prove the existence of a maximally supersymmetric gauged theory with gauge group G 2(2) ×F 4(4) .

The same argument may be applied to other noncompact subgroups of E 8(8) . A closer inspection of the above proof reveals that only two ingredients were needed, namely (i) that the gauge group G 0 consists of two simple factors and (ii) that the E 8(8) representations 3875 and 27000 each contain precisely one singlet in the decomposition under G 0 . As it turns out, this requirement is also met by the noncompact groups E 7(7) ×SL(2), E 6(6) ×SL(3), and all their real forms which are contained in E 8(8) . The list of exceptional noncompact subgroups passing this test, together with their maximal compact subgroups, is displayed in Table II. There are also real forms of these exceptional gauge groups - the compact forms of E d for d = 6, 7, 8, and the real forms E 8(−24) , E 7(−25) and E 6(−26) - which are not contained in E 8(8) and thus do not appear in this list. However, every real form that may be embedded in E 8(8) gives rise to a maximally supersymmetric gauged supergravity. The "extremal" noncompact solution to (4.5) is given by the group G 0 = E 8(8) itself, in which case Θ AB reduces to the Cartan-Killing form η AB .

Table II: exceptional noncompact gauge groups G 0 = G (1) ×G (2) ⊂ E 8(8) and their maximal compact subgroups H 0 = H (1) ×H (2) . The subscripts L and R refer to the AdS supergroups G L ×G R associated to the maximally supersymmetric groundstates of these theories, see Chapter 6.

G 0 = G (1) ×G (2)        maximal compact subgroup H 0 = H (1) ×H (2)
G 2(2) ×F 4(4)            SU(2) L ×SU(2) R × SU(2) L ×USp(6) R
G 2 ×F 4(−20)             (G 2 ) L ×SO(9) R
E 6(6) ×SL(3)             USp(8) L × SU(2) L
E 6(2) ×SU(2, 1)          SU(6) L ×SU(2) R × SU(2) R ×U(1) L
E 6(−14) ×SU(3)           SO(10) L ×U(1) R ×SU(3) R
E 7(7) ×SL(2)             SU(8) L × U(1) L
E 7(−5) ×SU(2)            SO(12) L ×SU(2) R ×SU(2) R
E 8(8)                    SO(16) L
To complete the construction of the theories with gauge groups given in Table II, it remains to compute the ratio of coupling constants between the two factors of G 0 which came out to be fixed to a specific value in (5.19). To this end, let us consider the general situation of a gauge group with two simple factors G 0 = G (1) × G (2) , such that its maximal compact subgroup likewise factors as H 0 = H (1) ×H (2) . Denote the embedding tensor of G 0 by
gΘ MN = g 1 η (1) MN + g 2 η (2) MN ,(5.20)
where η (1), (2) are the embedding tensors of G (1), (2) , respectively, and assume that (5.20) satisfies (4.5). Equation (5.19) was a particular case satisfying these assumptions. Contracting (5.20) with η MN yields
gθ dim E 8(8) = g 1 dim G (1) + g 2 dim G (2) .
where the l.h.s. follows from (4.3). On the other hand, contracting (5.20) with η IJ,KL over the compact part of E 8 (8) gives
gθ dim SO(16) = g 1 dim H (1) + g 2 dim H (2) ,
where the l.h.s. here follows from (4.6) -and is a consequence of the fact that due to (4.5) the only SO(16) singlet in Θ MN is given by the first term in (4.3). From the last two equations one may extract the coupling constants g 1 , g 2 of the two factors of the gauge group. Their ratio is
g 1 g 2 = − 15 dim G (2) − 31 dim H (2) 15 dim G (1) − 31 dim H (1) . (5.21)
With the gauge groups and their compact subgroups given in Table II we then immediately obtain the ratios of coupling constants for all these groups. In particular, no degeneration occurs where this ratio would vanish or diverge. In Table I displayed in the introduction, we have presented a list of all the noncompact admissible subgroups G 0 ⊂ E 8(8) , together with their ratio of coupling constants.
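Evaluating (5.21) for the groups of Table II is elementary arithmetic. The sketch below is an illustration rather than part of the text: the group and subgroup dimensions are standard values inserted by hand, and in each pair the first entry is the factor whose coupling appears in the numerator of the quoted ratio.

```python
# Evaluate the coupling constant ratio (5.21),
#   g1/g2 = -(15 dim G2 - 31 dim H2) / (15 dim G1 - 31 dim H1),
# for the gauge groups of Table II. Dimensions are standard values inserted by hand.
from fractions import Fraction as F

def ratio(dG1, dH1, dG2, dH2):
    return -F(15 * dG2 - 31 * dH2, 15 * dG1 - 31 * dH1)

cases = {
    # name: (dim G(1), dim H(1), dim G(2), dim H(2))
    "SO(8) x SO(8)":      (28, 28, 28, 28),
    "SO(4,4) x SO(4,4)":  (28, 12, 28, 12),
    "G2(2) x F4(4)":      (14,  6, 52, 24),    # g_G2 / g_F4
    "G2 x F4(-20)":       (14, 14, 52, 36),
    "SL(3) x E6(6)":      ( 8,  3, 78, 36),    # g_A2 / g_E6
    "SU(2,1) x E6(2)":    ( 8,  4, 78, 38),
    "SU(3) x E6(-14)":    ( 8,  8, 78, 46),
    "SL(2) x E7(7)":      ( 3,  1, 133, 63),   # g_A1 / g_E7
    "SU(2) x E7(-5)":     ( 3,  3, 133, 69),
}
for name, dims in cases.items():
    print(f"{name:>18}: g1/g2 = {ratio(*dims)}")
# Expected output: -1 for the SO(p,8-p) series, -3/2 for G2 x F4, -2 for A2 x E6,
# -3 for A1 x E7, independently of the real form, as stated in Table I.
```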
Remarkably, the ratios as determined by (5.21) come out to be independent of the particular real form for each of these exceptional noncompact groups. This suggests that the theories whose gauge groups are different real forms of the same complexified group may be related by analytic continuation, in a similar fashion as the SO(p, 8−p) gaugings of the D = 4 theory are related via SO(8, C) [35,36,39].
Here, the analytic continuation would have to pass through the complex group E 8 (C). This concludes our discussion of admissible gauge groups. We note that in addition to the groups identified in this chapter there should also exist non-semisimple gaugings analogous to the theories constructed in [35,36,39,40]. We leave their exploration and complete classification for future study.
Stationary points with maximal supersymmetry
The point of vanishing scalar fields, i.e. V = I, plays a distinguished role: it is a stationary point with maximal supersymmetry for all the theories we have constructed. Recall that the condition for stationarity was already spelled out in (4.12). At V = I, the gauge group G 0 is broken to its maximal compact subgroup H 0 . For the compact gauge group (5.12), the tensor A IȦ 2 vanishes at this point, since Θ has no contribution in the noncompact directions, cf. (4.8) and (5.7). Hence, (4.12) is satisfied; the compact gauged theory has a G 0 invariant stationary point at V = I. For the noncompact real forms (5.15), the decomposition (5.14) implies that there is no H 0 -invariant tensor in the tensor product 16×128; hence, A IȦ 2 vanishes also in these theories at V = I. The same argument works also for the exceptional noncompact gauge groups from Table I. In summary, all the three-dimensional theories we have constructed share the stationary point V = I. If we denote by ν = dim G 0 and κ = dim H 0 the dimension of the gauge group and its maximal compact subgroup, respectively, the field equations (3.32) imply that for V = I the vector fields split into ν −κ massive self-dual vectors and a H 0 -Chern-Simons theory of κ vector fields which do not carry propagating degrees of freedom. In this way, the erstwhile topological vector fields corresponding to the noncompact directions in G 0 acquire a mass term by a Brout-Englert-Higgs like effect as observed in [41]. Dropping the massive vector fields as well as the matter fermions, the theory then reduces to a H 0 CS theory, coupled to supergravity. Since the AdS 3 (super-)gravity itself allows the formulation as a CS theory of the AdS group SO(2, 2) [11,25], the resulting theory is a CS theory with connection on a superextension of H 0 × SO(2, 2). We shall determine these supergroups in the following.
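To make this counting explicit (an illustration; apart from the E 8(8) case discussed at the end of this chapter, the numbers below are simply obtained from standard group dimensions), one can tabulate ν = dim G 0 , κ = dim H 0 and the resulting number ν − κ of massive self-dual vectors for a few of the admissible gauge groups:

```python
# Counting of vector fields at V = I: nu = dim G0 topological CS vectors, of which
# nu - kappa (kappa = dim H0) become massive self-dual by the Brout-Englert-Higgs
# like effect described in the text. Dimensions are standard values inserted by hand.
groups = {
    "SO(8) x SO(8)":      (56, 56),
    "SO(4,4) x SO(4,4)":  (56, 24),
    "G2(2) x F4(4)":      (14 + 52, 6 + 24),
    "E6(6) x SL(3)":      (78 + 8, 36 + 3),
    "E7(7) x SL(2)":      (133 + 3, 63 + 1),
    "E8(8)":              (248, 120),
}
for name, (nu, kappa) in groups.items():
    print(f"{name:>18}: nu = {nu:3d}, kappa = {kappa:3d}, massive vectors = {nu - kappa}")
# For the compact gauge group no vectors become massive; for E8(8) all 128
# noncompact directions do, in agreement with the discussion at the end of this chapter.
```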
In order to analyze the residual supersymmetries at the stationary point V = I in a little more detail, we consider the Killing spinor equations, derived from (2.5), (3.13) in absence of the vector fields:
0 ! = ∂ µ ǫ I + 1 2 iγ a A µ a δ IJ − 2g e µ a A IJ 1 ǫ J , (6.1) 0 ! = A IȦ 2 ǫ I . (6.2)
Adapting the arguments of [6] to the present case, it may be shown that (6.1) in fact implies (6.2). Namely, comparing (6.1) to (2.18) we find that every solution to (6.1) corresponds to the product of an AdS 3 Killing spinor and an eigenvector ǫ I 0 of the real symmetric matrix A IJ 1 ; the eigenvalue α i of A IJ 1 is related to the AdS radius by
2g |α i | = m . (6.3)
On the other hand, the Einstein field equations derived from (3.9) imply that
R µν = 4W 0 g µν ,(6.4)
where W 0 is the value of the potential (3.12) at the critical point. From (2.17) we infer the relation m 2 = 2W 0 . Given the eigenvector ǫ I 0 of A IJ 1 with eigenvalue α i , we contract (3.25) with ǫ I 0 to obtain
\big( 2 g^2 \alpha_i^2 - W_0 \big)\, \epsilon_0^I \;=\; g^2\, A_2^{I\dot A} A_2^{J\dot A}\, \epsilon_0^J \;. \qquad (6.5)
If α i satisfies (6.3), this equation indeed implies (6.2). As in higher dimensions, the number of residual supersymmetries therefore corresponds to the number of eigenvalues α i of A IJ 1 satisfying (6.3). Conversely, equation (6.5) shows that A IȦ 2 = 0 is a sufficient condition for a maximally supersymmetric ground state: all eigenvalues of the tensor A IJ 1 then satisfy (6.3), splitting into 16 = n L +n R with positive and negative sign, respectively. Altogether, we have thus shown that all the theories with noncompact gauge groups from (5.15) and Table I possess a maximally supersymmetric ground state at V = I. This is in marked contrast to the higher-dimensional models, where several of the noncompact gaugings do not even admit any stationary points [35,36,39].
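To make the counting of residual supersymmetries concrete, here is a small Python sketch (again not from the paper) that, given a set of eigenvalues α_i of A 1 and the value W 0 of the potential at a critical point, counts how many eigenvalues satisfy 2g|α_i| = m with m² = 2W 0. The numerical values below are purely illustrative placeholders for the maximally supersymmetric case A 2 = 0.

```python
import numpy as np

def count_residual_susy(alpha_eigs, W0, g=1.0, rtol=1e-6):
    """Number of eigenvalues alpha_i of A_1^{IJ} obeying 2 g |alpha_i| = m with m^2 = 2 W0,
    i.e. the number of residual supersymmetries of the stationary point (eqs. (6.3), (6.4))."""
    m = np.sqrt(2.0 * W0)
    alpha_eigs = np.asarray(alpha_eigs, dtype=float)
    return int(np.sum(np.isclose(2.0 * g * np.abs(alpha_eigs), m, rtol=rtol)))

# Illustrative placeholder: the maximally supersymmetric case A_2 = 0, where (6.5) gives
# 2 g^2 alpha_i^2 = W0 for every eigenvalue, so all 16 supersymmetries are preserved.
g, a = 1.0, 0.5
eigs = np.array([+a] * 9 + [-a] * 7)       # a 16 = n_L + n_R split of the signs
W0 = 2.0 * g**2 * a**2
print(count_residual_susy(eigs, W0, g))    # -> 16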
Not unexpectedly, the background isometries of these groundstates are superextensions of the three-dimensional AdS group SO(2, 2). Since SO(2, 2) = SU (1, 1) L × SU (1, 1) R is not simple, they are in general direct products of two simple supergroups G L ×G R . Accordingly, the sixteen supersymmetry generators split into N = (n L , n R ), such that the groups G L,R are n L,R superextensions of the SU (1, 1) L,R with bosonic subgroups
G L,R ⊃ H L,R ×SU (1, 1) L,R . (6.6)
A list of possible factors G L,R based on the classification [42,43] is given in [19].
To determine the AdS supergroups G L ×G R corresponding to the maximally supersymmetric ground state of the theory with gauge group G 0 , one must identify the groups H L,R among the simple factors of its maximal compact subgroup H 0 , such that H 0 = H L × H R . This basically follows from the decomposition of the sixteen supercharges under H 0 . Note that H L is not necessarily entirely contained in one of the two factors of the semisimple gauge group G 0 . Rather, we find that in the two factorizations
H (1) ×H (2) = H 0 = H L ×H R ,(6.7)
the various subfactors are distributed in different ways among the two factors. This has been made explicit in (5.16) and Table II, respectively, by designating the simple factors of H 0 with the corresponding sub- and superscripts. In fact, the only gauge groups for which the two factorizations (6.7) coincide are the compact group (5.12), the group G 2 ×F 4(−20) from Table II, and the gauge group E 8(8) itself. For the noncompact gauge groups E 6(6) × SL(3), E 7(7) × SL(2), and E 8(8) , we find H 0 = H L , i.e. G R reduces to its purely bosonic AdS part SU (1, 1) R . Another particular situation arises for the noncompact gauge group SO(4, 4) × SO(4, 4), where the supergroups G L,R themselves are not simple but direct products of two supergroups, respectively. The complete list is given in Table III, where we have summarized the background isometries of the maximally supersymmetric stationary point V = I for all the three-dimensional gauged maximal supergravities constructed in this article. Let us emphasize that this table presumably represents only the tip of the iceberg as we expect there to be a wealth of stationary points with partially broken supersymmetry for "small" gauge groups G 0 ⊂ E 8(8) . On the other hand, for "large" gauge groups stationary points will be more scarce. As a special example, consider the extremal theory with noncompact gauge group E 8(8) , for which the potential becomes just a (cosmological) constant, and does not exhibit any stationary points besides the trivial one. In this case V = I may always be achieved by gauge fixing the local E 8(8) symmetry. Even after this gauge fixing, by which the scalar fields have been eliminated altogether, there still remains the "composite" local SO(16) invariance rendering 120 vectors out of the 248 vector fields unphysical. Accordingly, the theory in this gauge may be interpreted as an SO(16) Chern-Simons theory coupled to 128 massive selfdual vector fields, each of which represents one physical degree of freedom. In other words, with respect to the ungauged theory, the propagating degrees of freedom have been shifted from the scalar fields to massive selfdual vectors. This is in fact an extremal case of the mechanism required for gauging higher dimensional supergravities in odd dimensions [44,3,4,6] whereby massless k − 1 forms in a 2k + 1 dimensional space-time upon gauging turn into massive selfdual k-forms. As discussed above, truncating the massive vector fields together with the matter fermions, the theory reduces to the OSp(16|2, R) theory of [11] and reproduces its (16, 0) supersymmetric ground state.
It will be most interesting to study the boundary theories associated with the gauged supergravities. The background isometries given in Table III determine the superconformal symmetries of the theories on the AdS 3 boundary. The chiral algebras are obtained by Hamiltonian reduction of the current algebras based on the AdS 3 supergroups G L and G R , respectively (see [45] for a discussion and a translation table). For instance, the boundary theory of the superextended Chern-Simons theories [11] is described by a super-Liouville action with SO(n) extended superconformal symmetry [46,47]. The maximal gauged supergravities (3.9) then introduce additional scalar and massive vector degrees of freedom, respectively, which propagate in the bulk.
Outlook: a higher dimensional ancestor?
As already pointed out in the introduction, there appears to be no way to obtain the gauged models constructed in this paper by means of a conventional Kaluza-Klein compactification, because the latter would give rise to a standard Yang-Mills-type Lagrangian with a kinetic term for the vector fields, instead of the CS term that was required here. Moreover, D = 11 supergravity does not admit maximally supersymmetric groundstates of the type AdS 3 × M 8 (see e.g. [48]), and even if it did, there simply are no 8-manifolds M 8 whose isometry groups would coincide with the gauge groups G 0 that we have found (since there are no 7-manifolds with these isometries either, the argument a fortiori also excludes type-IIB theory as a possible ancestor). Nonetheless, all these gauged models constitute continuous deformations of the original N = 16 theory of [22], which itself is derivable by a torus reduction of D = 11 supergravity. The situation is therefore quite different from the one in dimensions D ≥ 4 where the gauged theories do emerge via sphere compactifications of D = 11 supergravity. 7 This raises the question of whether there exists a higher-dimensional ancestor theory that would give rise to these theories, and if so, what it might be. While we have no answer to this question at the moment, we would like here to offer some hints.
Obviously, a crucial step in our construction was the introduction "by hand" of up to 248 vector fields B µ M subject to the transformation rules
\delta B_\mu{}^M \;=\; -2\, V^M{}_{IJ}\, \bar\epsilon^I \psi^J_\mu \;+\; i\, \Gamma^I_{A\dot A}\, V^M{}_A\, \bar\epsilon^I \gamma_\mu \chi^{\dot A} \;.
As mentioned before, for the 36 vector fields associated with the 36 commuting nilpotent directions in the E 8(8) Lie algebra, this formula can be derived directly from eleven dimensions [32]. Owing to the on-shell equivalence of vectors and scalars, vector fields can be added with impunity in three dimensions, but in extrapolating this step to eleven dimensions we seem to run into an obstacle, because extra vector fields would normally introduce new and unwanted propagating degrees of freedom. Nevertheless, the evidence for a generalized vielbein in eleven dimensions presented in [52,53,32], and the fact that a consistent gauging in three dimensions based on this extrapolation does exist, prompt us to conjecture that all 248 vector fields introduced here have an eleven-dimensional origin. In [32] it was observed that the physical bosonic degrees of freedom can be assembled into a 248-bein, which is just the lift of the E 8(8) matrix V to eleven dimensions. Assuming that there are indeed 248 vector fields, all bosonic fields would thus naturally fit into a (3+248)-bein, which would also incorporate the three-form degrees of freedom and would replace the original elfbein of D = 11 supergravity. The latter is just an element of the coset space GL(11, R)/SO(1, 10) in a special gauge where the tangent space symmetry is broken to SO(1, 2) × SO(8).
However, an analogous interpretation of the above (3+248)-bein remains to be found. Amongst other things, it would require replacing the action of the global E 8(8) on the 248-bein V M A by some new type of general coordinate transformations, in the same way as GL (11) is replaced by diffeomorphisms in the vielbein description of Einstein's theory [32]. The gauge groups found in the compactification to three dimensions would then emerge as "isometry groups" in a suitable sense. We also note that for the tangent space group we have the embedding SO(1, 2)×SO(16) ⊂ OSp(32), but there is no simple group generalizing GL(11) that would contain GL(3) × E 8 (8) and yield the right number of (bosonic) physical degrees of freedom upon division by OSp(32) (see, however, [54] for an alternative ansatz based on the embedding OSp(32) ⊂ OSp(64|1)).
The challenge is therefore to find a reformulation of D = 11 supergravity in terms of the above (3+248)-bein and an action, which must still describe no more than 128 massless bosonic physical degrees of freedom, despite the presence of new field components in eleven dimensions. The only way to achieve this appears to be via a CS-like action in eleven dimensions that would encompass all degrees of freedom, and thus unify the Einstein-Hilbert and three-form actions of the original theory 8 . In making these speculations we are encouraged by the fact that, at least in three dimensions, the dreibein e µ α , the gravitinos ψ I µ and the vector fields are all governed by CS-type actions.
Appendix A: E 8(8) conventions
The E 8(8) generators t A are split into 120 compact ones X IJ ≡ −X JI and 128 noncompact ones Y A , with SO(16) vector indices I, J, . . . ∈ 16, spinor indices A, B, . . . ∈ 128, and the collective labels A, B, . . . = ([IJ], A), . . .. The conjugate SO(16) spinors are labeled by dotted indices Ȧ, Ḃ, . . .. In this SO(16) basis the totally antisymmetric E 8(8) structure constants f ABC possess the non-vanishing components f IJ,KL,MN = − . . . ; indices are raised and lowered by means of the Cartan-Killing metric

\eta_{AB} \;=\; \tfrac{1}{60}\, \mathrm{Tr}\; t_A t_B \;=\; -\tfrac{1}{60}\, f_A{}^{CD} f_{BCD} \;, \qquad (A.2)

with components η AB = δ AB and η IJ KL = −2δ IJ KL . When summing over antisymmetrized index pairs [IJ], an extra factor of 1/2 is always understood. Explicitly, the commutators are

[X^{IJ}, X^{KL}] = 4\, \delta^{I[K} X^{L]J} \;, \qquad [X^{IJ}, Y^{A}] = -\tfrac{1}{2}\, \Gamma^{IJ}_{AB}\, Y^{B} \;, \qquad [Y^{A}, Y^{B}] = \tfrac{1}{4}\, \Gamma^{IJ}_{AB}\, X^{IJ} \;. \qquad (A.3)

The equivalence of the fundamental and the adjoint representations of E 8(8) plays an important role in our considerations; it is expressed by the relation

V^{-1}\, t_M\, V \;=\; V_M{}^{A}\, t_A \;\;\Longleftrightarrow\;\; V_M{}^{A} \;=\; \tfrac{1}{60}\, \mathrm{Tr}\,\big( t_M\, V\, t^{A}\, V^{-1} \big) \;. \qquad (A.4)

Further formulas concerning the E 8(8) Lie algebra which will be used in this paper can be found in [34,32].

Let us finally point out that in the main text we use collective labels A, B, . . . and M, N , . . . for the E 8(8) matrix V M A defined in (A.4), to distinguish the transformation of these indices under the left and right action of E 8(8) and SO(16), respectively, according to (2.1). Likewise, Θ MN is an E 8(8) tensor whereas T A|B transforms under the local SO(16), cf. (3.21).
Table I: Regular and exceptional admissible gauge groups.
Table II: Exceptional noncompact gauge groups and their maximal compact subgroups.
Table III: Background isometries of the maximally supersymmetric ground states (columns: gauge group G 0 ; N = (n L , n R ); background supergroup G L × G R ).
Footnotes:
1. In particular, we use the metric with signature (+ − −) and three-dimensional gamma matrices with e γ^{µνρ} = −i ǫ^{µνρ}, where ǫ^{012} = ǫ_{012} = 1, and e ≡ det e_µ{}^α is the dreibein determinant.
2. Note that the factor in front of the last term (χγ_µ Γ^{IJ} χ)^2 differs from the one given in [22], as was already noticed in [30].
3. Here and in the following, representations of SO(16) are written with ordinary numbers, while representations of E 8(8) are given in boldface numbers.
4. Let us stress once more that in addition to (4.5), Θ must project onto a subgroup. If that condition is dropped, further solutions to (4.5) can be found, but the T -tensor would then fail to satisfy the quadratic identities of section 4.4.
5. It is only for E 8(8) that the fundamental representation coincides with the adjoint representation and the tensor Θ hence coincides with the embedding tensor of the group G 0 .
6. LiE [37] has been very helpful to quickly determine these decompositions.
7. For the AdS 4 × S 7 compactification this was rigorously shown in [49], while for the AdS 7 × S 4 a complete proof was given more recently [50]. By contrast, the full consistency of the AdS 5 × S 5 truncation of IIB supergravity remains an open problem despite much supporting evidence, see [51] and references therein.
8. We are aware that the idea of reformulating D = 11 supergravity as a CS theory is not entirely new. However, the present ansatz is evidently very different from previous attempts in this direction.
References
[1] H. Nicolai and H. Samtleben, Maximal gauged supergravity in three dimensions, Phys. Rev. Lett. 86 (2001) 1686, [hep-th/0010076].
[2] B. de Wit and H. Nicolai, N = 8 supergravity, Nucl. Phys. B208 (1982) 323.
[3] M. Pernici, K. Pilch, and P. van Nieuwenhuizen, Gauged maximally extended supergravity in seven-dimensions, Phys. Lett. B143 (1984) 103.
[4] M. Pernici, K. Pilch, and P. van Nieuwenhuizen, Gauged N = 8 D = 5 supergravity, Nucl. Phys. B259 (1985) 460.
[5] A. Salam and E. Sezgin, d = 8 supergravity, Nucl. Phys. B258 (1985) 284.
[6] M. Günaydin, L. J. Romans, and N. P. Warner, Compact and noncompact gauged supergravity theories in five-dimensions, Nucl. Phys. B272 (1986) 598.
[7] P. M. Cowdall, On gauged maximal supergravity in six dimensions, JHEP 06 (1999) 018, [hep-th/9810041].
[8] E. Cremmer, B. Julia, and J. Scherk, Supergravity theory in eleven-dimensions, Phys. Lett. 76B (1978) 409.
[9] M. B. Green and J. H. Schwarz, Extended supergravity in ten-dimensions, Phys. Lett. B122 (1983) 143.
[10] J. H. Schwarz and P. C. West, Symmetries and transformations of chiral N = 2, D = 10 supergravity, Phys. Lett. B126 (1983) 301.
[11] A. Achúcarro and P. K. Townsend, A Chern-Simons action for three-dimensional anti-de Sitter supergravity theories, Phys. Lett. B180 (1986) 89.
[12] S. Deser and J. H. Kay, Topologically massive supergravity, Phys. Lett. B120 (1983) 97.
[13] H. Lu, C. N. Pope, and P. K. Townsend, Domain walls from anti-de Sitter spacetime, Phys. Lett. B391 (1997) 39, [hep-th/9607164].
[14] M. Cvetič, H. Lu, and C. N. Pope, Consistent Kaluza-Klein sphere reductions, Phys. Rev. D62 (2000) 064028, [hep-th/0003286].
[15] N. S. Deger, A. Kaya, E. Sezgin, and P. Sundell, Matter coupled AdS 3 supergravities and their black strings, Nucl. Phys. B573 (2000) 275, [hep-th/9908089].
[16] O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri, and Y. Oz, Large N field theories, string theory and gravity, Phys. Rept. 323 (2000) 183, [hep-th/9905111].
[17] D. Z. Freedman, S. S. Gubser, K. Pilch, and N. P. Warner, Renormalization group flows from holography - supersymmetry and a c-theorem, Adv. Theor. Math. Phys. 3 (1999), [hep-th/9904017].
[18] M. Banados, C. Teitelboim, and J. Zanelli, The black hole in three-dimensional space-time, Phys. Rev. Lett. 69 (1992) 1849, [hep-th/9204099].
[19] M. Günaydin, G. Sierra, and P. K. Townsend, The unitary supermultiplets of d = 3 Anti-de Sitter and d = 2 conformal superalgebras, Nucl. Phys. B274 (1986) 429.
[20] E. Cremmer and B. Julia, The SO(8) supergravity, Nucl. Phys. B159 (1979) 141.
[21] B. Julia, Application of supergravity to gravitation theories, in Unified field theories in more than 4 dimensions (V. D. Sabbata and E. Schmutzer, eds.), (Singapore), pp. 215-236, World Scientific, 1983.
[22] N. Marcus and J. H. Schwarz, Three-dimensional supergravity theories, Nucl. Phys. B228 (1983) 145.
[23] B. de Wit, A. Tollstén, and H. Nicolai, Locally supersymmetric D = 3 non-linear sigma models, Nucl. Phys. B392 (1993) 3, [hep-th/9208074].
[24] S. Deser, R. Jackiw, and G. 't Hooft, Three-dimensional Einstein gravity: Dynamics of flat space, Ann. Phys. 152 (1984) 220.
[25] E. Witten, (2+1)-dimensional gravity as an exactly soluble system, Nucl. Phys. B311 (1988) 46.
[26] A. Ashtekar, V. Husain, C. Rovelli, J. Samuel, and L. Smolin, (2+1)-quantum gravity as a toy model for the (3+1) theory, Class. Quant. Grav. 6 (1989) L185.
[27] B. de Wit, H. J. Matschull, and H. Nicolai, Physical states in d = 3, N = 2 supergravity, Phys. Lett. B318 (1993) 115, [gr-qc/9309006].
[28] E. Cremmer, B. Julia, H. Lu, and C. N. Pope, Dualisation of dualities. I, Nucl. Phys. B523 (1998) 73, [hep-th/9710119].
[29] S. Mizoguchi, E 10 symmetry in one-dimensional supergravity, Nucl. Phys. B528 (1998) 238, [hep-th/9703160].
[30] H. Nicolai, The integrability of N = 16 supergravity, Phys. Lett. B194 (1987) 402.
[31] H. Nicolai, The canonical structure of maximally extended supergravity in three dimensions, Nucl. Phys. B353 (1991) 493.
[32] K. Koepsell, H. Nicolai, and H. Samtleben, An exceptional geometry for D = 11 supergravity?, Class. Quant. Grav. 17 (2000) 3689, [hep-th/0006034].
[33] B. de Wit and H. Nicolai, The parallelizing S 7 torsion in gauged N = 8 supergravity, Nucl. Phys. B231 (1984) 506.
[34] K. Koepsell, H. Nicolai, and H. Samtleben, On the Yangian [Y(e 8 )] quantum symmetry of maximal supergravity in two dimensions, JHEP 04 (1999) 023, [hep-th/9903111].
[35] C. M. Hull, Noncompact gaugings of N = 8 supergravity, Phys. Lett. B142 (1984) 39.
[36] C. M. Hull, More gaugings of N = 8 supergravity, Phys. Lett. B148 (1984) 297.
[37] M. van Leeuwen, A. Cohen, and B. Lisser, LiE, a computer algebra package for Lie group computations, Computer Algebra Nederland, Amsterdam (1992).
[38] F. Cordaro, P. Fré, L. Gualtieri, P. Termonia, and M. Trigiante, N = 8 gaugings revisited: An exhaustive classification, Nucl. Phys. B532 (1998) 245, [hep-th/9804056].
[39] C. M. Hull and N. P. Warner, The structure of the gauged N = 8 supergravity theories, Nucl. Phys. B253 (1985) 650.
[40] L. Andrianopoli, F. Cordaro, P. Fré, and L. Gualtieri, Non-semisimple gaugings of D = 5 N = 8 supergravity and FDAs, Class. Quantum Grav. 18 (2001) 395, [hep-th/0009048].
[41] S. Deser and Z. Yang, A remark on the Higgs effect in presence of Chern-Simons terms, Mod. Phys. Lett. A4 (1989) 2123.
[42] V. G. Kac, A sketch of Lie superalgebra theory, Commun. Math. Phys. 53 (1977) 31.
[43] W. Nahm, Supersymmetries and their representations, Nucl. Phys. B135 (1978) 149.
[44] P. K. Townsend, K. Pilch, and P. van Nieuwenhuizen, Selfduality in odd dimensions, Phys. Lett. 136B (1984) 38.
[45] J. de Boer, Six-dimensional supergravity on S 3 × AdS 3 and 2d conformal field theory, Nucl. Phys. B548 (1999) 139, [hep-th/9806104].
[46] O. Coussaert, M. Henneaux, and P. van Driel, The asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant, Class. Quant. Grav. 12 (1995) 2961, [gr-qc/9506019].
[47] M. Henneaux, L. Maoz, and A. Schwimmer, Asymptotic dynamics and asymptotic symmetries of three-dimensional extended AdS supergravity, Annals Phys. 282 (2000) 31, [hep-th/9910013].
[48] M. J. Duff, B. E. W. Nilsson, and C. N. Pope, Kaluza-Klein supergravity, Phys. Rept. 130 (1986) 1.
[49] B. de Wit and H. Nicolai, The consistency of the S 7 truncation in d = 11 supergravity, Nucl. Phys. B281 (1987) 211.
[50] H. Nastase, D. Vaman, and P. van Nieuwenhuizen, Consistent nonlinear KK reduction of 11d supergravity on AdS 7 × S 4 and self-duality in odd dimensions, Phys. Lett. B469 (1999) 96, [hep-th/9905075].
[51] K. Pilch and N. P. Warner, A new supersymmetric compactification of chiral IIB supergravity, Phys. Lett. B487 (2000) 22, [hep-th/0002192].
[52] B. de Wit and H. Nicolai, D = 11 supergravity with local SU(8) invariance, Nucl. Phys. B274 (1986) 363.
[53] H. Nicolai, D = 11 supergravity with local SO(16) invariance, Phys. Lett. B187 (1987) 316.
[54] P. C. West, Hidden superconformal symmetry of M theory, JHEP 0008 (2000) 007, [hep-th/0005270].
| []
|
[
"Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies",
"Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies"
]
| [
"Jarmo Lundén [email protected] ",
"Visa Koivunen \nSchool of Engineering and Applied Science\nPrinceton University\n\n",
"Anu Huttunen ",
"H Vincent Poor \nSchool of Engineering and Applied Science\nPrinceton University\n\n",
"Smarad Coe ",
"\nSignal Processing Laboratory Helsinki Univ. of Technology\nFinland\n"
]
| [
"School of Engineering and Applied Science\nPrinceton University\n",
"School of Engineering and Applied Science\nPrinceton University\n",
"Signal Processing Laboratory Helsinki Univ. of Technology\nFinland"
]
| []
| Cognitive radios sense the radio spectrum in order to find unused frequency bands and use them in an agile manner. Transmission by the primary user must be detected reliably even in the low signal-to-noise ratio (SNR) regime and in the face of shadowing and fading. Communication signals are typically cyclostationary, and have many periodic statistical properties related to the symbol rate, the coding and modulation schemes as well as the guard periods, for example. These properties can be exploited in designing a detector, and for distinguishing between the primary and secondary users' signals. In this paper, a generalized likelihood ratio test (GLRT) for detecting the presence of cyclostationarity using multiple cyclic frequencies is proposed. Distributed decision making is employed by combining the quantized local test statistics from many secondary users. User cooperation allows for mitigating the effects of shadowing and provides a larger footprint for the cognitive radio system. Simulation examples demonstrate the resulting performance gains in the low SNR regime and the benefits of cooperative detection. | 10.1109/crowncom.2007.4549769 | [
"https://arxiv.org/pdf/0707.0909v1.pdf"
]
| 38,754 | 0707.0909 | e2e619bbe67cf842820541e98381f6e5b43a43e1 |
Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies
6 Jul 2007
Jarmo Lundén [email protected]
Visa Koivunen
School of Engineering and Applied Science
Princeton University
Anu Huttunen
H Vincent Poor
School of Engineering and Applied Science
Princeton University
Smarad Coe
Signal Processing Laboratory Helsinki Univ. of Technology
Finland
Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic Frequencies
6 Jul 2007. arXiv:0707.0909v1 [cs.IT]. Invited Paper.
Cognitive radios sense the radio spectrum in order to find unused frequency bands and use them in an agile manner. Transmission by the primary user must be detected reliably even in the low signal-to-noise ratio (SNR) regime and in the face of shadowing and fading. Communication signals are typically cyclostationary, and have many periodic statistical properties related to the symbol rate, the coding and modulation schemes as well as the guard periods, for example. These properties can be exploited in designing a detector, and for distinguishing between the primary and secondary users' signals. In this paper, a generalized likelihood ratio test (GLRT) for detecting the presence of cyclostationarity using multiple cyclic frequencies is proposed. Distributed decision making is employed by combining the quantized local test statistics from many secondary users. User cooperation allows for mitigating the effects of shadowing and provides a larger footprint for the cognitive radio system. Simulation examples demonstrate the resulting performance gains in the low SNR regime and the benefits of cooperative detection.
I. INTRODUCTION
Spectrum sensing is needed in cognitive radios in order to find opportunities for agile use of spectrum. Moreover, it is crucial for managing the level of interference caused to primary users (PUs) of the spectrum. Sensing provides awareness of the radio operating environment. A cognitive radio may then adapt its parameters such as carrier frequency, power and waveforms dynamically in order to provide the best available connection and to meet the user's needs within the constraints on interference.
In wireless communication systems we typically have some knowledge of the waveforms and structural or statistical properties of the signals that the primary user of the spectrum is using. Such knowledge may be related to the modulation scheme, the symbol or chip rate of the signal, the channel coding scheme, training or pilot signals, guard periods, and the power level or correlation properties of the signal, just to mention a few. These properties may be used to design a detector that works in a very low SNR regime and has low complexity and consequently low power consumption. These are very desirable properties especially for cognitive radios in mobile applications. In the absence of any knowledge of the signal, one may have to resort to classical techniques such as energy detection [1]. An energy detector may need to collect data over a long period of time to detect the primary users reliably. Moreover, controlling the false alarm rates in mobile applications is difficult because the statistics of the signals, noise and interference may be time-varying. Another significant drawback is that energy detection has no capability to distinguish among different types of transmissions or to dichotomize between primary and secondary users of the spectrum.
Jarmo Lundén's work was supported by GETA graduate school, Finnish Defence Forces Technical Research Centre and Nokia Foundation. The funding for Visa Koivunen's sabbatical term at Princeton University was provided by the Academy of Finland. Anu Huttunen is on a research leave from Nokia Research Center. H. Vincent Poor's work was supported by the US National Science Foundation under Grants ANI-03-38807 and CNS-06-25637.
Cyclostationary processes are random processes for which the statistical properties such as the mean and autocorrelation change periodically as functions of time [2]. Many of the signals used in wireless communication and radar systems possess this property. Cyclostationarity may be caused by modulation or coding [2], or it may also be produced intentionally in order to aid channel estimation or synchronization [3]. The cyclostationarity property has been widely used in intercept receivers [2], [4], [5], direction of arrival or time-delay estimation, blind equalization and channel estimation [6] as well as in precoder design in multicarrier communications [3]. In order to exploit cyclic statistics, the signal must be oversampled with respect to the symbol rate, or multiple receivers must be used to observe the signal. The use of cyclostationary statistics is appealing in many ways: noise is rarely cyclostationary and second-order cyclostationary statistics also retain the phase information. Hence, procedures based on cyclostationarity tend to have particularly good performance in the low SNR regime. Moreover, cyclostationarity allows for distinguishing among different transmission types and users if their signals have distinct cyclic frequencies. A comprehensive list of references on cyclostationarity along with a survey of the literature is presented in [7].
The presence of cyclostationary signals may be determined by using hypothesis testing. Many existing tests, such as [8], are able to detect the presence of cyclostationarity at only one cyclic frequency at a time, and they partly ignore the rich information present in the signals. For example, a communication signal may have cyclic frequencies related to the carrier frequency, the symbol rate and its harmonics, the chip rate, guard period, the scrambling code period, and the channel coding scheme. In this paper we propose a method for detecting multiple cyclic frequencies simultaneously. It extends the method of [8] to take into account the rich information present at different cyclic frequencies. This provides improved detector performance over techniques relying only on single cyclic frequency and facilitates dichotomizing among the primary and secondary user signals and different waveforms used.
In cognitive radio systems, there are typically multiple geographically distributed secondary users (SUs) that need to detect if the primary user is transmitting. The distributed sensors may work collaboratively to decide between two hypotheses: is the primary user active, or is the spectrum unused and available for the secondary users? Decentralized processing has a number of advantages for such situations. Obviously, it allows for a larger coverage area. Furthermore, there are gains similar to diversity gains in wireless communications so that the detection becomes less sensitive to demanding propagation conditions such as shadowing by large obstacles, large numbers of scatterers, differences in attenuation, or fast fading caused by mobility. Moreover, distributed sensory systems may require less communication bandwidth, consume less power, be more reliable and cost less as well. In this paper, we propose a simple decentralized decision making approach based on sharing and combining quantized local decision statistics. This approach may be used in both decision making with or without a fusion center. This paper is organized as follows. In Section II, there is a short review of cyclostationary statistics. A novel detector for multiple cyclic frequencies is derived in Section III. Section IV addresses the problem of collaborative detection of primary user. Simulation results demonstrating the detector's reliability in the low SNR regime as well as the gains obtained via collaborative operation are presented in Section V. Finally, conclusions are drawn in Section VI.
II. CYCLOSTATIONARITY: A RECAP
In this section, we provide a brief overview of cyclostationarity in order to make the derivation of the detector in Section III clearer. A continuous-time random process x(t) is wide sense second-order cyclostationary if there exists a T 0 > 0 such that [2]:
\mu_x(t) = \mu_x(t + T_0) \quad \forall t \qquad (1)

and

R_x(t_1, t_2) = R_x(t_1 + T_0, t_2 + T_0) \quad \forall t_1, t_2 . \qquad (2)

T_0 is called the period of the cyclostationary process. Due to the periodicity of the autocorrelation R_x(t_1, t_2), it has a Fourier-series representation. By denoting t_1 = t + τ/2 and t_2 = t − τ/2, we obtain the following expression for the Fourier series [2]:

R_x\!\left(t + \tfrac{\tau}{2},\, t - \tfrac{\tau}{2}\right) = \sum_{\alpha} R_x^{\alpha}(\tau)\, e^{\,j2\pi\alpha t} , \qquad (3)

where the Fourier coefficients are

R_x^{\alpha}(\tau) = \frac{1}{T_0} \int_{-\infty}^{\infty} R_x\!\left(t + \tfrac{\tau}{2},\, t - \tfrac{\tau}{2}\right) e^{-j2\pi\alpha t}\, dt \qquad (4)
and α is called the cyclic frequency. The function R α x (τ ) is called the cyclic autocorrelation function. If the process has zero mean, then this is also the cyclic autocovariance function.
When the autocorrelation function has exactly one period T_0, the set of cyclic frequencies is A = {α = k/T_0, k ≥ 1}, where R_x^α(τ) is the cyclic autocorrelation function; the cyclic frequencies are then harmonics of the fundamental frequency. If the autocorrelation function has several periods T_0, T_1, . . ., we may express R_x^α(τ) as the limit [2]

R_x^{\alpha}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-j2\pi\alpha t}\, dt . \qquad (5)
The process x(t) is then almost cyclostationary in the wide sense and the set of cyclic frequencies A is comprised of a countable number of frequencies that do not need to be harmonics of the fundamental frequency. In general, the process is said to be cyclostationary if there exists an α ≠ 0 such that R_x^α(τ) ≠ 0 for some value of τ. Typically the cyclic frequencies are assumed to be known or may be estimated reliably.
III. DETECTION USING MULTIPLE CYCLIC FREQUENCIES
Statistical tests for the presence of a single cyclic frequency have been proposed, for example, in [8]. The tests in [8] have asymptotically constant false alarm rate (CFAR) for testing presence of cyclostationarity at a given cyclic frequency. However, the tests do not retain the CFAR property over a set of tested frequencies.
Typical communication signals exhibit cyclostationarity at multiple cyclic frequencies instead of just a single cyclic frequency. That is, for example a signal that is cyclostationary at the symbol frequency is typically cyclostationary at all integer multiples of the symbol frequency as well. There also may be cyclic frequencies related to the coding and guard periods, or adaptive modulation and coding may be used. In such cases the cyclic frequencies present may vary depending on channel quality and the waveform used. If one is testing for the presence of many different signals at a given frequency band, or in case the cyclic frequencies are not known, it would be desirable to retain the CFAR property over the whole set of tested cyclic frequencies. This would be especially desirable in a cognitive radio application where the interest is in finding unoccupied frequency bands. Otherwise the frequency band may unnecessarily be classified as occupied for most of the time.
In the following we extend the test based on second-order cyclic statistics of [8] to multiple cyclic frequencies. To do so we first define all the terms used in the test statistics.
Let ( * ) denote an optional complex conjugation. The notation allows convenient handling of both cyclic autocorrelation and conjugate cyclic autocorrelation with only one equation. An estimate of the (conjugate) cyclic autocorrelation R_{xx^{(*)}}(α, τ) may be obtained using M observations as

\hat{R}_{xx^{(*)}}(\alpha, \tau) = \frac{1}{M} \sum_{t=1}^{M} x(t)\, x^{(*)}(t + \tau)\, e^{-j2\pi\alpha t} \qquad (6)
\phantom{\hat{R}_{xx^{(*)}}(\alpha, \tau)} = R_{xx^{(*)}}(\alpha, \tau) + \varepsilon(\alpha, \tau) , \qquad (7)

where the latter term is the estimation error. This estimator is consistent (see [8]), so that the error goes to zero as M → ∞. Now we need to construct a test for a number of lags τ_1, . . . , τ_N as well as a set of cyclic frequencies of interest. Let A denote the set of cyclic frequencies of interest, and

\hat{r}_{xx^{(*)}}(\alpha) = \big[ \mathrm{Re}\{\hat{R}_{xx^{(*)}}(\alpha, \tau_1)\}, \ldots, \mathrm{Re}\{\hat{R}_{xx^{(*)}}(\alpha, \tau_N)\},\ \mathrm{Im}\{\hat{R}_{xx^{(*)}}(\alpha, \tau_1)\}, \ldots, \mathrm{Im}\{\hat{R}_{xx^{(*)}}(\alpha, \tau_N)\} \big] \qquad (8)

denote a 1×2N
vector containing the real and imaginary parts of the estimated cyclic autocorrelations at the cyclic frequency of interest stacked in a single vector.
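For concreteness, a minimal NumPy sketch of the estimator (6) and the statistic vector (8) is given below; the function names and the normalized-frequency convention (α in cycles per sample) are choices of this sketch, not of the paper.

```python
import numpy as np

def cyclic_autocorr_est(x, alpha, tau, conjugate=True):
    """Estimate of R_{xx(*)}(alpha, tau), cf. eq. (6).
    `alpha` is a normalized cyclic frequency in cycles per sample;
    `conjugate=True` gives the conjugate cyclic autocorrelation (x* used)."""
    M = len(x)
    t = np.arange(max(0, -tau), min(M, M - tau))   # indices with both t and t+tau in range
    y = x[t + tau]
    if conjugate:
        y = np.conj(y)
    return np.sum(x[t] * y * np.exp(-2j * np.pi * alpha * t)) / len(t)

def r_vector(x, alpha, taus, conjugate=True):
    """1 x 2N vector of eq. (8): real parts followed by imaginary parts of the estimates."""
    R = np.array([cyclic_autocorr_est(x, alpha, tau, conjugate) for tau in taus])
    return np.concatenate([R.real, R.imag])
```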
The 2N × 2N covariance matrix of r xx ( * ) can be computed as [8]
\Sigma_{xx^{(*)}}(\alpha) =
\begin{pmatrix}
\mathrm{Re}\!\left\{\tfrac{Q + Q^{*}}{2}\right\} & \mathrm{Im}\!\left\{\tfrac{Q - Q^{*}}{2}\right\} \\
\mathrm{Im}\!\left\{\tfrac{Q + Q^{*}}{2}\right\} & \mathrm{Re}\!\left\{\tfrac{Q^{*} - Q}{2}\right\}
\end{pmatrix} \qquad (9)

where the (m, n)th entries of the two covariance matrices Q and Q^{*} are given by

Q(m, n) = S_{f_{\tau_m} f_{\tau_n}}(2\alpha, \alpha) \quad \text{and} \quad Q^{*}(m, n) = S^{*}_{f_{\tau_m} f_{\tau_n}}(0, -\alpha) . \qquad (10)

Here, S_{f_{\tau_m} f_{\tau_n}}(\alpha, \omega) and S^{*}_{f_{\tau_m} f_{\tau_n}}(\alpha, \omega) denote the unconjugated and conjugated cyclic spectra of f(t, \tau) = x(t)\, x^{(*)}(t + \tau), respectively. These spectra can be estimated using frequency-smoothed cyclic periodograms as

\hat{S}_{f_{\tau_m} f_{\tau_n}}(2\alpha, \alpha) = \frac{1}{M L} \sum_{s=-(L-1)/2}^{(L-1)/2} W(s)\, F_{\tau_n}\!\big(\alpha - \tfrac{2\pi s}{M}\big)\, F_{\tau_m}\!\big(\alpha + \tfrac{2\pi s}{M}\big) \qquad (11)

\hat{S}^{*}_{f_{\tau_m} f_{\tau_n}}(0, -\alpha) = \frac{1}{M L} \sum_{s=-(L-1)/2}^{(L-1)/2} W(s)\, F^{*}_{\tau_n}\!\big(\alpha + \tfrac{2\pi s}{M}\big)\, F_{\tau_m}\!\big(\alpha + \tfrac{2\pi s}{M}\big) \qquad (12)

where F_{\tau}(\omega) = \sum_{t=1}^{M} x(t)\, x^{(*)}(t + \tau)\, e^{-j\omega t} and W is a normalized spectral window of odd length L. Now the hypothesis testing problem for testing if α is a cyclic frequency can be formulated as [8]

H_0 : \forall \{\tau_n\}_{n=1}^{N} \implies \hat{r}_{xx^{(*)}}(\alpha) = \epsilon_{xx^{(*)}}(\alpha)
H_1 : \text{for some } \{\tau_n\}_{n=1}^{N} \implies \hat{r}_{xx^{(*)}}(\alpha) = r_{xx^{(*)}}(\alpha) + \epsilon_{xx^{(*)}}(\alpha) . \qquad (13)
Here ǫ_{xx^{(*)}} is the estimation error, which is asymptotically normally distributed, i.e., lim_{M→∞} √M ǫ_{xx^{(*)}} \overset{D}{=} N(0, Σ_{xx^{(*)}}) [8]. Hence, using the asymptotic normality of \hat{r}_{xx^{(*)}}, the generalized likelihood ratio (GLR) is given by

\Lambda = \frac{\exp\!\big(-\tfrac{1}{2} M\, \hat{r}_{xx^{(*)}} \Sigma^{-1}_{xx^{(*)}} \hat{r}^{T}_{xx^{(*)}}\big)}{\exp\!\big(-\tfrac{1}{2} M\, (\hat{r}_{xx^{(*)}} - \hat{r}_{xx^{(*)}})\, \Sigma^{-1}_{xx^{(*)}} (\hat{r}_{xx^{(*)}} - \hat{r}_{xx^{(*)}})^{T}\big)} = \exp\!\big(-\tfrac{1}{2} M\, \hat{r}_{xx^{(*)}} \Sigma^{-1}_{xx^{(*)}} \hat{r}^{T}_{xx^{(*)}}\big) . \qquad (14)

Finally, by taking the logarithm and multiplying the result by −2, we arrive at the test statistic in [8]

T_{xx^{(*)}}(\alpha) = -2 \ln \Lambda = M\, \hat{r}_{xx^{(*)}} \Sigma^{-1}_{xx^{(*)}} \hat{r}^{T}_{xx^{(*)}} . \qquad (15)

Under the null hypothesis T_{xx^{(*)}}(\alpha) is asymptotically \chi^2_{2N} distributed. Now in order to extend the test for the presence of second-order cyclostationarity at any of the cyclic frequencies of interest α ∈ A simultaneously, we formulate the hypothesis testing problem as follows:

H_0 : \forall \alpha \in A \text{ and } \forall \{\tau_n\}_{n=1}^{N} \implies \hat{r}_{xx^{(*)}}(\alpha) = \epsilon_{xx^{(*)}}(\alpha)
H_1 : \text{for some } \alpha \in A \text{ and for some } \{\tau_n\}_{n=1}^{N} \implies \hat{r}_{xx^{(*)}}(\alpha) = r_{xx^{(*)}}(\alpha) + \epsilon_{xx^{(*)}}(\alpha) . \qquad (16)

For this detection problem, we propose the following two test statistics:

D_m = \max_{\alpha \in A} T_{xx^{(*)}}(\alpha) = \max_{\alpha \in A} M\, \hat{r}_{xx^{(*)}}(\alpha)\, \Sigma^{-1}_{xx^{(*)}}(\alpha)\, \hat{r}^{T}_{xx^{(*)}}(\alpha) \qquad (17)

D_s = \sum_{\alpha \in A} T_{xx^{(*)}}(\alpha) = \sum_{\alpha \in A} M\, \hat{r}_{xx^{(*)}}(\alpha)\, \Sigma^{-1}_{xx^{(*)}}(\alpha)\, \hat{r}^{T}_{xx^{(*)}}(\alpha) . \qquad (18)
The first test statistic calculates the maximum of the cyclostationary GLRT statistic (15) over the cyclic frequencies of interest A while the second calculates the sum. Assuming independence of cyclic autocorrelation estimates for different cyclic frequencies the test statistic D s is the GLRT statistic. Depending on the signal and the set of tested cyclic frequencies the test statistics may have different performances. This requires further research.
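The whole single-cycle test and the two multicycle statistics can be prototyped in a few dozen lines of NumPy. The sketch below follows eqs. (6)-(18) but makes several implementation choices that are not prescribed by the paper (edge handling of the lag products, the Kaiser smoothing window and its normalization, and α expressed in cycles per sample), so it should be read as an illustration rather than a reference implementation.

```python
import numpy as np

def _f_tau(x, tau, conjugate=True):
    """f_tau(t) = x(t) x^(*)(t+tau); products falling outside the record are set to zero
    (an edge-handling choice of this sketch)."""
    M = len(x)
    f = np.zeros(M, dtype=complex)
    t = np.arange(max(0, -tau), min(M, M - tau))
    z = x[t + tau]
    f[t] = x[t] * (np.conj(z) if conjugate else z)
    return f

def _F_tau(f, freq):
    """Finite-time transform F_tau(freq) = sum_t f(t) exp(-j 2 pi freq t), freq in cycles/sample."""
    t = np.arange(len(f))
    return np.sum(f * np.exp(-2j * np.pi * freq * t))

def single_cycle_statistic(x, alpha, taus, L=65, beta=10.0, conjugate=True):
    """GLRT statistic T_{xx(*)}(alpha) of eq. (15), with the covariance blocks of eqs. (9)-(12)
    estimated by a frequency-smoothed cyclic periodogram using a Kaiser window."""
    M, N = len(x), len(taus)
    f = [_f_tau(x, tau, conjugate) for tau in taus]
    t = np.arange(M)
    # r vector of eq. (8)
    R = np.array([np.sum(fi * np.exp(-2j * np.pi * alpha * t)) / M for fi in f])
    r = np.concatenate([R.real, R.imag])
    # smoothed cyclic spectra, eqs. (11)-(12)
    W = np.kaiser(L, beta)
    W /= W.sum()
    s = np.arange(-(L - 1) // 2, (L - 1) // 2 + 1)
    Fp = np.array([[_F_tau(fi, alpha + si / M) for si in s] for fi in f])  # F_tau(alpha + s/M)
    Fm = np.array([[_F_tau(fi, alpha - si / M) for si in s] for fi in f])  # F_tau(alpha - s/M)
    Q = np.zeros((N, N), dtype=complex)
    Qc = np.zeros((N, N), dtype=complex)
    for m in range(N):
        for n in range(N):
            Q[m, n] = np.sum(W * Fm[n] * Fp[m]) / M
            Qc[m, n] = np.sum(W * np.conj(Fp[n]) * Fp[m]) / M
    # covariance matrix of eq. (9) and statistic of eq. (15)
    Sigma = np.block([[np.real(Q + Qc) / 2, np.imag(Q - Qc) / 2],
                      [np.imag(Q + Qc) / 2, np.real(Qc - Q) / 2]])
    return M * r @ np.linalg.solve(Sigma, r)

def multicycle_statistics(x, alphas, taus, **kw):
    """D_m and D_s of eqs. (17)-(18): maximum and sum of T_{xx(*)}(alpha) over alpha in A."""
    T = np.array([single_cycle_statistic(x, a, taus, **kw) for a in alphas])
    return T.max(), T.sum()
```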
The asymptotic distribution of D_s under the null hypothesis is \chi^2_{2N N_\alpha}, where N_\alpha is the number of cyclic frequencies in the set A. This is due to the fact that the sum of independent chi-square random variables is also a chi-square random variable whose number of degrees of freedom is the sum of the degrees of freedom of the independent random variables.

In the following we derive the asymptotic distribution of the test statistic D_m under the null hypothesis. As stated above, under the null hypothesis T_{xx^{(*)}}(\alpha) is asymptotically \chi^2_{2N} distributed. The cumulative distribution function of the chi-square distribution with 2N degrees of freedom is given by

F(x, 2N) = \frac{\gamma(N, x/2)}{\Gamma(N)} \qquad (19)

where \gamma(k, x) is the lower incomplete gamma function and \Gamma(k) is the ordinary gamma function. For a positive integer k the following identities hold:

\Gamma(k) = (k-1)! \qquad (20)

\gamma(k, x) = \Gamma(k) - (k-1)!\, e^{-x} \sum_{n=0}^{k-1} \frac{x^n}{n!} . \qquad (21)

Hence, the cumulative distribution function of the chi-square distribution with 2N degrees of freedom is given by

F(x, 2N) = 1 - e^{-x/2} \sum_{n=0}^{N-1} \frac{(x/2)^n}{n!} . \qquad (22)

The cumulative distribution function of the maximum of d independent and identically distributed random variables is the cumulative distribution function of the individual random variables raised to the power d. Thus, the cumulative distribution function of the test statistic D_m is given by

F_{D_m}(x, 2N, d) = \left[ 1 - e^{-x/2} \sum_{n=0}^{N-1} \frac{(x/2)^n}{n!} \right]^{d} . \qquad (23)

The corresponding probability density function is obtained by differentiating the cumulative distribution function, i.e.,

f_{D_m}(x, 2N, d) = \frac{d}{2} \left[ 1 - e^{-x/2} \sum_{n=0}^{N-1} \frac{(x/2)^n}{n!} \right]^{d-1} e^{-x/2} \left[ \sum_{n=0}^{N-1} \frac{(x/2)^n}{n!} - \sum_{n=1}^{N-1} \frac{(x/2)^{n-1}}{(n-1)!} \right] . \qquad (24)

Consequently, the null hypothesis is rejected if F_{D_m}(D_m, 2N, N_\alpha) > 1 - p, where p is the false alarm rate and N_\alpha is the number of tested cyclic frequencies.
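Since the null distributions are available in closed form, the detection thresholds for a target false alarm rate p are easy to compute. The following sketch (using SciPy, with function names of our own choosing) checks eq. (22) against scipy.stats.chi2 and inverts eq. (23) numerically to obtain the threshold for D_m; the threshold for D_s follows directly from the chi-square quantile with 2N·N_α degrees of freedom.

```python
import math
from scipy import stats, optimize

def F_chi2_2N(x, N):
    """Closed-form CDF of a chi-square variable with 2N degrees of freedom, eq. (22)."""
    return 1.0 - math.exp(-x / 2.0) * sum((x / 2.0) ** n / math.factorial(n) for n in range(N))

def F_Dm(x, N, d):
    """CDF of D_m under H0: maximum of d independent chi2_{2N} variables, eq. (23)."""
    return F_chi2_2N(x, N) ** d

# sanity check of eq. (22) against scipy (N = 2 lags -> 4 degrees of freedom)
print(abs(F_chi2_2N(7.3, 2) - stats.chi2.cdf(7.3, df=4)))        # ~ 1e-16

def threshold_Dm(p_fa, N, n_alpha):
    """Reject H0 when F_Dm(D_m, 2N, N_alpha) > 1 - p, i.e. when D_m exceeds this threshold."""
    return optimize.brentq(lambda x: F_Dm(x, N, n_alpha) - (1.0 - p_fa), 1e-9, 1e3)

def threshold_Ds(p_fa, N, n_alpha):
    """D_s is asymptotically chi-square with 2 N N_alpha degrees of freedom under H0."""
    return stats.chi2.ppf(1.0 - p_fa, df=2 * N * n_alpha)

print(threshold_Dm(0.05, N=2, n_alpha=2), threshold_Ds(0.05, N=2, n_alpha=2))
```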
IV. COOPERATIVE DETECTION
User cooperation may be used to improve the performance and coverage in a cognitive radio network. The users may collaborate in finding unused spectrum and new opportunities. Many of the collaborative detection techniques stem from distributed detection theory; see [10], [11]. In cognitive radio systems, there are typically multiple geographically distributed secondary users that need to detect whether the primary user is active. All the secondary users may sense the entire band of interest, or monitor just a partial band to reduce power consumption. In the latter case each SU senses a certain part of the spectrum, and then shares the acquired information with other users or a fusion center.
The cooperation may then be coordinated by a fusion center (FC), or it may take place in an ad-hoc manner without a dedicated fusion center. Here we assume that a fusion center collects information from all K secondary users and makes a decision about whether the spectrum is available or not. We assume that each secondary user sends a quantized version of its local decision statistics (such as the likelihood ratio) to the FC. In the case of very coarse quantization, binary local decision may be sent. To derive a test for the FC, we assume that the sensors are independent conditioned on whether the hypothesis H 0 or H 1 is true. Then the optimal fusion rule is the likelihood ratio test over the received local likelihood ratios l i :
T_K = \prod_{i=1}^{K} l_i . \qquad (25)

In case the secondary users send binary decisions, the sum of ones may be calculated and compared to a threshold. Here, we consider the simplest way of making the decision using generalized likelihood ratios. Instead of using the product of the generalized likelihood ratios, we can employ the sum of generalized log-likelihood ratios. We propose the following test statistic for the hypothesis testing problem (13)

T'_K = \sum_{i=1}^{K} T^{(i)}_{xx^{(*)}}(\alpha) , \qquad (26)

and the following two for the hypothesis testing problem (16)

D_{m,K} = \max_{\alpha \in A} \sum_{i=1}^{K} T^{(i)}_{xx^{(*)}}(\alpha) \qquad (27)

D_{s,K} = \sum_{\alpha \in A} \sum_{i=1}^{K} T^{(i)}_{xx^{(*)}}(\alpha) \qquad (28)

where T^{(i)}_{xx^{(*)}}(\alpha) is the cyclostationarity-based test statistic (15) from the i-th secondary user. Due to the use of generalized likelihood ratios, no optimality properties can be claimed. The GLRT does, however, perform highly reliably in many applications.

Under the conditional independence assumption, the asymptotic distributions of the test statistics T'_K and D_{s,K} under the null hypothesis are \chi^2_{2NK} and \chi^2_{2N N_\alpha K}, respectively. This is again due to the fact that the sum of independent chi-square random variables is also a chi-square random variable whose number of degrees of freedom is the sum of the degrees of freedom of the independent random variables. The cumulative distribution function of D_{m,K} under the null hypothesis is F_{D_m}(D_{m,K}, 2NK, N_\alpha), where N_\alpha is again the number of tested cyclic frequencies. The testing is done similarly to the single secondary user case.
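In code, the cooperative statistics only require summing the locally computed statistics over the K users before taking the maximum or the sum over cyclic frequencies. A minimal sketch with hypothetical function names, ignoring quantization and the transmission of the local statistics, could look as follows.

```python
import numpy as np
from scipy import stats

def cooperative_statistics(T_local):
    """Combine local statistics: T_local is a K x N_alpha array of T^(i)_{xx(*)}(alpha) values.
    Returns (D_{m,K}, D_{s,K}) of eqs. (27)-(28)."""
    T_sum = np.asarray(T_local).sum(axis=0)        # sum over the K users for each alpha
    return T_sum.max(), T_sum.sum()

def reject_H0_Ds(D_sK, K, N, n_alpha, p_fa=0.05):
    """D_{s,K} is asymptotically chi-square with 2 N N_alpha K degrees of freedom under H0.
    (For D_{m,K} one would instead evaluate F_Dm(D_{m,K}, 2NK, N_alpha), as in the text.)"""
    return D_sK > stats.chi2.ppf(1.0 - p_fa, df=2 * N * n_alpha * K)
```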
Different techniques for reducing the amount of transmitted data, taking into account the relevance of the information provided by secondary users as well as how to deal with communication rate constraints will be addressed in a forthcoming paper.
V. SIMULATION EXAMPLES
In this section the performance of the proposed detectors is considered. The test signal is an orthogonal frequency division multiplex (OFDM) signal. The baseband equivalent of a cyclic prefix OFDM signal may be expressed as
x(t) = \sum_{n=0}^{N_c - 1} \sum_{l=-\infty}^{\infty} c_{n,l}\, g(t - lT_s)\, e^{\,j (2\pi/N_c)\, n (t - lT_s)} \qquad (29)
where N c is the number of subcarriers, T s is the symbol length, g(t) denotes the rectangular pulse of length T s , and the c n,l 's denote the data symbols. The symbol length is the sum of the length of the useful symbol data T d and the length of the cyclic prefix T cp , i.e., T s = T d + T cp . The above OFDM signal exhibits cyclostationarity (i.e., complex conjugation is used in (6) and the following equations) with cyclic frequencies of α = k/T s , k = 0, ±1, ±2, . . . and potentially other frequencies depending on the coding scheme. The cyclic autocorrelation surfaces for α = k/T s peak at τ = ±T d [9].
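The simulation setup of this section is straightforward to reproduce; a possible NumPy sketch for generating the test signal with the parameters used later in this section (32 subcarriers, a cyclic prefix of 1/4 of the useful symbol, 16-QAM, 100 symbols) is shown below. The sampling convention (one sample per subcarrier interval) and the unitary IDFT scaling are choices of the sketch.

```python
import numpy as np

def qam16_symbols(n, rng):
    """Random 16-QAM symbols with unit average power."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    s = rng.choice(levels, size=n) + 1j * rng.choice(levels, size=n)
    return s / np.sqrt(10.0)

def cp_ofdm_signal(n_symbols=100, n_c=32, cp_frac=0.25, rng=None):
    """Discrete-time cyclic-prefix OFDM baseband signal, cf. eq. (29):
    each symbol is an N_c-point IDFT of the data with the last T_cp samples prepended."""
    rng = rng or np.random.default_rng(0)
    t_cp = int(cp_frac * n_c)
    blocks = []
    for _ in range(n_symbols):
        c = qam16_symbols(n_c, rng)
        body = np.fft.ifft(c) * np.sqrt(n_c)       # unitary-scaled IDFT over the subcarriers
        blocks.append(np.concatenate([body[-t_cp:], body]))
    return np.concatenate(blocks)

x = cp_ofdm_signal()
T_d, T_cp = 32, 8
T_s = T_d + T_cp
alphas = [1.0 / T_s, 2.0 / T_s]     # cyclic frequencies used by the detectors
taus = [+T_d, -T_d]                 # lags at which the cyclic autocorrelation peaks
```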
In the following the performance of cyclic detectors based on one and two cyclic frequencies is compared as a function of signal-to-noise ratio (SNR) in an additive white Gaussian noise (AWGN) channel. The SNR is defined as SNR = 10 \log_{10}(\sigma_x^2 / \sigma_n^2), where \sigma_x^2 and \sigma_n^2 are the variances of the signal and the noise, respectively. The cyclic frequencies employed by the detectors are 1/T_s and 2/T_s. The detector based on one cyclic frequency uses the first frequency and the detectors based on two cyclic frequencies use both frequencies. Each detector uses two time lags ±T_d.
The cyclic spectrum estimates were calculated using a length-2049 Kaiser window with β parameter of 10. A Fast-Fourier transform (FFT) was employed for faster computation. The FFT size was 10000 giving a cyclic frequency resolution of 0.0001.
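These implementation details translate into a few lines of NumPy; the normalization of the window below is an assumption of this sketch, since the paper only states that the spectral window is normalized.

```python
import numpy as np

L = 2049
W = np.kaiser(L, 10.0)        # length-2049 Kaiser window, beta = 10
W = W / W.sum()               # normalization convention assumed in this sketch

N_fft = 10000                 # zero-padded FFT length -> cyclic frequency grid spacing 1/10000

def F_tau_grid(f_tau):
    """Evaluate F_tau on the grid of cyclic frequencies k/N_fft via a zero-padded FFT."""
    return np.fft.fft(f_tau, n=N_fft)
```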
The OFDM signal has 32 subcarriers and the length of the cyclic prefix is 1/4 of the useful symbol data. The subcarrier modulation employed is 16-QAM. The signal length is 100 OFDM symbols. Fig. 1 depicts the performance of the detectors as a function of the SNR for a constant false alarm rate of 0.05. Fig. 2 shows a zoom of the important area illustrating the differences in performance more clearly. All the curves are averages over 10000 experiments. It can be seen that the detectors based on multiple cyclic frequencies outperform the detector based on a single cyclic frequency in the low SNR regime. Furthermore, the multicycle detector calculating the sum over the cyclic statistics of different frequencies has the best performance. Fig. 3 plots the probability of detection vs. false alarm rate for an SNR of -7 dB. The figure shows that the detectors have desirable receiver operating characteristics. That is, the probability of detection increases as the false alarm rate parameter is increased.
Next the performance gain from cooperative detection of several secondary users is analyzed. The signal is the same as above. The cooperative detection is based on the data of 5 secondary users. Each secondary user receives the same data with different noise. SNR is the same for each secondary user. Fig. 4 depicts the performance for 5 secondary users compared to the single secondary user case. Performance gain of roughly 3 dB is obtained from the cooperation of 5 secondary users. Using two cyclic frequencies provides similar performance improvement as in single secondary user case. Fig. 5 shows the probability of detection vs. false alarm rate for SNR of -9 dB.
In the following simplistic example, we illustrate the gains that may be achieved via collaborative detection in the face of shadowing effects. In order to simulate shadowing, the SNR of each user was independently selected randomly from a normal distribution with a mean of -9 dB and standard deviation of 10 dB. That is, the logarithm of the received power level is normally distributed. Fig. 6 depicts the performance of the multicycle detectors for the simple shadowing scenario. Comparison to Fig. 5 reveals that cooperation among secondary users reduces sensitivity to shadowing effects significantly.

Figure 6. Probability of detection vs. false alarm rate. In order to simulate shadowing, the SNR of each user was independently selected randomly from a normal distribution with a mean of -9 dB and standard deviation of 10 dB. Cooperation among secondary users reduces sensitivity to shadowing effects.
VI. CONCLUSION
In this paper, a generalized likelihood ratio test for detecting primary transmissions with multiple cyclic frequencies has been proposed, and the asymptotic distribution of the test statistic has been derived. In this test, impairments such as shadowing and fading are mitigated by combining the quantized local likelihood ratios from a number of secondary users under a conditional independence assumption. Simulation examples demonstrating the improved reliability in the detector performance in the low SNR regime as well as significant gains obtained via collaborative decision making have also been presented.
Figure 1. Probability of detection vs. SNR. The multicycle detectors achieve better performance than the single cycle detector in the low SNR regime. The sum detector of the test statistic D_s has the best performance.
Figure 2. Probability of detection vs. SNR. Zoom of the important region. The multicycle detectors achieve better performance than the single cycle detector in the low SNR regime. The sum detector of the test statistic D_s has the best performance.
Figure 3. Probability of detection vs. false alarm rate. The detectors based on multiple cyclic frequencies achieve better performance than the detector based on a single cyclic frequency.
Figure 4. Probability of detection vs. SNR. Cooperation of 5 secondary users provides a performance gain of 3 dB. Using multiple cyclic frequencies further improves the detection performance. The sum detector of the test statistic D_{s,K} has the best performance.
Figure 5. Probability of detection vs. false alarm rate. Cooperation among secondary users combined with the use of the multicycle sum test statistic D_{s,K} provides the best performance.
REFERENCES
[1] H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd edition, Springer, New York, 1994.
[2] W. A. Gardner, Statistical Spectral Analysis: A Nonprobabilistic Theory, Prentice-Hall, Upper Saddle River, NJ, 1987.
[3] M. Tsatsanis and G. B. Giannakis, "Transmitter Induced Cyclostationarity for Blind Channel Equalization," IEEE Trans. Signal Processing, vol. 45, pp. 1785-1794, Jul. 1997.
[4] T. Koivisto and V. Koivunen, "Blind Despreading of Short-Code CDMA Signals in Asynchronous Multi-User Systems," Signal Processing, to appear in 2007.
[5] J. Lundén and V. Koivunen, "Automatic Radar Waveform Recognition," IEEE Journal of Selected Topics in Signal Processing, special issue on Adaptive Waveform Design for Agile Sensing and Communication, to appear in 2007.
[6] L. Tong, G. Xu, and T. Kailath, "Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach," IEEE Trans. Information Theory, vol. 40, no. 2, pp. 340-349, Mar. 1994.
[7] W. A. Gardner, A. Napolitano, and L. Paura, "Cyclostationarity: Half a Century of Research," Signal Processing, vol. 86, pp. 639-697, Apr. 2006.
[8] A. V. Dandawaté and G. B. Giannakis, "Statistical Tests for Presence of Cyclostationarity," IEEE Trans. Signal Processing, vol. 42, no. 9, pp. 2355-2369, Sep. 1994.
[9] M. Öner and F. Jondral, "Air Interface Recognition for a Software Radio System Exploiting Cyclostationarity," in Proc. 15th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'04), Barcelona, Spain, Sep. 5-8, 2004, vol. 3, pp. 1947-1951.
[10] R. Viswanathan and P. K. Varshney, "Distributed Detection with Multiple Sensors: Part I - Fundamentals," Proceedings of the IEEE, vol. 85, no. 1, pp. 54-63, Jan. 1997.
[11] R. S. Blum, S. A. Kassam, and H. V. Poor, "Distributed Detection with Multiple Sensors: Part II - Advanced Topics," Proceedings of the IEEE, vol. 85, no. 1, pp. 64-79, Jan. 1997.
| []
|
[
"Lie algebra of an n-Lie algebra",
"Lie algebra of an n-Lie algebra"
]
| [
"Basile Guy ",
"Richard Bossoto [email protected]:[email protected] ",
"Eugène Okassa \nFaculté des Sciences et Techniques\nDépartement de Mathématiques\nEcole Normale Supérieure\n\n",
"Mathias Omporo ",
"\nUniversité Marien NGOUABI\n\n"
]
| [
"Faculté des Sciences et Techniques\nDépartement de Mathématiques\nEcole Normale Supérieure\n",
"Université Marien NGOUABI\n"
]
| []
| We construct the Lie algebra of an n-Lie algebra and we also define the notion of cohomology of an n-Lie algebra. | null | [
"https://arxiv.org/pdf/1310.2433v2.pdf"
]
| 119,306,434 | 1310.2433 | 6b4d7c898108defabef0e68f175a70ab297f30ac |
Lie algebra of an n-Lie algebra
10 Oct 2013
Basile Guy
Richard Bossoto [email protected]:[email protected]
Eugène Okassa
Faculté des Sciences et Techniques
Département de Mathématiques
Ecole Normale Supérieure
Mathias Omporo
Université Marien NGOUABI
Lie algebra of an n-Lie algebra
10 Oct 2013. arXiv:1310.2433v2 [math.DG]. Keywords: Lie algebra, n-Lie algebra, cohomology. MSC (2010): 17B30, 17B56, 16W25
We construct the Lie algebra of an n-Lie algebra and we also define the notion of cohomology of an n-Lie algebra.
Introduction
The notion of n-Lie algebra over a commutative field K with characteristic zero, n an integer ≥ 2, introduced by Filippov [3], is a generalization of the notion of Lie algebra, which corresponds to the usual case when n = 2.
When n ≥ 2 is an integer and K a commutative field, an n-Lie algebra structure on a K-vector space G is due to the existence of a skew-symmetric n-multilinear map
{·, ..., ·} : G^n = G × G × ... × G → G, (x_1, x_2, ..., x_n) ↦ {x_1, x_2, ..., x_n},
such that
{x_1, x_2, ..., x_{n-1}, {y_1, y_2, ..., y_n}} = \sum_{i=1}^{n} {y_1, y_2, ..., y_{i-1}, {x_1, x_2, ..., x_{n-1}, y_i}, y_{i+1}, ..., y_n}
for any x_1, x_2, ..., x_{n-1}, y_1, y_2, ..., y_n elements of G.
The above identity is called Jacobi identity of the n-Lie algebra G. From this generalization, many authors, [2], [4], [5], extended the following notions: ideal of an n-Lie algebra, semi simple n-Lie algebra, nilpotent n-Lie algebra, solvable n-Lie algebra, Cartan subalgebra of an n-Lie algebra, etc.
The main goal of this paper is to construct the Lie algebra structure from an n-Lie algebra: so in this new context, we give definitions of ideal of an n-Lie algebra, semi simple n-Lie algebra, nilpotent n-Lie algebra, solvable n-Lie algebra, Cartan subalgebra of an n-Lie algebra. We also define the cohomology of an n-Lie algebra.
In what follows, K denotes a commutative field with characteristic zero, G an n-Lie algebra over K with bracket {·, ..., ·} and finally n ≥ 2 an integer.
n-Lie algebra structure
We recall that, [3], for n ≥ 2, an n-Lie algebra structure on a K-vector space G is due to the existence of a skew-symmetric n-multilinear map
{·, ..., ·} : G^n = G × G × ... × G → G, (x_1, x_2, ..., x_n) ↦ {x_1, x_2, ..., x_n},
such that
{x_1, x_2, ..., x_{n-1}, {y_1, y_2, ..., y_n}} = \sum_{i=1}^{n} {y_1, y_2, ..., y_{i-1}, {x_1, x_2, ..., x_{n-1}, y_i}, y_{i+1}, ..., y_n}
for any x_1, x_2, ..., x_{n-1}, y_1, y_2, ..., y_n elements of G.
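The following numerical check is not part of the paper; it is a small sketch, assuming numpy, of Filippov's classical example: R^{n+1} equipped with the generalized cross product is an n-Lie algebra, so the identity above can be verified on random vectors (here for n = 3).

```python
import numpy as np

def bracket(*vecs):
    """Generalized cross product of n vectors in R^{n+1}: component i is the
    determinant of the matrix whose rows are the n vectors and the basis vector e_i."""
    dim = len(vecs) + 1
    eye = np.eye(dim)
    return np.array([np.linalg.det(np.vstack(vecs + (eye[i],))) for i in range(dim)])

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 4))
y1, y2, y3 = rng.normal(size=(3, 4))

lhs = bracket(x1, x2, bracket(y1, y2, y3))
rhs = (bracket(bracket(x1, x2, y1), y2, y3)
       + bracket(y1, bracket(x1, x2, y2), y3)
       + bracket(y1, y2, bracket(x1, x2, y3)))
print(np.allclose(lhs, rhs))   # True: the Jacobi (Filippov) identity holds
```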
A derivation of an n-Lie algebra (G, {·, ..., ·}) is a K-linear map D : G → G such that
D{x_1, x_2, ..., x_n} = \sum_{i=1}^{n} {x_1, x_2, ..., D(x_i), ..., x_n}
for any x_1, x_2, ..., x_n elements of G.
We verify that the set of derivations of an n-Lie algebra G is a Lie algebra over K, which we denote Der_K(G). And that ends the proof. A morphism of an n-Lie algebra G into another n-Lie algebra G′ is a K-linear map ϕ : G → G′ such that
ϕ({x_1, x_2, ..., x_n}) = {ϕ(x_1), ϕ(x_2), ..., ϕ(x_n)}
for any x 1 , x 2 , ..., x n elements of G. We verify that the set of n-Lie algebras over K is a category.
Lie algebra structure deduced from an n-Lie algebra
When G is an n-Lie algebra and when Der_K(G) is the Lie algebra of K-derivations of G, then the multilinear map
G^{n-1} → Der_K(G), (x_1, x_2, ..., x_{n-1}) ↦ ad(x_1, x_2, ..., x_{n-1}),
is skew-symmetric. If we denote by Λ^{n-1}_K(G) the (n−1)-exterior power of the K-vector space G, there exists a unique K-linear map
ad_G : Λ^{n-1}_K(G) → Der_K(G)
such that
ad_G(x_1 Λ x_2 Λ ... Λ x_{n-1}) = ad(x_1, x_2, ..., x_{n-1})
for any x_1, x_2, ..., x_{n-1} elements of G.
We recall, [1], that when f : W → W is an endomorphism of a K-vector space W and when Λ_K(W) is the K-exterior algebra of W, then there exists a unique derivation of degree zero
D_f : Λ_K(W) → Λ_K(W)
such that, for any p ∈ N,
D_f(w_1 Λ w_2 Λ ... Λ w_p) = \sum_{i=1}^{p} w_1 Λ w_2 Λ ... Λ w_{i-1} Λ f(w_i) Λ w_{i+1} Λ ... Λ w_p
for any w_1, w_2, ..., w_p elements of W. When g : W → W is another endomorphism of the K-vector space W, then
[D_f, D_g] = D_{[f,g]},
where the bracket [, ] is the usual bracket of endomorphisms.
Proposition 2. For any s_1, s_2 elements of Λ^{n-1}_K(G), we have simultaneously
[ad_G(s_1), ad_G(s_2)] = ad_G(D_{ad_G(s_1)}(s_2))
and
[ad_G(s_1), ad_G(s_2)] = ad_G(−D_{ad_G(s_2)}(s_1)).
Proof. We prove it for indecomposable elements. Let s_1 = x_1 Λ x_2 Λ ... Λ x_{n-1} and s_2 = y_1 Λ y_2 Λ ... Λ y_{n-1}. For any a ∈ G, we get
([ad_G(s_1), ad_G(s_2)])(a) = {x_1, ..., x_{n-1}, {y_1, ..., y_{n-1}, a}} − {y_1, ..., y_{n-1}, {x_1, ..., x_{n-1}, a}}
= \sum_{i=1}^{n-1} {y_1, ..., y_{i-1}, {x_1, ..., x_{n-1}, y_i}, y_{i+1}, ..., y_{n-1}, a} + {y_1, ..., y_{n-1}, {x_1, ..., x_{n-1}, a}} − {y_1, ..., y_{n-1}, {x_1, ..., x_{n-1}, a}}
= \sum_{i=1}^{n-1} {y_1, ..., y_{i-1}, {x_1, ..., x_{n-1}, y_i}, y_{i+1}, ..., y_{n-1}, a}
= ad_G(\sum_{i=1}^{n-1} y_1 Λ ... Λ y_{i-1} Λ [ad_G(x_1 Λ x_2 Λ ... Λ x_{n-1})](y_i) Λ y_{i+1} Λ ... Λ y_{n-1})(a).
Thus we have
[ad_G(s_1), ad_G(s_2)] = ad_G(\sum_{i=1}^{n-1} y_1 Λ ... Λ y_{i-1} Λ [ad_G(x_1 Λ x_2 Λ ... Λ x_{n-1})](y_i) Λ y_{i+1} Λ ... Λ y_{n-1}) = ad_G(D_{ad_G(s_1)}(s_2)).
On the other hand, a similar computation (reproduced further below) gives [ad_G(s_1), ad_G(s_2)] = ad_G(−D_{ad_G(s_2)}(s_1)). That ends the proofs.
We denote by V_K(G) the K-subvector space of Λ^{n-1}_K(G) generated by the elements of the form
D_{ad_G(s_1)}(s_2) + D_{ad_G(s_2)}(s_1),
where s_1 and s_2 describe Λ^{n-1}_K(G). Let Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G), s ↦ \bar{s}, be the canonical surjection. Considering what precedes, we deduce that
ad_G[V_K(G)] = 0.
We denote by \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G) the unique linear map such that \overline{ad}_G(\bar{s}) = ad_G(s) for any s ∈ Λ^{n-1}_K(G). For s_1, s′_1, s_2, s′_2 elements of Λ^{n-1}_K(G), we have
D_{ad_G(s_1)}(s_2) + D_{ad_G(s′_2)}(s′_1) = D_{ad_G(s_1 − s′_1)}(s_2) + D_{ad_G(s′_2 − s_2)}(s′_1) + D_{ad_G(s′_1)}(s_2) + D_{ad_G(s_2)}(s′_1).
We deduce that when \bar{s}_1 = \bar{s}′_1 and \bar{s}_2 = \bar{s}′_2, then
D_{ad_G(s_1)}(s_2) + D_{ad_G(s′_2)}(s′_1) = D_{ad_G(s′_1)}(s_2) + D_{ad_G(s_2)}(s′_1).
Finally we get \overline{D_{ad_G(s_1)}(s_2)} = \overline{D_{ad_G(s′_1)}(s′_2)}. Thus the bracket
[\bar{s}_1, \bar{s}_2] = \overline{D_{ad_G(s_1)}(s_2)}
is well defined.
Proof. The map [·, ·] is obviously bilinear. We have [\bar{s}_1, \bar{s}_2] = \overline{D_{ad_G(s_1)}(s_2)}. As D_{ad_G(s_1)}(s_2) + D_{ad_G(s_2)}(s_1) ∈ V_K(G), we immediately get \overline{D_{ad_G(s_1)}(s_2)} = −\overline{D_{ad_G(s_2)}(s_1)}. Thus [\bar{s}_1, \bar{s}_2] = −[\bar{s}_2, \bar{s}_1].
For the Jacobi identity, we write:
[\bar{s}_1, [\bar{s}_2, \bar{s}_3]] + [\bar{s}_2, [\bar{s}_3, \bar{s}_1]] + [\bar{s}_3, [\bar{s}_1, \bar{s}_2]]
= [\bar{s}_1, \overline{D_{ad_G(s_2)}(s_3)}] − [\bar{s}_2, [\bar{s}_1, \bar{s}_3]] + [\bar{s}_3, \overline{D_{ad_G(s_1)}(s_2)}]
= [\bar{s}_1, \overline{D_{ad_G(s_2)}(s_3)}] − [\bar{s}_2, \overline{D_{ad_G(s_1)}(s_3)}] − [\overline{D_{ad_G(s_1)}(s_2)}, \bar{s}_3]
= \overline{D_{ad_G(s_1)}(D_{ad_G(s_2)}(s_3))} − \overline{D_{ad_G(s_2)}(D_{ad_G(s_1)}(s_3))} − \overline{D_{ad_G(D_{ad_G(s_1)}(s_2))}(s_3)}
= \overline{[D_{ad_G(s_1)}, D_{ad_G(s_2)}](s_3)} − \overline{D_{[ad_G(s_1), ad_G(s_2)]}(s_3)} = 0.
Moreover, we get
[\overline{ad}_G(\bar{s}_1), \overline{ad}_G(\bar{s}_2)] = [ad_G(s_1), ad_G(s_2)] = ad_G(D_{ad_G(s_1)}(s_2)) = \overline{ad}_G(\overline{D_{ad_G(s_1)}(s_2)}) = \overline{ad}_G([\bar{s}_1, \bar{s}_2]).
That ends the proof of the two assertions.
Remark 1. Thus the map
\overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s),
is a representation of Λ^{n-1}_K(G)/V_K(G) into G.
Proposition 4. If G is an n-Lie algebra, then the space of invariant elements of G for the representation \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s), is the set Inv(G) = {x ∈ G / {x, y_1, y_2, ..., y_{n-1}} = 0 for any y_1, y_2, ..., y_{n-1} ∈ G}.
Proof. Considering the representation \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s), we know that
Inv(G) = {x ∈ G / \overline{ad}_G(\bar{s})(x) = 0 for any \bar{s} ∈ Λ^{n-1}_K(G)/V_K(G)}.
We verify that
Inv(G) = {x ∈ G / {x, y_1, y_2, ..., y_{n-1}} = 0 for any y_1, y_2, ..., y_{n-1} ∈ G}.
Proposition 5. When G is an n-Lie algebra, a subspace G_0 of G is stable for the representation \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s), if and only if for any x ∈ G_0 and for any y_1, y_2, ..., y_{n-1} ∈ G, we have {x, y_1, y_2, ..., y_{n-1}} ∈ G_0.
Proof. It is obvious.
In what follows, we give the relation between the category of n-Lie algebras and the category of Lie algebras.
Proposition 6 The correspondence
L n : G −→ L n (G) = Λ n−1 K (G)/V K (G)
is a covariant functor from the category of n-Lie algebras to the category of Lie algebras.
Proof. It is also quite obvious.
When F is a vector subspace of G, we denote by Λ^{n-1}_K(F) the set of finite sums
\sum_{i_1 < i_2 < ... < i_{n-1}} x_{i_1} Λ x_{i_2} Λ ... Λ x_{i_{n-1}}, with i_1, i_2, ..., i_{n-1} ∈ N and x_{i_1}, x_{i_2}, ..., x_{i_{n-1}} ∈ F.
We constructed a Lie algebra from an n-Lie algebra. Considering the functor L_n : G ↦ L_n(G) = Λ^{n-1}_K(G)/V_K(G), we will say that a subspace I ⊂ G is an ideal of the n-Lie algebra G if the image of the space Λ^{n-1}_K(I) by the canonical surjection Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G), s ↦ \bar{s}, is an ideal of the Lie algebra Λ^{n-1}_K(G)/V_K(G). We will also say that a vector subspace C ⊂ G is a Cartan subalgebra of the n-Lie algebra G if the image of the space Λ^{n-1}_K(C) by the canonical surjection Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G), s ↦ \bar{s}, is a Cartan subalgebra of the Lie algebra Λ^{n-1}_K(G)/V_K(G). We will finally say that an n-Lie algebra G is a semi simple n-Lie algebra (nilpotent n-Lie algebra, solvable n-Lie algebra, commutative n-Lie algebra respectively) if the Lie algebra Λ^{n-1}_K(G)/V_K(G) is semi simple (nilpotent, solvable, commutative respectively).
Proof. We reason with indecomposable elements. We consider x_1, x_2, ..., x_{n-1} elements of Inv(G) and y_1, y_2, ..., y_{n-1} elements of G. We get [\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] = 0 (the computation is reproduced further below). That ends the proof. We will establish a similar statement for stable subspaces.
Proposition 8. If a subspace G_0 of an n-Lie algebra G is stable for the representation
\overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s),
then G_0 is an ideal of the n-Lie algebra G, i.e. the image of Λ^{n-1}_K(G_0) by the canonical surjection
Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G)
is an ideal of the Lie algebra Λ^{n-1}_K(G)/V_K(G).
Proof. Here, we also reason with indecomposable elements. We consider x_1, x_2, ..., x_{n-1} elements of G_0 and y_1, y_2, ..., y_{n-1} elements of G. We get that the bracket [\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] belongs to the image of Λ^{n-1}_K(G_0) by the canonical surjection Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G) (the computation is reproduced further below). That ends the proof.
Cohomology of an n-Lie algebra
When G is an n-Lie algebra, we denote by d_n the cohomology operator associated with the representation \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s).
For any p ∈ N, L^p_{sks}(Λ^{n-1}_K(G)/V_K(G), G) denotes the K-vector space of skew-symmetric p-multilinear maps of Λ^{n-1}_K(G)/V_K(G) into G, and
L_{sks}(Λ^{n-1}_K(G)/V_K(G), G) = \bigoplus_{p ∈ N} L^p_{sks}(Λ^{n-1}_K(G)/V_K(G), G).
We will say that the cohomology of the differential complex (L_{sks}(Λ^{n-1}_K(G)/V_K(G), G), d_n) is the cohomology of the n-Lie algebra G. We denote H_n(G) = Ker(d_n)/Im(d_n).
Proposition 9. When G is an n-Lie algebra, then H^0_n(G) = Inv(G).
Proposition 1. If (G, {·, ..., ·}) is an n-Lie algebra, then for any x_1, x_2, ..., x_{n-1} elements of G the map
ad(x_1, x_2, ..., x_{n-1}) : G → G, y ↦ {x_1, x_2, ..., x_{n-1}, y},
is a derivation of (G, {·, ..., ·}).
Proof. For any x_1, x_2, ..., x_{n-1} elements of G and for any y_1, y_2, ..., y_n elements of G, we have
[ad(x_1, x_2, ..., x_{n-1})]({y_1, y_2, ..., y_n}) = {x_1, x_2, ..., x_{n-1}, {y_1, y_2, ..., y_n}}
= \sum_{i=1}^{n} {y_1, y_2, ..., y_{i-1}, {x_1, x_2, ..., x_{n-1}, y_i}, y_{i+1}, ..., y_n}
= \sum_{i=1}^{n} {y_1, y_2, ..., y_{i-1}, [ad(x_1, x_2, ..., x_{n-1})](y_i), y_{i+1}, ..., y_n}.
([ad_G(s_1), ad_G(s_2)])(a) = {x_1, ..., x_{n-1}, {y_1, ..., y_{n-1}, a}} − {y_1, ..., y_{n-1}, {x_1, ..., x_{n-1}, a}}
= −\sum_{i=1}^{n-1} {x_1, ..., x_{i-1}, {y_1, ..., y_{n-1}, x_i}, x_{i+1}, ..., x_{n-1}, a}
= ad_G(−\sum_{i=1}^{n-1} x_1 Λ ... Λ x_{i-1} Λ [ad_G(y_1 Λ ... Λ y_{n-1})](x_i) Λ x_{i+1} Λ ... Λ x_{n-1})(a).
Thus [ad_G(s_1), ad_G(s_2)] = ad_G(−D_{ad_G(s_2)}(s_1)).
Theorem 3. When (G, {·, ..., ·}) is an n-Lie algebra, then the map
[·, ·] : (Λ^{n-1}_K(G)/V_K(G))^2 → Λ^{n-1}_K(G)/V_K(G), (\bar{s}_1, \bar{s}_2) ↦ \overline{D_{ad_G(s_1)}(s_2)},
only depends on \bar{s}_1 and \bar{s}_2, and defines a Lie algebra structure on Λ^{n-1}_K(G)/V_K(G). Moreover the map
\overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s),
is a morphism of K-Lie algebras.
is the following set: Inv(G) = {x ∈ G / {x, y_1, y_2, ..., y_{n-1}} = 0 for any y_1, y_2, ..., y_{n-1} ∈ G}.
Proposition 7. If Inv(G) is the space of invariant elements of G for the representation \overline{ad}_G : Λ^{n-1}_K(G)/V_K(G) → Der_K(G), \bar{s} ↦ ad_G(s), then the image of Λ^{n-1}_K[Inv(G)] by the canonical surjection Λ^{n-1}_K(G) → Λ^{n-1}_K(G)/V_K(G) is contained in the center of the Lie algebra Λ^{n-1}_K(G)/V_K(G).
[\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] = −[\overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}, \overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}] = −\sum_{i=1}^{n-1} \overline{x_1 Λ x_2 Λ ... Λ x_{i-1} Λ {y_1, y_2, ..., y_{n-1}, x_i} Λ x_{i+1} Λ ... Λ x_{n-1}}. As {y_1, y_2, ..., y_{n-1}, x_i} = 0 for i = 1, 2, ..., n−1, then [\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] = 0.
[\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] = −[\overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}, \overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}] = −\sum_{i=1}^{n-1} \overline{x_1 Λ x_2 Λ ... Λ x_{i-1} Λ {y_1, y_2, ..., y_{n-1}, x_i} Λ x_{i+1} Λ ... Λ x_{n-1}}. As {y_1, y_2, ..., y_{n-1}, x_i} ∈ G_0 for i = 1, 2, ..., n−1, the bracket [\overline{x_1 Λ x_2 Λ ... Λ x_{n-1}}, \overline{y_1 Λ y_2 Λ ... Λ y_{n-1}}] belongs to the image of Λ^{n-1}_K(G_0).
[1] N. Bourbaki, Chap. I, II, III, Hermann, Paris, 1970.
[2] V. T. Filippov, On n-Lie algebras of Jacobians, Sib. Mat. J. 39 (1998) 660-669.
[3] V. T. Filippov, n-Lie algebra, Sib. Mat. J. 26 (6) (1985) 126-140.
[4] S. M. Kasymov, On a theory of n-Lie algebra, Algebra Logika 26 (3) (1987) 277-297.
[5] M. P. Williams, Nilpotent n-Lie algebras, Communications in Algebra 37 (6) (2009) 1843-1849.
| []
|
[
"Extended Bose-Hubbard model in a shaken optical lattice",
"Extended Bose-Hubbard model in a shaken optical lattice"
]
| [
"Jiao Miao \nInstitute for Advanced Study\nTsinghua University\n100084BeijingChina\n"
]
| [
"Institute for Advanced Study\nTsinghua University\n100084BeijingChina"
]
| []
| We study an extended Bose-Hubbard model with next-nearest-neighbor (NNN) hopping in a shaken optical lattice. We show how mean-field phase diagram evolves with the change of NNN hopping amplitude t2, which can be easily tuned via shaking amplitude. As t2 increases, a Z2symmetry-breaking superfluid (Z2SF) phase emerges at the bottom of the Mott lobs. The tricritical points between normal superfluid, Z2SF, and Mott insulator (MI) phases are identified. We further demonstrate the tricritical point can be tuned to the tip of the Mott lobe, in which case a new critical behavior has been predicted. Within random-phase approximation, excitation spectra in the three phases are obtained, which indicate how the phase transitions occur. | 10.1103/physreva.92.023632 | [
"https://arxiv.org/pdf/1506.04539v3.pdf"
]
| 117,546,502 | 1506.04539 | 6e1e02e576bbe8ee99f6f5ea1e4668b839562488 |
Extended Bose-Hubbard model in a shaken optical lattice
Jiao Miao
Institute for Advanced Study
Tsinghua University
100084BeijingChina
Extended Bose-Hubbard model in a shaken optical lattice
We study an extended Bose-Hubbard model with next-nearest-neighbor (NNN) hopping in a shaken optical lattice. We show how mean-field phase diagram evolves with the change of NNN hopping amplitude t2, which can be easily tuned via shaking amplitude. As t2 increases, a Z2symmetry-breaking superfluid (Z2SF) phase emerges at the bottom of the Mott lobs. The tricritical points between normal superfluid, Z2SF, and Mott insulator (MI) phases are identified. We further demonstrate the tricritical point can be tuned to the tip of the Mott lobe, in which case a new critical behavior has been predicted. Within random-phase approximation, excitation spectra in the three phases are obtained, which indicate how the phase transitions occur.
I. INTRODUCTION
Ultracold atoms condensed in periodically shaken optical lattices have shown novel properties. Two kinds of lattice shaking techniques have been developed. One is the off-resonant lattice shaking, in which the shaking frequency is tuned to be very large compared to the band gap and width. The hopping parameters and the interparticle interactions can be tuned by lattice shaking, which could result in synthetic gauge fields [1][2][3][4], an effective attractive Fermi-Hubbard model [5], or topologically nontrivial phases [6,7].
The other is the near-resonant lattice shaking, in which the shaking frequency is tuned to be a little larger than the gap between two energy bands. In this case different Bloch bands are hybridized, which dramatically modifies the single-particle dispersion and leads to interesting phenomena. In a shaken one-dimensional optical lattice, a Z 2 -symmetry-breaking superfluid (Z 2 SF) phase has been observed [8], and an effective field theory has been constructed to study the normal superfluid-(NSF-)Z 2 SF-Mott insulator (MI) phase transition [9]. The effective theory predicted a new critical behavior near the tricritical point with particle-hole symmetry in three dimensions [9]. Algebraic orders [10,11] and topologically nontrivial phases [12] are predicted in shaken higher-dimensional optical lattices.
The Bose-Hubbard (BH) model, which consists of nearest-neighbor (NN) hopping and on-site interaction, is used to study a MI-superfluid transition [13]. The model is a good approximation in the tight-binding limit and has been realized in an optical lattice [14]. Considerable efforts have been dedicated to extend the model by adding terms, such as next-nearest-neighbor (NNN) hopping [15,16], nearest-neighbor interaction [17], dipolar interaction [18], interaction-induced hopping term [19], spin structure [20], or disorder [13,21]. The NN and NNN hopping parameters can be renormalized in a different way by off-resonant lattice shaking, hence the ratio between them can be tuned [16].
In this paper, we show an extended Bose-Hubbard (EBH) model with NNN hopping can be easily realized by shaking optical lattices resonantly. Within mean-field theory, we find NSF, Z 2 SF and MI phases. We further show the Z 2 SF phase emerges at the bottom of the Mott lobes for nonvanishing NNN hopping amplitude. In three dimensions, a new critical exponent of the superfluid transition is predicted near the tricritical point with particle-hole symmetry [9]. Nevertheless, the analysis is based on a constructed effective theory, and the existence of the particle-hole-symmetric tricritical point is in doubt.
Here within the microscopic EBH model, we demonstrate the tricritical point always exists and can be tuned to the tip of a Mott lobe. This makes previous work [9] more reliable. In the end, we calculate excitation spectra in the MI and superfluid phases in the random-phase approximation. We find gapless superfluid excitation has a quadratic dispersion near the condensate momentum at the NSF-Z 2 SF transition boundary. We also demonstrate in the Z 2 SF phase that the excitation spectrum has a roton structure in the strong-coupling limit.
II. MEAN-FIELD PHASE DIAGRAM
Let us consider a Chicago-type experiment [8]. Two counterpropagating laser beams are time-periodically modulated. The Hamiltonian [8] reads
H(t) = p_x^2/(2m) + V cos^2(k_r x + θ(t)/2),   (1)
where ħk_r is the photon momentum, θ(t) = f cos(ω_0 t), f and ω_0 are the shaking amplitude and frequency, respectively, and Δ ≡ f/(2k_r) is the maximum displacement of the lattice. By performing a transformation, x → x − Δ cos(ω_0 t), in the comoving frame the Hamiltonian reads
H(t) = p_x^2/(2m) + V cos^2(k_r x) − A_x(t) p_x/m,   (2)
where A_x(t) = mω_0 Δ sin(ω_0 t). An ac electric field E_x = −mω_0^2 Δ cos(ω_0 t) is effectively imposed on bosons condensed in the unshaken lattice. The first two terms, representing the unshaken lattice, give a static band structure ε_λ(k_x) with Bloch states Ψ_{λ,k_x}(x). We choose the Bloch states as a basis in our following analysis.
In the experiment [8], the shaking frequency is tuned to make the s and p bands near-resonant. So we will use two-band and rotating-wave approximations in the following analysis. A quasienergy spectrum obtained by numerically diagonalizing the Floquet operator T exp{−(i/ħ) ∫_0^T H(t) dt} for the lowest 21 bands [8] is shown in Fig. 1(b), where T denotes time ordering and T = 2π/ω_0 is the time period. Fig. 1(b) indicates the approximations are very good.
The Hamiltonian in the tight-binding form reads
H(t) = Σ_{k_x} (Ψ†_{p,k_x}, Ψ†_{s,k_x}) H_{k_x}(t) (Ψ_{p,k_x}, Ψ_{s,k_x})^T,   (3)
where
H_{k_x}(t) = ( ε_p(k_x), 0 ; 0, ε_s(k_x) ) + ( −4h_p sin(k_x d) sin(ω_0 t), −2iΩ_{k_x} sin(ω_0 t) ; 2iΩ_{k_x} sin(ω_0 t), −4h_s sin(k_x d) sin(ω_0 t) ),   (4)
Ω_{k_x} = h_sp + h_sp1 cos(k_x d),   (5)
h_sp = −(ω_0 Δ/2) ⟨w_p(x)| i p_x |w_s(x)⟩,   (6)
h_sp1 = −(ω_0 Δ/2) ⟨w_p(x)| i p_x |w_s(x − d)⟩,   (7)
Ψ†_{λ,k_x} is the creation operator of a boson in the λ band with quasimomentum k_x, d = π/k_r is the lattice constant, w_λ is the Wannier function for the λ band, λ denotes s or p, and ⟨· · ·| · · · |· · ·⟩ denotes a real-space integral ∫ dx · · ·. In the rotating-wave approximation, the effective Hamiltonian reads
H̃_{k_x} = U†(t)[H_{k_x}(t) − i∂_t]U(t) ≈ ( ε_p(k_x), Ω_{k_x} ; Ω_{k_x}, ε_s(k_x) + ħω_0 ),   (8)
where
U(t) = ( 1, 0 ; 0, e^{iω_0 t} ).
Here we neglect the fast-rotating terms. Before lattice shaking, the s band is decoupled from the p band due to inversion symmetry (IS). We notice lattice shaking effectively breaks IS, causing the coupling between the s and p bands. Lattice shaking plays the same role as the electric field applied in the orbital Rashba effect [22].
The quasienergy spectrum calculated by diagonalizing the effective Hamiltonian in Eq. (8) is shown in Fig. 1. Before shaking, the dressed s band has a perfect cos(k_x d)-type dispersion, and therefore NNN hopping can be neglected. Lattice shaking changes the dressed s band into a hybridized band, in which bosons will stay when turning on shaking adiabatically. As the shaking amplitude increases, the hybridized band dispersion deviates from the cos(k_x d) form. So an extra cos(2k_x d) term needs to be considered.
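The small sketch below is not from the paper; it only illustrates, with made-up tight-binding parameters (ts, tp, gap, delta, h_sp, h_sp1 are all placeholder values), how diagonalizing a 2x2 Hamiltonian of the form of Eq. (8) produces an upper hybridized band whose Fourier content acquires a cos(2k_x d) component, i.e. an effective NNN hopping t_2.

```python
import numpy as np

d = 1.0
ts, tp, gap, delta = 0.05, 0.30, 4.0, 0.4      # illustrative band parameters
h_sp, h_sp1 = 0.15, 0.05                       # illustrative couplings, Eq. (5)

k = np.linspace(-np.pi / d, np.pi / d, 512, endpoint=False)
eps_p = gap + 2 * tp * np.cos(k * d)           # bare p band (toy model)
eps_s = -2 * ts * np.cos(k * d) + gap - delta  # dressed s band, shifted close to the p band
omega = h_sp + h_sp1 * np.cos(k * d)

# Upper eigenvalue of the 2x2 rotating-wave Hamiltonian
upper = 0.5 * (eps_p + eps_s) + np.sqrt(0.25 * (eps_p - eps_s) ** 2 + omega ** 2)

# Leading Fourier cosine components: upper(k) ~ c0 - 2 t1 cos(k d) + 2 t2 cos(2 k d)
t1 = -np.mean(upper * np.cos(k * d))
t2 = np.mean(upper * np.cos(2 * k * d))
print("effective t1 =", t1, " effective t2 =", t2)
```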
Assuming the ground state is Ψ_{k_c}(x) with quasimomentum k_c, which breaks Z_2 symmetry spontaneously for nonvanishing k_c [9], the time-averaged interaction energy reads
ε_int(k_c) = (1/T) ∫_0^T dt  g ∫ dx |Ψ_{k_c}(x)|^4,   (9)
where g is the repulsive interaction strength. We project the Hilbert space onto the upper hybridized band. In the tight-binding limit, the next-next-nearest-neighbor hopping strength is smaller than the NNN hopping strength and will be neglected without affecting the following qualitative results. The off-site interaction energy is much smaller than the on-site interaction energy and will also be neglected. So we will study an EBH
Hamiltonian
H_EBH = −t_1 Σ_{<i,j>} a†_i a_j + t_2 Σ_{<<i,j>>} a†_i a_j − µ Σ_i n_i + (U/2) Σ_i n_i(n_i − 1),   (10)
where a i is the boson operator annihilating boson at site i, n i = a † i a i is boson number operator, µ is the chemical potential, U is the on-site interaction, and the summations for the first and second terms are over NN and NNN sites, respectively.
Within standard mean-field theory, the EBH Hamiltonian in Eq. (10) can be rewritten as
H_EBH = H_MF − t_1 Σ_{<i,j>} ã†_i ã_j + t_2 Σ_{<<i,j>>} ã†_i ã_j,   (11)
where
H_MF ≡ Σ_i H^i_MF,   (12)
H^i_MF = 2t̃ψ^2 − 2t̃(ψ_i a†_i + ψ*_i a_i) − µ n_i + (U/2) n_i(n_i − 1),   (13)
ã_i = a_i − ψ_i represents the fluctuation, and t̃ = t_1 cos(k_xc d) − t_2 cos(2k_xc d).
The order parameter ψ_i ≡ ⟨a_i⟩ = e^{i k_xc x_i} ψ is site-dependent, where ⟨· · ·⟩ denotes the expectation value in the mean-field ground state, k_xc is the condensate momentum in the x direction, x_i is the x coordinate of the ith lattice site, and ψ is positive and uniform. The mean-field Hamiltonian H_MF breaks U(1)×Z_2 symmetry when ψ and k_xc are nonvanishing. There are three possible phases: (1) MI phase with ψ = 0, (2) NSF phase with ψ ≠ 0 and k_xc = 0, and (3) Z_2SF phase with ψ ≠ 0 and k_xc ≠ 0. By minimizing the single-particle dispersion ε(k_x) = −2t_1 cos(k_x d) + 2t_2 cos(2k_x d) with respect to k_x, in the superfluid phase (ψ ≠ 0), k_xc has the value of
k_xc = 0 for t_1 ≥ 4|t_2|;  k_xc = (1/d) arccos[t_1/(4t_2)] for |t_1| < 4|t_2|;  k_xc = π/d for t_1 ≤ −4|t_2|.   (14)
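The following snippet is not part of the paper; it is a quick numerical cross-check (assuming numpy, with arbitrary illustrative values of t_1 and t_2) that minimizing ε(k_x) on a grid reproduces the arccos branch of Eq. (14).

```python
import numpy as np

def k_condensate(t1, t2, d=1.0):
    """Brute-force minimiser of eps(k) = -2 t1 cos(k d) + 2 t2 cos(2 k d)."""
    k = np.linspace(-np.pi / d, np.pi / d, 200001)
    eps = -2 * t1 * np.cos(k * d) + 2 * t2 * np.cos(2 * k * d)
    return abs(k[np.argmin(eps)])

t1, t2, d = 0.3, 0.2, 1.0                      # |t1| < 4 |t2|: Z2-broken regime
print(k_condensate(t1, t2, d))                 # numerical minimiser
print(np.arccos(t1 / (4 * t2)) / d)            # analytic value, Eq. (14)
```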
The critical shaking amplitude f_c of the NSF-Z_2SF transition is determined by the condition
t_1 = 4|t_2|.   (15)
Fig. 2 shows that as the shaking amplitude increases, the NN hopping parameter t_1 decreases, while the NNN hopping parameter t_2 increases. Here the on-site interaction energy U is almost a constant for small shaking amplitude f. For a fixed detuning, as lattice depth V decreases, the initial t^0_1 before shaking increases. The initial t^0_2 before shaking is almost vanishing. When the detuning δ is fixed and f is small, the degree of the hybridization and the slope of the line t_{1,2}-f at any f are nearly the same for different V. So an increased f_c is needed for a decreased V to meet the transition condition in Eq. (15). For a fixed V, as δ increases, t^0_{1,2} remains the same, the change of t_{1,2} with respect to f gets slower, and hence f_c increases.
We numerically calculate the order parameter ψ i in Eq. (13) by the standard self-consistent approach and find the MI-superfluid transition is a second-order transition. So we can use the Landau theory [23] of phase transitions. By using perturbation theory near the MI phase boundary for small ψ, we obtain the mean-field ground state energy
E(ψ)/M = −µn + (U/2) n(n − 1) + 2t̃ [1 − 2t̃ χ(µ, n)] ψ^2 + O(ψ^4),   (16)
where M is the number of lattice sites and χ(µ/U, n) = (n + 1)/(Un − µ) + n/(µ − U(n − 1)). So the MI-superfluid transition boundary is given by
1 − 2t̃ χ(µ/U, n) = 0.   (17)
And the boundary condition can be rewritten as
µ_±/U = −1/2 + n − t̃/U ± (1/2) √[1 − 4(1 + 2n) t̃/U + 4(t̃/U)^2].   (18)
The mean-field phase diagram for a fixed t_2 is shown in Fig. 3. As t_2 increases from zero, the Z_2SF phase begins to appear at the bottom of the Mott lobes near integer values of µ/U. Tricritical points lie on the sides of the Mott lobes. For a fixed filling number n, the tip of the Mott lobe lies at chemical potential (µ/U)_c = √(n^2 + n) − 1, which is the same as that in the standard BHM. The Z_2SF region grows and the Mott lobe gets thinner and longer because of the competition between NN and NNN hopping. For a critical NNN hopping amplitude (t_2/U)_c = 1/[6Uχ((µ/U)_c, n)], the tricritical point coincides with the tip of the Mott lobe. When t_2 continues to increase, the Mott lobe gets first longer and then shorter and finally vanishes. Here the microscopic theory supports the existence of the tricritical point in Ref. [9]. The Mott lobe is the longest for (t_2/U)_l = 1/[4Uχ((µ/U)_c, n)], which is larger than (t_2/U)_c. The critical behavior near a particle-hole-symmetric tricritical point is usually different from the mean-field results and attracts a lot of interest. In three dimensions, an O(2) rotor universality class [11] and a new universality class [9] have been predicted. For a given n, one can tune the parameters to [(µ/U)_c, (t_1/U)_c, (t_2/U)_c], where (t_1/U)_c = 2/[3Uχ((µ/U)_c, n)], to make the tricritical point meet the tip of the Mott lobe.
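The snippet below is not from the paper; it is a minimal sketch (assuming numpy, with n = 1) that evaluates the two branches of Eq. (18) and checks that the lobe tip sits at (µ/U)_c = √(n^2 + n) − 1.

```python
import numpy as np

def lobe_boundary(t_over_U, n):
    """Upper/lower Mott-lobe boundaries mu_+/U and mu_-/U from Eq. (18)."""
    disc = 1.0 - 4.0 * (1 + 2 * n) * t_over_U + 4.0 * t_over_U ** 2
    root = 0.5 * np.sqrt(np.clip(disc, 0.0, None))
    base = -0.5 + n - t_over_U
    return base + root, base - root

n = 1
t = np.linspace(1e-4, 0.12, 2000)                  # effective hopping t~/U
mu_plus, mu_minus = lobe_boundary(t, n)
tip = np.argmin(np.abs(mu_plus - mu_minus))        # lobe tip: the two branches meet
print("tip: t~/U =", t[tip], ", mu/U =", mu_plus[tip])
print("analytic tip mu/U =", np.sqrt(n ** 2 + n) - 1)
```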
The Mott lobes have varying shapes and fixed chemical potentials for their tips for different NNN hopping at the mean-field level. A beyond-mean-field theory using a U(1) quantum rotor approach has predicted the same result for bosons with NNN hopping in a two-dimensional square lattice [15]. The approach only describes the U(1) symmetry-breaking NSF-MI phase transition. Figure 5(a) shows the relationship between the critical shaking amplitude f_c and lattice depth V (detuning δ) as discussed before. We know critical (t_1/U)_c is a constant in the parameter regime. From previous analysis we also know t_1 at the NSF-Z_2SF transition boundary increases as V decreases and does not change much for small δ. And U is proportional to the interaction strength g and changes little with V and δ. So the critical interaction strength g_c in the parameter regime increases as V decreases and changes little with δ under the near-resonant condition, as shown in Fig. 5(b). When a shaken one-dimensional lattice system is tuned to this tricritical point, one can measure the critical exponent via the in situ technique [24] and a new universality class is expected [9].
III. COLLECTIVE EXCITATIONS
In this section, we study collective excitations at zero temperature. Following the standard-basis operator approach [25,26], we choose eigenstates {|iα⟩} of the single-site mean-field Hamiltonian H^i_MF in Eq. (13) as a basis, and the EBH Hamiltonian H_EBH in Eq. (11) can be rewritten as
H_EBH = Σ_{i,α} E_α L^i_{αα} + (−t_1 Σ_{<i,j>} + t_2 Σ_{<<i,j>>}) Σ_{αα′ββ′} T^{ij}_{αα′ββ′} L^i_{αα′} L^j_{ββ′},   (19)
where E_α is the mean-field energy per site, L^i_{αα′} ≡ |iα⟩⟨iα′|, and T^{ij}_{αα′ββ′} ≡ ⟨iα|ã†_i|iα′⟩⟨jβ|ã_j|jβ′⟩. The single-particle retarded Green's function is defined as
g_{i,j}(t − t′) = −iΘ(t − t′) ⟨[a_i(t), a†_j(t′)]⟩,   (20)
where Θ(t) is the step function. In the standard basis, the Green's function reads
g_{i,j}(t − t′) = Σ_{αα′ββ′} T^{ji}_{ββ′αα′} G^{ij}_{αα′ββ′}(t − t′),   (21)
where
G^{ij}_{αα′ββ′}(t − t′) = −iΘ(t − t′) ⟨[L^i_{αα′}(t), L^j_{ββ′}(t′)]⟩.   (22)
By introducing the random-phase approximation, one obtains the equations of motion for G in the frequency and momentum space,
δ_{αβ} δ_{α′β′} D_{αα′} = (ω − E_{α′} + E_α) G_{αα′ββ′}(k_x, ω) − D_{αα′} Σ_{γγ′} [ε(k_x + k_xc) T_{α′αγγ′} + ε(k_x − k_xc) T_{γγ′α′α}] G_{γγ′ββ′}(k_x, ω),   (23)
where D_{αα′} ≡ ⟨L_{αα}⟩ − ⟨L_{α′α′}⟩, k_xc is given in Eq. (14) for the superfluid phase and is zero for the MI phase, T_{α′αγγ′} ≡ y†_{α′α} y_{γγ′}, y†_{α′α} ≡ ⟨iα′|e^{i k_xc x_i} a†_i|iα⟩, and y_{γγ′} ≡ ⟨iγ|e^{−i k_xc x_i} a_i|iγ′⟩. T_{α′αγγ′} is site independent. Equations (23) are linear equations for Σ_{αα′} y_{αα′} G_{αα′ββ′} and Σ_{αα′} y†_{αα′} G_{αα′ββ′}. Substituting the solution into the Green's function g(k_x, ω), one obtains
g(k_x, ω) = Π(k_x − 2k_xc, ω) / [1 − ε(k_x) Π(k_x − 2k_xc, ω)],   (24)
where
Π(k_x, ω) = A_{11}(ω) + ε(k_x) A_{12}(ω) A_{21}(ω) / [1 − ε(k_x) A_{22}(ω)],   (25)
A_{11}(ω) = Σ_α [ y_{0α} y†_{α0}/(ω^+ − ΔE_α) − y_{α0} y†_{0α}/(ω^+ + ΔE_α) ],   (26)
A_{12}(ω) = A†_{21}(ω) = Σ_α [ y_{0α} y_{α0}/(ω^+ − ΔE_α) − y_{α0} y_{0α}/(ω^+ + ΔE_α) ],   (27)
A_{22}(ω) = Σ_α [ y†_{0α} y_{α0}/(ω^+ − ΔE_α) − y†_{α0} y_{0α}/(ω^+ + ΔE_α) ],   (28)
|iα = 0⟩ denotes the mean-field single-site ground state, ω^+ = ω + i0^+, and ΔE_α = E_α − E_0. Here the Green's function is a generalization of that in the standard BHM [26], in which bosons condense at zero momentum (k_xc = 0). In the MI phase, the basis is just the Fock state |iα⟩ = |n⟩. For a commensurate filling n, there is a nonvanishing y_{n′n} = √n δ_{n′,n−1}. The Green's function reads
g(k_x, ω) = Z/(ω^+ − E_p) + (1 − Z)/(ω^+ − E_h),   (29)
where
E_{p,h} = (1/2) { U(2n − 1) − 2µ + ε(k_x) ± √[U^2 + 2(2n + 1) U ε(k_x) + ε(k_x)^2] },   (30)
Z = [µ + U + E_p(k_x)] / [E_p(k_x) − E_h(k_x)].   (31)
E_{p,h} represents the particle (hole) excitation. The condition of existence of a gapless excitation at k_x = k_xc exactly gives the MI phase boundary in Eq. (18). In the superfluid phase, we will numerically calculate the Green's function in Eq. (24) and the spectral function A(k_x, ω) = −(1/π) Im g(k_x, ω), whose excitation modes give the excitation spectra. Fig. 6 shows excitation spectra near different MI-superfluid phase boundaries. There are two gapless spectra in the superfluid phase with positive and negative energy corresponding to quasiparticle and quasihole excitation, respectively. In the superfluid there are also gapped excitation modes as a consequence of the band structure of the lattice system. In the Z_2SF phase, a roton excitation spectrum has been observed in the weakly-interacting regime [27]. Here we show the roton excitation spectrum in the strong-coupling limit in Fig. 6(a). Figures 6(a) and 6(c) show linear dispersion around the condensate momentum in the Z_2SF and NSF phase, respectively. Figures 6(b) and 6(d) show quadratic dispersions around k_x = 0 at the NSF-Z_2SF transition boundary without and with particle-hole symmetry, respectively. In the superfluid phase, the quadratic dispersion indicates stronger phase fluctuations and weaker superfluidity than linear dispersion [11]. At the lower MI-superfluid transition boundary, the gapless particle dispersion vanishes, the hole dispersion becomes quadratic around k_x = k_xc, and a Mott gap is opened, which means disappearance of superfluidity. At the lobe tip, the Mott gap vanishes. In the Mott phase, the dispersions of both particle and hole excitations are gapped.
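The check below is not part of the paper; it is a small sketch (assuming numpy, with arbitrary t_1, t_2 and n = 1) verifying that when µ is placed on the upper boundary µ_+ of Eq. (18), the particle branch E_p of Eq. (30) indeed becomes gapless at k_x = k_xc.

```python
import numpy as np

U, n, d = 1.0, 1, 1.0
t1, t2 = 0.02, 0.01                           # |t1| < 4|t2|: condensation at finite k_xc
k_xc = np.arccos(t1 / (4 * t2)) / d           # Eq. (14)
t_eff = t1 * np.cos(k_xc * d) - t2 * np.cos(2 * k_xc * d)

x = t_eff / U                                 # upper boundary mu_+ from Eq. (18)
mu = U * (-0.5 + n - x + 0.5 * np.sqrt(1 - 4 * (1 + 2 * n) * x + 4 * x ** 2))

eps = -2 * t1 * np.cos(k_xc * d) + 2 * t2 * np.cos(2 * k_xc * d)   # dispersion at k_xc
E_p = 0.5 * (U * (2 * n - 1) - 2 * mu + eps) + 0.5 * np.sqrt(
    U ** 2 + 2 * (2 * n + 1) * U * eps + eps ** 2)                 # Eq. (30), + branch
print(E_p)                                     # ~0: the Mott gap closes on the boundary
```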
IV. CONCLUSION
In conclusion, we have shown a significant NNN hopping effect in near-resonantly shaken optical lattices. We studied the mean-field phase diagram for a onedimensional EBH model and found tricritical points between three phases. Furthermore, we calculated corresponding microscopic parameters to the EBH model parameters and provided strong support for the existence of the tricritical point with particle-hole symmetry. A new critical behavior [9] is expected to be verified by the in situ technique [24] in the parameters regimes. We also calculated the excitation spectra in all three phases and showed how the spectrum evolves during the phase transitions. In the Z 2 SF phase, the excitation spectrum has the roton structure. At the NSF-Z 2 SF transition boundary, the quasiparticle (quasihole) excitation has a quadratic dispersion relation around k x = 0.
FIG. 1: Band structure with lattice depth V = 7E_r and detuning δ = 0.44E_r, where E_r = (ħk_r)^2/(2m) denotes the photon recoil energy. (a) Band structure before shaking. The magenta and blue lines denote s and p bands, respectively. The red line denotes the dressed s band with a detuning δ. (b) Quasienergy dispersion of the upper hybridized band. The red and green lines are calculated using 2 and 21 bands, respectively. The upper and lower lines with the same color denote the dispersion for shaking amplitude f = 0.2 and f = 0, respectively.
FIG. 2: Lattice-shaking-induced NN and NNN hopping. Parameters (V/E_r, δ/E_r) for dotted, solid, and dashed lines are (8, 0.25), (7, 0.25), and (7, 0.45), respectively. Interaction energy is gn = 0.1E_r, where n denotes particle density. The red and blue lines denote t_1/U and t_2/U, respectively, where U denotes the on-site interaction energy.
FIG. 3: Phase diagram for different t_2. t_2 increases from (a) to (d). The green, red, and blue regions denote the regions of MI, NSF, and Z_2SF phases, respectively.
FIG. 4: Mean-field phase diagram with lattice depth V = 7E_r and shaking frequency ω_0 = 5.4E_r/ħ.
Fig. 4 shows the phase diagram expressed in f and µ terms. When the shaking amplitude f increases, t_1 decreases and t_2 increases. So an initial NSF phase can turn into a Z_2SF phase, and the tricritical point can be tuned onto the tip of a Mott lobe. The parameter regime, where the tricritical point at the tip of the n = 1 Mott lobe lies, is shown in Fig. 5.
FIG. 5: Parameter regime for a tricritical point at the tip of the n = 1 Mott lobe.
FIG. 6: Excitation spectra with parameters marked in Fig. 3(a) and (b). Red, blue (dashed), and black lines denote excitation spectra in the superfluid phase, at the MI-superfluid transition boundary, and in the MI phase, respectively.
V. ACKNOWLEDGEMENTS
We thank H. Zhai, W. Zheng, C. Chin, and Y. Ohashi for valuable discussions and suggestions.
[1] L.-K. Lim, C. M. Smith, and A. Hemmerich, Phys. Rev. Lett. 100, 130402 (2008).
[2] A. Eckardt, P. Hauke, P. Soltan-Panahi, C. Becker, K. Sengstock, and M. Lewenstein, Europhys. Lett. 89, 10010 (2010).
[3] J. Struck, C. Ölschläger, R. Le Targat, P. Soltan-Panahi, A. Eckardt, M. Lewenstein, P. Windpassinger, and K. Sengstock, Science 333, 996 (2011).
[4] J. Struck, C. Ölschläger, M. Weinberg, P. Hauke, J. Simonet, A. Eckardt, M. Lewenstein, K. Sengstock, and P. Windpassinger, Phys. Rev. Lett. 108, 225304 (2012).
[5] N. Tsuji, T. Oka, P. Werner, and H. Aoki, Phys. Rev. Lett. 106, 236401 (2011).
[6] W. Zheng and H. Zhai, Phys. Rev. A 89, 061603 (2014).
[7] G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature (London) 515, 237 (2014).
[8] C. V. Parker, L. C. Ha, and C. Chin, Nature Phys. 9, 769 (2013).
[9] W. Zheng, B.-Y. Liu, J. Miao, C. Chin, and H. Zhai, Phys. Rev. Lett. 113, 155303 (2014).
[10] H.-C. Po and Q. Zhou, arXiv:1408.6421 (2014).
[11] J. Miao, B. Liu, and W. Zheng, Phys. Rev. A 91, 033404 (2015).
[12] S.-L. Zhang and Q. Zhou, Phys. Rev. A 90, 051601(R) (2014).
[13] M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989).
[14] M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Nature 415, 39 (2002).
[15] T. A. Zaleski and T. K. Kopéc, J. Phys. B: At. Mol. Opt. Phys. 43, 085303 (2010).
[16] M. Di Liberto, O. Tieleman, V. Branchina, and C. M. Smith, Phys. Rev. A 84, 013607 (2011).
[17] G. Mazzarella, S. M. Giampaolo, and F. Illuminati, Phys. Rev. A 73, 013625 (2006).
[18] K. Góral, L. Santos, and M. Lewenstein, Phys. Rev. Lett. 88, 170406 (2002).
[19] T. Sowiński, O. Dutta, P. Hauke, L. Tagliacozzo, and M. Lewenstein, Phys. Rev. Lett. 108, 115301 (2012).
[20] S. Tsuchiya, S. Kurihara, and T. Kimura, Phys. Rev. A 70, 043628 (2004).
[21] T. Giamarchi and H. J. Schulz, Phys. Rev. B 37, 325 (1988).
[22] J.-H. Park, C. H. Kim, J.-W. Rhim, and J. H. Han, Phys. Rev. B 85, 195401 (2012).
[23] L. D. Landau and E. M. Lifshitz, Statistical Physics (Butterworth-Heinemann, Oxford, 1980).
[24] X. Zhang, C.-L. Huang, S.-K. Tung, and C. Chin, Science 335, 1070 (2012).
[25] K. Sheshadri, H. Krishnamurthy, R. Pandit, and T. Ramakrishnan, Europhys. Lett. 22, 257 (1993).
[26] Y. Ohashi, M. Kitaura, and H. Matsumoto, Phys. Rev. A 73, 033617 (2006).
[27] L.-C. Ha, L. W. Clark, C. V. Parker, B. M. Anderson, and C. Chin, Phys. Rev. Lett. 114, 055301 (2015).
| []
|
[
"Physics-aware deep neural networks for surrogate modeling of turbulent natural convection",
"Physics-aware deep neural networks for surrogate modeling of turbulent natural convection"
]
| [
"Didier Lucor ",
"Atul Agrawal \nDepartment of Mechanical Engineering\nTechnical University of Munich\nGarching\n\nMünchenGermany (\n",
"Anne Sergent \nFaculté des Sciences et Ingénierie\nSorbonne Université\nUFR Ingénierie\nParisFrance\n",
"\nLaboratoire Interdisciplinaire des Sciences du Numérique (LISN)\nUniversité Paris-Saclay\nCNRS\nOrsayFrance (\n"
]
| [
"Department of Mechanical Engineering\nTechnical University of Munich\nGarching",
"MünchenGermany (",
"Faculté des Sciences et Ingénierie\nSorbonne Université\nUFR Ingénierie\nParisFrance",
"Laboratoire Interdisciplinaire des Sciences du Numérique (LISN)\nUniversité Paris-Saclay\nCNRS\nOrsayFrance ("
]
| []
| Recent works have explored the potential of machine learning as data-driven turbulence closures for RANS and LES techniques. Beyond these advances, the high expressivity and agility of physics-informed neural networks (PINNs) make them promising candidates for full fluid flow PDE modeling. An important question is whether this new paradigm, exempt from the traditional notion of discretization of the underlying operators very much connected to the flow scales resolution, is capable of sustaining high levels of turbulence characterized by multi-scale features? We investigate the use of PINNs surrogate modeling for turbulent Rayleigh-Bénard (RB) convection flows in rough and smooth rectangular cavities, mainly relying on DNS temperature data from the fluid bulk. We carefully quantify the computational requirements under which the formulation is capable of accurately recovering the flow hidden quantities. We then propose a new padding technique to distribute some of the scattered coordinates -at which PDE residuals are minimized -around the region of labeled data acquisition. We show how it comes to play as a regularization close to the training boundaries which are zones of poor accuracy for standard PINNs and results in a noticeable global accuracy improvement at iso-budget. Finally, we propose for the first time to relax the incompressibility condition in such a way that it drastically benefits the optimization search and results in a much improved convergence of the composite loss function. The RB results obtained at high Rayleigh number Ra = 2 · 10 9 are particularly impressive: the predictive accuracy of the surrogate over the entire half a billion DNS coordinates yields errors for all flow variables ranging between [0.3% − 4%] in the relative L 2 norm, with a training relying only on 1.6% of the DNS data points. | null | [
"https://arxiv.org/pdf/2103.03565v1.pdf"
]
| 232,135,224 | 2103.03565 | d25917ba570bcb96bc7911eda33e3ffe11d38880 |
Physics-aware deep neural networks for surrogate modeling of turbulent natural convection
March 8, 2021 5 Mar 2021
Didier Lucor
Atul Agrawal
Department of Mechanical Engineering
Technical University of Munich
Garching
MünchenGermany (
Anne Sergent
Faculté des Sciences et Ingénierie
Sorbonne Université
UFR Ingénierie
ParisFrance
Laboratoire Interdisciplinaire des Sciences du Numérique (LISN)
Université Paris-Saclay
CNRS
OrsayFrance (
Physics-aware deep neural networks for surrogate modeling of turbulent natural convection
March 8, 2021 5 Mar 2021deep learningmachine learningPINNsDNSturbulenceconvection 1
Recent works have explored the potential of machine learning as data-driven turbulence closures for RANS and LES techniques. Beyond these advances, the high expressivity and agility of physics-informed neural networks (PINNs) make them promising candidates for full fluid flow PDE modeling. An important question is whether this new paradigm, exempt from the traditional notion of discretization of the underlying operators very much connected to the flow scales resolution, is capable of sustaining high levels of turbulence characterized by multi-scale features? We investigate the use of PINNs surrogate modeling for turbulent Rayleigh-Bénard (RB) convection flows in rough and smooth rectangular cavities, mainly relying on DNS temperature data from the fluid bulk. We carefully quantify the computational requirements under which the formulation is capable of accurately recovering the flow hidden quantities. We then propose a new padding technique to distribute some of the scattered coordinates -at which PDE residuals are minimized -around the region of labeled data acquisition. We show how it comes to play as a regularization close to the training boundaries which are zones of poor accuracy for standard PINNs and results in a noticeable global accuracy improvement at iso-budget. Finally, we propose for the first time to relax the incompressibility condition in such a way that it drastically benefits the optimization search and results in a much improved convergence of the composite loss function. The RB results obtained at high Rayleigh number Ra = 2 · 10 9 are particularly impressive: the predictive accuracy of the surrogate over the entire half a billion DNS coordinates yields errors for all flow variables ranging between [0.3% − 4%] in the relative L 2 norm, with a training relying only on 1.6% of the DNS data points.
Introduction
Deep learning (DL) is investigated among data-driven methods as a surrogate for physics-driven computational fluid dynamics (CFD) methods solving expensive nonlinear coupled PDEs, such as the ones describing turbulent numerical simulations or experiments. It seems to be somehow capable of producing realistic instantaneous flow fields with reasonable physically accurate spatio-temporal coherence, without solving the actual partial differential equations (PDEs) governing the system [1,2]. DL is also promising because of its proficiency in extracting low-dimensional information from large amounts of high-dimensional turbulent data. This new paradigm is interesting for applications involving flow optimization and control, uncertainty quantification, gappy data reconstruction or multi-scale flow analysis, for which the turbulent prediction may be queried in real-time or many times. In practice, DL models are trained with streams of data as "black boxes" [3]. In their standard form, they lack knowledge of the underlying physics and, even when they achieve low prediction errors, their efficiency remains hard to interpret. In fact, they do not necessarily satisfy the physics of the systems they model. It is therefore crucial to inject some known physics and principles into their framework, not only to get more physically meaningful results but also in order to better guide the learning process [4]. Besides, the physical invariants may help in recovering hidden system quantities for which no data were available, which is very common in experiments. While machine learning methods make sense as data-driven closure models for Reynolds-Averaged Navier-Stokes (RANS) [5] and Large Eddy Simulations (LES) techniques [6], they may also be used for full PDE modeling [7]. A question remains in terms of their potential as an actual replacement of costly traditional PDE solution methods at all scales, such as direct numerical simulations (DNS). In this paper, we will test the efficiency of physics-aware DL for metamodeling turbulent natural convection. Turbulent natural convection is a spontaneous physical process present in many natural systems (oceans, atmospheres or mantles) as well as in engineering applications, such as passive cooling of braking systems, nuclear power plants or electronic devices or natural ventilation of buildings. A canonical system of such turbulent heat transport mechanisms is the Rayleigh-Bénard (RB) cell, where the temperature and velocity fields interact through the buoyancy force [8]. Efficient modelling of this phenomenon is the first step towards heat transfer control for more sustainable energy systems, but it requires tracking the heat carriers, namely the small-scale plumes. This remains a challenge due to the double kinetic and thermal nature of plumes, and the nonlinear interactions between various spatial and time scales from the large scale circulation to the small vortices and plumes [9]. The continued increase of supercomputing power in recent years has enabled the DNS of highly turbulent flows, resolving the entire array of scales at very high Rayleigh number. But it involves such a computational effort in terms of degree of parallelism, CPU resources and storage capacity that it will eventually entail a restriction on the spatial DNS resolution/storage and will therefore hamper its analysis. A few recent studies have pioneered the use of deep learning in the framework of turbulent heat transfer with various aims. For instance, Kim et al.
[10] used DL, in the form of a convolutional neural network (CNN), to predict the turbulent heat transfer -reconstructing the wall-normal heat flux at the wall -on the basis of other wall information (such as wall shear stress) obtained by DNS of channel flow with a passive temperature field. Fonda et al. [11] have tracked turbulent superstructures in RB convection in horizontally extended systems at Ra = 10^5, 10^6 and 10^7 with a U-shaped network (encoder-decoder CNN), allowing the dimensionality of the structures to be reduced to a slowly-evolving temporal 2D planar network of ridges. The idea here is to propose an automated tool exploiting a large DNS simulation in order to explore heat transfer properties more easily. Pandey et al. were more interested in turbulent statistical prediction of 2D large scale structures. They relied on reservoir computing modeling, which may be seen as a hybridization between a proper orthogonal decomposition (POD) of DNS data and a recurrent neural network (RNN), to tackle RB cavity flow at Ra = 10^7 [12]. At the foundation of all of these works is the use of a large DNS database from which partial information, in the form of wall data or time-windowed averaging, or more global information, in the form of POD, is extracted. We propose to leverage deep neural networks to replace the DNS three-dimensional solver by a much more agile data-driven and physics-aware surrogate. Most importantly, this surrogate will be trained with partial DNS data, but will incorporate known physics (such as symmetries, constraints and conservation laws in the training process), allowing the inference of hidden fluid quantities of interest [13,14]. Raissi et al. [7] first introduced the concept of physics-informed neural networks (PINNs) to solve forward and inverse problems involving several different types of PDEs. This approach may be apprehended as a combination of data-driven supervised and physically-constrained unsupervised learning. For instance, they propose a way of approximating Navier-Stokes (NS) solutions that does not require mesh generation. Most of the flows considered in the aforementioned works are laminar at relatively low Reynolds numbers. A fundamental question related to whether PINNs could simulate three-dimensional turbulence directly, similarly to high-order DNS, was answered in [14]. In this study, the authors test different NS formulations for simulating turbulence. For their velocity-pressure formulation, which they found most effective, they provide their PINNs with velocity DNS data collected from the initial condition, boundaries and inside of a subdomain of their turbulent channel flow system. Closer to our application, they propose in [13] a similar approach for inferring pressure and fluid velocity from monitoring of passively advected scalar data with applications to laminar flows. In this paper, we wish to investigate the relevance of the approach to three-dimensional turbulent heat transfer in the form of natural convection, modeled as Navier-Stokes equations under the Boussinesq approximation, for which the temperature acts as an active quantity with a feedback on the momentum equations. We will put particular emphasis on the efficiency of the surrogate modeling with respect to data and residual sampling strategy.
The paper is organized as follows. We first recall the basics of the standard PINNs formulation in section 2. We then introduce the turbulent RB cavity class of problems that we consider. We present various PINNs results and propose some methodological improvements in section 3. Finally, we discuss our findings, introduce and test a new idea on a more turbulent setup and end up with some perspectives in section 4.
2 Physics-informed DNN as a viable data-driven method to solve nonlinear PDEs
Deep learning tools have recently seemed to start providing a different approach to computational mechanics. In particular, deep neural networks (DNN) are now considered as an alternative way of approximating the solution of various deterministic PDEs types. Since some earlier studies [15,16,17,18], and thanks to significant computational advances in au-tomatic differentiation, DNN used as surrogates of PDEs solutions have generated a broad interest from the community [19,20,21,22]. Nevertheless, in the small data regime, their efficiency remains often limited and their prediction lacks robustness and interpretability, motivating the idea of "adding" any form of prior knowledge to the numerical surrogate, in order to provide some kind of "training guidance". One approach is to design a specialized network architecture embedding the prior knowledge relevant to the task at hand. This is for instance the case of the convolutional neural networks (CNN) which have revolutionized the field of computer vision thanks to their translation invariant characteristics with impressive applications in image classification, medical image analysis, natural language processing, etc. Another approach relies on a softer enforcing of this knowledge. Raissi et al. [7] have proposed to rely on physics-inspired neural networks (PINNs) for approximating solutions to general nonlinear PDEs and validated it with a series of benchmark test cases. The main feature is the inclusion of some form of prior knowledge about the physics of the problem in the learning algorithm, in addition to the data used to train the network. This is done through an enlarged/enhanced loss/cost function. This way, the outputs of the neural network are constrained to approximately satisfy a system of PDEs by using a regularization functional L PDE that typically corresponds to the residual (or the variational energy) of the set of PDEs under the neural network representation. The algorithm imposes a penalty for non-physical solutions and hopefully redirects it quicker towards the correct solution. As a result, the algorithm has good generalization property even in the small data set regime. This approach recently drew a lot of attention and is the subject of several numerical investigations including recent development of dedicated computational packages [23]. Nevertheless, the plain version of PINNs numerically suffers from several drawbacks. Indeed they are for instance notoriously hard to train for multi-scale and/or high-frequency problems. In fact, a first difficulty resides in the discrepancy of convergence rate between the different terms of the loss function depending on the change in the learning rate. This comes from an imbalanced magnitude of the back-propagated gradients during model training. It is therefore possible in practice to assign some weight coefficients within the loss function than can effectively assign a different learning rate to each individual loss term. These weights may be user-specified or tuned automatically during network training [24]. Moreover, the required depth of the network increases with increasing order of the set of PDEs, leading to slow learning-rate due to the issue of vanishing gradients. It was also noticed that PINNs are not always robust in representing sharp local gradients [25]. Another source of discredit of PINNs as described is its dependence to the data. Other teams have developed physics-constrained, data-free DNN for surrogate modeling of incompressible flows. 
The idea there is to enforce the initial/boundary conditions exactly, instead of penalizing them during training, which is then solely driven by minimizing the residuals of the governing PDEs. Some two-dimensional vascular laminar flows with idealized geometries are tested with this approach in [26].
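To make the "hard" enforcement idea concrete, a common construction (a minimal sketch under our own assumptions, not necessarily the architecture used in [26]) writes the surrogate as a boundary-conforming extension plus a raw network output multiplied by a function that vanishes on the boundary, so that boundary/initial conditions are satisfied by construction and only PDE residuals need to be minimized:

```python
import numpy as np

def hard_bc_ansatz(raw_net, g, d, x, t):
    """Surrogate u(x,t) = g(x,t) + d(x,t) * N(x,t): the prescribed values g are
    recovered exactly wherever the distance-like function d vanishes, so no
    boundary/initial-condition term is needed in the training loss."""
    return g(x, t) + d(x, t) * raw_net(x, t)

# Toy 1D example on [0,1]x[0,T]: u(0,t) = u(1,t) = 0 and u(x,0) = sin(pi*x).
raw_net = lambda x, t: 0.1 * np.sin(3.0 * x + t)       # stand-in for a trainable DNN
g       = lambda x, t: np.sin(np.pi * x) * np.exp(-t)  # smooth extension of the imposed data
d       = lambda x, t: x * (1.0 - x) * t               # vanishes at x = 0, x = 1 and t = 0

x = np.linspace(0.0, 1.0, 5)
print(hard_bc_ansatz(raw_net, g, d, x, 0.0 * x))  # equals sin(pi*x): initial condition met exactly
print(hard_bc_ansatz(raw_net, g, d, 0.0 * x, x))  # equals 0 at x = 0: boundary condition met exactly
```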
Notations and formulation
Here we introduce our notations and briefly describe the general DNN framework and how it is coupled with a second network to form the PINN approach. The goal is to approximate the exact solution of a model denoted M, i.e. a set of unsteady PDEs, written in a generic form inside and at the boundary of a physical domain Ω evolving over a time interval D = [0, T_f] as:
$$
\begin{aligned}
u_t(x, t) + \mathcal{N}\big(u(x, t); \lambda\big) &= R(x, t), && (x, t) \in \Omega \times D,\ \Omega \subset \mathbb{R}^{d},\\
\mathcal{B}\big(u(x, t)\big) &= B(x, t), && (x, t) \in \partial\Omega \times D,\ \partial\Omega \subset \mathbb{R}^{d-1},
\end{aligned}
\tag{1}
$$
whose exact solution, representing the system unknown variables, is defined as:

$$
u = f(x, t), \quad \text{satisfying } \mathcal{M}(u) = 0,
\tag{2}
$$
and which is approximated through the response of a neural network:

$$
u \approx u_{\mathrm{DNN}} = f_{\theta}(x, t), \quad \text{with } \mathcal{M}\big(f_{\theta}(x, t)\big) = r(x, t),
\tag{3}
$$
with r representing the residual fields of the set of equations. In Eq. (1), N is a general spatial differential operator (which may include a parameter λ) in the domain Ω, while B is the boundary operator on ∂Ω; R and B are potential source fields. More specifically, in our case, the model describing our physical system of interest encompasses the three-dimensional incompressible Navier-Stokes equations under the Boussinesq approximation, which may be written in non-dimensional form as:
$$
\begin{aligned}
v_t + (v \cdot \nabla)\, v &= -\nabla p + \frac{Pr}{Ra^{1/2}}\, \Delta v + Pr\, T\, e_z,\\
T_t + v \cdot \nabla T &= \frac{1}{Ra^{1/2}}\, \Delta T,\\
\nabla \cdot v &= 0,
\end{aligned}
\tag{4}
$$

with $u \equiv (v, p, T)$, where $v \equiv (v_x, v_y, v_z)^{t}$,
p and T are the dimensionless fluid velocity, pressure and temperature, respectively. Following the idea proposed in [13], an auxiliary variable $\bar{T} = 1 - T$ is added, which satisfies a transport equation similar to the one of the temperature field. Later on, this complementary equation will be useful for the training of the networks: it acts as an additional constraint helping the algorithm converge better, as shown in [13].

The DNN therefore has to learn the nonlinear continuous mapping relating the inputs and outputs of the system. The main portion of this network is a multi-layer perceptron (MLP) made of interconnected neurons assembled in $\ell \in \mathbb{N}$ hidden layers. The network dimensions are as follows: $n_0 = n_x + 1 \in \mathbb{N}$ the input dimension, $n_{\ell+1} = n_u \in \mathbb{N}$ the output dimension, and $n_l$ the dimension of each hidden layer. The network architecture sequence $\mathcal{A}$ may be summarized as $\mathcal{A} = (n_x + 1, n_1, \ldots, n_l, \ldots, n_\ell, n_u)$. Looking at the computational mechanisms of the DNN in more detail, we define the following affine linear maps between adjacent layers:

$$
g^{l}_{\theta_l} : \mathbb{R}^{n_{l-1}} \to \mathbb{R}^{n_l} : a^{l-1} \mapsto W^{l} a^{l-1} + b^{l}, \qquad l = 1, \ldots, \ell + 1,
\tag{5}
$$
where $a^{l-1} \in \mathbb{R}^{n_{l-1}}$ is an array containing all the values taken by the neurons belonging to the $(l-1)$-th layer. The quantities $\theta_{(\cdot)} \equiv \{W^{(\cdot)} \in \mathbb{R}^{n_{l-1} \times n_l},\ b^{(\cdot)} \in \mathbb{R}^{n_l}\}$ represent the parameters containing the weights and biases to be calibrated. Summarizing the telescoping approximation form of the DNN output, we may write it as follows:
$$
u \approx u_{\mathrm{DNN}} = f_{\theta}(x, t) = \big(g^{\ell+1}_{\theta_{\ell+1}} \circ \rho \circ g^{\ell}_{\theta_{\ell}} \circ \rho \circ \cdots \circ \rho \circ g^{1}_{\theta_1}\big)(x, t),
\tag{6}
$$
where ρ : R → R is a nonlinear activation function, kept the same across the entire network in this study. Classically, for a chosen architecture, the neural network may be trained with a large, but potentially noisy and scattered, training set of data {((x, t)^(train), u^(train))} by optimizing its parameters θ. In order to minimize the error associated with the prediction of the DNN, an objective function is required by the optimization. It is referred to as the loss (or cost) function and maps the set of parameter values of the network onto a scalar value. For regression problems, mean-squared error (MSE) loss functions, also named L²-based loss functions, are usually preferred:
$$
\mathcal{L}_{\mathrm{Label}}\big(\theta, \{(x, t)^{(i)}\}_{i \in \mathcal{I}}\big) = \frac{1}{n_L} \sum_{i=1}^{n_L} \big\| u^{(i)}_{\mathrm{DNN}} - u^{(i)} \big\|^2,
\tag{7}
$$
where $\mathcal{I} = \{1, \ldots, n_L\}$ is a defined index set, with $n_L$ the size of the data sample. Finding the optimal value of θ under this norm is equivalent to maximizing the conditional log-likelihood $\sum_{i=1}^{n_L} \log \pi\big(u^{(i)} \mid (x, t)^{(i)}, \theta\big)$ [3]. Once the parameters have been tuned, thanks to the graph-based implementation of DNNs, it is in fact straightforward to compute exact derivatives of the surrogate output u with respect to its inputs, i.e. spatial/temporal derivatives, by applying the chain rule for differentiating compositions of functions using automatic differentiation, which is conveniently integrated in many machine learning packages such as TensorFlow [27]. The PINN approach takes advantage of this functionality. Figure (1) graphically describes the structure of the PINN approach, for which the loss function contains a mismatch in the given partial data on some state variables, combined with the residual of the PDEs computed on a set of random points in the time-space domain.

Figure 1: Schematic of a physics-informed neural network (PINN). Thanks to automatic differentiation, differential operators applied to the u_DNN outputs are available to evaluate the residuals r_DNN (which should remain small) of the set of PDEs. These residuals are an additional byproduct of information which is used to regularize the loss function and help the overall training process.

This time the loss function may be
written as a combination of a loss term L Label based on the data, and another one L PDE based on the residuals of the PDEs:
$$
\mathcal{L}\big(\theta, \{(x, t)^{(k)}\}_{k \in \mathcal{I} \cup \mathcal{J}}\big)
= \mathcal{L}_{\mathrm{Label}}\big(\{(x, t)^{(i)}\}_{i \in \mathcal{I}}\big) + \mathcal{L}_{\mathrm{PDE}}\big(\{(x, t)^{(j)}\}_{j \in \mathcal{J}}\big)
= \frac{1}{n_L} \sum_{i=1}^{n_L} \big\| u^{(i)}_{\mathrm{PINN}} - u^{(i)} \big\|^2 + \frac{1}{n_R} \sum_{j=1}^{n_R} \big\| r^{(j)}_{\mathrm{PINN}} \big\|^2,
\tag{8}
$$
where $n_L$ is the size of a given set of input-output data samples $\{(x, t)^{(i)}, u^{(i)}\}_{i \in \mathcal{I}}$ (collected inside and/or at the boundaries of the training domain), $n_R$ is the size of the sample at which the PDE residuals are computed, and $u^{(\cdot)}$ may be a sub-component of the full output (e.g. $u^{(i)} \equiv T^{(i)}$ or $u^{(i)} \equiv (T^{(i)}, v^{(i)}_{\partial\Omega \times D})$ if flow velocity information is also considered in the form of data sampled from the domain boundaries). For a given sample (e.g. fixed spatial and temporal coordinates), the residual is evaluated as the sum of an array of squared residuals, of size equal to the full number of PDEs in the model. In our approach, the same weight is assigned to the residual of each equation of system (4). Note that the label and residual sample sets are not necessarily the same, as their sizes and locations may differ. A very recent work proposed to decompose $\mathcal{L}_{\mathrm{Label}}$ into various terms corresponding, for instance, to the contributions of various data sources: e.g. the initial condition, the boundaries or the inside of the domain. This allows one to dynamically assign weights to each term in order to get a better error balance [24].

The standard PINN model is therefore a grid-free approach, as no mesh is needed. All the complexity of solving the model is transferred to the optimization/training stage of the neural network. Updating the parameters requires the knowledge of the loss gradient $\partial \mathcal{L}(\theta, (x, t))/\partial\theta$, which is computed thanks to the backpropagation algorithm [28]. A particular algorithm from the stochastic gradient descent (SGD) class, with mini-batch updates based on an average of the gradients inside each block of MB examples:
$$
\theta_k \leftarrow \theta_{k-1} - \eta_k\, \frac{1}{\mathrm{MB}} \sum_{k' = k\,\mathrm{MB}+1}^{(k+1)\,\mathrm{MB}} \partial \mathcal{L}\big(\theta, (x, t)^{(k')}\big) \big/ \partial\theta,
\tag{9}
$$
is considered. The great advantage of SGD update methods is that their convergence does not depend on the size of the training set, only on the number of updates and the richness of the training distribution [29]. To be more specific, an Adam (for Adaptive moment estimation) optimizer [30] is used, which combines the best properties of the AdaGrad and RMSProp algorithms. Moreover, the parameters of the neural networks are randomly initialized using the Xavier scheme [31].
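As a concrete illustration of how the composite loss (8) and the update (9) fit together, the following minimal sketch (our own toy example, not the authors' code: a one-dimensional advection-diffusion residual stands in for system (4), and the network size, learning rate and physical constants are arbitrary) builds a small MLP surrogate in TensorFlow, evaluates the PDE residual by automatic differentiation, and performs one Adam update on the combined data/residual loss:

```python
import tensorflow as tf

# Small MLP surrogate u_DNN = f_theta(x, t); sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model(tf.zeros((1, 2)))  # build the weights before tracing the training step
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999)
c, nu = 1.0, 0.01  # toy advection speed and diffusivity

def pde_residual(xt):
    """Residual u_t + c u_x - nu u_xx of a toy 1D advection-diffusion equation,
    evaluated by automatic differentiation (the role of r_DNN in Figure 1)."""
    x, t = xt[:, 0:1], xt[:, 1:2]
    with tf.GradientTape(persistent=True) as tape2:
        tape2.watch([x, t])
        with tf.GradientTape(persistent=True) as tape1:
            tape1.watch([x, t])
            u = model(tf.concat([x, t], axis=1))
        u_x = tape1.gradient(u, x)
        u_t = tape1.gradient(u, t)
    u_xx = tape2.gradient(u_x, x)
    return u_t + c * u_x - nu * u_xx

@tf.function
def train_step(xt_label, u_label, xt_residual):
    """One Adam update on the composite loss of Eq. (8)."""
    with tf.GradientTape() as tape:
        loss_label = tf.reduce_mean(tf.square(model(xt_label) - u_label))  # L_Label
        loss_pde   = tf.reduce_mean(tf.square(pde_residual(xt_residual)))  # L_PDE
        loss = loss_label + loss_pde
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# One illustrative mini-batch of labeled data and residual (collocation) points:
xt_l = tf.random.uniform((256, 2))
u_l  = tf.sin(3.1416 * xt_l[:, 0:1])  # dummy labels
xt_r = tf.random.uniform((256, 2))
print(float(train_step(xt_l, u_l, xt_r)))
```

In the actual study, the residual of course comprises the temperature, momentum and continuity equations of system (4) rather than this toy operator.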
Regarding the choice of the optimization hyper-parameters, we have followed the recommendations of the literature [29]. The learning rate η_k of the Adam algorithm takes different values depending on the epoch cycle (cf. Table (2)), and the beta values are β₁ = 0.9 and β₂ = 0.999. In the following section we will investigate the use of standard PINNs for turbulent flows and then propose a new methodological development to improve their capability.
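Such a multi-cycle schedule can be expressed, for instance, as a piecewise-constant learning rate; the boundaries and rate values below are purely illustrative placeholders (the actual values of the seven cycles are those of Table (2), which are not reproduced here):

```python
import tensorflow as tf

# Illustrative seven-cycle schedule (placeholder values, not those of Table 2);
# boundaries are expressed in optimizer iterations, with one constant rate per cycle.
boundaries = [100_000, 200_000, 300_000, 400_000, 500_000, 600_000]
rates      = [1e-3, 5e-4, 2e-4, 1e-4, 5e-5, 2e-5, 1e-5]

schedule  = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, rates)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, beta_1=0.9, beta_2=0.999)
```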
Numerical improvements and experiments
DNS database of turbulent Rayleigh-Bénard convection with heated blocks
We consider a Rayleigh-Bénard-like configuration made of a bi-periodic water layer heated from below (Pr = 4.3). The two horizontal plates are isothermal (T_bottom = 1; T_top = 0). Previous studies have shown that surrogate modeling of PINN type performs better when the training domain encompasses lively flow structures with non-zero gradients. Based on this experience, we wish to propose a configuration producing a more organized natural convection, in order to easily position our training domain. To this end, two heated square-based blocks at T_bottom are placed close to each other on the bottom plate in order to better localize the flow. They are aligned along one of the main diagonals. The resulting flow is dominated by two main plumes developing over the blocks. They interact with each other in a complex pattern and swirl before impacting the ceiling (as seen in Figure 2).
The computational domain is a cube of width H = 1, so that Ω = [0, H] × [0, H] × [0, H], cf. Fig. 2. The height of the square-based blocks is equal to h = 0.05H. Their base spans (0.1H × 0.1H), and their centers are located at (x = 0.4, y = 0.4) and (x = 0.6, y = 0.6), respectively. The Rayleigh number of the studied test case is equal to Ra = 2·10^7, leading to a turbulent flow regime (Figure 3). The DNS database is obtained using the in-house numerical solver SUNFLUIDH. It is based on a finite volume approach on staggered grids. A semi-implicit scheme and a pressure-correction algorithm for the velocity-pressure coupling [32] are combined to achieve second-order time accuracy. The resulting Poisson equation is solved by a multi-grid method. The solid blocks are modeled through a loop truncation technique. A domain decomposition method using MPI is applied for parallel computation. The code has been validated in the context of turbulent Rayleigh-Bénard convection with roughness [33]. DNS calculations are done using (2 × 2 × 2) subdomains of 64^3 cells each, with a constant convective time step equal to 2.5e-3.

The DNS databases are made of a collection of fields spanning a maximum time length of 19.8 convective time units. Typical temperature and vertical velocity time series are displayed in Figure (3). In addition to the turbulent character of the flow, the power spectrum of the vertical velocity shows the dominant shedding frequency f_max of the plume emission. This indicates that the training domain time length typically includes about 10 plume rise times through the studied domain.
We retain a training domain about 55 times smaller than the DNS computational domain (Figure 2). It is a box-shaped volume placed over one of the two roughness elements, of dimensions Ω_PINN = [0.5H, 0.7H] × [0.5H, 0.7H] × [0.055H, 0.5H], containing (n_x = 26) × (n_y = 26) × (n_z = 38) grid points in the (x, y, z) frame of reference, cf. Table (1). The domain over which the PINN model is going to be trained therefore spans 20% along each of the x- and y-directions and 45% along the vertical z-direction. The training domain is large enough to cover the entirety of the plume formed at the block in its spatial ascent over half the height of the cavity, and the fluid it entrains in its near field.

In this section, the goal is to compare the PINN predictions with the DNS reference and to understand how the PINN model can be made accurate while remaining as data-frugal as possible. The results are presented both in terms of training/validation for the chosen turbulent RB cavity with partial data information.
PINNs hyperparameters
The PINN architecture contains ℓ = 10 hidden layers (unless mentioned otherwise) of size n_{l=1,...,ℓ} = 300 neurons each, so we have A = (n_0 = 4, n_1 = 300, ..., n_ℓ = 300, n_u = 6), for a total of W = 813e3 weights to be calibrated¹.

¹ The computation of the weights goes according to: W = n_0 × n_1 + Σ_{i=2}^{ℓ} n_i × n_{i−1} + n_ℓ × n_u; the computation including the bias quantities is: |θ| = (n_0 + 1) × n_1 + Σ_{i=2}^{ℓ} n_i × (n_{i−1} + 1) + (n_ℓ + 1) × n_u. If we compare the number of degrees of freedom (dof) between the DNS (in the full domain, i.e. 5 × 128^3 because of the 5 unknown fields of the NS system under the Boussinesq approximation) and the PINN in the training domain, we get a ratio of 18.5; if we consider the dof of the DNS in the training domain only, the ratio drops to 0.23 in favor of the DNS. Considering now the storage: storing the PINN parameters, so that an approximation of the DNS fields may be recovered very efficiently, amounts to 4.3 Mb of data.
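This weight count is easy to reproduce from the architecture sequence alone; the small Python check below (ours, not from the paper) simply applies the formulas of footnote 1 to A = (4, 300, ..., 300, 6):

```python
# Weight/parameter count for the architecture A = (n_0 = 4, n_1 = ... = n_10 = 300, n_u = 6).
arch = [4] + [300] * 10 + [6]

weights = sum(n_in * n_out for n_in, n_out in zip(arch[:-1], arch[1:]))
params  = sum((n_in + 1) * n_out for n_in, n_out in zip(arch[:-1], arch[1:]))  # including biases

print(weights, params)  # -> 813000 weights (i.e. 813e3), 816006 parameters when biases are included
```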
As for the training procedure, the results reported in the following are obtained after seven cycles, each of them being made of a certain number of consecutive epochs of the stochastic gradient Adam optimizer with various learning rates, cf. Table (2), each epoch corresponding to one pass through an entire dataset. The total number of iterations of the Adam optimizer is therefore given by the total number of epochs (i.e. 1500) times the size of the training data used, divided by the mini-batch size. The mini-batch size we have used is MB = 2·10^3, and the numbers of data points are clearly specified in the following on a case-by-case basis. The training is performed on our laboratory Lab-IA cluster with a single NVIDIA Tesla V100 32GB GPU.

The details of the various databases that have been extracted from the DNS data and used to train the different PINN models are written down in Tables (1,3). Table (1) describes the spatial and temporal domains and their resolutions, as well as the size of each database. The spatial resolution is equal to (or half of) the one of the full DNS, while the temporal resolution is much coarser than the one of the full DNS. Here the size refers to the number of discrete points (x, t) at which the solution of the system of equations (4) is known. Table (3) presents most of the proposed PINN models in terms of the choice of their training and testing databases. In the paper, the data labels of total size N_L form the following set
$\{((x, t)^{(i)}, T^{(i)}, \bar{T}^{(i)}, v^{(i)}_{\partial\Omega^{\dagger} \times D})\}_{i=1}^{N_L}$, where $v_{\partial\Omega^{\dagger}}$ are fluid velocity components collected at the boundaries of the box-shaped training domain (excluding the top face). N_T refers to the total size of the testing database, which is always disjoint from the training database. Unless mentioned otherwise, for these models the training database is common to the N_L data labels and the N_R data points at which the L_Label and L_PDE losses are evaluated, respectively. For instance, if we refer to $\mathcal{K}_{\mathrm{1Db1}} = \{k_l\}_{l=1}^{|\mathcal{K}_{\mathrm{1Db1}}|}$ as the set of data point indices from the 1Db1 database, of cardinality $|\mathcal{K}_{\mathrm{1Db1}}| = 2.5688\mathrm{e}6$, then for case 1C6 we consider $\mathcal{I} \subset \mathcal{K}_{\mathrm{1Db1}}$, the subset of cardinality $|\mathcal{I}| = 5\mathrm{e}5$ containing any elements $k_i \in \mathcal{K}_{\mathrm{1Db1}}$. For the residuals, the subset is then taken to be the same, i.e. $\mathcal{J} \equiv \mathcal{I}$. Nevertheless, during training, mini-batches of data and points are independently and randomly selected among these subsets $\mathcal{I}$ and $\mathcal{J}$, meaning that the points at which labeled data are used and the points at which residuals are computed are not necessarily collocated within the training domain. This is illustrated in the grey region of Figure (7), showing an example of mini-batch sampling.
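The following small sketch (illustrative only; the index sizes are those quoted above, everything else is ours) mimics that sampling strategy: labeled-data mini-batches and residual mini-batches are drawn independently from the subsets I and J, so the two kinds of points need not coincide:

```python
import numpy as np

rng = np.random.default_rng(0)

n_database, n_subset, mb = 2_568_800, 500_000, 2_000  # |K_1Db1|, |I| for case 1C6, mini-batch size

subset_I = rng.choice(n_database, size=n_subset, replace=False)  # indices of labeled data points
subset_J = subset_I.copy()                                        # J == I for case 1C6

for _ in range(3):  # a few illustrative SGD iterations
    batch_labels    = rng.choice(subset_I, size=mb, replace=False)  # where L_Label is evaluated
    batch_residuals = rng.choice(subset_J, size=mb, replace=False)  # where L_PDE is evaluated
    # ...the two batches are generally not collocated, as described above...
```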
PINNs results
We first present the best results obtained, for the 1Ref case trained with a large data sample over a short time period, i.e. 2e6 data points spanning the space/time domain (i.e. 2e4 points per DNS snapshot). Figure 4 shows the convergence of the loss function (cf. Eq. 8) against the iterations of the optimization algorithm for the 1Ref model. Vertical dashed lines separate the different training cycles. The loss is decomposed into its label and residual contributions. It is interesting to notice that the two contributions behave quite differently, both in terms of decay and magnitude. While the residual errors dominate over the label errors at the early stages, they become lower at around one million iterations. Moreover, the convergence of the label errors is much more regular and progressive than that of the residual ones, which are very much impacted by the changes in the learning rate but much less by the algorithm iterations. Overall, the convergence is satisfactory and the total loss evaluated in this case over 2e6 points is less than 10^{-4}. The specifics of the accuracy of the 1Ref PINN model for each flow field are summarized in Table (4). We see that the results are excellent, with very small errors and high correlations between the PINN model and the DNS for each field. We note that the accuracy is a bit lower for the pressure field, and this finding will be consistent across all of our numerical experiments. We also note that the accuracy of the temperature prediction is slightly lower than the one obtained from a plain DNN (noted 1Ref DNN in the table) with 10 layers and a loss function given by Eq. (7). A multi-output regression providing linear velocity and pressure predictions based on spatial/temporal coordinates and temperature field regressors shows poor agreement, especially for the first two components of velocity, which are known to be less correlated to the temperature field than the vertical one.
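The error metrics reported in Tables (4,5) are standard; the exact conventions used in the paper are not spelled out in the text, so the sketch below simply shows one common way of computing them for a single flow field (our assumptions: plain RMSE/MAE, Pearson correlation and the usual coefficient of determination):

```python
import numpy as np

def accuracy_report(u_dns, u_pinn):
    """RMSE, MAE, correlation coefficient R_corr and coefficient of determination R^2
    between a reference DNS field and its PINN prediction (flattened samples)."""
    err = u_pinn - u_dns
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r_corr = np.corrcoef(u_dns, u_pinn)[0, 1]
    r2 = 1.0 - np.sum(err ** 2) / np.sum((u_dns - u_dns.mean()) ** 2)
    return rmse, mae, r_corr, r2

# Toy check with a nearly perfect prediction:
u_ref = np.sin(np.linspace(0.0, 3.0, 1000))
u_hat = u_ref + 1e-3 * np.random.default_rng(1).standard_normal(1000)
print(accuracy_report(u_ref, u_hat))
```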
Inspired by these promising results, more models are trained to understand the effect of: the architecture complexity; the size and sampling frequency of the training dataset; and the change in the time acquisition range. Table (5) provides a summary of the accuracy of the different PINN models considered. The model names starting with 1· refer to the cases with training and testing over a short time window ∆T_s, while the names starting with 2· refer to the cases with a longer time window ∆T_l. Another major difference resides in the way the models are tested. The first models are always tested on an independent sample of points coming from the same database as the one used for training. For instance, model 1C7 relies on the 1Db2 database, which contains 1.2844e6 data points collected over space/time according to the specifics of Table (1), and from which 1e6 points are randomly selected for training while the remaining 2.844e5 points are kept for testing. The second models are always tested on an independent sample of points coming from a different database (e.g. denser in time) than the one used for training. We see how the 10-layer PINN architecture with a large amount of data and residual evaluations provides the best overall results for the first models. With less data sampled in space (i.e. conserving the same temporal resolution), cases 1C5-6 show a reasonable decline of their accuracy. With less data sampled in time (i.e. conserving the same averaged spatial sampling frequency), cases 1C7-8 exhibit a worse decay of their predictive capability. The best and worst predictions are illustrated in Figure (6), where reference and predicted v_x flow velocity and fluid pressure are represented together with their regression fit.

Figure 5 shows spatial comparisons of some instantaneous flow quantities in the training domain at two different instants, representative of different flow topologies. Two different types of view are proposed: for the first time instant (2 left columns) the reader faces a vertical map corresponding to a vertical (y, z)-slice taken through the data, while for the second time instant (2 right columns) this slice is seen from the side and three additional horizontal (x, y)-slices are proposed to give an idea of the three-dimensionality of the flow. It is important to emphasize that the training database used for this model (cf. Tables (1-3)) uses only half (i.e. one every two) of the available DNS snapshots. It is therefore possible to generate an accurate prediction from the PINN surrogate for some specific time instants at which neither DNS data nor PDE residual points were used for the training, as shown in the figure. The two instants show two types of flow organization: a simple plume or a fork-like plume. In both cases, we see a plume foot carrying heat from the local roughness toward the top of the domain, with a local vertical acceleration a little above the heat source. But whereas the single plume undergoes a loss of pressure in its ascent, the pressure distribution is more complex for the fork-like plume: the
pressure loss is followed by a local increase along the two newly emerging plume conduits. Despite the strongly evolving three-dimensional spatial structures, the PINN prediction accuracy is remarkable and only subtle differences are noticed.
Improved PINNs capability with temporal- or spatial-penalty padding
Here the idea is to check whether it is possible, given a computational budget, to increase the predictive accuracy of the PINN by adopting a different sampling strategy for the choice of the domain points at which the residual penalties are imposed. Let us assume that we have at our disposal a training data set of size (N_L, N_R = N_L), so that we can afford to reduce the residuals of the PDEs over a number of points equal to the number of data labels. Moreover, let us assume that the labeled data points are localized in a certain temporal/spatial domain of interest, cf. the black dots in the grey area of Figure (7), where space is reduced to a single dimension to simplify the sketch. A standard approach is to allocate all of the residual points within the same domain. Another possibility is to use part of the residual samples to pad, either in space or in time, the surrounding regions of this domain, in order to check if the accuracy is improved within the domain of interest. Indeed, it is known that the predictive accuracy of PINNs is lower close to spatial and temporal boundaries, so we hope to improve the accuracy closer to the boundaries by extending the domain of regularization.

Table (6) shows the details of some numerical experiments which have been set up in order to investigate this idea. Three new cases, 2C4, 2C5 and 2C6, adopt the same database and data sampling as case 1C7, i.e. database 1Db2, but this time the residual points span a longer time period for case 2C4, and a wider spatial vertical (horizontal) range for case 2C5 (2C6), respectively. Numerical testing over the 1Db2 database shows that the accuracy is much improved for those cases compared to case 1C7, for which all residual points are located within the domain of interest. In particular, the improvement of case 2C6, for which the spatial domain has been horizontally padded, is spectacular (i.e. the spatial density of residual points is halved compared to case 1C7), cf. subplots (e-f) of Figure (6). Another numerical experiment has been run (i.e. 1C7*), in order to check that the improvement was not simply due to the lower density of residual points within the training domain, instead of the padding effect. This was not the case, as 1C7* produced very poor results, cf. the fourth line of Table (6). But the other padded cases are also interesting. For instance, Figure (8) compares the temporal distribution of the L² spatial errors integrated for case 1C7 (thin black line) and 2C4 (thick gray line), for temperature (a) and the v_z vertical flow velocity (b). The curves in each subplot are normalized by the maximum value of the error for case 2C4 (i.e. 20% for temperature and 83% for velocity). The PINN models are asked to predict the solution on the long time range and for a resolution that is doubled (i.e. 200 snapshots for ∆T_l) compared to the one of their training data. We clearly identify the first half of the time domain, in which errors are low. The 1C7 PINN solution is then predicted in the later time domain (no information from this time range was used during training of this model), while the 2C4 prediction benefits from a training with residual points (but no data) in this time range, as explained previously. Not only is the 2C4 solution improved in the first half, it is also much more robust in the second half, controlling error spikes better all along the time window.

In the following section, we will discuss the results in light of the salient numerical points of PINNs. Then we will address the issue of modeling higher turbulence levels by proposing a new simple idea that considerably improves the prediction results.

4 Discussion, preliminary results and perspectives

4.1 Accuracy vs. sampling

An important matter in this study was that of the PINNs' robustness to lower resolution in the observation data. While it was shown that we were able to "replay" DNS simulations with the PINN models, the PINN accuracy depends on many factors, including the amount of data and regularization used and the choice of the physical quantity under scrutiny. For this particular application, DNS temperature data inside the domain was preferentially provided, justifying the very good predictive accuracy obtained for the temperature and the vertical component of the flow velocity. Indeed, v_z is strongly correlated to the temperature gradient, due to the vertical buoyant forces induced by the consideration of gravity. In-plane flow velocities, whose magnitude is lower, were in general a bit less accurate. Finally, the pressure field was the least accurately predicted quantity. We emphasize that no data was provided for the pressure as boundary or initial conditions; it was a hidden state and was obtained indirectly via the incompressibility constraint, without splitting the Navier-Stokes equations.
As an example of this fine expressivity, for the training of case 2C3, half a million time-space scattered temperature data points over 50 snapshots, together with the available boundary velocity data points (on ∂Ω†), were used and provided a very good average accuracy of aR² = 0.988 over a testing sample encompassing 100 snapshots. This training corresponds to a moderate sampling of the DNS data: temperature data were randomly collected with a temporal sampling of ∆t = 0.2, that is every 80 DNS time steps, and a spatial sampling of about 20% of the (26 × 26 × 38) available DNS temperature data points at each time step.

It remains that figuring out a priori the amount of information necessary for the physics-informed network training to converge well and fast is a complex matter, because of the interplay of the many different terms involved in the composite loss function. Indeed, the loss function, which is just an evolving scalar, encapsulates various error terms related to the data (in some chosen norm) and to a soft penalty represented by some appropriate functional. This functional is designed to constrain the outputs of the neural network to satisfy a set of specified conditions. This apparently simple formulation hides an underlying complexity, because L_Label gathers several data-fit terms, possibly including initial, boundary and internal data, while L_PDE also corresponds to various contributions coming, for instance, from the conservation of mass and momentum. It was shown that this approach affects the loss gradient and might lead to an unstable imbalance in the magnitude of the back-propagated gradients during model training using gradient descent, as explained in [24]. Some formulations have proposed a regularization parameter that acts as a weight in front of the penalty term. Indeed, we have noticed that the magnitude of the penalty contribution to the total loss varies relative to the data error term, cf. Figure (4). It is straightforward to monitor the distribution of the back-propagated gradients of the loss with respect to the neural network parameters during training. A finer analysis would consist in monitoring the distribution of the back-propagated gradients of each individual loss term with respect to the weights in each hidden layer. This tedious work may reveal that some gradients are too small to condition the optimization search. This is often the case for the gradients corresponding to the boundary (or initial-condition) loss terms [24]. The lack of this information affects the training and restrains the network accuracy, as it is known that a PDE system may have infinitely many solutions in this case. Despite these proposed adaptive dynamic weighting strategies, it seems that turbulent flows nonetheless require the use of an additional user-tuned scaling parameter that affects the respective weight of the adaptation, cf. section 4 in [14]. This shows that a perfect balance of the residual gradients is not easily reachable, and raises the question of the proper normalization with careful tuning of those anisotropic weights. The loss function is typically minimized using an SGD algorithm, and a large number of training points can be randomized within each SGD iteration. Therefore, it is also the relative density of the points sampled from the data and from the PDE residual penalization sources that weights, either explicitly or implicitly (depending on the formulation), the importance of the different terms.
In our case, we have decided to first keep our approach simple and to fine-tune the error balance by adjusting the sampling density of data and residual points. Interestingly, we have shown that it is more beneficial to distribute less densely the points at which the PDE residuals are minimized. More specifically, we have found that the accuracy is improved if those points are also scattered across spatial or temporal domains encompassing the domain within which the labeled data points are available.
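A sketch of what such a padded sampling can look like is given below for the temporal-padding variant (case 2C4): the labeled points live in the short window ∆T_s, while part of the residual budget covers the longer window ∆T_l. The random uniform sampling, the half/half split of the residual budget and the helper names are our own illustrative choices; in the study the points are actually taken from the DNS grids listed in Table (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_points(n, x_rng, y_rng, z_rng, t_rng):
    """Uniform space-time samples inside a box [x_lo,x_hi] x [y_lo,y_hi] x [z_lo,z_hi] x [t_lo,t_hi]."""
    cols = [rng.uniform(lo, hi, size=(n, 1)) for lo, hi in (x_rng, y_rng, z_rng, t_rng)]
    return np.hstack(cols)

box = dict(x_rng=(0.5, 0.7), y_rng=(0.5, 0.7), z_rng=(0.055, 0.5))  # Omega_PINN
n = 100_000

labeled   = sample_points(n, t_rng=(62.0, 71.9), **box)              # data window Delta T_s
residuals = np.vstack([
    sample_points(n // 2, t_rng=(62.0, 71.9), **box),                # residuals inside the data window
    sample_points(n // 2, t_rng=(71.9, 81.8), **box),                # temporal padding: no data there
])
```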
Importance of boundary information: data vs. penalty padding
The previous discussion, based on our results confronted to a literature review, clearly points to the importance of the boundary data information in the PINN formulation. Unlike for two-dimensional laminar flow problems, it was noticed in [13,14] that for more complex convective three-dimensional flows, temperature data was not sufficient (a problem of well-posedness) to satisfactorily train a PINN model, and information relative to the flow velocity boundary conditions was also necessary. This is indeed something that we have confirmed in previous studies, together with the importance of the positioning of the training domain relative to the flow features [34]. For the numerical experiments of this paper, DNS fluid velocity from the domain boundaries (except the top one) is used to complement the temperature data. The dimensionality of this information is lower: for instance, for case 1Ref, |v_DNS_∂Ω†| = 4628 at each given time, so the small mini-batch size that we use at each training iteration (MB = 2000) collects (on average, due to the random sampling) about (MB / 100 snapshots) / 5 faces = 4 flow velocity data points per face, which is a small number. Nevertheless, for each epoch based on the temperature data, the algorithm cycles more than once (here about 4 times) through the fluid velocity boundary values, therefore using this information many times. It will be interesting to further quantify the impact of the boundary information on the efficiency of the method. This could be achieved by playing with the ratio of training data points chosen at the boundaries vs. the inside of the domain.

The penalty padding that we have proposed in the previous section can also be handy in this case. It comes into play as a regularization over the spatial and/or temporal zones surrounding the training boundaries, which are often regions of poor accuracy of the PINN surrogate. It seems to complement the local boundary data and blends the solution nicely across the chosen boundaries, resulting in a noticeable global accuracy improvement. Future work needs to be pursued in order to determine how to improve this technique, e.g. the choice of the extent and shapes of the padding domains, the distribution of the residual point density, the choice of PDEs to enforce, etc. An interesting approach would be to see if the padding regularization may substitute (at least in part) for the amount of boundary data; that is, we wish to reduce the training data at the boundaries relative to the padding penalty. To this end, a test was carried out in order to infer the importance of missing boundary information: the best padded case (2C6) was rerun without including velocity data on certain domain faces covered by the padding. To make things clear, only velocity data at the bottom, back and left faces were provided during training, while velocities at the front (respectively right) face located at x = 0.7H (respectively y = 0.7H) were not used, cf. Figure (2-(a)). The idea was to check if the missing local boundary information could be supplemented thanks to the padded neighbor region filled with points where low PDE residuals are enforced. The results (not reported here) were disappointing, with an averaged error close to 20%. This shows that the boundary information is very important for this type of flow, especially when the amount of bulk labeled data is on the lower side.
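The boundary-data bookkeeping quoted above is easy to verify (a small arithmetic check of ours, counting the five retained faces of the 26 × 26 × 38 training grid):

```python
# Five faces of the training grid (bottom + four lateral ones; the top face is excluded).
nx, ny, nz = 26, 26, 38

bottom  = nx * ny                    # 676 points
lateral = 2 * nx * nz + 2 * ny * nz  # 3952 points
print(bottom + lateral)              # -> 4628 boundary velocity points per snapshot

mb, n_snapshots, n_faces = 2000, 100, 5
print(mb / n_snapshots / n_faces)    # -> 4.0 boundary points per face in an average mini-batch
```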
This importance of boundary information is also consistent with recent works applied to incompressible internal flows, where a structured DNN architecture was devised in order to automatically enforce (in a "hard" way) the initial/boundary conditions. In that particular case, it was not necessary to include any bulk simulation data, the DNN being trained by solely minimizing the residuals of the Navier-Stokes equations.
Improving the modeling capability for a more turbulent scenario by relaxing surrogate constraints
The promising results obtained in the previous sections motivate an investigation of more challenging natural convection metamodeling at higher turbulence levels. The attempt might fail, as it is known that conventional PINN models are not very successful at approximating complex dynamics leading to solutions with non-trivial behavior, such as directional anisotropy, multi-scale features or very sharp gradients. The specificity of the PINN approach is that the constraints alter the loss landscape of this type of deep neural network. As seen previously, the different terms in the PINN composite loss function have different natures and magnitudes, sometimes leading to imbalanced gradients during back-propagation. Recent works have pointed to the problem of the stiffness of the PINN gradient flow dynamics. They have proposed a learning rate annealing algorithm that utilizes gradient statistics during training to adaptively weight the different terms in the label part of the loss function. They have also proposed a new fully-connected neural network architecture that is less sensitive to the stiffness of the gradient flow. Our approach is to propose a minimal modification of the PINN computational framework in order to make it more efficient for our type of application. In particular, we do not want to modify the PINN architecture or the training computational budget, nor to upgrade too drastically the training dataset size. As we will describe in more detail below, our idea is to relax some of the PDE residual losses in order to enhance the accuracy and robustness of our PINNs.
In the following, we consider a much more challenging case of RB flow in a smooth cavity filled with water (Pr = 4.4) at a higher Ra number, cf. Fig. 9. The geometry does not bear any roughness anymore and the flow turbulence is now much more developed than previously, with a Rayleigh number larger by two orders of magnitude, i.e. Ra = 2·10^9. A DNS is performed in a computational domain of size Ω = [0, H] × [0, H/2] × [0, H], cf. Fig. 9. The spatial flow scales are obviously harder to capture, with more dispersed and less organized small and thin turbulent plumes. The quite large domain Ω_PINN = [0.65H, 0.85H] × [0.2H, 0.3H] × [0.05H, 0.3H] over which the PINN model is trained is depicted as a transparent box. Another difficulty relates to the travelling direction and orientation of the plumes relative to Ω_PINN: they do not necessarily travel across the domain from bottom to top, because the flow velocity components are less dominated by their vertical component.

Our goal is to investigate the level of accuracy we can achieve with a computational budget equivalent to the previous simulations. We keep the same simple fully-connected PINN architecture with ℓ = 10 layers and the same total number of epochs, 1500 (which is low compared to other works). The amount of data from the "true" DNS is colossal, with a spatial resolution for the PINN domain alone of (129 × 65 × 219) points updated every ∆t_DNS = 4.5·10^{-4}. The database being too large to be stored, we have saved the solution with a coarser time resolution, i.e. 249 snapshots collected every 40 × ∆t_DNS, totalling almost half a billion (4.57242435e8) points at which the flow data are saved. It is this latter downgraded version that we will refer to as our "full" DNS. A more tractable database, 3Db1, referenced in Table (1), is constructed from the aforementioned full DNS, with a lower resolution in space and time. The PINN training dataset retained from the 3Db1 database corresponds to 90% of the (velocity-temperature) data points of the initial condition, 100% of the (velocity) boundary conditions and only 25% of the (temperature) bulk, totalling 7.334712e6 data points, which is only 1.6% of the full (i.e. 0.04% of the true) DNS spatial/temporal coordinate points. The training dataset being 3.7 times the size of the largest dataset used for the Ra = 2·10^7 studies, the mini-batch size is now increased to MB = 18522.

The convergence of the various terms of the loss function is depicted in Figure (10). In subplot (a), the standard PINN results show that the optimization is very sensitive to the incompressibility constraint (PDE_5: yellow line), in agreement with the findings of other researchers [14]. Oscillations are very strong compared to the other components. Moreover, the PDE_5 convergence exhibits a piecewise behavior, with error magnitude and fluctuations getting lower across changes in learning rate cycles, but a convergence that does not really progress within each of the learning cycles. Despite being quite low, the convergence of the temperature loss (PDE_1) looks very flat and seems to stagnate. With the PINN velocity-pressure formulation, the pressure is not obtained through an additional Poisson equation, as is usually done with splitting methods. In fact, the pressure is a hidden state and is obtained via the incompressibility constraint. Nevertheless, the incompressibility condition is hard to impose through our SGD algorithm. Recent works have investigated other Navier-Stokes (NS) formulations [14], including a streamfunction-pressure formulation for which the incompressibility constraint is exactly satisfied [24]. They concluded that these alternative formulations were more efficient, especially for laminar flows. Keeping our original velocity-pressure formulation, our novel idea is simply to relax the incompressibility constraint. We refer to this relaxed version of the physics-informed neural network as PINN_r. Results obtained with this new approach are spectacular, and the total loss reaches a much lower value at the end of the training, cf. Figure (10-(b)).
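The text does not spell out the exact form of the relaxation, so the sketch below illustrates one natural reading (our assumption, not necessarily the implementation behind PINN_r): the divergence-free residual, PDE_5, is simply down-weighted in the composite PDE loss relative to the temperature (PDE_1) and momentum (PDE_2-4) residuals; the weight value is a hypothetical placeholder:

```python
import tensorflow as tf

W_DIV = tf.constant(0.1)  # hypothetical relaxation weight on the continuity residual

def pde_loss(res_temperature, res_momentum, res_divergence, relax=True):
    """Composite PDE loss; with relax=True the divergence-free (PDE_5) term is down-weighted."""
    loss_t   = tf.reduce_mean(tf.square(res_temperature))  # PDE_1
    loss_mom = tf.reduce_mean(tf.square(res_momentum))     # PDE_2-4
    loss_div = tf.reduce_mean(tf.square(res_divergence))   # PDE_5
    return loss_t + loss_mom + (W_DIV * loss_div if relax else loss_div)

# Dummy residual tensors standing in for the network-evaluated residuals:
r_T, r_m, r_d = (tf.random.normal((128, 1)) for _ in range(3))
print(float(pde_loss(r_T, r_m, r_d)), float(pde_loss(r_T, r_m, r_d, relax=False)))
```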
The loss corresponding to the relaxed version of the divergence-free equation now converges much faster and more regularly. Its magnitude is now comparable to the temperature loss, which follows a much more favorable convergence slope as well. Very interestingly, there is a feedback of these lower PDE residual losses onto the convergence of the label losses (cyan line). This coupling is extremely interesting, as it demonstrates how an improved learning on the PDE part of the losses directly benefits the learning of the scarce temperature data.

Table 7: Ra = 2·10^9 study: the caption is similar to the ones of Tables (3,5). Only N_L is written in the table, as N_R = N_L for those cases.

Figure 11: PINN predictive capabilities for RB flow at Ra = 2·10^9: comparison between the standard and relaxed (PINN_r) approaches. Temperature scatter plots (a) and probability density function (b) compared to the reference DNS. The reference pdf is computed from the full DNS, while the PINN pdfs are evaluated on the predicted test sample.
Table (7) summarizes the specifics and accuracy of the two tested models. The results are very clear and show the undeniable superiority of the PINN_r approach, with very good accuracy. Moreover, an ambitious validation of this approach was also carried out on the full DNS. That is to say, the model was used to predict the solution at the coordinates of the full DNS referenced hereinbefore (i.e. on a sample of size N_T = 129 × 65 × 219 × 249, corresponding to a doubling of the spatial resolution in each direction and a tripling of the temporal resolution compared to the training sample). Very impressively, the errors computed in the relative L² norm were only: 0.3% for the temperature, 1.79% for v_x, 2.708% for v_y, 3.416% for v_z and 4.038% for the pressure, respectively. Figure (11) presents the scatter plots of the PINN temperature results of cases 3C1 and 3C1_r (a), and the corresponding pdf compared to the reference DNS (b). The difference is for instance very striking when looking at the way the approximation is now capable of sharply capturing the long tail of the skewed temperature probability density function (PDF). This asymmetric PDF shape is typical of the mixing layer [35], in which the training domain is placed. The long quasi-exponential tail for large temperature fluctuations is the signature of the travelling hot plumes intermittently passing through the domain. Even if only the temperature is scrutinized here, quantitative improvements of the PINN_r approach do occur for all flow variables. On the contrary, it can be seen that the original PINN approach misses a large part of the warm plumes, and especially the cold (descending) ones. Figure (12) was chosen to show an example of how the surrogate is capable of accurately predicting a strong small plume with a very anisotropic structure, despite being located close to the domain boundaries and occurring at a time instant never visited during training. It is remarkable how well the intricate temperature distribution within the plume is approached by PINN_r.

Figure 12: Comparison of some ground truth (DNS) (a,d), PINN_r-predicted (b,e) and standard PINN-predicted (c,f) instantaneous sliced fields with a thin sheet-like plume located at the bottom of the domain. Top row: vertical flow velocity v_z(·, y_0 = 0.25, ·, t = 290.83); bottom row: temperature T(x_0 = 0.82, ·, ·, t = 290.83) fields. The time instant is chosen so as to correspond to a DNS snapshot that is not included in the training database. Predictions are requested on the fine DNS spatial grid.
Some perspectives
These promising results open the way for more involved parametric surrogate modeling (useful in the context of design optimization, model calibration, sensitivity analysis, etc.), i.e. nonlinear problems parametrized by some (potentially not well-known) physical quantities, playing the role of additional inputs to our PINN models. Despite the large body of literature on uncertainty quantification, including aleatoric and epistemic uncertainties in fluid mechanics [36], few works have attempted to propose DNN-based scalable algorithms for parametric surrogate CFD modeling, due to the lack of a posteriori error estimation and convergence theory. Moreover, training data is a severe bottleneck in most parametric fluid dynamics problems, since each data point in the parameter space requires an expensive numerical simulation based on first principles. Nevertheless, some numerical perspectives may be drafted by examining some of the limiting computational aspects of the PINN approach. The PINN algorithm infuses the system's governing equations into the network by modifying the loss function with a contribution acting as a penalizing term to constrain the space of admissible solutions. The high-dimensional non-convex optimization problem of this composite loss function involves a large training cost, related to the time-integration of the nonlinear PDEs and the depth of the neural network architectures. We have seen that the approach may be efficient despite a training based on a very sparse data sample, while other approaches investigate variant sampling strategies, e.g. [37,38]. But in the more demanding case of parametric surrogate construction, an effort should be pursued on the front of efficient data sampling strategies. Indeed, optimal sampling would ensure the right balance, and therefore good complementarity, between the information provided by the PDEs and by the data. A potential breakthrough would be to propose a dynamic selection of relevant data for the PINN learning, operating synchronously with the physical simulation and allowing sparser spatial-temporal sampling, responding in part to the storage problem of DNS simulations. Moreover, it could be beneficial to simultaneously build a data index structure allowing one to benefit from importance sampling, e.g. an importance sampling tree technique [39]. In the case of parametric surrogate modeling, another interesting approach would be that of transfer learning (TL). The TL domain seeks precisely to transfer the knowledge acquired on a training dataset to better process a new, so-called "target" dataset. The transfer can therefore take the form of a parallel relearning of the neural network taking into account the evolving parameters (geometric or physical, for instance).

More importantly, we have experienced the high sensitivity of the learning process to the way we enforce (some of) the PDEs in the PINN framework, and the impact it had on the global accuracy of the scheme. Inspired by the work of Perdikaris and co-authors [24], we believe it would be worthwhile tracking the gradients of each individual term in the PDE constraints with respect to the weights in each hidden layer of the neural network, rather than tracking the gradients of the aggregated loss. This will help monitor the distribution of those back-propagated gradients during training and propose a learning rate annealing algorithm that utilizes gradient statistics to balance the interplay between the different terms in the regularization components of the composite loss function.
More specifically, due to the stochastic nature of the gradient descent updates, updated learning rates should be computed as running averages of previous values and do not need to be updated at each iteration of the optimization solver.
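A minimal sketch of such a running-average weight update, in the spirit of the annealing strategy of [24] (the exact statistics, weight definition and update frequency used there are not reproduced here; they are our own assumptions for illustration), could read:

```python
import numpy as np

def update_loss_weight(weight, grad_residual, grad_data, alpha=0.1):
    """Running-average update of a data-loss weight: the instantaneous estimate compares
    the magnitude of the back-propagated gradients of the PDE-residual loss with those of
    the data loss, and the stored weight is only nudged towards it, so it does not need to
    be recomputed at every SGD iteration."""
    estimate = np.max(np.abs(grad_residual)) / np.mean(np.abs(grad_data))
    return (1.0 - alpha) * weight + alpha * estimate

# Dummy back-propagated gradients standing in for d(loss)/d(theta):
rng = np.random.default_rng(2)
w = 1.0
for _ in range(5):
    w = update_loss_weight(w, rng.normal(0, 1.0, 10000), rng.normal(0, 0.01, 10000))
print(w)  # the data-loss weight grows when its gradients are comparatively small
```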
Other perspectives and current works involve:
- the decomposition of the computational domain into several training sub-domains, in order to better scale locally-adapted PINN models;
- the handling of the aleatoric uncertainty associated with noisy training data by means of physics-informed Bayesian neural networks [40];
- the mixing of various labeled data sources;
- hybrid regularization techniques combining physics-informed regularization with more classical L², L¹ and/or dropout regularizations.
Acknowledgments
The DNS database has been built using granted access to the HPC resources of IDRIS under allocation 2a0326 made by GENCI. We thank Dr. Yann Fraigneau for his help and great expertise in the development of the DNS SUNFLUIDH solver.
Figure 2: Rayleigh-Bénard cavity flow with two square-based roughness elements (red cubes) attached to the heated bottom plate at t = 80.9. Temperature isocontours and normal velocities on the vertical planes (a); vorticity component ω_y in the plane (x, y = 0.6H, z) (b). The smaller domain Ω_PINN = [0.5H, 0.7H] × [0.5H, 0.7H] × [0.055H, 0.5H] over which the PINN model is trained is depicted as a transparent box with thick black borders, located just above one of the roughness elements, so as to maximize the chance of containing some plumes (a). Other boxes with lighter borders indicate the positioning of the padding regions used later in the study.
Figure 3: DNS-predicted temperature (a) and vertical velocity (b) at location (x = 0.5H, y = 0.6H, z = 0.1H). Temporal power spectrum of the temperature at location (x = 0.5H, y = 0.6H, z = 0.1H); the frequency of maximum energy is marked by an arrow at f_max = 0.462 (c). The grey zones delimit the time windows over which the PINN models are trained: dark grey (∆T_s) and dark/light grey (∆T_l).
Figure 4: Evolution of the loss function L (during the training of the 1Ref model) against the iterations of the stochastic gradient descent optimization algorithm. The total loss is decomposed into its label and PDE contributions.
Figure 5: Comparison of some ground truth (DNS) (a,c) and 2Ref PINN-predicted (b,d) instantaneous fields (a,b: vertical slices; c,d: vertical and horizontal slices) at two different time instants. Those time instants are chosen so as to correspond to DNS snapshots that are not included in the training database. Top row: temperature, (a,b): T(x_0 = 0.58, ·, ·, t = 74.7) and (c,d): T(x_0 = 0.595, ·, z_0 = [0.15, 0.3, 0.4], t = 70.7); middle row: similar representation for the vertical flow velocity v_z; bottom row: similar representation for the pressure field p. The number of isocontour levels has been kept low in order to visually emphasize the subtle differences.

Figure 6: Examples of scatter plots comparing DNS and predicted values from the most accurate 1Ref (a-b), least accurate 1C7 (c-d) and padded 2C6 (e-f) PINN models, with left column: in-plane x-axis flow velocity v_x, and right column: fluid pressure p.
Figure 8: Comparison of the temporal distribution of L² spatial errors for case 1C7 (thin line) and 2C4 (thick line), for temperature (a) and v_z vertical flow velocity (b). The curves in each subplot are normalized by the maximum value of the error for case 2C4 (i.e. 20% for temperature and 83% for velocity). The padded approach 2C4 controls errors much better, especially in the second half of the time window.
Figure 9: Example of instantaneous heat dissipation isocontours (2D (x, z) slice at (y = 0.25H, t = 290.83)) from the DNS of the RB cavity flow at Ra = 2·10^9 (a). The domain Ω_PINN = [0.65H, 0.85H] × [0.2H, 0.3H] × [0.05H, 0.3H] over which the PINN model is trained is depicted as a transparent box. Temporal power spectrum of the temperature at location (x = 0.65H, y = 0.25H, z = 0.1H) (b).
Figure 10: Ra = 2·10^9 study: details of the convergence of the loss functions for the standard PINN (a) and the relaxed PINN_r (b), for which the mass conservation PDE constraint (i.e. PDE_5) is relaxed during training. PDE_1: loss term associated with the temperature equation; PDE_2-4: sum of the loss terms associated with the momentum equations, cf. system (4).
Table 1: Specifics of the DNS-extracted databases. The short, long and medium time intervals are defined as ∆T_s = [62, 71.9], ∆T_l = [62, 81.8] and ∆T_m = [287.572, 292], while ∆t refers to the time between two successive snapshots taken in the time intervals.

Database | Ra     | size        | ∆t    | snapshots | resolution    | time interval | domain
1Db1     | 2·10^7 | 2.5688e6    | 0.1   | 100       | 26 × 26 × 38  | ∆T_s          | Ω_PINN
1Db2     | 2·10^7 | 1.2844e6    | 0.2   | 50        | 26 × 26 × 38  | ∆T_s          | Ω_PINN
1Db3     | 2·10^7 | 6.422e5     | 0.4   | 25        | 26 × 26 × 38  | ∆T_s          | Ω_PINN
2Db1     | 2·10^7 | 5.111912e6  | 0.1   | 200       | 26 × 26 × 38  | ∆T_l          | Ω_PINN
2Db2     | 2·10^7 | 2.5688e6    | 0.2   | 100       | 26 × 26 × 38  | ∆T_l          | Ω_PINN
3Db1     | 2·10^9 | 2.0673972e7 | 0.054 | 83        | 66 × 34 × 111 | ∆T_m          | Ω_PINN_r
Table 2: PINNs training hyper-parameters. For each PINN model, the training is made of seven subsequent cycles.
Table 3: Training and testing details of the ℓ-layer PINN models considered in this study. Each number of points spans the space/time domain. When a single database is mentioned, it means that this database is used both for the labeled data and for the residual points (i.e. residuals are evaluated at some DNS grid points). The DNS databases used for training and testing are detailed in Table 1. Reference models are highlighted in bold.
Model: 1Ref PINN
Field | RMSE      | MAE       | µ_error (%) | σ_error (%) | R_corr    | R²
T     | 3.336e-03 | 2.05e-03  | 0.01        | 0.4         | 9.993e-01 | 9.986e-01
v_x   | 1.778e-03 | 1.260e-03 | 0.8         | 0.8         | 9.997e-01 | 9.993e-01
v_y   | 1.953e-03 | 1.386e-03 | 0.7         | 1.0         | 9.996e-01 | 9.991e-01
v_z   | 3.316e-03 | 2.064e-03 | 0.4         | 0.5         | 9.998e-01 | 9.996e-01
p     | 8.904e-04 | 6.341e-04 | -           | 2.9         | 9.989e-01 | 9.971e-01

Model: 1Ref DNN
T     | 9.035e-04 | 6.362e-04 | 0.004       | 0.03        | 9.999e-01 | 9.999e-01

Model: 1Ref MOR
v_x   | 5.95e-02  | 4.73e-02  | 0.16        | 51.7        | 4.83e-01  | 2.33e-01
v_y   | 5.25e-02  | 4.11e-02  | 0.11        | 40.5        | 5.96e-01  | 3.55e-01
v_z   | 1.121e-01 | 9.02e-02  | 0.06        | 24          | 7.61e-01  | 5.8e-01
p     | 1.08e-02  | 8.2e-03   | 0.03        | 22.15       | 7.78e-01  | 6.05e-01
Table 4: Accuracy details (root mean squared error: RMSE, mean absolute error: MAE, the correlation coefficient: R_corr and the coefficient of determination: R²) of the 1Ref PINN model for each of the flow fields. Mean (µ) and standard deviation (σ) errors are expressed as percentages. The relative error of the mean pressure is not computable, as the pressure signals are centered (zero-mean) prior to being compared. The 1Ref DNN temperature prediction and a multi-output linear regression (MOR) are also provided for comparison.
Table 5: Accuracy of the PINN models compared to the DNS simulation. Statistics are computed from the available testing dataset and collected for each component of the flow fields u = (v, p, T). They are then averaged, e.g.

$$
\mathrm{aRMSE} = \frac{1}{n_u} \sum_{j=1}^{n_u} \Big[ \frac{1}{N_T} \sum_{i=1}^{N_T} \big(u^{(i)}_{\mathrm{PINN},j} - u^{(i)}_{\mathrm{DNS},j}\big)^2 \Big]^{1/2},
$$

and similarly for aMAE, aR_corr and aR².
Table 6: The caption is similar to the one of Tables (3,5), but this time note that the label and residual data points can be sampled from different databases, e.g. (1Db2, grid_2Db2) means that the labeled data are chosen from the 1Db2 database while the PDE residuals are evaluated at space/time grid points from the 2Db2 database. grid*_1Db2 refers to the 1Db2 grid which has been vertically extended to cover the domain Ω*_PINN = [0.5, 0.7] × [0.5, 0.7] × [0.05, 0.9] with resolution (26 × 26 × 60), and grid‡_1Db2 refers to the 1Db2 grid which has been horizontally extended to cover the domain Ω‡_PINN = [0.5, 0.78] × [0.5, 0.78] × [0.05, 0.5] with resolution (37 × 37 × 38).
References

[1] J. N. Kutz, Deep learning in fluid dynamics, Journal of Fluid Mechanics 814 (2017) 1-4.
[2] S. L. Brunton, B. R. Noack, P. Koumoutsakos, Machine learning for fluid mechanics, Annual Review of Fluid Mechanics 52 (2020) 477-508.
[3] I. Goodfellow, Y. Bengio, A. Courville, Deep learning, MIT Press, 2016.
[4] J. Ling, R. Jones, J. Templeton, Machine learning strategies for systems with invariance properties, Journal of Computational Physics 318 (2016) 22-35.
[5] H. Xiao, P. Cinnella, Quantification of model uncertainty in RANS simulations: A review, Progress in Aerospace Sciences 108 (2019) 1-31.
[6] K. Duraisamy, G. Iaccarino, H. Xiao, Turbulence modeling in the age of data, Annual Review of Fluid Mechanics 51 (2019) 357-377.
[7] M. Raissi, P. Perdikaris, G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686-707. doi:10.1016/j.jcp.2018.10.045.
[8] F. Chillà, J. Schumacher, New perspectives in turbulent Rayleigh-Bénard convection, Eur. Phys. J. E 35 (2012) 58.
[9] A. Castillo-Castellanos, A. Sergent, B. Podvin, M. Rossi, Cessation and reversals of large-scale structures in square Rayleigh-Bénard cells, J. Fluid Mech. 877 (2019) 922-954.
[10] J. Kim, C. Lee, Prediction of turbulent heat transfer using Convolutional Neural Networks, Journal of Fluid Mechanics 882 (2020).
[11] E. Fonda, A. Pandey, J. Schumacher, K. R. Sreenivasan, Deep learning in turbulent convection networks, Proceedings of the National Academy of Sciences 116 (18) (2019) 8667-8672.
[12] S. Pandey, J. Schumacher, Reservoir computing model of two-dimensional turbulent convection, arXiv preprint arXiv:2001.10280 (2020).
[13] M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations, Science 367 (6481) (2020) 1026-1030.
[14] X. Jin, S. Cai, H. Li, G. E. Karniadakis, NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations, arXiv preprint arXiv:2003.06496 (2020).
[15] A. Meade, A. Fernandez, The numerical solution of linear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 19 (12) (1994) 1-25. doi:10.1016/0895-7177(94)90095-7.
[16] I. E. Lagaris, A. C. Likas, D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Transactions on Neural Networks 11 (5) (2000) 1041-1049.
[17] I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (5) (1998) 987-1000.
[18] K. S. McFall, J. R. Mahan, Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions, IEEE Transactions on Neural Networks 20 (8) (2009) 1221-1233.
[19] M. Kumar, N. Yadav, Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: A survey, Computers & Mathematics with Applications 62 (10) (2011) 3796-3811. doi:10.1016/j.camwa.2011.09.028.
Application of legendre neural network for solving ordinary differential equations. S Mall, S Chakraverty, 10.1016/j.asoc.2015.10.069Applied Soft Computing. 43S. Mall, S. Chakraverty, Application of legendre neural network for solving ordinary differential equations, Applied Soft Computing 43 (2016) 347 -356. doi:https:// doi.org/10.1016/j.asoc.2015.10.069.
A unified deep artificial neural network approach to partial differential equations in complex geometries. J Berg, K Nyström, Neurocomputing. 317J. Berg, K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing 317 (2018) 28-41.
Data-driven projection method in fluid simulation. C Yang, X Yang, X Xiao, Computer Animation and Virtual Worlds. 27C. Yang, X. Yang, X. Xiao, Data-driven projection method in fluid simulation, Com- puter Animation and Virtual Worlds 27 (3-4) (2016) 415-424.
E Haghighat, R Juanes, 10.1016/j.cma.2020.113552SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. 373113552E. Haghighat, R. Juanes, SciANN: A Keras/TensorFlow wrapper for scientific com- putations and physics-informed deep learning using artificial neural networks, Com- puter Methods in Applied Mechanics and Engineering 373 (2021) 113552. doi:https: //doi.org/10.1016/j.cma.2020.113552.
S Wang, Y Teng, P Perdikaris, arXiv:2001.04536Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv preprintS. Wang, Y. Teng, P. Perdikaris, Understanding and mitigating gradient pathologies in physics-informed neural networks, arXiv preprint arXiv:2001.04536 (2020).
Distributed physics informed neural network for data-efficient solution to partial differential equations. V Dwivedi, N Parashar, B Srinivasan, arXiv:1907.08967arXiv preprintV. Dwivedi, N. Parashar, B. Srinivasan, Distributed physics informed neural net- work for data-efficient solution to partial differential equations, arXiv preprint arXiv:1907.08967 (2019).
Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. L Sun, H Gao, S Pan, J.-X Wang, 10.1016/j.cma.2019.112732Computer Methods in Applied Mechanics and Engineering. 361112732L. Sun, H. Gao, S. Pan, J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Ap- plied Mechanics and Engineering 361 (2020) 112732. doi:https://doi.org/10.1016/ j.cma.2019.112732.
TensorFlow: Large-scale machine learning on heterogeneous systems. M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, G S Corrado, A Davis, J Dean, M Devin, S Ghemawat, I Goodfellow, A Harp, G Irving, M Isard, Y Jia, R Jozefowicz, L Kaiser, M Kudlur, J Levenberg, D Mané, R Monga, S Moore, D Murray, C Olah, M Schuster, J Shlens, B Steiner, I Sutskever, K Talwar, P Tucker, V Vanhoucke, V Vasudevan, F Viégas, O Vinyals, P Warden, M Wattenberg, M Wicke, Y Yu, X Zheng, software available from tensorflow.orgM. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Is- ard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Tal- war, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, software available from tensorflow.org (2015). URL https://www.tensorflow.org/
Learning representations by backpropagating errors. D E Rumelhart, G E Hinton, R J Williams, nature. 3236088D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back- propagating errors, nature 323 (6088) (1986) 533-536.
Practical recommendations for gradient-based training of deep architectures. Y Bengio, Neural networks: Tricks of the trade. SpringerY. Bengio, Practical recommendations for gradient-based training of deep architectures, in: Neural networks: Tricks of the trade, Springer, 2012, pp. 437-478.
D P Kingma, J Ba, arXiv:1412.6980Adam: A method for stochastic optimization. arXiv preprintD. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
Understanding the difficulty of training deep feedforward neural networks. X Glorot, Y Bengio, Proceedings of the thirteenth international conference on artificial intelligence and statistics. the thirteenth international conference on artificial intelligence and statisticsX. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neu- ral networks, in: Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010, pp. 249-256.
An overview of projection methods for incompressible flows. J Guermond, P Minev, J Shen, 10.1016/j.cma.2005.10.010Comp. Meth. Appl. Mech. Eng. 19544J. Guermond, P. Minev, J. Shen, An overview of projection methods for incompressible flows, Comp. Meth. Appl. Mech. Eng. 195 (44) (2006) 6011 -6045. doi:https://doi. org/10.1016/j.cma.2005.10.010.
On the role of roughness valleys in turbulent Rayleigh-Bénard convection. M Belkadi, A Sergent, Y Fraigneau, B Podvin, preprint hal-03029901v2M. Belkadi, A. Sergent, Y. Fraigneau, B. Podvin, On the role of roughness valleys in turbulent Rayleigh-Bénard convection, HAL preprint hal-03029901v2.
PDE-constrained neural network for turbulent Rayleigh-Bénard convection. A Agrawal, D Lucor, Y Fraigneau, B Podvin, A Sergent, Workshop on Frontiers of Uncertainty Quantification in Fluid Dynamics (FrontUQ19). A. Agrawal, D. Lucor, Y. Fraigneau, B. Podvin, A. Sergent, PDE-constrained neural network for turbulent Rayleigh-Bénard convection, in: Workshop on Frontiers of Uncertainty Quantification in Fluid Dynamics (FrontUQ19), 11-13 September 2019. URL https://frontuq19.files.wordpress.com/2019/09/frontuq19_book_of_
Turbulent temperature fluctuations in a closed Rayleigh-Bénard convection cell. Y Wang, X He, P Tong, J. Fluid Mech. 874Y. Wang, X. He, P. Tong, Turbulent temperature fluctuations in a closed Rayleigh- Bénard convection cell, J. Fluid Mech. 874 (2019) 263 -284.
H Bijl, D Lucor, S , Uncertainty Quantification in Computational Fluid Dynamics. Mishra, C. SchwabChamSpringer92H. Bijl, D. Lucor, S. Mishra, C. Schwab (Eds.), Uncertainty Quantification in Com- putational Fluid Dynamics, Vol. 92 of Lecture Notes in Computational Science and Engineering, Springer, Cham, 2013.
Physics-informed neural networks for highspeed flows. Z Mao, A D Jagtap, G E Karniadakis, 10.1016/j.cma.2019.112789Computer Methods in Applied Mechanics and Engineering. 360112789Z. Mao, A. D. Jagtap, G. E. Karniadakis, Physics-informed neural networks for high- speed flows, Computer Methods in Applied Mechanics and Engineering 360 (2020) 112789. doi:https://doi.org/10.1016/j.cma.2019.112789. URL https://www.sciencedirect.com/science/article/pii/S0045782519306814
Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences. S Mishra, T K Rusch, arXiv:2005.12564arXiv preprintS. Mishra, T. K. Rusch, Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences, arXiv preprint arXiv:2005.12564 (2020).
Importance sampling tree for large-scale empirical expectation. O Canevet, C Jose, F Fleuret, Proceedings of The 33rd International Conference on Machine Learning. M. F. Balcan, K. Q. WeinbergerThe 33rd International Conference on Machine LearningNew York, New York, USAPMLR48of Proceedings of Machine Learning ResearchO. Canevet, C. Jose, F. Fleuret, Importance sampling tree for large-scale empirical expectation, in: M. F. Balcan, K. Q. Weinberger (Eds.), Proceedings of The 33rd International Conference on Machine Learning, Vol. 48 of Proceedings of Machine Learning Research, PMLR, New York, New York, USA, 2016, pp. 1454-1462. URL http://proceedings.mlr.press/v48/canevet16.html
B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. L Yang, X Meng, G E Karniadakis, Journal of Computational Physics. 425109913L. Yang, X. Meng, G. E. Karniadakis, B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data, Journal of Compu- tational Physics 425 (2021) 109913.
| []
|
[
"Integrated Field Equations of Heterotic Supergravities",
"Integrated Field Equations of Heterotic Supergravities"
]
| [
"Nejat T Yılmaz [email protected] \nDepartment of Mathematics and Computer Science\nÇ ankaya University\nOgretmenler Cad. No:1406530Balgat, AnkaraTurkey\n"
]
| [
"Department of Mathematics and Computer Science\nÇ ankaya University\nOgretmenler Cad. No:1406530Balgat, AnkaraTurkey"
]
| []
| The first-order bosonic field equations of the D-dimensional effective low energy theory which describes the massless background coupling of the D-dimensional fully Higgsed heterotic string are derived. | 10.1142/s0217751x08038494 | [
"https://arxiv.org/pdf/0806.0555v1.pdf"
]
| 12,091,951 | 0806.0555 | f0271e9740bcdc011edffc99afd883b84da56e8e |
Integrated Field Equations of Heterotic Supergravities
3 Jun 2008 June 3, 2008
Nejat T Yılmaz [email protected]
Department of Mathematics and Computer Science
Ç ankaya University
Ogretmenler Cad. No:1406530Balgat, AnkaraTurkey
Integrated Field Equations of Heterotic Supergravities
3 Jun 2008 June 3, 2008
The first-order bosonic field equations of the D-dimensional effective low energy theory which describes the massless background coupling of the D-dimensional fully Higgsed heterotic string are derived.
Introduction
The ten-dimensional N =1 supergravity [1,2] which is coupled to 16 gauge multiplets with the gauge group either O(32) or E 8 × E 8 is the low energy effective limit theory which describes the massless background coupling of the ten-dimensional heterotic string [3]. If one chooses Abelian gauge multiplets then one obtains the maximal torus sub-theory of the ten-dimensional O(32) or E 8 × E 8 Yang-Mills supergravity theory. In this case the full gauge group is broken down to its maximal torus subgroup U(1) 16 , whose Lie algebra is the Cartan subalgebra of the non-abelian gauge groups mentioned above. This mechanism is due to the general Higgs vacuum structure of the heterotic string which causes a spontaneous symmetry breakdown. Thus in this respect upon compactification one obtains the D-dimensional fully Higgsed massless heterotic string coming from the maximal torus sub-theory of the ten-dimensional O(32) or E 8 × E 8 Yang-Mills supergravity. In either of the O(32) or the E 8 × E 8 heterotic string theories in order to obtain the Ddimensional massless heterotic string in the fully Higgsed compactification only the ten-dimensional 16 Cartan gauge fields are kept in the reduction since the non-cartan gauge fields lead to massive fields following the compactification. In other words only the Cartan gauge fields are kept since they are the only fields which will remain massless for generic values of the Wilson lines.
In [4] one can refer to the torodial compactification of the bosonic sector of the ten-dimensional N =1 supergravity which is coupled to 16 Abelian gauge multiplets. As we have discussed above such a reduction gives us the D-dimensional bosonic low energy theory of the massless sector of the D-dimensional fully Higgsed heterotic string. In a formalism which treats the scalar manifolds as generic G/K-cosets and which uses the solvable Lie algebra parametrization [5,6,7] the field equations of these bosonic theories are studied in [8].
Starting from the bosonic field equations of the D-dimensional effective massless fully Higgsed heterotic string which is the D-dimensional heterotic supergravity in this note we derive the first-order field equations by locally integrating the second-order field equations obtained in [8]. By integration we mean cancelling an exterior derivative on both sides of the equations. Therefore we obtain a first-order formulation of the theory. We will effectively make use of the results derived in [9] which states that there exists a onesided on-shell decoupling between the coset scalars and the gauge fields of the heterotic supergravities. For this reason to obtain the first-order field equations of the coset scalars we will adopt the general formulation of [7] which works out the first-order field equations of the pure symmetric space sigma model. We will also give a brief discussion how one can make use of the first-order field equations to perform the on-shell bosonic coset construction of the D-dimensional heterotic supergravity.
The First-Order Field Equations
The bosonic field content which constitutes the low energy effective Lagrangian that describes the bosonic sector of the D-dimensional massless background coupling of the fully Higgsed heterotic string can be given as [8] {C I (1) , A (2) , φ, φ i , χ α }. Here the (20 − 2D + 16) × (20 − 2D + 16) matrix Ω is
$$\Omega = \begin{pmatrix} 0 & 0 & -\mathbf{1}_{(10-D)} \\ 0 & \mathbf{1}_{(16)} & 0 \\ -\mathbf{1}_{(10-D)} & 0 & 0 \end{pmatrix}, \qquad (2.4)$$
where 1 (n) is the n × n unit matrix. Following the notation of [8] we use a prime in (2.2) which stands for the particular representation of O(10 − D + 16, 10 − D) defined through (2.3) that is generated by the indefinite signature metric (2.4) 1 . We should also state that we separate N = 16 in our expressions to emphasize the number of Abelian matter multiplets coupling to the ten-dimensional N =1 type I supergravity which forms when coupled to these 16 U(1) vector multiplets the low energy effective limit of the tendimensional fully Higgsed heterotic string. The coset parametrization can be constructed in the solvable Lie algebra gauge [5,6,7,8] as
$$\nu = e^{\frac{1}{2}\phi^{i} H_{i}}\, e^{\chi^{\alpha} E_{\alpha}}, \qquad (2.5)$$
where i = 1, · · · , r and α = 1, · · · , n. Here H i are the Cartan generators and E α are the positive root generators of the solvable Lie algebra which takes part in the Iwasawa decomposition of o ′ (10 − D + 16, 10 − D) [10]. One can find a more detailed study of the solvable Lie algebra parametrization in [6,7,8]. The field equations of the bosonic fields (2.1) are already derived in [8]. They read
$$\begin{aligned}
(-1)^{D} d(\ast d\phi) &= \tfrac{1}{2}\sqrt{8/(D-2)}\; e^{-\sqrt{8/(D-2)}\,\phi}\, \ast F_{(3)} \wedge F_{(3)} + \tfrac{1}{2}\sqrt{2/(D-2)}\; e^{-\sqrt{2/(D-2)}\,\phi}\, M_{IJ}\, \ast H^{I}_{(2)} \wedge H^{J}_{(2)}, \\
d\big(e^{-\sqrt{8/(D-2)}\,\phi}\, \ast F_{(3)}\big) &= 0, \\
d\big(e^{-\sqrt{2/(D-2)}\,\phi}\, M_{IJ}\, \ast H^{J}_{(2)}\big) &= (-1)^{D}\, e^{-\sqrt{8/(D-2)}\,\phi}\, \Omega_{IJ}\, H^{J}_{(2)} \wedge \ast F_{(3)}, \\
d\big(e^{\gamma_{i}\phi^{i}}\, \ast U^{\gamma}\big) &= \sum_{\alpha-\beta=-\gamma} N_{\alpha,-\beta}\, U^{\alpha} \wedge e^{\beta_{i}\phi^{i}}\, \ast U^{\beta}, \\
d(\ast d\phi^{i}) &= \tfrac{1}{2}\sum_{\beta\in\Delta^{+}_{nc}} \beta_{i}\, e^{\frac{1}{2}\beta_{j}\phi^{j}}\, U^{\beta} \wedge e^{\frac{1}{2}\beta_{j}\phi^{j}}\, \ast U^{\beta} - \tfrac{1}{2}(-1)^{D}\, e^{-\sqrt{2/(D-2)}\,\phi}\, \ast H_{(2)} \wedge \nu^{T} H_{i}\, \nu H_{(2)},
\end{aligned} \qquad (2.6)$$
where $\alpha, \beta, \gamma$, whose corresponding generators enter the solvable Lie algebra parametrization of (2.5), are the elements of $\Delta^{+}_{nc}$, which is the set of non-compact positive roots of $o'(10 - D + 16, 10 - D)$ [7,8]$^{2}$. The field strengths of the fields $\{C^{I}_{(1)}, A_{(2)}, \chi^{\beta}\}$ are respectively defined as
$$H^{I}_{(2)} = dC^{I}_{(1)}, \qquad F_{(3)} = dA_{(2)} + \tfrac{1}{2}\,\Omega_{IJ}\, C^{I}_{(1)} \wedge dC^{J}_{(1)}, \qquad U^{\alpha} = \mathbf{\Omega}^{\alpha}{}_{\beta}\, d\chi^{\beta}. \qquad (2.7)$$
The matrix M is
$$M = \nu^{T} \nu. \qquad (2.8)$$
In the above relations β i , γ i are the root vector components and N α,β are the structure constants of the corresponding positive root generators of the solvable Lie algebra generated by
$\{H_{i}, E_{\alpha}\}$ [6, 7, 8]. More specifically
$$[H_{j}, E_{\gamma}] = \gamma_{j}\, E_{\gamma}, \qquad (2.9)$$
and
$$[E_{\alpha}, E_{\beta}] = N_{\alpha,\beta}\, E_{\alpha+\beta}. \qquad (2.10)$$
From [6,7] the definition of the $n \times n$ matrix $\mathbf{\Omega}(\chi^{\beta})$ reads$^{3}$
$$\mathbf{\Omega} = (e^{\omega} - I)\,\omega^{-1}, \qquad (2.11)$$
where $\omega$ is an $n \times n$ matrix with components
$$\omega^{\gamma}{}_{\beta} = \chi^{\alpha} K^{\gamma}{}_{\alpha\beta}. \qquad (2.12)$$
Here $K^{\gamma}{}_{\alpha\beta}$ are defined through the commutators of $\{E_{\alpha}\}$:$^{4}$
$$[E_{\alpha}, E_{\beta}] = K^{\gamma}{}_{\alpha\beta}\, E_{\gamma}. \qquad (2.13)$$
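As a side note, (2.11) can be evaluated numerically without inverting $\omega$. The following minimal sketch (ours, not part of the original derivation) assumes $\omega$ is supplied as a NumPy array and uses the power series $(e^{\omega} - I)\omega^{-1} = \sum_{k \ge 0} \omega^{k}/(k+1)!$, which stays well defined even when $\omega$ is singular, as happens for a nilpotent $\omega$ built from the structure constants $K^{\gamma}{}_{\alpha\beta}$.

```python
import numpy as np

def omega_matrix(omega, terms=30):
    """Evaluate (e^w - I) w^{-1} as the series sum_{k>=0} w^k / (k+1)!.

    The series form avoids inverting w, which is typically nilpotent
    (hence singular) when built from solvable-algebra structure constants.
    """
    n = omega.shape[0]
    result = np.zeros_like(omega, dtype=float)
    power = np.eye(n)          # w^0
    factorial = 1.0            # running value of (k+1)!
    for k in range(terms):
        factorial *= (k + 1)
        result += power / factorial
        power = power @ omega  # advance to w^{k+1}
    return result

# Example with a nilpotent 2x2 matrix: the series terminates after two terms.
w = np.array([[0.0, 1.0], [0.0, 0.0]])
print(omega_matrix(w))         # [[1.0, 0.5], [0.0, 1.0]]
```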
We should state that we freely lower and raise indices by using various dimensional Euclidean metrics when necessary for convenience of notation. In [8] it is shown that the second term on the right hand side of the last equation in (2.6) which is compactly written in matrix form can be given as
$$-\tfrac{1}{2}(-1)^{D}\, e^{-\sqrt{2/(D-2)}\,\phi}\, \ast H_{(2)} \wedge \nu^{T} H_{i}\, \nu H_{(2)} = (-1)^{D}\, \frac{\partial L_{m}}{\partial \phi^{i}}. \qquad (2.14)$$
Here $H_{i}$ are the $(20 - 2D + 16)$-dimensional matrix representatives of the Cartan generators and $H_{(2)}$ is the vector of the field strengths defined in (2.7). Also $L_{m}$ is the matter-scalar coupling Lagrangian [8]. However, in [9] it is proven that the expression in (2.14) vanishes on-shell for the elements of the solution space, indicating that the coset scalar field equations coincide with the pure sigma model ones. Therefore we can legitimately drop the second term on the right hand side of the last equation in (2.6).
As discussed in the Introduction our aim in this note is to integrate the field equations in (2.6) locally. In this respect we will use the fact that locally a closed differential form is an exact one. For this reason we first introduce the dual fields
$$\{\tilde{C}_{I},\ \tilde{B},\ \tilde{\phi}\}, \qquad (2.15)$$
$$e^{-\sqrt{2/(D-2)}\,\phi}\, M_{IJ}\, \ast H^{J}_{(2)} = (-1)^{D}\big(d\tilde{C}_{I} + \Omega_{IK}\, C^{K}_{(1)} \wedge d\tilde{B}\big). \qquad (2.17)$$
If we take the exterior derivative of both sides we get
$$d\big(e^{-\sqrt{2/(D-2)}\,\phi}\, M_{IJ}\, \ast H^{J}_{(2)}\big) = (-1)^{D}\, \Omega_{IK}\, dC^{K}_{(1)} \wedge d\tilde{B}. \qquad (2.18)$$
$$\ast d\phi = d\tilde{\phi} - \tfrac{1}{2}\sqrt{8/(D-2)}\; A_{(2)} \wedge d\tilde{B} + \tfrac{1}{2}\sqrt{2/(D-2)}\; \delta_{IJ}\, C^{I}_{(1)} \wedge d\tilde{C}^{J}. \qquad (2.19)$$
If we apply the exterior derivative on both sides of (2.19) we get
$$d(\ast d\phi) = -\tfrac{1}{2}\sqrt{8/(D-2)}\; dA_{(2)} \wedge d\tilde{B} + \tfrac{1}{2}\sqrt{2/(D-2)}\; \delta_{IJ}\, dC^{I}_{(1)} \wedge d\tilde{C}^{J}. \qquad (2.20)$$
By using (2.7) and (2.16) the above equation can be written as
$$\begin{aligned}
d(\ast d\phi) ={}& \tfrac{1}{2}\sqrt{8/(D-2)}\; e^{-\sqrt{8/(D-2)}\,\phi}\, (-1)^{D} \ast F_{(3)} \wedge F_{(3)} + (-1)^{D}\, \tfrac{1}{2}\sqrt{2/(D-2)}\; e^{-\sqrt{2/(D-2)}\,\phi}\, M_{IK}\, \ast H^{K}_{(2)} \wedge H^{I}_{(2)} \\
& + \tfrac{1}{2}\sqrt{2/(D-2)}\; e^{-\sqrt{8/(D-2)}\,\phi}\, \Omega_{IJ}\, C^{I}_{(1)} \wedge H^{J}_{(2)} \wedge \ast F_{(3)} - \tfrac{1}{2}\sqrt{2/(D-2)}\; e^{-\sqrt{8/(D-2)}\,\phi}\, \Omega_{KI}\, C^{K}_{(1)} \wedge H^{I}_{(2)} \wedge \ast F_{(3)}.
\end{aligned} \qquad (2.21)$$
Since $\Omega$ is a symmetric matrix the last two terms cancel, and as $M$ is also a symmetric matrix this equation gives us the first equation in (2.6). The first-order formulation of the coset scalar field equations in (2.6) is a straightforward task. Following our discussion above, when we drop the second term on the right hand side of the last equation in (2.6) we obtain the pure sigma model field equations, which are the same as the ones derived for a generic coset manifold in [7]. As we have remarked before, this fact is a consequence of the on-shell conditions satisfied by the general solutions of the theory, which are rigorously derived in [9]. The first-order field equations of the general non-split [10] scalar coset are already derived in [7]. In general the coset manifolds in (2.2) are also in non-split form. Therefore we can adopt the results of [7] for the scalar sectors of the heterotic supergravities. For the sake of completeness we will repeat the first-order scalar field equations of [7] here. From [7] we have
$$\ast \vec{\Psi} = (-1)^{D}\, e^{\Gamma}\, e^{\Lambda}\, \vec{A}. \qquad (2.22)$$
Here we define the (r + n)-dimensional column vectors ⇀ Ψ and ⇀ A whose components can be given as
$$\begin{aligned}
\Psi^{i} &= \tfrac{1}{2}\, d\phi^{i}, && \text{for } i = 1, \ldots, r, \\
\Psi^{\alpha+r} &= e^{\frac{1}{2}\alpha_{i}\phi^{i}}\, \mathbf{\Omega}^{\alpha}{}_{\gamma}\, d\chi^{\gamma}, && \text{for } \alpha = 1, \ldots, n, \\
A^{i} &= \tfrac{1}{2}\, d\tilde{\phi}^{i}, && \text{for } i = 1, \ldots, r, \\
A^{\alpha+r} &= d\tilde{\chi}^{\alpha}, && \text{for } \alpha = 1, \ldots, n,
\end{aligned} \qquad (2.23)$$
where we have introduced the dual (D − 2)-forms φ i and χ α . In (2.22) Γ(φ i ) and Λ(χ β ) are (n + r) × (n + r) matrix functions. Their components read
$$\Gamma^{k}{}_{n} = \tfrac{1}{2}\, \phi^{i}\, g^{k}{}_{in}, \qquad \Lambda^{k}{}_{n} = \chi^{\alpha}\, f^{k}{}_{\alpha n}. \qquad (2.24)$$
The real constant coefficients $\{g^{k}{}_{in}\}$ and $\{f^{k}{}_{\alpha n}\}$ are already listed in [6]$^{5}$. They are
$$\begin{aligned}
&f^{n}{}_{\alpha m} = 0, \quad m \le r, \qquad f^{i}{}_{\alpha,\alpha+r} = \tfrac{1}{4}\alpha_{i}, \quad i \le r, \qquad f^{i}{}_{\alpha,\alpha+r} = 0, \quad i > r, \qquad f^{i}{}_{\alpha,\beta+r} = 0, \quad i \le r,\ \alpha \neq \beta, \\
&f^{\gamma+r}{}_{\alpha,\beta+r} = N_{\alpha,-\beta}, \quad \alpha - \beta = -\gamma,\ \alpha \neq \beta, \qquad f^{\gamma+r}{}_{\alpha,\beta+r} = 0, \quad \alpha - \beta \neq -\gamma,\ \alpha \neq \beta,
\end{aligned} \qquad (2.25)$$
and
$$g^{n}{}_{im} = 0, \quad m \le r, \qquad g^{n}{}_{im} = 0, \quad m > r,\ m \neq n, \qquad g^{\alpha}{}_{i\alpha} = -\alpha_{i}, \quad \alpha > r. \qquad (2.26)$$
$^{5}$In [6] the indices $i, j, \ldots$ are taken to run from 1 to $l$; however, in the present manuscript we have preferred using $r$ instead of $l$. Thus the reader may read the coefficients $\{g^{k}{}_{in}\}$ and $\{f^{k}{}_{\alpha n}\}$ from equations (3.8) and (3.9) of [6] by replacing $l$ with $r$.
Since as discussed in detail in [6,7] beside being enumerated α, β, γ, ... correspond to the set of non-compact positive roots of o ′ (10 − D + 16, 10 − D) the conditions on them in (2.25) and (2.26) must be understood in the root sense. We should also state that likewise in [7] we assume the signature of the spacetime as s = 1. It is proven in [7] that as a consequence of the dualisation of the general symmetric space sigma model the first-order field equations in (2.22) correspond to the local integration of the last two equations of (2.6) when the term which comes from the scalar-matter coupling Lagrangian is dropped as discussed before. Therefore we have derived the entire set of first-order field equations which are obtained by locally cancelling an exterior derivative on both sides of the equations in (2.6). Namely the equations (2.16), (2.17), (2.19), and (2.22) represent the first-order formulation of the D-dimensional low energy massless background coupling of the fully Higgsed heterotic string which is the D-dimensional heterotic supergravity. Before concluding we will present a discussion of an important application of the first-order field equations of the heterotic supergravities. In [11] the locally integrated first-order bosonic field equations of the maximal and IIB supergravities are used to derive the superalgebras that lead to the complete coset constructions of the bosonic sectors of these theories. Similarly the methodology of [11] can be extended to the heterotic supergravities. We will not present the complete coset construction of the heterotic supergravities here and leave it to a future work however we will discuss the outline of deriving the superalgebra of the on-shell coset construction of the heterotic supergravities. The first task in constructing the coset formalism is to assign an algebra generator to each original and dual field in the first-order equations (2.16), (2.17), (2.19), (2.22) and then to propose a coset map. In our case this map becomes
$$\nu = \exp(\tfrac{1}{2}\phi^{j} H_{j})\, \exp(\chi^{m} E_{m})\, \exp(\phi K)\, \exp(C^{I}_{(1)} V_{I})\, \exp(\tfrac{1}{2}A_{(2)} Y) \times \exp(\tfrac{1}{2}\tilde{B}\,\tilde{Y})\, \exp(\tilde{C}_{I}\,\tilde{V}^{I})\, \exp(\tilde{\phi}\,\tilde{K})\, \exp(\tilde{\chi}_{m}\,\tilde{E}^{m})\, \exp(\tfrac{1}{2}\tilde{\phi}_{j}\,\tilde{H}^{j}). \qquad (2.27)$$
The associated Cartan-form may be defined as
$$G = d\nu\, \nu^{-1}. \qquad (2.28)$$
From [11] we know that in the doubled formalism coset construction the Cartan-form satisfies a twisted self-duality equation
$$\ast G = SG, \qquad (2.29)$$
with S being a pseudo-involution of the coset algebra of the generators introduced in (2.27). The key ingredient of the coset construction is the requirement that (2.29) must give us the first-order field equations of the theory. Therefore the method of revealing the coset algebra structure is to calculate (2.28) in terms of the desired structure constants, then to insert it in (2.29) and finally to compare the result with the equations (2.16), (2.17), (2.19), (2.22) to read the structure constants of the coset algebra.
Conclusion
In this work, by locally integrating the second-order field equations which are derived in [8] and which govern the massless sector of the D-dimensional fully Higgsed heterotic string namely the D-dimensional heterotic supergravity we have obtained the first-order field equations of the theory which contain only a single exterior derivative acting on the potentials. In these first-order field equations we have introduced dual fields which may be considered as integration constants. The dual fields are nothing but the Lagrange multipliers associated with the Bianchi identities of the field strengths when one treats these field strengths as fundamental fields instead of their potentials [12]. The fact which is derived in [9] that as an on-shell condition the coset scalar field equations can completely be decoupled from the gauge fields provides us the usage of the first-order symmetric space sigma model field equations of [7] in our formulation.
In GR the Palatini application of the Ostrogradski method [13] of reducing the derivative order of second-order Lagrangians by including auxiliary fields is a vast research area in recent years especially for the f(R) theories of gravity. The Ostrogradski method have also been effectively used to obtain the first-order formulations of supergravity theories. First-order formulations of supergravities are studied to understand the superpotentials [14,15,16] as well as the supersymmetry transformation laws [17]. The reader may find examples of the first-order formalism of supergravity theories in various dimensions in [17,18,19,20,21,22,23]. In the general first-order formalism method of these works the field strengths of the basic fields are also considered as independent fields and a first-order Lagrangian is constructed which gives first-order field equations in terms of the basic fields and their field strengths. When the field equations of the field strengths are substituted back in the Lagrangian one recovers the second-order formalism. In comparison with this scheme our first-order field equations of the D-dimensional heterotic supergravity do contain the basic fields except the graviton but on the contrary they do not include the field strengths. Instead we have introduced dual fields which may be considered as arbitrary integration constants that algebraically came into the scene as a result of abolishing an exterior derivative on both sides of the second-order field equations. Thus our approach is purely algebraic rather than being formal. We have simply reduced the degree of the field equations without increasing the number of fields to be solved and in this process arbitrary integration constants have arouse. On the other hand we have not constructed the corresponding Lagrangian which would lead to the first-order equations we have obtained. However as we have discussed above such a Lagrangian which would kinematically be different than the one that would appear within the Ostrogradski method would rather be obtained by Lagrange multiplier method that makes use of the Bianchi identities of the field strengths.
The first-order formulation of the D-dimensional heterotic supergravity presented in this note has two important implications. The first-order field equations play an important role in the coset construction of the supergravities [11]. Thus as we have briefly discussed in the previous section the equations derived in this note can be considered to be essential ingredients of a possible coset construction of the heterotic supergravities. Secondly since the dual fields introduced in the first-order field equations can be arbitrarily varied one can make use of this fact to generate solutions. Therefore in this respect beside being first-order the integrated field equations contain-ing parameters which can be manipulated become powerful tools in seeking solutions of the heterotic supergravities.
$C^{I}_{(1)}$ are $(20 - 2D + 16)$ one-forms, and $A_{(2)}$ is a two-form; the rest of the fields are scalars. The scalar field $\phi$ is decoupled from the rest of the scalars, which are the coset ones. The coset scalars $\{\phi^{i}, \chi^{\alpha}\}$ parametrize the coset manifold
$$O'(10 - D + 16,\, 10 - D)\,/\,O(10 - D + 16) \times O(10 - D), \qquad (2.2)$$
whose elements, which are $(20 - 2D + 16)$-dimensional real matrices in the fundamental representation, satisfy
$$\nu^{T} \Omega\, \nu = \Omega. \qquad (2.3)$$
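The block structure in (2.4) and the signature behind (2.2)-(2.3) are easy to check numerically. The small sketch below is purely illustrative; the choice D = 4 and the NumPy-based construction are assumptions made for concreteness.

```python
import numpy as np

# Illustrative check of the metric in (2.4), assuming D = 4 (so the blocks
# have sizes 10-D = 6 and 16, giving a 28 x 28 matrix).
D = 4
p, q = 10 - D, 16
n = 2 * p + q

Omega = np.zeros((n, n))
Omega[0:p, p + q:n] = -np.eye(p)       # upper-right block: -1_(10-D)
Omega[p:p + q, p:p + q] = np.eye(q)    # middle block:       1_(16)
Omega[p + q:n, 0:p] = -np.eye(p)       # lower-left block:  -1_(10-D)

# Omega is symmetric and squares to the identity; its eigenvalues are +1 and -1
# with multiplicities (10-D)+16 and (10-D), i.e. the O(10-D+16, 10-D) signature.
assert np.allclose(Omega, Omega.T)
assert np.allclose(Omega @ Omega, np.eye(n))
eig = np.linalg.eigvalsh(Omega)
print((eig > 0).sum(), (eig < 0).sum())   # expected: 22 and 6 for D = 4
```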
where $\tilde{C}_{I}$ are $(D - 3)$-forms, $\tilde{B}$ is a $(D - 4)$-form and $\tilde{\phi}$ is a $(D - 2)$-form. It is a straightforward operation that if one applies the exterior derivative on both sides of (2.16) one recovers the second equation in (2.6). Thus (2.16) is our first first-order field equation. Next let us consider
On the other hand the unprimed notation O(10 − D + 16, 10 − D) is used in[8] for the usual representation of the generalized orthogonal group generated by the diagonalized indefinite signature metric η = diag(−, −, ..., −, +, +, ..., +).
We adopt the notation of[7] and randomly enumerate the roots in ∆ + nc from 1 to n.
The reader should be aware that the plain $\Omega$ with indices $I, J, K, \ldots$ and the bold $\mathbf{\Omega}$ with indices $\alpha, \beta, \gamma, \ldots$ are two different objects. We prefer using them together for the sake of conformity with the references. $^{4}$Of course, by comparing (2.13) with (2.10) one may relate $N_{\alpha,\beta}$ to $K^{\gamma}{}_{\alpha\beta}$.
Tendimensional Maxwell-Einstein supergravity, its currents, and the issue of its auxiliary fields. E Bergshoeff, M De Roo, B De Wit, P Van Nieuwenhuizen, Nucl. Phys. 19597E. Bergshoeff, M. de Roo, B. de Wit and P. van Nieuwenhuizen, "Ten- dimensional Maxwell-Einstein supergravity, its currents, and the issue of its auxiliary fields ", Nucl. Phys. B195 (1982) 97.
Unification of Yang-Mills theory and supergravity in ten-dimensions. G F Chapline, N S Manton, Phys. Lett. 120105G. F. Chapline and N. S. Manton, "Unification of Yang-Mills theory and supergravity in ten-dimensions ", Phys. Lett. B120 (1983) 105.
Introduction to superstring theory. E Kiritsis, hep-th/9709062E. Kiritsis, "Introduction to superstring theory", hep-th/9709062.
M-theory/heterotic duality: A Kaluza-Klein perspective. H Lü, C N Pope, K S Stelle, hep-th/9810159Nucl. Phys. 548H. Lü, C. N. Pope and K. S. Stelle, "M-theory/heterotic dual- ity: A Kaluza-Klein perspective", Nucl. Phys. B548 (1999) 87, hep-th/9810159.
R-R scalars, U-duality and solvable Lie algebras. L Andrianopoli, R Auria, S Ferrara, P Fre, M Trigiante, hep-th/9611014Nucl. Phys. 496L. Andrianopoli, R. D'Auria, S. Ferrara, P. Fre, M. Trigiante, "R-R scalars, U-duality and solvable Lie algebras", Nucl. Phys. B496 (1997) 617, hep-th/9611014.
Dualisation of the general scalar coset in supergravity theories. N T Yılmaz, hep-th/0301236Nucl. Phys. 664N. T. Yılmaz, "Dualisation of the general scalar coset in supergravity theories", Nucl. Phys. B664 (2003) 357, hep-th/0301236.
The non-split scalar coset in supergravity theories. N T Yılmaz, hep-th/0407006Nucl. Phys. 675N. T. Yılmaz, "The non-split scalar coset in supergravity theories", Nucl. Phys. B675 (2003) 122, hep-th/0407006.
Heterotic string dynamics in the solvable lie algebra gauge. N T Yılmaz, hep-th/0701275Nucl. Phys. 765N. T. Yılmaz, "Heterotic string dynamics in the solvable lie algebra gauge", Nucl. Phys. B765 (2007) 118, hep-th/0701275.
An implicit decoupling for the dilatons and the axions of the heterotic string. N T Yılmaz, hep-th/0703113Phys. Lett. 646N. T. Yılmaz, "An implicit decoupling for the dilatons and the axions of the heterotic string", Phys. Lett. B646 (2007) 125, hep-th/0703113.
Differential Geometry. S Helgason, Lie Groups and Symmetric Spaces. ProvidenceAmerican Mathematical Society34Graduate Studies in MathematicsS. Helgason, "Differential Geometry, Lie Groups and Symmetric Spaces", (Graduate Studies in Mathematics, vol. 34, American Math- ematical Society, Providence, 2001).
Dualisation of dualities II : Twisted self-duality of doubled fields and superdualities. E Cremmer, B Julia, H Lü, C N Pope, hep-th/9806106Nucl. Phys. 535E. Cremmer, B. Julia, H. Lü and C. N. Pope, "Dualisation of dualities II : Twisted self-duality of doubled fields and superdualities", Nucl. Phys. B535 (1998) 242, hep-th/9806106.
. C N Pope, Lecture Notes on Kaluza-Klein Theory. unpublishedC. N. Pope, "Lecture Notes on Kaluza-Klein Theory", (unpub- lished).
Ostrogradski formalism for higherderivative scalar field theories. F J De Urries, J Julve, hep-th/9802115J. Phys. 31F. J. de Urries and J. Julve, "Ostrogradski formalism for higher- derivative scalar field theories", J. Phys. A31 (1998) 6949, hep-th/9802115.
Currents and superpotentials in classical gauge invariant theories. I: Local results with applications to perfect fluids and general relativity. B Julia, S Silva, gr-qc/9804029Class. Quant. Grav. 15B. Julia and S. Silva, "Currents and superpotentials in classical gauge invariant theories. I: Local results with applications to perfect fluids and general relativity", Class. Quant. Grav. 15 (1998) 2173, gr-qc/9804029.
Noether superpotentials in supergravities. M Henneaux, B Julia, S Silva, hep-th/9904003Nucl. Phys. 563M. Henneaux, B. Julia and S. Silva, "Noether superpotentials in super- gravities", Nucl. Phys. B563 (1999) 448, hep-th/9904003.
On superpotentials and charge algebras of gauge theories. S Silva, hep-th/9809109Nucl. Phys. 558S. Silva, "On superpotentials and charge algebras of gauge theories", Nucl. Phys. B558 (1999) 391, hep-th/9809109.
On first order formulations of supergravities. B Julia, S Silva, hep-th/9911035JHEP. 000126B. Julia and S. Silva, "On first order formulations of supergravities", JHEP 0001 (2000) 026, hep-th/9911035.
Consistent supergravity. S Deser, B Zumino, Phys. Lett. 62335S. Deser and B. Zumino, "Consistent supergravity", Phys. Lett. B62 (1976) 335.
Gravity with extra gauge symmetry. I Bars, S W Macdowell, Phys. Lett. 129182I. Bars and S. W. MacDowell, "Gravity with extra gauge symmetry", Phys. Lett. B129 (1983) 182.
On the new first order formalism of d = 11 supergravity. A Higuchi, preprint YTP 85-02A. Higuchi, "On the new first order formalism of d = 11 supergravity", preprint YTP 85-02 (1985).
On the spinor form of first order gravity and supergravity. P Fre, preprint CALT-68-662P. Fre, "On the spinor form of first order gravity and supergravity", preprint CALT-68-662 (1978).
First order formulation and geometrical interpretation of d = 11 supergravity. I Bars, A Higuchi, Phys. Lett. 145329I. Bars and A. Higuchi, "First order formulation and geometrical inter- pretation of d = 11 supergravity", Phys. Lett. B145 (1984) 329.
Geometry of eleven-dimensional supergravity. R E Kallosh, Phys. Lett. 143373R. E. Kallosh, "Geometry of eleven-dimensional supergravity", Phys. Lett. B143 (1984) 373.
| []
|
[
"Cesàro summation for random fields",
"Cesàro summation for random fields"
]
| [
"Allan Gut \nUppsala University\nUniversity of Ulm\n\n",
"Ulrich Stadtmüller \nUppsala University\nUniversity of Ulm\n\n"
]
| [
"Uppsala University\nUniversity of Ulm\n",
"Uppsala University\nUniversity of Ulm\n"
]
| []
| Various methods of summation for divergent series of real numbers have been generalized to analogous results for sums of i.i.d. random variables. The natural extension of results corresponding to Cesàro summation amounts to proving almost sure convergence of the Cesàro means. In the present paper we extend such results as well as weak laws and results on complete convergence to random fields, more specifically to random variables indexed by Z 2 + , the positive two-dimensional integer lattice points. | 10.1007/s10959-009-0223-9 | [
"https://arxiv.org/pdf/0904.0538v1.pdf"
]
| 16,963,741 | 0904.0538 | b653771571fb102a627dacd5c9556b93319eacbb |
Cesàro summation for random fields
3 Apr 2009
Allan Gut
Uppsala University
University of Ulm
Ulrich Stadtmüller
Uppsala University
University of Ulm
Cesàro summation for random fields
3 Apr 2009arXiv:0904.0538v1 [math.PR]
Various methods of summation for divergent series of real numbers have been generalized to analogous results for sums of i.i.d. random variables. The natural extension of results corresponding to Cesàro summation amounts to proving almost sure convergence of the Cesàro means. In the present paper we extend such results as well as weak laws and results on complete convergence to random fields, more specifically to random variables indexed by Z 2 + , the positive two-dimensional integer lattice points.
Introduction
Various methods of summation for divergent series have been studied in the literature; see e.g. [10,21]. Several analogous results have been proved for sums of independent, identically distributed (i.i.d.) random variables.
The most commonly studied method is Cesàro summation, which is defined as follows: Let $\{x_n, n \ge 0\}$ be a sequence of real numbers and set, for $\alpha > -1$,
$$A^{\alpha}_{n} = \frac{(\alpha + 1)(\alpha + 2) \cdots (\alpha + n)}{n!}, \quad n = 1, 2, \ldots, \qquad \text{and} \qquad A^{\alpha}_{0} = 1.$$
(1.1)
The sequence $\{x_n, n \ge 0\}$ is said to be $(C, \alpha)$-summable iff
$$\frac{1}{A^{\alpha}_{n}} \sum_{k=0}^{n} A^{\alpha-1}_{n-k}\, x_k \quad \text{converges as } n \to \infty. \qquad (1.2)$$
It is easily checked (with $A^{-1}_{n} = 0$ for $n \ge 1$ and $A^{-1}_{0} = 1$) that $(C, 0)$-convergence is the same as ordinary convergence, and that $(C, 1)$-convergence is the same as convergence of the arithmetic means. Now, let $\{X_k, k \ge 1\}$ be i.i.d. random variables with partial sums $\{S_n, n \ge 1\}$, and let $X$ be a generic random variable. The following result is a natural probabilistic analog of (1.2).
Theorem 1.1 Let $0 < \alpha \le 1$. The sequence $\{X_k, k \ge 1\}$ is almost surely (a.s.) $(C, \alpha)$-summable iff $E|X|^{1/\alpha} < \infty$. More precisely,
For α = 1 this is, of course, the classical Kolmogorov strong law. For proofs we refer to [14] ( 1 2 < α < 1), [1] (0 < α < 1 2 ) and [2] (α = 1 2 ).
Convergence in probability for strongly integrable random variables taking their values in real separable Banach spaces was establised in [11] under the assumption of strong integrability. In the real valued case finite mean is not necessary; for α = 1 we obtain Feller's weak law of large numbers for which a tail condition is both necessary and sufficient; cf. e.g. [8], Section 6.4.1.
Next we present Theorem 2.1 of [7] where complete convergence was obtained.
Theorem 1.2 Let $0 < \alpha \le 1$. The sequence $\{X_k, k \ge 1\}$ converges completely to $\mu$, i.e.,
$$\sum_{n=1}^{\infty} P\Big( \Big| \sum_{k=0}^{n} A^{\alpha-1}_{n-k}\, (X_k - \mu) \Big| > A^{\alpha}_{n}\, \varepsilon \Big) < \infty \quad \text{for every } \varepsilon > 0,$$
if and only if
$$E|X|^{1/\alpha} < \infty \ \text{ for } 0 < \alpha < \tfrac{1}{2}, \qquad E|X|^{2} \log^{+} |X| < \infty \ \text{ for } \alpha = \tfrac{1}{2}, \qquad E|X|^{2} < \infty \ \text{ for } \tfrac{1}{2} < \alpha \le 1,$$
and $E X = \mu$.
Here and in the following log + x = max{log x, 1}.
The aim of the present paper is to generalize these results to random fields. For simplicity we shall focus on random variables indexed by Z 2 + , leaving the corresponding results for the index set Z d + , d > 2, to the readers. The definition of Cesàro summability for arrays extends as follows:
Definition 1.1 Let $\alpha, \beta > 0$. The array $\{x_{m,n},\ m, n \ge 0\}$ is said to be $(C, \alpha, \beta)$-summable iff
$$\frac{1}{A^{\alpha}_{m} A^{\beta}_{n}} \sum_{k,l=0}^{m,n} A^{\alpha-1}_{m-k}\, A^{\beta-1}_{n-l}\, x_{k,l} \quad \text{converges as } m, n \to \infty. \qquad (1.3)$$
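The two-dimensional means in (1.3) can be computed in the same way. The sketch below (again with hypothetical helper names) evaluates the (C, α, β) mean of an i.i.d. Gaussian field; in line with the results below, the value concentrates near the common mean.

```python
import numpy as np

def A(alpha, n):
    # A^alpha_n as in (1.1).
    if n == 0:
        return 1.0
    k = np.arange(1, n + 1)
    return float(np.prod((alpha + k) / k))

def cesaro_mean_2d(x, alpha, beta):
    # (1/(A^alpha_m A^beta_n)) * sum_{k,l} A^{alpha-1}_{m-k} A^{beta-1}_{n-l} x_{k,l}, cf. (1.3).
    m, n = x.shape[0] - 1, x.shape[1] - 1
    wa = np.array([A(alpha - 1, m - k) for k in range(m + 1)])
    wb = np.array([A(beta - 1, n - l) for l in range(n + 1)])
    return float(wa @ x @ wb) / (A(alpha, m) * A(beta, n))

rng = np.random.default_rng(2)
field = rng.normal(loc=1.0, scale=3.0, size=(200, 200))   # i.i.d. with mean 1
print(cesaro_mean_2d(field, 0.8, 1.0))                     # close to 1
```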
Our setup thus is the set {X k,l , (k, l) ∈ Z 2 + } with partial sums S m,n , (m, n) ∈ Z 2 + . The Kolmogorov and Marcinkiewicz-Zygmund strong law runs as follows. Theorem 1.3 Let 0 < r < 2, and suppose that X, {X k , k ∈ Z d } are i.i.d. random variables with partial sums S n = k≤n X k , n ∈ Z d . If E|X| r (log + |X| d−1 ) < ∞, and E X = 0 when 1 ≤ r < 2, then S n |n| 1/r a.s. → 0 as n → ∞.
Conversely, if almost sure convergence holds as stated, then E|X| r (log + |X| d−1 ) < ∞, and E X = 0 when 1 ≤ r < 2.
Here |n| = d k=1 n i and n → ∞ means inf 1≤k≤d n i → ∞, that is, all coordinates tend to infinity. The theorem was proved in [18] for the case r = 1 and, generally, in [5].
For the analogous weak laws a finite moment of order r suffices (in fact, even a little less), since convergence in probability is independent of the order of the index set.
The central object of investigation in the present paper is
$$\frac{1}{A^{\alpha}_{m} A^{\beta}_{n}} \sum_{k,l=0}^{m,n} A^{\alpha-1}_{m-k}\, A^{\beta-1}_{n-l}\, X_{k,l}, \qquad (1.4)$$
for which we shall establish conditions for convergence in probability, almost sure convergence and complete convergence Let us already at this point observe that for α = β = 1 the quantity in (1.4) reduces to that of Theorem 1.3 with r = 1, that is, to the multiindex Kolmogorov strong law obtained in [18]. A second thought leads us to extensions of Theorem 1.3 to the case when we do not normalize the partial sums with the product of the coordinates raised to some power, but the product of the coordinates raised to different powers, viz., to, for example (d = 2), S m,n m α n β for 0 < α < β ≤ 1,
(where thus the case α = β = 1/r relates to Theorem 1.3). Here we only mention that some surprises occur depending on the domain of the parameters α and β. For details concerning this "asymmetric" Kolmogorov-Marcinkiewicz-Zygmund extension we refer to [9]. After some preliminaries we present our results for the different modes of convergence mentioned above. A final appendix contains a collection of so-called elementary but tedious calculations.
Preliminaries
Here we collect some facts that will be used on and off, in general without specific reference.
•
The first fact we shall use is that whenever weak forms of convergence or sums of probabilites are inyvolved we may equivalently compute sums "backwards", which, in view of the i.i.d. assumption shows that, for example m,n m,n k,l=0
P (A α−1 m−k A β−1 n−l |X k | > A α m A β n ) < ∞ ⇐⇒ m,n m,n k,l=1 P (A α−1 k A β−1 l |X| > A α m A β n ) < ∞. (2.1)
In the same vein the order of the index set is irrelevant, that is, one-dimensional results and methods remain valid.
• Secondly we recall from (1.1) that A α 0 = 1 and that.
A α n = (α + 1)(α + 2) · · · (α + n) n! , n = 1, 2, . . . , which behaves asymptotically as
A α n ∼ n α Γ(α + 1) as n → ∞, (2.2)
where ∼ denotes that the limit as n → ∞ of the ratio between the members on either side equals 1. Combining the two relations above tells us that m,n m,n k,l=0
P (A α−1 m−k A β−1 n−l |X| > A α m A β n ) < ∞ ⇐⇒ m,n m,n k,l=1 P (k α−1 l β−1 |X| > m α n β ) < ∞ .(2.3) •
We shall also make abundant use of the fact that if {a k ∈ R, k ≥ 1}, then a n → 0 as n → ∞ =⇒ 1 n n k=1 a k → 0 as n → ∞,
(2.4) that if, in addition, w k ∈ R + , k ≥ 1, with B n = n k=1 w k , n ≥ 1, where B n ր ∞ as n → ∞, then 1 B n n k=1 w k a k → 0 as n → ∞,(2.5)
as well as integral versions of the same.
Convergence in probability
We thus begin by investigating convergence in probability. We do not aim at optimal conditions, except that, as will be seen, the weak law does not require finiteness of the mean (whereas the strong law does so).
Theorem 3.1 Let 0 < α ≤ β ≤ 1 and suppose that {X k,l , k, l ≥ 0} are i.i.d. random variables. Further, set, for 0 ≤ k ≤ m, 0 ≤ l ≤ n, Y m,n k,l = A α−1 m−k A β−1 n−l X k,l I{|X k,l | ≤ A α m A β n }, S ′ m,n = m,n k,l=0 Y m,n k,l and µ m,n = E S ′ m,n . Then 1 A α m A β n m,n k,l=0 A α−1 m−k A β−1 n−l X k,l − µ m,n p → 0 as m, n → ∞ (3.1) if nP (|X| > n) → 0 as n → ∞ . (3.2)
If, in addition,
µ m,n A α m A β n → 0 as m, n → ∞, (3.3) then 1 A α m A β n m,n k,l=0 A α−1 m−k A β−1 n−l X k,l p → 0 as m, n → ∞ . (3.4) Remark 3.1 Condition (3.
2) is short of E|X| < ∞, i.e., the theorem extends the Kolmogorov-Feller weak law [12], [13], and [3], Section VII.7, to a weak law for weigthed random fields for a class of weights decaying as powers of order less than 1 in each direction. .3) we may, equivalently, prove the theorem for the respective powers of k and l, viz., we redefine the truncated means as
Y m,n k,l = k α−1 l β−1 X k,l I{k α−1 l β−1 |X k,l | ≤ m α n β },(3.5)
with partial sums and means as
P (k α−1 l β−1 |X| > m α n β ) = 1 m α n β m,n k,l=1 k α−1 l β−1 · m α n β k 1−α l 1−β P (k α−1 l β−1 |X| > m α n β ),
which converges to 0 as m, n → ∞ via (2.5).
In order to verify (3.8) we apply the usual "slicing device" to obtain
1 m 2α n 2β m,n k,l=1 Var Y m,n k,l ≤ 1 m 2α n 2β m,n k,l=1 E Y m,n k,l 2 ≤ 1 m 2α n 2β m,n k,l=1 E k 2(α−1) l 2(β−1) X 2 I{k α−1 l β−1 |X| ≤ m α n β } = 1 m 2α n 2β m,n k,l=1 k 2(α−1) l 2(β−1) mn β/α j=1 E X 2 I{(j − 1) α < k α−1 l β−1 |X| ≤ j α } ≤ 1 m 2α n 2β m,n k,l=1 mn β/α j=1 j 2α P (j − 1) α < k α−1 l β−1 |X| ≤ j α ≤ C m 2α n 2β m,n k,l=1 mn β/α j=1 j i=1 i 2α−1 P (j − 1) α < k α−1 l β−1 |X| ≤ j α ≤ C m 2α n 2β m,n k,l=1 mn β/α i=1 i 2α−1 P (|X| ≥ i α k 1−α l 1−β ) = C m α n β m,n k,l=1 k α−1 l β−1 1 m α n β mn β/α i=1 i α−1 i α k 1−α l 1−β P (|X| ≥ i α k 1−α l 1−β ) ,
→ 0 as m, n → ∞ , by applying (2.5) twice to (3.2). This completes the proof of (3.1), from which (3.4) is immediate. 2 Proof of Corollary 3.1. In order to conclude that also (3.4) holds we use the usual method to show that the normalized trruncated means tend to zero, where w.l.o.g. we assume that E X = 0. Then
1 m α n β m,n k,l=1 E k (α−1) l (β−1) XI{k (α−1) l (β−1) |X| ≤ m α n β } = − 1 m α n β m,n k,l=1 E k (α−1) l (β−1) XI{k (α−1) l (β−1) |X| > m α n β } ≤ 1 m α n β m,n k,l=1
E k (α−1) l (β−1) |X|I{k (α−1) l (β−1) |X| > m α n β } → 0 as n, m → ∞.
A α−1 m−k A β−1 n−l X k,l − µ > A α m A β n ε < ∞ for every ε > 0 , if and only if E|X| 1 α , for 0 < α < 1/2 , α < β ≤ 1, E|X| 1 α log + |X|, for 0 < α = β < 1 2 , E|X| 2 (log + |X|) 3 , for α = β = 1 2 , E|X| 2 (log + |X|) 2 , for α = 1 2 < β ≤ 1, E|X| 2 log + |X|, for 1 2 < α ≤ β ≤ 1. and E X = µ.
Proof. For the proof of the sufficiency we refer to the Appendix.
As for the necessity, we argue as in [6], p. 59. We first suppose that the distribution is symmetric. Now, if complete convergence holds, then, using the fact that max 0≤k,l≤m,n
A α−1 m−k A β−1 n−l |X k,l | ≤ 2 max 0≤µ,ν≤m,n µ,ν k,l=0 A α−1 m−k A β−1 n−l X k,l ,
together with the Lévy inequalities we must have, say, m,n P max 0≤k,l≤m,n
A α−1 m−k A β−1 n−l |X k,l | > A α m A β n < ∞ ,
so that, by the first Borel-Cantelli lemma
P (A α−1 m−k A β−1 n−l |X k,l | > A α m A β n i.o. for 1 ≤ k, l ≤ m, n ; m, n ≥ 1) = 0.
At this point we use a device from [17], p. 379. Namely, if the sums m,n k,l=1 A α−1 m−k A β−1 n−l X k,l were independent, we would conclude that m,n m,n
k,l=1 P (A α−1 m−k A β−1 n−l |X| > A α m A β n ) were finite.
Since, however, finiteness of the sum is only a matter of the tail probabilities, the sum is also finite in the general case.
An application of (A.6) now tells us that the finiteness of the sum is equivalent to the moment conditions as given in the statement of the theorem.
This proves the necessity in the symmetric case. The general case follows the standard desymmetrization procedure, for which we use Theorem 3.1 in order to take care of the asymptotics for the normalized medians (cf. [5], p. 472 for analogous details in the multiindex setting of the Marcinkiewicz-Zygmund strong laws).
A α−1 m−k A β−1 n−l X k,E|X| 1 α log + |X|, for 0 < α = β ≤ 1.
and E X = µ.
Proof. Since complete convergence always implies almost sure convergence, the sufficiency follows immediately for the case α < 1/2. Thus, let in the following 1/2 ≤ α ≤ β ≤ 1. We first consider the symmetric case (and recall Section 2. In analogy with [11], p. 538, the moment assumptions permit us to choose an array {η k,l , k, l ≥ 1} of nonincreasing reals in (0, 1) converging to 0, and such that ∞ k,l=1
P (|X k,l | > η k,l k α l β ) < ∞. Defining Y k,l = X k,l I{|X k,l | ≤ η k,l k α l β } and S ′ m,n = m,n k,l=0 Y m,n k,l ,
it thus remains to prove the theorem for the truncated sequence. This will be achieved via the multiindex Kolmogorov convergence criterion (see e.g [4]) and the multiindex Kronecker lemma (cf. [16]). The first series has just been taken care of, the second one vanishes since we are in the symmetric case, so it remains to check the third series.
Toward that end, let, for k, l ≥ 1,
Z k,l = (m − k) α−1 (n − l) β−1 m α n β Y k,l .
Then
|Z k,l | ≤ (m − k) α−1 (n − l) β−1 m α n β k α l β η k,l ≤ η k,l ≤ η 00 . (5.1)
Now, for any ε > 0, arbitrarily small, we may choose our η-sequence such that η 00 < ε, so that an application of the (iterated) Kahane-Hoffman-Jørgensen inequality (cf. [8], Theorem 3.7.5) yields P m,n k,l=0 Z k,l > 3 j ε < ∞.
Z k,l > 3 j ε ≤ C j P m,n k,l=0 Z k,l > ε 2 j ≤ C j m,n k,l=0 (m − k) (α−1) (n − l) β−1 1/α E|X| 1/α εm α n β 1/α 2 j = C ′ j m,n k,l=0 k (1−1/α) l (β−1)/α mn β/α 2 j = C ′′ j 1 (mn) 1 α −1 2 j , for 1 2 < α < β < 1, C ′′ j log m nm 2 j , for 1 2 = α < β < 1,
By replacing 3 j ε by ε we have thus, due to the arbitrariness of ε, shown that P |Z k,l | > ε i.o. = 0 for any ε > 0, (5.2) from which the desired almost sure convergence follows via the multiindex Kronecker lemma referred to above. This proves the sufficiency in the symmetric case from which the general case follows by the standard desymmetrization procedure hinted at in the proof of Theorem 4.1.
Finally, suppose that almost sure convergence holds as stated. It then follows that which, in turn, is equivalent to the given moment conditions. This concludes the proof of the theorem. 2
A α−1 0 A β−1 0 X m,n A α m A β
Concluding remarks
We close with some comments on the present and related work.
• Convergence in probability has earlier been established in [11] via approximation with indicator variables, and under the assumption of finite mean. Our proof is simpler (more elementary) and presupposes only a Feller condition.
• As pointed out above, almost sure convergence was established in three steps ( [14], [1] and [2]) with different proofs. Our proof, which also works for the case d = 1, takes care of the whole proof in one go (since our proof also works for the case α < 1/2).
• For simplicity we have confined ourselves to the case d = 2. The same ideas can be modified for the case d > 2 and (C, α 1 , α 2 , . . . , α d )-summability. However, the moment conditions then depend on the number of α:s that are equal to the smallest one (corresponding to α < β or α = β in the present paper); see [9] for Kolmogorov-Marcinkiewicz-Zygmund laws.
• Results on complete convergence are special cases of results on convergence rates. In this vein our results are extendable to results concerning m,n n r−2 m r−2 P m,n k,l=0
A α−1 m−k A β−1 n−l X k,l − µ > A α m A β n ε < ∞ for every ε > 0 (cf. [7] for the case d = 1). For the proofs one would need i.a. extensions of the relevant computations in the appendix below.
A Appendix
In this appendix we collect a number of so-called elementary but tedious calculations.
First, let 0 < α ≤ β < 1. Then m,n m,n k,l=1
P (k α−1 l β−1 |X| > m α n β ) < ∞ ⇐⇒ ∞ 1 ∞ 1 x 1 y 1 P (|X| > u 1−α v 1−β x α y β ) dudvdxdy < ∞ ⇐⇒ u 1−α x α = z, v 1−β y β = w ∞ 1 ∞ 1 x x α y y β z x α 1−α w y β 1−β P (|X| > zw) dzdwdxdy < ∞ ⇐⇒ ∞ 1 ∞ 1 z 1/α z dx x α 1−α w 1/β w dy y β 1−β z α 1−α w β 1−β P (|X| > zw) dzdw < ∞ . (A.1)
In case 0 < α < β = 1 we have m,n m,n k,l=1 from which it follows that
∞ 1 ∞ 1 z 1/α z dx x α 1−α w 1/β w dy y β 1−β z α 1−α x β 1−β P (|X| > zw) dzdw = x = zw, y = z = ∞ 1 x 1 x 1−β β y 1 α − 1 β −1 P (|X| > x) dydx = C ∞ 1 x 1 α −1 P (|X| > x) dx, for 0 < α < β < 1 2 , ∞ 1 x 1 x 1−α α 1 y P (|X| > x) dydx = C ∞ 1 x 1−α α log xP (|X| > x) dx,
for 0 < α = β < 1 2 ,
∞ 1
x 1 1 2 x(log x) 2 1 y − x (log y) 2 y P (|X| > x) dxdy
= 1 6 ∞ 1 x(log x) 3 P (|X| > x) dx, for α = β = 1 2 , ∞ 1 x 1 xy 1 α −2 (log x − log y)P (|X| > x) dydx = C ∞ 1 x 1 α −1 P (|X| > x) dx, for α < β = 1 2 , ∞ 1 x 1 xy 1 α −2 P (|X| > x) dydx = C ∞ 1 x 1 α −1 P (|X| > x) dx, for α < 1 2 < β ≤ 1, ∞ 1
x 1 x log y y P (|X| > x) dydx = 1 2 ∞ 1 x(log x) 2 P (|X| > x) dx, for α = 1 2 < β ≤ 1, ∞ 1
x 1 x 1 y P (|X| > x) dydx = 1 2 ∞ 1 x log xP (|X| > x) dx, for 1 2 < α ≤ β ≤ 1.
(A.4)
Summarizing this we have shown that, for 0 < α ≤ β < 1, m,n m,n k,l=1
P (A α−1 k A β−1 l |X| > A α m A β n ) < ∞ ⇐⇒ (A.5) E|X| 1 α ,
for 0 < α < 1/2, α < β ≤ 1, E|X| 1 α log + |X|, for 0 < α = β < 1 2 , E|X| 2 (log + |X|) 3 , for α = β = 1 2 , E|X| 2 (log + |X|) 2 , for α = 1 2 < β ≤ 1, E|X| 2 log + |X|, for 1 2 < α ≤ β ≤ 1.
(A.6)
P
In order to check the conditions of the degenerate convergence criterion we thus wish to show that, if (3.2) is satisfied, then (k α−1 l β−1 |X| > m α n β ) → 0 as m, n → ∞ ,
2 5
2Almost sure convergence
α = β,(since the usual first term in the RHS vanishes in view of (5.1)).By choosing j sufficiently large it then follows that
as m, n → ∞, which, in view of i.i.d. assumption and the second Borel-Cantelli lemma, tells us that m,n P (|X| > m α n β ) < ∞,
Corollary 3.1 If, in addition, E X = 0, then(3.4) holds (and if the mean µ is not equal to zero the limit in(3.4) equals µ).Proof of Theorem 3.1. The proof of the theorem amounts to an application of the so-called degenerate convergence criterion, see e.g.Corollary 3.2 If, in addition, the distribution of the summands is symmetric, then (3.2) alone
suffices for (3.4) to hold.
[8], Theorem 6.3.3.
Recalling (2.1) and (2
Proof of Corollary 3.2. Immediate, since the truncated means are (also) equal to zero. 2 Theorem 4.1 Let 0 < α ≤ β ≤ 1. The field {X k,l , k, l ≥ 0} converges completely to µ, i.e.,4 Complete convergence
m n
P
m,n
k,l=0
AcknowledgementThe work on this paper has been supported by Kungliga Vetenskapssamhället i Uppsala. Their support is gratefully acknowledged. In addition, the second author likes to thank his partner Allan Gut for the great hospitality during two wonderful and stimulating weeks at the University of Uppsala.Next we note thatfor α < 1 2 < β ≤ 1, zw log z, for α = 1 2 < β ≤ 1, zw, for 1 2 < α ≤ β ≤ 1,
[1] Chow, Y.S. and Lai, T.L. (1973). Limiting behavior of weighted sums of independent random variables. Ann. Probab. 1, 810-824.
[2] Déniel, Y. and Derriennic, Y. (1988). Sur la convergence presque sure, au sens de Cesàro d'ordre α, 0 < α < 1, de variables aléatoires indépendantes et identiquement distribuées. Probab. Th. Rel. Fields 79, 629-636.
[3] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2, 2nd ed. Wiley, New York.
[4] Gabriel, J.-P. (1977). An inequality for sums of independent random variables indexed by finite dimensional filtering sets and its applications to the convergence of series. Ann. Probab. 5, 779-786.
[5] Gut, A. (1978). Marcinkiewicz laws and convergence rates in the law of large numbers for random variables with multidimensional indices. Ann. Probab. 6, 469-482.
[6] Gut, A. (1992). Complete convergence for arrays. Period. Math. Hungar. 25, 51-75.
[7] Gut, A. (1993). Complete convergence and Cesàro summation for i.i.d. random variables. Probab. Th. Rel. Fields 97, 169-178.
[8] Gut, A. (2007). Probability: A Graduate Course, Corr. 2nd printing. Springer-Verlag, New York.
[9] Gut, A. and Stadtmüller, U. (2008). An asymmetric Marcinkiewicz-Zygmund LLN for random fields. Report U.U.D.M. 2008:38, Uppsala University.
[10] Hardy, G.H. (1949). Divergent Series. Oxford University Press.
[11] Heinkel, B. (1990). An infinite-dimensional law of large numbers in Cesaro's sense. J. Theoret. Probab. 3, 533-546.
[12] Kolmogorov, A.N. (1928). Über die Summen durch den Zufall bestimmter unabhängiger Größen. Math. Ann. 99, 309-319.
[13] Kolmogorov, A.N. (1930). Bemerkungen zu meiner Arbeit "Über die Summen zufälliger Größen". Math. Ann. 102, 484-488.
[14] Lorentz, G.G. (1955). Borel and Banach properties of methods of summation. Duke Math. J. 22, 129-141.
[15] Marcinkiewicz, J. and Zygmund, A. (1937). Sur les fonctions indépendantes. Fund. Math. 29, 60-90.
[16] Moore, C.N. (1966). Summable Series and Convergence Factors. Dover, New York.
[17] Nerman, O. (1981). On the convergence of supercritical general (C-M-J) branching processes. Z. Wahrsch. verw. Gebiete 57, 365-395.
[18] Smythe, R. (1973). Strong laws of large number for r-dimensional arrays of random variables. Ann. Probab. 1, 164-170.
[19] Stadtmüller, U. and Thalmaier, M. (2008). Strong laws for delayed sums of random fields. Preprint, University of Ulm.
[20] Thalmaier, M. (2008). Grenzwertsätze für gewichtete Summen von Zufallsvariablen und Zufallsfeldern. Dissertation, University of Ulm.
[21] Zygmund, A. (1968). Trigonometric Series. Cambridge University Press.
Email: [email protected], URL: http://www.math.uu.se/~allan
Email: [email protected], URL: http://www.mathematik.uni-ulm.de/matheIII/members/stadtmueller/stadtmueller.html
| []
|
[
"Akid: A Library for Neural Network Research and Production From A Dataism Approach",
"Akid: A Library for Neural Network Research and Production From A Dataism Approach"
]
| [
"Shuai Li [email protected] \nChinese University of Hong Kong\n\n"
]
| [
"Chinese University of Hong Kong\n"
]
| []
| Neural networks are a revolutionary but immature technique that is fast evolving and heavily relies on data. To benefit from the newest development and newly available data, we want the gap between research and production as small as possibly. On the other hand, differing from traditional machine learning models, neural network is not just yet another statistic model, but a model for the natural processing engine -the brain. In this work, we describe a neural network library named akid. It provides higher level of abstraction for entities (abstracted as blocks) in nature upon the abstraction done on signals (abstracted as tensors) by Tensorflow, characterizing the dataism observation that all entities in nature processes input and emit out in some ways. It includes a full stack of software that provides abstraction to let researchers focus on research instead of implementation, while at the same time the developed program can also be put into production seamlessly in a distributed environment, and be production ready. At the top application stack, it provides out-of-box tools for neural network applications. Lower down, akid provides a programming paradigm that lets user easily build customized models. The distributed computing stack handles the concurrency and communication, thus letting models be trained or deployed to a single GPU, multiple GPUs, or a distributed environment without affecting how a model is specified in the programming paradigm stack. Lastly, the distributed deployment stack handles how the distributed computing is deployed, thus decoupling the research prototype environment with the actual production environment, and is able to dynamically allocate computing resources, so development (Devs) and operations (Ops) could be separated. It has been open source, and please refer to http://akid.readthedocs.io/en/latest/ for documentation. | null | [
"https://arxiv.org/pdf/1701.00609v1.pdf"
]
| 6,770,505 | 1701.00609 | 6f750743a246983f53d98f050a50cfdc085a0095 |
Akid: A Library for Neural Network Research and Production From A Dataism Approach
Shuai Li [email protected]
Chinese University of Hong Kong
Akid: A Library for Neural Network Research and Production From A Dataism Approach
neural networklibraryblockdistributed computing
Neural networks are a revolutionary but immature technique that is fast evolving and heavily relies on data. To benefit from the newest development and newly available data, we want the gap between research and production as small as possibly. On the other hand, differing from traditional machine learning models, neural network is not just yet another statistic model, but a model for the natural processing engine -the brain. In this work, we describe a neural network library named akid. It provides higher level of abstraction for entities (abstracted as blocks) in nature upon the abstraction done on signals (abstracted as tensors) by Tensorflow, characterizing the dataism observation that all entities in nature processes input and emit out in some ways. It includes a full stack of software that provides abstraction to let researchers focus on research instead of implementation, while at the same time the developed program can also be put into production seamlessly in a distributed environment, and be production ready. At the top application stack, it provides out-of-box tools for neural network applications. Lower down, akid provides a programming paradigm that lets user easily build customized models. The distributed computing stack handles the concurrency and communication, thus letting models be trained or deployed to a single GPU, multiple GPUs, or a distributed environment without affecting how a model is specified in the programming paradigm stack. Lastly, the distributed deployment stack handles how the distributed computing is deployed, thus decoupling the research prototype environment with the actual production environment, and is able to dynamically allocate computing resources, so development (Devs) and operations (Ops) could be separated. It has been open source, and please refer to http://akid.readthedocs.io/en/latest/ for documentation.
INTRODUCTION
Neural network, which is a cornerstone technique of the pool of techniques that nowadays goes under the name of Deep Learning, seems to have the potential to lead to another technology revolution. It has generated wide enthusiasm in industry, serious consideration in the public sector, and impact evaluation in government. However, though being a remarkable breakthrough in high-dimensional perception problems academically, and intellectually stimulating and promising [8] [13] [5] [9] [14], it is still a rather immature technique that is fast moving and short on understanding [11]. For now, its true value lies in the capability to solve perception-related data analytic problems in industry, e.g. self-driving cars, detection of lung cancer, etc. On the other hand, Neural Network is a technique that heavily relies on a large volume of data. It is critical for businesses that use such a technique to leverage newly available data as soon as possible, which helps form a positive feedback loop that reinforces the quality of service.
Accordingly, to benefit from the newest development and newly available data, we want the gap between research and production as small as possible. In this package, we explore technology stack abstraction that enable fast research prototyping and are production ready.
akid tries to provide a full stack of software that provides abstraction to let researchers focus on research instead of implementation, while at the same time the developed program can also be put into production seamlessly in a distributed environment, and be production ready.
At the top application stack, it provides out-of-box tools for neural network applications. Lower down, akid provides programming paradigm that lets users easily build customized models, which is the major intellectual innovation of akid that provides higher level of abstraction for entities in nature (abstracted as blocks) upon the abstraction done on signals (abstracted as tensors) by Tensorflow. The distributed computing stack handles the concurrency and communication, thus letting models be trained or deployed to a single GPU, multiple GPUs, or a distributed environment without affecting how a model is specified in the program-ming paradigm stack. Lastly, the distributed deployment stack handles how the distributed computing is deployed, thus decoupling the research prototype environment with the actual production environment, and is able to dynamically allocate computing resources, so development (Devs) and operations (Ops) could be separated. An illustration of the four stack is shown in Figure 1.
From a feature point of view as a library, it aims to enable fast prototyping and production ready at the same time by offering the following features:
• supports fast prototyping built-in data pipeline framework that standardizes data preparation and data augmentation. arbitrary connectivity schemes (including multiinput and multi-output training), and easy retrieval of parameters and data in the network meta-syntax to generate neural network structure before training support for visualization of computation graph, weight filters, feature maps, and training dynamics statistics.
• be production ready built-in support for distributed computing compatibility to orchestrate with distributed file systems, docker containers, and distributed operating systems such as Kubernetes.
The name comes from the Kid saved by Neo in Matrix, and the metaphor to build a learning agent, which we call kid in human culture.
The rest of the paper discusses related works in Section 2, each stack of akid in detail in Section 3.
RELATED WORKS
akid differs from existing packages from the perspective that it does not aim to be yet another wrapper for another machine learning model. Subtle it seems. The fundamental difference lies in the design. It aims to reproduce how signal propagates in nature by introducing Block. If Tensor in Tensorflow can be viewed as the abstraction for signals in nature, Block can be viewed as the abstraction for entities in nature, which all process inputs in some way, and emit output. It also aims to integrate technology stacks to solve both research prototyping and industrial production by clearly defining the behavior for each stack. We compare akid with existing packages in the following briefly. Note that since Tensorflow is used as the computation backend, we do not discuss speed here, which is not our concern for akid.
Theano [12], Torch [3], Caffe [7], and MXNet [2] are packages that aim to provide a friendly front end to a complex computation back-end written in C++. Theano is a Python front end to a computational graph compiler, which has been largely superseded by Tensorflow in compilation speed, flexibility, portability, etc., while akid is built on top of Tensorflow. MXNet is a strong competitor to Tensorflow. Torch is similar to Theano, but with Lua as the front-end language, a choice mostly motivated by the fact that it is much easier to interface with C from Lua than from Python. It was widely used before deep learning reached wide popularity, but is mostly a quick solution for doing research in neural networks when integration with the community and general-purpose production programming are not pressing. Caffe is written in C++, and its friendly front end, aka the text network configuration file, loses its appeal when the model grows beyond dozens of layers.
DeepLearning4J is an industrial solution to neural networks written in Java and Scala, and is too heavy weight for research prototyping.
Perhaps the package most similar to akid is Keras, as both aim to provide a more intuitive interface to a relatively low-level library, i.e. Tensorflow. akid differs from Keras in at least two fundamental aspects. First, akid mimics how signals propagate in nature by abstracting everything as a semantic block, which holds many states and is thus able to provide a wide range of functionalities in an easily customizable way, while Keras uses a functional API that directly manipulates tensors, which is a lower level of abstraction; e.g., it has to traverse class attributes to retrieve layer weights under a fixed variable name, while in akid variables are retrieved by name. Second, Keras mostly only provides an abstraction to build a neural network topology, which corresponds roughly to the programming paradigm stack of akid, while akid provides a unified abstraction that includes the application stack, the programming stack, and the distributed computing stack. A noticeable improvement is that Keras needs the user to handle communication and concurrency, while the distributed computing stack of akid hides them.
AKID STACK
Now we go technical to discuss each stack provided by akid. The major novel intellectual design of akid is the programming paradigm that provides a higher level abstraction upon signal/tensor. We introduce it first, then we discuss the application stack, and distributed computing and deployment stack. akid builds another layer of abstraction on top of Tensor : Block. Tensor can be taken as the media/formalism signal propagates in digital world, while Block is the data processing entity that processes inputs and emits outputs.
Programming Paradigm
It is all about signal processing blocks
It coincides with a branch of "ideology" called dataism that takes everything in this world is a data processing entity. An interesting one that may come from A Brief History of Tomorrow by Yuval Noah Harari.
Best designs mimic nature. akid tries to reproduce how signals in nature propagates. Information flow can be abstracted as data propagating through inter-connected blocks, each of which processes inputs and emits outputs. For example, a vision classification system is a block that takes image inputs and gives classification results. Everything is a Block in akid.
A block could be as simple as a convolutional neural network layer that merely does convolution on the input data and outputs the results; it can also be as complex as an acyclic graph that inter-connects blocks to build a neural network, or a sequentially linked block system that does data augmentation.
Compared with a pure symbolic computation approach, like the one in Tensorflow, a block is able to contain states associated with this processing unit. Signals are passed between blocks in the form of tensors or lists of tensors. Much heavy lifting has been done in the block (Block and its subclasses), e.g. pre-condition setup, name scope maintenance, copy functionality for validation and for distributed replicas, setting up and gathering visualization summaries, centralization of variable allocation, attaching debugging ops now and then, etc.
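To make the idea concrete, the following is a minimal, self-contained sketch of such a processing unit; the class and method names here (Block, setup, _forward, ScaleBlock) are illustrative only and do not reproduce akid's actual API.

# A toy version of the block concept: a named unit that holds state,
# validates its inputs, and caches its outputs. Not akid's real Block.
class Block(object):
    def __init__(self, name):
        self.name = name          # used for name scoping and retrieval
        self.outputs = None       # cached results of the last setup

    def setup(self, inputs):
        # Pre-condition check before doing any processing.
        if not isinstance(inputs, list):
            raise ValueError("inputs to a block should be a list")
        self.outputs = self._forward(inputs)
        return self.outputs

    def _forward(self, inputs):
        # Subclasses override this with the actual processing.
        raise NotImplementedError


class ScaleBlock(Block):
    """A trivial block that multiplies each input by a constant."""
    def __init__(self, factor, **kwargs):
        super(ScaleBlock, self).__init__(**kwargs)
        self.factor = factor

    def _forward(self, inputs):
        return [self.factor * x for x in inputs]


block = ScaleBlock(factor=2.0, name="scale1")
print(block.setup([1.0, 2.0]))  # prints [2.0, 4.0]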
akid offers various kinds of blocks that are able to connect to other blocks in an arbitrary way, as illustrated in Figure 2. It is also easy to build one's own blocks. The Kid class is essentially an assembler that assembles blocks provided by akid, mainly to fulfill the task of training neural networks. Here we show how to build an arbitrary acyclic graph of blocks using the class Brain, to illustrate how to use blocks in akid.
A brain is the data processing engine to process data supplied by Sensor to fulfill certain tasks. More specifically,
• it builds up blocks to form an arbitrary network • offers sub-graphs for inference, loss, evaluation, summaries
• provides access to all data and parameters within
To use a brain, data should be fed in as a list, as is done with any other block. Some pre-specified brains are available under akid.models.brains. An example could be:

# ... first get a feed sensor
sensor.setup()
brain = OneLayerBrain(name="brain")
input = [sensor.data(), sensor.labels()]
brain.setup(input)

Note that in this case, data() and labels() of the sensor return tensors. That is not always the case. If they do not, say they return a list of tensors, you need to do things like:

input = [sensor.data()]
input.extend(sensor.labels())

Act accordingly. Similarly, all blocks work this way. A brain provides easy ways to connect blocks. For example, a one-layer brain can be built through the following:
class OneLayerBrain(Brain):
    def __init__(self, **kwargs):
        super(OneLayerBrain, self).__init__(**kwargs)
        self.attach(
            ConvolutionLayer(ksize=[5, 5],
                             strides=[1, 1, 1, 1],
                             padding="SAME",
                             out_channel_num=32,
                             name="conv1")
        )
        self.attach(ReLULayer(name="relu1"))
        self.attach(
            PoolingLayer(ksize=[1, 5, 5, 1],
                         strides=[1, 5, 5, 1],
                         padding="SAME",
                         name="pool1")
        )
        self.attach(InnerProductLayer(
            out_channel_num=10, name="ip1"))
        self.attach(SoftmaxWithLossLayer(
            class_num=10,
            inputs=[
                {"name": "ip1", "idxs": [0]},
                {"name": "system_in", "idxs": [1]}],
            name="loss"))

It assembles a convolution layer, a ReLU layer, a pooling layer, an inner product layer, and a loss layer. To attach a block (layer) that directly takes the outputs of the previously attached layer as inputs, just directly attach the block. If inputs exists, the brain will fetch the corresponding tensors by the name of the attached block and the indices of that block's outputs. See the loss layer above for an example. Note that even though there are multiple inputs for the brain, the first attached layer of the brain will take the first of these inputs by default, given the convention that the first tensor is the data, and the remaining tensors are normally labels, which are not used until very late.
As an example to build more complex connectivity scheme, residual units can be built using Brain as shown in Figure 3.
Self-modifying brains -parameter tuning
akid offers automatic parameter tuning through defining templates with the tune function, which takes a Brain jinja2 template class and the parameters with which to fill the template at runtime.
The tune function would use all available GPUs to train networks with all the given sets of parameters. If there are not enough available GPUs, the instances that cannot be trained yet will wait until some others finish, and then get their turn.
Tunable parameters are divided into two sets, network hyper parameters, net_paras_list, and optimization hyper parameters, opt_paras_list. Each set is specified by a list whose items are dictionaries that hold the actual values of whatever hyper parameters are defined as jinja2 templates. Each item in the list corresponds to a tentative training instance. Network paras and optimization paras combine with each other in a Cartesian-product fashion, which is to say that if you have two items in the network parameter list and two in the optimization parameters, the total number of training instances will be four.
Given the number of available GPUs, a semaphore is created to control access to the GPUs. A lock is created to control access to the mask that indicates which GPUs are available. After a process has modified the GPU mask, it releases the lock immediately, so other processes can access it. But the semaphore is still not released, since it is used to control access to the actual GPU. A training instance will be launched in a subshell using the GPU acquired. The semaphore is only released after the training has finished.
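As an illustration of this resource-allocation scheme, the following is a simplified sketch built on Python's standard threading primitives; the train function is a hypothetical stand-in for launching one training instance, and the snippet does not reproduce akid's actual scheduler.

# Simplified sketch of the semaphore/lock GPU allocation described above.
import threading

NUM_GPU = 2
semaphore = threading.Semaphore(NUM_GPU)   # limits concurrent GPU users
lock = threading.Lock()                     # guards the availability mask
gpu_free = [True] * NUM_GPU                 # True means the GPU is free

def train(paras, gpu):
    # Hypothetical stand-in for launching one training instance.
    print("training %s on GPU %d" % (paras, gpu))

def worker(paras):
    semaphore.acquire()                     # wait until some GPU is available
    with lock:                              # briefly guard the mask only
        gpu = gpu_free.index(True)
        gpu_free[gpu] = False
    try:
        train(paras, gpu)
    finally:
        with lock:
            gpu_free[gpu] = True
        semaphore.release()                 # the GPU can be reused

threads = [threading.Thread(target=worker, args=(p,))
           for p in ("lr=0.025", "lr=0.05", "lr=0.1")]
for t in threads:
    t.start()
for t in threads:
    t.join()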
For example, to tune the activation function and learning rates of a network, first we set up network parameters in net_paras_list, optimization parameters in opt_paras_list, build a network in the setup function, then pass all of it to tune:

net_paras_list = []
net_paras_list.append({
    "activation": [
        {"type": "relu"},
        {"type": "relu"},
        {"type": "relu"},
        {"type": "relu"}],
    "bn": True})
net_paras_list.append({
    "activation": [
        {"type": "maxout", "group_size": 2},
        {"type": "maxout", "group_size": 2},
        {"type": "maxout", "group_size": 2},
        {"type": "maxout", "group_size": 5}],
    "bn": True})

opt_paras_list = []
opt_paras_list.append({"lr": 0.025})
opt_paras_list.append({"lr": 0.05})

def setup(graph):
    brain.attach(cnn_block(
        ksize=[8, 8],
        init_para={
            "name": "uniform",
            "range": 0.005},
        wd={"type": "l2", "scale": 0.0005},
        out_channel_num=384,
        pool_size=[4, 4],
        pool_stride=[2, 2],
        activation={{ net_paras["activation"][1] }},
        keep_prob=0.5,
        bn={{ net_paras["bn"] }}))

tune(setup, opt_paras_list, net_paras_list)
Application stack
At the top of the stack, akid could be used as a part of application without knowing the underlying mechanism of neural networks.
akid provides full machinery from preparing data, augmenting data, specifying computation graph (neural network architecture), choosing optimization algorithms, specifying parallel training scheme (data parallelism etc), logging and visualization.
Neural network training -A holistic example
To create better tools to train neural networks has been at the core of the original motivation of akid. Consequently, in this section, we describe how akid can be used to train neural networks. Currently, all the other features revolve around this.
The snippet below builds a simple neural network, and trains it using MNIST, the digit recognition dataset.
from akid import AKID_DATA_PATH
from akid import FeedSensor
from akid import Kid
from akid import MomentumKongFu
from akid import MNISTFeedSource
from akid.models.brains import LeNet

brain = LeNet(name="Brain")
source = MNISTFeedSource(
    name="Source",
    url='http://yann.lecun.com/exdb/mnist/',
    work_dir=AKID_DATA_PATH + '/mnist',
    center=True,
    scale=True,
    num_train=50000,
    num_val=10000)
sensor = FeedSensor(name='Sensor', source_in=source)
kid = Kid(sensor,
          brain,
          MomentumKongFu(name="Kongfu"),
          max_steps=100)
kid.setup()
kid.practice()

It builds a computation graph as shown in Figure 4. The underlying stories are described in the following section, which also debriefs the design motivation and vision behind akid.
akid is a kid who has the ability to keep practicing to improve itself. The kid perceives a data Source with its Sensor and certain learning methods (nicknamed KongFu) to improve itself (its Brain), to fulfill a certain purpose. The world is timed by a clock. It represents how long the kid has been practicing. Technically, the clock is the conventional training step.
To break things down, Sensor takes a Source which provides data either in the form of tensors from Tensorflow or as numpy arrays. Optionally, it can make jokers on the data using Joker, meaning doing data augmentation. The data processing engine, which is a deep neural network, is abstracted as a Brain. Brain is the name we give to the data processing system in living beings. A Brain incarnates one data processing system topology or, in neural network terminology, one network structure topology, such as sequentially linked layers, to process data. Available topologies are defined in the module systems. The network training methods, which are first-order iterative optimization methods, are abstracted as a class KongFu. A living being needs to keep practicing Kong Fu to get better at tasks needed to survive.
A living being is abstracted as a Kid class, which assembles all the above classes together to play the game. The metaphor means that by sensing more examples, with a certain genre of Kong Fu (different training algorithms and policies), the data processing engine of the Kid, the brain, should get better at doing whatever task it is doing, let it be image classification or something else.
Visualization
As a library geared toward research, it also has rich features to visualize various components of a neural network. It has built-in training dynamics visualization, more specifically, distribution visualization of multi-dimensional tensors, e.g., weights, activation, biases, gradients, etc., line graph visualization of scalars, e.g., training loss, validation loss, learning rate decay, regularization loss in each layer, sparsity of neuron activation, etc., and filter and feature map visualization for neural networks.
Distribution and scalar visualization are built in for typical parameters and measures, and can be easily extended, and distributedly gathered. Distribution visualizations are shown in Figure 5, and scalar visualizations are shown in Figure 6. Reading from top to bottom, the lines have the following meaning: [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum] These percentiles can also be viewed as standard deviation boundaries on a normal distribution: [maximum, µ+1.5σ, µ+σ, µ+0.5σ, µ, µ-0.5σ, µ-σ, µ-1.5σ, minimum] so that the colored regions, read from inside to outside, have widths [σ, 2σ, 3σ] respectively.
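For readers who want to reproduce this kind of plot outside akid, the following is an independent sketch of the percentile-band idea using NumPy and matplotlib on synthetic data; it is not the code akid uses internally.

# Illustrative recreation of the percentile-band plot described above:
# for a tensor tracked over training steps, shade the regions between
# symmetric percentiles and draw the median.
import numpy as np
import matplotlib.pyplot as plt

steps = np.arange(200)
# Fake "weights over time": 500 values per step whose spread shrinks.
data = np.random.randn(200, 500) * np.linspace(1.0, 0.3, 200)[:, None]

percentiles = [0, 7, 16, 31, 50, 69, 84, 93, 100]
bands = np.percentile(data, percentiles, axis=1)   # shape (9, 200)

fig, ax = plt.subplots()
for lo, hi in [(0, 8), (1, 7), (2, 6), (3, 5)]:    # outer to inner regions
    ax.fill_between(steps, bands[lo], bands[hi], color="C0", alpha=0.2)
ax.plot(steps, bands[4], color="C0")               # the median line
ax.set_xlabel("training step")
ax.set_ylabel("value")
plt.show()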
akid supports visualization of all feature maps and filters, with control over the layout, through the Observer class. When you have finished creating a Kid, pass it to an Observer and call its visualization methods.
Distributed Computation
The distributed computing stack is responsible for handling concurrency and communication between different computing nodes, so the end user only needs to deal with how to build a powerful network. All complexity has been hidden in the class Engine. Using an Engine is just a matter of picking one and using it.
More specifically, akid offers a built-in data parallel scheme in the form of the class Engine. Currently, the engine mainly works with neural network training, and it is used with Kid by specifying the engine at the construction of the kid.
As an example, we could do data parallelism on multiple towers using:

kid = kids.Kid(
    sensor,
    brain,
    MomentumKongFu(
        lr_scheme={
            "name": LearningRateScheme.placeholder}),
    engine={"name": "data_parallel", "num_gpu": 2},
    log_dir="log",
    max_epoch=200)

The end computational graph constructed is illustrated in Figure 9.
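To illustrate what the data parallel engine does behind the scenes (see also Figure 9), here is a framework-agnostic sketch of the split-compute-average pattern on a toy linear model; it only conveys the idea and is not akid's Engine implementation.

# Framework-agnostic sketch of the data-parallel pattern of Figure 9:
# split a mini-batch across towers, compute per-tower gradients, and
# average them before a single parameter update.
import numpy as np

def tower_gradient(params, x_batch, y_batch):
    # Gradient of a mean squared loss for a toy linear model y = x @ w.
    pred = x_batch @ params
    return 2.0 * x_batch.T @ (pred - y_batch) / len(x_batch)

def data_parallel_step(params, x, y, num_gpu=2, lr=0.1):
    x_splits = np.array_split(x, num_gpu)        # the "data_split" blocks
    y_splits = np.array_split(y, num_gpu)
    grads = [tower_gradient(params, xs, ys)      # one replica per tower
             for xs, ys in zip(x_splits, y_splits)]
    avg_grad = np.mean(grads, axis=0)            # gradient-averaging tower
    return params - lr * avg_grad                # broadcast updated params

rng = np.random.RandomState(0)
x = rng.randn(64, 3)
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w
w = np.zeros(3)
for _ in range(100):
    w = data_parallel_step(w, x, y)
print(w)  # close to [1.0, -2.0, 0.5]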
Distributed Deployment
The distributed deployment stack handles the actual production environment, thus decoupling the development/prototyping environment from the production environment. We leverage recent developments of distributed systems in the open source community to build a distributed deployment solution for akid. More specifically, we investigate and test out three cornerstone techniques that provide network file system (Glusterfs), containerization (Docker), and distributed scheduling (Kubernetes) functionality. More would come when we get the chance to test them out in a real production environment.
CONCLUSION
We have described akid, a neural network library that provides a four-layer stack to enable fast research prototyping while being production ready. It has a clean and intuitive application-facing interface and a nature-inspired programming paradigm, abstracts away distributed computing, and decouples development and operations.
Figure 9: Illustration of the computational graph constructed by a data parallel engine. It partitions a mini-batch of data into subsets, as indicated by the data_split blue blocks, and passes the subsets to replicas of the neural network model at different computing towers, as indicated by the gray blocks one level above the blue blocks. After the inference results have been computed, the results and the labels (from the split data block) are passed to the optimizers in the same tower, as indicated by the red and orange blocks named opt, to compute the gradients. Lastly, the gradients are passed to a tower that computes the average of the gradients and passes them back to the neural networks of each computing tower to update their parameters.
Figure 1: Illustration of the stack abstraction of akid.
Figure 2: Illustration of the arbitrary connectivity supported by akid. Forward connection, branching and merging, and feedback connection are supported.
Figure 3: A residual unit. On the left is the branch that builds up pattern complexity, and on the right is the stem branch that shortcuts any layer to any layer. They merge at the start and at the end of the branching points.
Figure 4: Computational graph of the simple neural network built for the MNIST digit recognition example.
Figure 5: Visualization of how the distribution of multi-dimensional tensors changes over time. Each line on the chart represents a percentile in the distribution over the data: for example, the bottom line shows how the minimum value has changed over time, and the line in the middle shows how the median has changed.

# Visualize filters as the following
o.visualize_filters()
# Or visualize feature maps as the following
o.visualize_activation()

Various layouts are provided when drawing the filters. Additional features are also available. The post-processed visualization results of filters are shown in Figure 8, and those of feature maps are shown in Figure 7.
Figure 6: Visualization of how important scalar measures change over time.
Figure 7: Visualization of feature maps learned.
Figure 8: Visualization of filters learned.
[1] B. Burns, B. Grant, D. Oppenheimer, E. Brewer, and J. Wilkes. Borg, Omega, and Kubernetes. Queue, 14(1):70-93, Jan 2016.
[2] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In NIPS, Workshop on Machine Learning Systems, Dec 2016.
[3] R. Collobert, S. Bengio, and J. Marithoz. Torch: A Modular Machine Learning Software Library. Technical report, 2002.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[5] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In ICCV, 2015.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In ECCV, 2016.
[7] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. Technical report, Jun 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
[9] S. Mallat. Understanding deep convolutional networks. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 374(2065), 2016.
[10] D. Merkel. Docker: lightweight Linux containers for consistent development and deployment. Linux Journal, (239):2, 2014.
[11] C. Szegedy, W. Zaremba, and I. Sutskever. Intriguing properties of neural networks. arXiv preprint, pages 1-10, 2013.
[12] The Theano Development Team: R. Al-Rfou, G. Alain, A. Almahairi, et al. Theano: A Python framework for fast computation of mathematical expressions. Technical report, May 2016.
[13] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig. Achieving Human Parity in Conversational Speech Recognition. Technical report, Oct 2016.
[14] M. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
| []
|
[
"Alpha Decay Energies of Superheavy Nuclei: Systematic Trends",
"Alpha Decay Energies of Superheavy Nuclei: Systematic Trends"
]
| [
"E Olsen \nDepartment of Physics and Astronomy\nFRIB Laboratory\nMichigan State University\n48824East LansingMichiganUSA\n",
"W Nazarewicz \nDepartment of Physics and Astronomy\nFRIB Laboratory\nMichigan State University\n48824East LansingMichiganUSA\n"
]
| [
"Department of Physics and Astronomy\nFRIB Laboratory\nMichigan State University\n48824East LansingMichiganUSA",
"Department of Physics and Astronomy\nFRIB Laboratory\nMichigan State University\n48824East LansingMichiganUSA"
]
| []
| Background: New superheavy nuclei are often identified through their characteristic α-decay energies, which requires accurate calculations of Qα values. While many Qα predictions are available, little is known about their uncertainties, and this makes it difficult to carry out extrapolations to yet-unknown systems.Purpose: This work aims to analyze several models, compare their predictions to available experimental data, and study their performance for the unobserved α-decay chains of 296 120 and 298 120, which are of current experimental interest. Our quantified results will also serve as a benchmark for future, more sophisticated statistical studies.Methods: We use nuclear superfluid Density Functional Theory (DFT) with several Skyrme energy density functionals (EDFs). To estimate systematic model uncertainties we employ uniform model averaging.Results: We evaluated the Qα values for even-even nuclei from Fm to Z = 120. For well deformed nuclei between Fm and Ds, we find excellent consistency between different model predictions, and a good agreement with experiment. For transitional nuclei beyond Ds, inter-model differences grow, resulting in an appreciable systematic error. In particular, our models underestimate Qα for the heaviest nucleus 294 Og.Conclusions:The robustness of DFT predictions for well deformed superheavy nuclei supports the idea of using experimental Qα values, together with theoretical predictions, as reasonable (Z, A) indicators. Unfortunately, this identification method is not expected to work well in the region of deformed-to-spherical shape transition as one approaches N = 184. The use of Qα values in the identification of new superheavy nuclei will benefit greatly from both progress in developing new spectroscopic-quality EDFs and more sophisticated statistical techniques of uncertainty quantification. | 10.1103/physrevc.99.014317 | [
"https://arxiv.org/pdf/1811.00427v2.pdf"
]
| 118,997,419 | 1811.00427 | 8ee3719ad72c59b64d13ca13bfb3081d29ccd7e6 |
Alpha Decay Energies of Superheavy Nuclei: Systematic Trends
E Olsen
Department of Physics and Astronomy
FRIB Laboratory
Michigan State University
48824East LansingMichiganUSA
W Nazarewicz
Department of Physics and Astronomy
FRIB Laboratory
Michigan State University
48824East LansingMichiganUSA
Alpha Decay Energies of Superheavy Nuclei: Systematic Trends
(Dated: January 16, 2019)
Background: New superheavy nuclei are often identified through their characteristic α-decay energies, which requires accurate calculations of Qα values. While many Qα predictions are available, little is known about their uncertainties, and this makes it difficult to carry out extrapolations to yet-unknown systems.Purpose: This work aims to analyze several models, compare their predictions to available experimental data, and study their performance for the unobserved α-decay chains of 296 120 and 298 120, which are of current experimental interest. Our quantified results will also serve as a benchmark for future, more sophisticated statistical studies.Methods: We use nuclear superfluid Density Functional Theory (DFT) with several Skyrme energy density functionals (EDFs). To estimate systematic model uncertainties we employ uniform model averaging.Results: We evaluated the Qα values for even-even nuclei from Fm to Z = 120. For well deformed nuclei between Fm and Ds, we find excellent consistency between different model predictions, and a good agreement with experiment. For transitional nuclei beyond Ds, inter-model differences grow, resulting in an appreciable systematic error. In particular, our models underestimate Qα for the heaviest nucleus 294 Og.Conclusions:The robustness of DFT predictions for well deformed superheavy nuclei supports the idea of using experimental Qα values, together with theoretical predictions, as reasonable (Z, A) indicators. Unfortunately, this identification method is not expected to work well in the region of deformed-to-spherical shape transition as one approaches N = 184. The use of Qα values in the identification of new superheavy nuclei will benefit greatly from both progress in developing new spectroscopic-quality EDFs and more sophisticated statistical techniques of uncertainty quantification.
I. INTRODUCTION
Superheavy nuclei with Z ≥ 104 occupy the upper right-hand corner of the nuclear chart [1,2]. The study of these massive systems has been prompted by a desire to answer many fundamental questions pertaining to nuclear and atomic physics, and chemistry [3,4].
In particular, the search for long-lived superheavy nuclei in nature has been active for many decades. Early theoretical calculations predicted the superheavy magic numbers (the so-called "island of stability" [5]) at Z = 114 and N = 184 [6][7][8]. As time progressed and models improved, the superheavy magic numbers were suggested at 114, 120, 124, or 126 for protons and either 172 or 184 for neutrons [9][10][11][12][13][14]. However, unlike with traditional magic numbers, these predictions for superheavy nuclei are more likely to correspond to extended half-lives rather than stable systems [15]; this is due to both large Coulomb repulsion and the high density of single-particle levels [16,17] resulting in a diffused shell structure [16][17][18].
Through the experimental techniques of cold and hot heavy-ion fusion [19,20], many isotopes of new elements between Z = 114 (Fl) and Z = 118 (Og) were discovered and added to the nuclear chart in the last decade [21][22][23]. At present, efforts to identify nuclei beyond Og and more neutron-rich systems have been unsuccessful [24][25][26].
Known superheavy nuclei primarily decay through α emission and spontaneous fission [27][28][29]. As a result, new isotopes are often identified through the observation of their characteristic α-decay chains [30] based on experimental data and theoretical predictions. As such, calculations of Q α values with quantified uncertainties are useful for future superheavy nuclei searches.
Numerous calculations of Q α values for superheavy nuclei are available, see, e.g., Refs. [16,[31][32][33][34][35][36][37][38][39][40][41][42][43]. Some of these studies also include calculations of α-decay halflives using empirical formulas [5,[44][45][46][47][48][49][50][51], in which halflives are expressed as functions of Q α . In this respect, Q α values and half-lives carry the same information content. Except for the recent surveys [16,36], the emphasis of theoretical studies was on the performance of a specific model. It is the purpose of this paper to take another approach: analyze and compare Q α values predicted by several Skyrme-DFT models. In this way, their systematic uncertainties and robustness can be estimated more thoroughly through both direct analysis of different parameterizations and model mixing.
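As a hedged illustration of the empirical Q α -half-life relations mentioned above, the snippet below implements a generic Viola-Seaborg-type parametrization, in which the logarithm of the α-decay half-life is linear in Z and in Q α^(-1/2); the default coefficients are placeholders meant to be fitted to data, not values taken from the cited works.

# Viola-Seaborg-type empirical relation between the alpha-decay
# half-life and Q_alpha. The coefficients a, b, c, d are placeholders;
# in practice they are fitted to known alpha emitters.
import math

def log10_half_life(Z, Q_alpha, a=1.66, b=-8.5, c=-0.20, d=-33.9):
    """Return log10 of the half-life (in seconds) for a parent of proton
    number Z and alpha-decay energy Q_alpha (in MeV)."""
    return (a * Z + b) / math.sqrt(Q_alpha) + (c * Z + d)

# The strong Q_alpha dependence means that a ~1 MeV change in Q_alpha
# shifts the predicted half-life by orders of magnitude.
for q in (10.0, 11.0, 12.0):
    print(q, log10_half_life(Z=118, Q_alpha=q))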
This paper is structured as follows. In Sec. II we discuss the theoretical approach used. Section III displays our results for known superheavy nuclei found and unknown 296 120 and 298 120. It also presents results of the model-mixing analysis. Finally, in Sec. IV we discuss conclusions and perspectives.
II. THEORETICAL APPROACH
All of our calculations were performed within the framework of nuclear Density Functional Theory (DFT) [52], where the total energy of the system is expressed in terms of the energy density, which is a functional of onebody local densities and currents. Nuclear DFT is the tool of choice for making global predictions for complex heavy nuclei. As emphasized in Ref. [53], this method is general enough to be applied anywhere on the nuclear chart; it can incorporate nuclear deformations through intrinsic symmetry breaking; and it can provide quantified predictions for a variety of observables. The main ingredient of nuclear DFT is the energy density functional (EDF), which represents an effective in-medium nuclear interaction. An EDF contains a number of coupling constants which are adjusted to selected experimental data and theoretical pseudo-data [52,54,55]; depending on the optimization methodology and strategy chosen, these low-energy couplings change, and a new EDF parameterization is developed.
For this work, seven effective Skyrme EDFs [56,57] in the particle-hole channel augmented by a densitydependent, zero-range pairing term of mixed type [58] were chosen: SkM* [59], SLy4 [60], SV-min [54], UNEDF0 [55], UNEDF1 [61], UNEDF2 [62], and UNEDF1 SO [63]. The functionals SkM* (developed with a focus on surface energy to properly account for the fission barrier of 240 Pu using a semiclassical method) and SLy4 (developed with an emphasis on neutron-rich nuclei) are included for their value as traditional Skyrme EDFs, and serve as a benchmark against the performance of the newer parameterizations. The EDF SV-min was parameterized with the binding energies, charge and diffraction radii, and surface thicknesses of semimagic nuclei. The UNEDF0 parameterization was optimized to the binding energies, charge radii, and odd-even binding energy differences of spherical and deformed nuclei. The EDF UNEDF1 was developed for fission studies and extended the data set of UNEDF0 with the inclusion of new masses and the excitation energies of fission isomers. The functional UNEDF2 considers the tensor terms ignored in the previous UNEDF parametrizations; it was developed for studies of shell structure and extended the data set of UNEDF1 with single-particle energy splittings of doubly-magic nuclei. Finally, UNEDF1 SO is an EDF locally optimized in the transuranic region with the spin-orbit and pairing parameters fine-tuned to achieve a better agreement with both the excitation spectra and odd-even mass differences in 249 Bk and 251 Cf (with all of the other parameters being identical to UNEDF1). The selection of several EDFs, based on different optimization methodologies, allows for an estimation of systematic errors.
The procedure we used to perform our calculations was identical to that in our previous work on nuclear drip lines [64]. For a given nucleus, we solved the Hartree-Fock-Bogoliubov (HFB) equations of nuclear DFT [65] to find its ground-state binding energy and other global nuclear properties. Given the impact of shape deformation on nuclear binding energy, it was necessary to solve the HFB equations for several different nuclear configurations; the nuclear deformation (and other global nuclear properties) corresponding to the minimum binding energy were then recorded for each nucleus and used in subsequent calculations. Since there were thousands of calculations to make for many different nuclei, we utilized high-performance computing to expedite the process.
As the focus of our work was on superheavy systems, we limited ourselves to nuclei with proton numbers 98 ≤ Z ≤ 120. Also, we limited ourselves to nuclei with even numbers of protons and neutrons to avoid the complexities associated with odd-A and odd-odd systems [66][67][68]. To carry out our calculations, we used the DFT code HFBTHOv300 [69], which solves the HFB equations through direct diagonalization in the deformed harmonic oscillator basis. We included constraints on the quadrupole deformation β 2 to account for prolate, oblate, and spherical deformations. To expedite calculations, we imposed axial and reflection symmetry. Though the presence of triaxial shapes in this region is well established [70], their impact on the ground-state binding energy is predicted to be small [71], so this is a reasonable approximation. To approximately restore particle number symmetry broken in the HFB method, we used the Lipkin-Nogami procedure outlined in Ref. [72].
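The ground-state search described above can be summarized schematically as follows; run_hfb is a toy stand-in for a constrained HFB solver such as HFBTHO, and the numbers it returns are placeholders rather than actual DFT results.

# Schematic of the ground-state search: constrain the quadrupole
# deformation beta2 to a grid of values, solve the (here: fake) HFB
# problem for each, and keep the configuration with the lowest energy.
def run_hfb(Z, N, beta2):
    # Toy stand-in for an HFB solver: a parabola in beta2 with a
    # deformed minimum, purely for illustration (energies in MeV).
    return -1800.0 + 500.0 * (beta2 - 0.25) ** 2

def ground_state(Z, N, beta2_grid=(-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3)):
    results = [(run_hfb(Z, N, b2), b2) for b2 in beta2_grid]
    energy, beta2 = min(results)        # lowest total energy wins
    return {"Z": Z, "N": N, "E": energy, "beta2": beta2}

print(ground_state(108, 162))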
While for several EDFs the mass tables computed in Ref. [64] have been stored in the database MassExplorer [73], in order to be sure that the quadrupole deformations do not suddenly jump to extreme values, we recomputed all the mass tables and updated MassExplorer. From the calculated binding energies we extracted Q α values:
Q α = 28.3 MeV − BE(Z, N ) + BE(Z − 2, N − 2). (1)
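A minimal sketch of how Eq. (1) is applied to a table of calculated binding energies is given below; the binding-energy values in the toy table are placeholders, not numbers from the mass tables discussed here.

# Minimal sketch of Eq. (1): Q_alpha from the binding energies of the
# parent (Z, N) and daughter (Z-2, N-2). The toy values are placeholders.
BE_ALPHA = 28.3  # binding energy of the alpha particle, in MeV

def q_alpha(binding, Z, N):
    return BE_ALPHA - binding[(Z, N)] + binding[(Z - 2, N - 2)]

binding = {            # placeholder binding energies in MeV
    (110, 160): 1990.0,    # parent
    (108, 158): 1973.0,    # daughter
}
print(q_alpha(binding, 110, 160))  # 28.3 - 1990.0 + 1973.0 = 11.3 MeV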
To assess the quality of our Q α values we took two approaches. The first was to directly analyze the results from all 7 EDFs individually and compare them to one another and available experimental values. The second approach, used to estimate systematic uncertainties, was to mix several models.
III. RESULTS
We begin by evaluating our calculations for the αdecay chains of selected nuclei. In Fig. 1 we compare our nuclear DFT results for the Q α values of the α-decay chain of 270 Ds to experimental data; this nucleus was chosen for the availability of experimental data for every nucleus in its α-decay chain. The first thing we observe here is the overall consistency of the Skyrme EDF results: with the exception of 266 Hs for SkM*, every predicted Q α value follows the pattern of experimental data and increases with increasing mass number. We also notice that, when excluding SkM*, the spread of calculated Q α values is less than 1 MeV, and each data point lies within this range ( 266 Hs being an exception, where it is 25 keV above the closest result from SV-min). While it almost always overestimates the experimental data, the performance of SkM* here is not surprising given the improvement of later EDFs.
In Fig. 2 we show our Q α results for the α-decay chain of 292 Lv, where the large number of values extrapolated from systematic trends highlights the scarcity of experimental data in this region. Up to Ds, the pattern looks very similar to that of Fig. 1 for the given extrapolated values. However, at 284 Cn there is a slight decrease in the extrapolated Q α value, followed by an increase in the experimental Q α value at 288 Fl. This is due to an abrupt shape transition from prolate to oblate deformation near N = 174 caused by the triaxial softness of this region [36,70]. As a result, the consistency of the EDF results suffers, most noticeably with UNEDF0, whose results decrease from Fl to Lv while the experimental data increase. The reduced impact of the shape transition on UNEDF1 and UNEDF2 appears to highlight the necessity of including fission isomer and single-particle energy data in the global EDF optimization.
We also want to extend our analysis to nuclei which have not yet been observed. For this, we show our results for the Q α values of the α-decay chains of 296 120 and 298 120 in Fig. 3. Just like in Figs. 1 and 2 derestimation of the experimental value by UNEDF0, in comparison with UNEDF1, UNEDF1 SO , and UNEDF2, illustrates the need to incorporate more data into future EDFs. We also notice a trend in each calculated Q α value to increase for both 296 120 and 298 120; given the pattern seen in experimental data from Fl to Og, this behavior is promising. Figure 4 shows the analysis of Q α values along the isotopic chains from Fm to Z = 120. The borders between regions of prolate and oblate deformations and spherical shapes are marked. (See Ref. [36] for discussion of deformation predictions in other models.) The irregularity seen around N = 164, particularly well pronounced for SLy4, SV-min, and UNEDF1, is due to a prolate-deformed neutron subshell closure [76,77]. Once again, we observe an overall consistency for each EDF, with similar patterns emerging in the theoretical calculations for each isotopic chain, even in the regions where shape transitions occur. The proximity of our theoretical results to the experimental values, expressed through root-mean-square (rms) deviations δ(Q α ), is quite reasonable. As discussed earlier, the largest deviations from experiment are obtained for the heaviest elements Lv and Og, which are predicted to lie in the region of prolateto-oblate shape transition. When inspecting the individual rms deviations, the best performer is UNEDF1 with δ(Q α ) = 0.31 MeV, while the earliest Skyrme EDF SkM* yields δ(Q α ) = 0.81 MeV. In general, the range of rms deviations here is consistent with that of other DFT models [36]. For instance, if one considers the relativistic EDFs of Ref. [16], the δ(Q α ) values range from 0.32 MeV for PC-PK1 to 0.68 MeV for NL3*.
Assessing the uncertainty of a prediction made by a model in regions where experimental data are unavailable is a central issue in modern nuclear theory. So far we have estimated our uncertainty through the use of many different EDFs and compared their individual performances to experiment. Following Ref. [64], we now calculate the uniform average of the results of several models along with the corresponding standard deviations to determine the systematic uncertainty. For this we have chosen SVmin, UNEDF0, UNEDF1, and UNEDF2, as they are the most recently developed global EDFs used in this study. While this procedure may seem naïve, without additional information or costly statistical calculations, the choice of uniform weights is essentially optimal [78]. Also, by giving each model equal weight within the average, we can gain an idea of how more sophisticated model mixing may perform.
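The uniform model averaging used here amounts to taking, nucleus by nucleus, the mean of the selected EDF predictions and quoting the standard deviation as the systematic uncertainty; the sketch below illustrates the bookkeeping with placeholder Q α values.

# Uniform model averaging: for each nucleus, average the Q_alpha
# predictions of the selected EDFs and use the standard deviation as
# the systematic uncertainty. The numbers below are placeholders.
import numpy as np

predictions = {   # placeholder Q_alpha values (MeV) per model
    "SV-min":  {(294, 118): 11.1, (290, 116): 10.8},
    "UNEDF0":  {(294, 118): 10.6, (290, 116): 10.7},
    "UNEDF1":  {(294, 118): 11.3, (290, 116): 11.0},
    "UNEDF2":  {(294, 118): 11.2, (290, 116): 10.9},
}

def model_average(predictions, nucleus):
    values = np.array([qa[nucleus] for qa in predictions.values()])
    return values.mean(), values.std(ddof=0)   # uniform weights

mean, sigma = model_average(predictions, (294, 118))
print("Q_alpha = %.2f +/- %.2f MeV" % (mean, sigma))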
In Fig. 5 we show the model averaged results for the α-decay chains of 296 120 and 298 120. For both chains, between Fm and Ds, we see excellent agreement between calculated and experimental/recommended values. However, from Cn and beyond, the effects of the shape tran- sition spoil this agreement. As discussed earlier, the difference is particularly noticeable for 294 Og in the 298 120 α-decay chain; this is likely due to the low Q α value predicted by UNEDF0 (see Fig. 3). However, even when excluding UNEDF0 from the average (and including UNEDF1 SO , whose Q α value is much larger), the discrepancy remains substantial. In Fig. 6, we show the model-averaged results for the Q α values of isotopic chains for which experimental data exist. From Fm to Fl, the proximity to experimental data is quite good, and the error bands are relatively small. However, for Lv and Og, we see a similar behavior as in Fig. 5.
IV. CONCLUSION
In this paper we studied Q α values for even-even superheavy nuclei from Fm to Z = 120 within the framework of nuclear DFT with several different Skyrme EDFs. In order to estimate systematic errors, we analyzed theo- retical predictions for α-decay chains by comparing individual models, and also through model averaging. In the region of well deformed superheavy nuclei, the theoretical predictions are robust, with each EDF giving relatively consistent results. This robustness is somewhat reduced for shape-transitional nuclei. In general, the observed agreement with experimental data is quite reasonable. The behavior of individual functionals, particularly from the UNEDF family, also proved enlightening. Among the models used, the best performer is UN-EDF1 with an rms deviation of δ(Q α ) = 0.31 MeV. The improvement in the results of UNEDF1 and UNEDF2 over UNEDF0 in the region of shape transition indicates the significance of data on fission isomers and onequasiparticle states in the EDF optimization. We also analyzed the performance of the functional UNEDF1 SO that was locally optimized to the transuranic isotopes of Bk and Cf. Given its fine-tuning, it is interesting to see that its performance for Q α values is similar, or slightly worse, as compared to the other UNEDF parametrizations.
In general, the method of nuclide identification through Q α values is not expected to work well in the region of deformed-to-spherical shape transition. In this context, theory will benefit greatly from both progress in developing new spectroscopic-quality global EDFs and more sophisticated statistical techniques of uncertainty quantification. Experimentally, work on identifying new superheavy nuclei from the upper superheavy (hot fusion) region, without the use of Q α values, is already underway [79][80][81].
As the search continues for elements beyond Og [82–85], the accurate calculation of Q α values will prove more and more beneficial. The performance of our model-mixing results in assessing and reducing uncertainty seems promising. Further improvements in predictability are expected from more sophisticated model mixing techniques that utilize Bayesian model averaging [78,86,87], where the simple average is reweighted using model posterior probabilities computed by integrating the respective likelihoods over the parameter space. In the near future, however, to make more reliable extrapolations of Q α values, we intend to use Bayesian machine learning techniques as described in the recent Ref. [88].
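As a rough illustration of the reweighting idea, the sketch below assigns posterior model probabilities from hypothetical (log-)evidences and equal priors; the numbers are placeholders, and the actual Bayesian analysis of Refs. [78,86,87] involves integrating the likelihood over each model's parameter space.

```python
import numpy as np

# Hedged sketch of Bayesian model averaging: the uniform average is reweighted by
# posterior model probabilities obtained from (integrated) likelihoods.  The
# log-evidence values here are placeholders, not computed evidences.
log_evidence = np.array([-4.2, -6.8, -3.9, -4.5])    # hypothetical log p(data | model_k)
prior = np.full(4, 0.25)                             # equal prior model probabilities

posterior = prior * np.exp(log_evidence - log_evidence.max())
posterior /= posterior.sum()                         # p(model_k | data)

q_alpha = np.array([9.1, 8.7, 9.0, 8.9])             # hypothetical predictions (MeV)
q_bma = posterior @ q_alpha                          # reweighted (BMA) prediction
print(posterior, q_bma)
```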
V. ACKNOWLEDGMENTS
Discussions with Nicolas Schunck are appreciated. The computational facility at Lawrence Livermore National Laboratory housing Quartz was instrumental in this work. This work was supported by the U.S. Department of Energy under Award Numbers DOE-DE-NA0002847 (NNSA, the Stewardship Science Academic Alliances program), DE-SC0013365 (Office of Science), and DE-SC0018083 (Office of Science, NUCLEI SciDAC-4 collaboration).
FIG. 1. Qα values for nuclei along the α-decay chain of 270 Ds computed with several Skyrme EDFs. Experimental values from Ref. [74] are represented as stars.

FIG. 3. Similar to Figs. 1 and 2 but for 296 120 (a) and 298 120 (b).

FIG. 4. Qα values for the isotopic chains of nuclei from Fm to Z = 120 for each global Skyrme EDF used in this study. Areas of prolate, oblate, and spherical shapes are marked. Experimental data from Ref. [74] are shown as circles and match the color of the corresponding line representing theoretical calculations (we note that no experimental data currently exist for the even-even isotopes of Cn). The root-mean-square (rms) deviation from experimental data (in MeV) is indicated for each model: SkM* 0.81, SLy4 0.34, SV-min 0.37, UNEDF0 0.60, UNEDF1 0.31, and UNEDF2 0.34. For the local EDF UNEDF1 SO (not shown), the rms deviation was 0.46 MeV.

FIG. 5. Model-averaged Qα values for the nuclei along the α-decay chains of 296 120 (a) and 298 120 (b) calculated with the three UNEDF models and SV-min. Uniform model weights were assumed. Error bars on the theoretical predictions represent standard deviations. Experimental and recommended values from Ref. [74] are shown as stars with corresponding error bars.

FIG. 6. Model-averaged Qα values for the isotopic chains of even-even nuclei from Fm to Z = 120 (excluding Cn) calculated with the three UNEDF models and SV-min. Uniform model weights were assumed. Error bars and error bands represent standard deviations. Experimental data from Ref. [74] are shown as circles and match the color of the line representing theoretical predictions. The rms deviation from experimental data for the model-averaged results is 0.35 MeV.
D. C. Hoffman, A. Ghiorso, and G. T. Seaborg, The Transuranium People: The Inside Story (Imperial College Press, 2000).
G. T. Seaborg and W. D. Loveland, The Elements Beyond Uranium (Wiley-Interscience, 1990).
S. Giuliani, Z. Matheson, W. Nazarewicz, E. Olsen, P.-G. Reinhard, J. Sadhukhan, B. Schuetrumpf, N. Schunck, and P. Schwerdtfeger, Rev. Mod. Phys. (2018).
W. Nazarewicz, Nature Physics 14, 537 (2018).
V. Viola and G. Seaborg, J. Inorg. Nucl. Chem. 28, 741 (1966).
W. D. Myers and W. J. Swiatecki, Nucl. Phys. 81, 1 (1966).
S. G. Nilsson, C. F. Tsang, A. Sobiczewski, Z. Szymański, S. Wycech, C. Gustafson, I.-L. Lamm, P. Möller, and B. Nilsson, Nucl. Phys. A 131, 1 (1969).
A. Sobiczewski, F. Gareev, and B. Kalinkin, Phys. Lett. 22, 500 (1966).
A. V. Afanasjev, T. L. Khoo, S. Frauendorf, G. A. Lalazissis, and I. Ahmad, Phys. Rev. C 67, 024309 (2003).
A. V. Afanasjev, Phys. Scr. 2006, 62 (2006).
M. Bender, K. Rutz, P.-G. Reinhard, J. A. Maruhn, and W. Greiner, Phys. Rev. C 60, 034304 (1999).
S. Ćwiok, J. Dobaczewski, P.-H. Heenen, P. Magierski, and W. Nazarewicz, Nucl. Phys. A 611, 211 (1996).
A. T. Kruppa, M. Bender, W. Nazarewicz, P.-G. Reinhard, T. Vertse, and S. Ćwiok, Phys. Rev. C 61, 034313 (2000).
K. Rutz, M. Bender, T. Bürvenich, T. Schilling, P.-G. Reinhard, J. A. Maruhn, and W. Greiner, Phys. Rev. C 56, 238 (1997).
C. E. Düllmann and M. Block, Sci. Am. 318, 46 (2018).
S. E. Agbemava, A. V. Afanasjev, T. Nakatsukasa, and P. Ring, Phys. Rev. C 92, 054310 (2015).
M. Bender, W. Nazarewicz, and P.-G. Reinhard, Phys. Lett. B 515, 42 (2001).
P. Jerabek, B. Schuetrumpf, P. Schwerdtfeger, and W. Nazarewicz, Phys. Rev. Lett. 120, 053001 (2018).
Y. Oganessian, A. Demin, A. Iljinov, S. Tretyakova, A. Pleve, Y. Penionzhkevich, M. Ivanov, and Y. Tretyakov, Nucl. Phys. A 239, 157 (1975).
Y. Oganessian, J. Phys. G 34, R165 (2007).
R. Barber, P. J. Karol, H. Nakahara, E. Vardaci, and E. W. Vogt, Pure Appl. Chem. 83, 1485 (2011).
P. J. Karol, R. C. Barber, B. M. Sherrill, V. Emanuele, and Y. Toshimitsu, Pure Appl. Chem. 88, 139 (2016).
P. J. Karol, R. C. Barber, B. M. Sherrill, V. Emanuele, and Y. Toshimitsu, Pure Appl. Chem. 88, 155 (2016).
C. E. Düllmann, GSI Scientific Report 2011, GSI Report 2012-1, 206 (2011).
C. E. Düllmann, EPJ Web Conf. 163, 00015 (2017).
Y. T. Oganessian et al., Phys. Rev. C 79, 024603 (2009).
S. Hofmann, F. Heßberger, D. Ackermann, S. Antalic, P. Cagarda, B. Kindler, P. Kuusiniemi, M. Leino, B. Lommel, O. Malyshev, R. Mann, G. Münzenberg, A. Popeko, S. Šáro, B. Streicher, and A. Yeremin, Nucl. Phys. A 734, 93 (2004).
Y. T. Oganessian and V. K. Utyonkov, Rep. Prog. Phys. 78, 036301 (2015).
Y. T. Oganessian, A. Sobiczewski, and G. M. Ter-Akopian, Phys. Scr. 92, 023003 (2017).
Y. T. Oganessian et al., Phys. Rev. C 74, 044602 (2006).
M. Bender, Phys. Rev. C 61, 031302 (2000).
J. F. Berger, D. Hirata, M. Girod, and J. Decharge, Int. J. Mod. Phys. E 13, 79 (2004).
S. Ćwiok, W. Nazarewicz, and P.-H. Heenen, Phys. Rev. Lett. 83, 1108 (1999).
J. Erler, K. Langanke, H. P. Loens, G. Martínez-Pinedo, and P.-G. Reinhard, Phys. Rev. C 85, 025802 (2012).
Y. K. Gambhir, A. Bhagwat, and M. Gupta, Phys. Rev. C 71, 037301 (2005).
P.-H. Heenen, J. Skalski, A. Staszczak, and D. Vretenar, Nucl. Phys. A 944, 415 (2015).
P. Jachimowicz, M. Kowal, and J. Skalski, Phys. Rev. C 89, 024304 (2014).
I. Muntian, S. Hofmann, Z. Patyk, and A. Sobiczewski, Acta Phys. Pol. 34, 2073 (2003).
A. Sobiczewski and K. Pomorski, Prog. Part. Nucl. Phys. 58, 292 (2007).
S. V. Tolokonnikov, Y. S. Lutostansky, and E. E. Saperstein, Phys. At. Nucl. 76, 708 (2013).
S. V. Tolokonnikov, I. N. Borzov, M. Kortelainen, Y. S. Lutostansky, and E. E. Saperstein, Eur. Phys. J. A 53, 33 (2017).
S. Typel and B. A. Brown, Phys. Rev. C 67, 034313 (2003).
M. Warda and J. L. Egido, Phys. Rev. C 86, 014322 (2012).
B. A. Brown, Phys. Rev. C 46, 811 (1992).
A. Budaca, R. Budaca, and I. Silisteanu, Nucl. Phys. A 951, 60 (2016).
P. R. Chowdhury, C. Samanta, and D. N. Basu, Phys. Rev. C 77, 044603 (2008).
J. Dong, W. Zuo, and W. Scheid, Nucl. Phys. A 861, 1 (2011).
H. Koura, J. Nucl. Sci. Technol. 49, 816 (2012).
A. Parkhomenko and A. Sobiczewski, Acta Phys. Pol. B 36, 3095 (2005).
G. Royer and H. F. Zhang, Phys. Rev. C 77, 037602 (2008).
D. E. Ward, B. G. Carlsson, and S. Åberg, Phys. Rev. C 92, 014314 (2015).
M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
M. Stoitsov, J. Dobaczewski, W. Nazarewicz, and P. Borycki, Int. J. Mass Spectrom. 251, 243 (2006).
P. Klüpfel, P.-G. Reinhard, T. J. Bürvenich, and J. A. Maruhn, Phys. Rev. C 79, 034310 (2009).
M. Kortelainen, T. Lesinski, J. Moré, W. Nazarewicz, J. Sarich, N. Schunck, M. V. Stoitsov, and S. Wild, Phys. Rev. C 82, 024313 (2010).
T. Skyrme, Nucl. Phys. 9, 615 (1958).
D. Vautherin and D. M. Brink, Phys. Rev. C 5, 626 (1972).
J. Dobaczewski, W. Nazarewicz, and M. Stoitsov, Eur. Phys. J. A 15, 21 (2002).
J. Bartel, P. Quentin, M. Brack, C. Guet, and H.-B. Håkansson, Nucl. Phys. A 386, 79 (1982).
E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, Nucl. Phys. A 635, 231 (1998).
M. Kortelainen, J. McDonnell, W. Nazarewicz, P.-G. Reinhard, J. Sarich, N. Schunck, M. V. Stoitsov, and S. M. Wild, Phys. Rev. C 85, 024304 (2012).
M. Kortelainen, J. McDonnell, W. Nazarewicz, E. Olsen, P.-G. Reinhard, J. Sarich, N. Schunck, S. M. Wild, D. Davesne, J. Erler, and A. Pastore, Phys. Rev. C 89, 054314 (2014).
Y. Shi, J. Dobaczewski, and P. T. Greenlees, Phys. Rev. C 89, 034309 (2014).
J. Erler, N. Birge, M. Kortelainen, W. Nazarewicz, E. Olsen, A. M. Perhac, and M. Stoitsov, Nature 486, 509 (2012).
P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer, New York, 1980).
A. V. Afanasjev, J. Phys. G 42, 034002 (2015).
L. Bonneau, P. Quentin, and P. Möller, Phys. Rev. C 76, 024320 (2007).
N. Schunck, J. Dobaczewski, J. McDonnell, J. Moré, W. Nazarewicz, J. Sarich, and M. V. Stoitsov, Phys. Rev. C 81, 024316 (2010).
R. N. Perez, N. Schunck, R.-D. Lasseri, C. Zhang, and J. Sarich, Comp. Phys. Comm. 220, 363 (2017).
S. Ćwiok, P.-H. Heenen, and W. Nazarewicz, Nature 433, 705 (2005).
P. Möller, R. Bengtsson, B. Carlsson, P. Olivius, T. Ichikawa, H. Sagawa, and A. Iwamoto, At. Data Nucl. Data Tables 94, 758 (2008).
M. V. Stoitsov, J. Dobaczewski, R. Kirchner, W. Nazarewicz, and J. Terasaki, Phys. Rev. C 76, 014308 (2007).
M. Wang, G. Audi, F. Kondev, W. Huang, S. Naimi, and X. Xu, Chin. Phys. C 41, 030003 (2017).
N. T. Brewer et al., Phys. Rev. C 98, 024317 (2018).
S. Ćwiok, V. Pashkevich, J. Dudek, and W. Nazarewicz, Nucl. Phys. A 410, 254 (1983).
P. Möller and J. R. Nix, J. Phys. G 20, 1681 (1994).
J. M. Bernardo and A. F. M. Smith, Bayesian Theory (Wiley, New Jersey, 1994).
J. M. Gates et al., Phys. Rev. C 92, 021301 (2015).
J. M. Gates, EPJ Web Conf. 131, 08003 (2016).
J. M. Gates et al., Phys. Rev. Lett. (2018).
C. E. Düllmann, EPJ Web Conf. 131, 08004 (2016).
F. P. Heßberger and D. Ackermann, Eur. Phys. J. A 53, 123 (2017).
S. Hofmann et al., Eur. Phys. J. A 52, 180 (2016).
J. B. Roberto and K. P. Rykaczewski, Sep. Sci. Technol. 53, 1813 (2018).
J. A. Hoeting, D. Madigan, A. E. Raftery, and C. T. Volinsky, Statist. Sci. 14, 382 (1999).
L. Wasserman, J. Math. Psych. 44, 92 (2000).
L. Neufcourt, Y. Cao, W. Nazarewicz, and F. Viens, Phys. Rev. C 98, 034318 (2018).
| []
|
[
"CONICAL SQUARE FUNCTION ESTIMATES IN UMD BANACH SPACES AND APPLICATIONS TO H ∞ -FUNCTIONAL CALCULI",
"CONICAL SQUARE FUNCTION ESTIMATES IN UMD BANACH SPACES AND APPLICATIONS TO H ∞ -FUNCTIONAL CALCULI"
]
| [
"Tuomas Hytönen ",
"Jan Van Neerven ",
"Pierre Portal "
]
| []
| []
| We study conical square function estimates for Banach-valued functions, and introduce a vector-valued analogue of the Coifman-Meyer-Stein tent spaces. Following recent work of Auscher-M c Intosh-Russ, the tent spaces in turn are used to construct a scale of vector-valued Hardy spaces associated with a given bisectorial operator A with certain off-diagonal bounds, such that A always has a bounded H ∞ -functional calculus on these spaces. This provides a new way of proving functional calculus of A on the Bochner spaces L p (R n ; X) by checking appropriate conical square function estimates, and also a conical analogue of Bourgain's extension of the Littlewood-Paley theory to the UMDvalued context. Even when X = C, our approach gives refined p-dependent versions of known results. | 10.1007/s11854-008-0051-3 | [
"https://arxiv.org/pdf/0709.1350v1.pdf"
]
| 16,863,763 | 0709.1350 | fb7c8549a241e90b5be6ae04f22d64c3f95ac4b3 |
CONICAL SQUARE FUNCTION ESTIMATES IN UMD BANACH SPACES AND APPLICATIONS TO H ∞ -FUNCTIONAL CALCULI

Tuomas Hytönen, Jan van Neerven, Pierre Portal

10 Sep 2007, arXiv:0709.1350v1 [math.FA]
We study conical square function estimates for Banach-valued functions, and introduce a vector-valued analogue of the Coifman-Meyer-Stein tent spaces. Following recent work of Auscher-McIntosh-Russ, the tent spaces in turn are used to construct a scale of vector-valued Hardy spaces associated with a given bisectorial operator A with certain off-diagonal bounds, such that A always has a bounded H ∞ -functional calculus on these spaces. This provides a new way of proving functional calculus of A on the Bochner spaces L p (R n ; X) by checking appropriate conical square function estimates, and also a conical analogue of Bourgain's extension of the Littlewood-Paley theory to the UMD-valued context. Even when X = C, our approach gives refined p-dependent versions of known results.
Introduction
Since the development of the Littlewood-Paley theory, square function estimates of the form
\[ \Big\| \Big( \int_0^\infty \big| t\sqrt{\Delta}\, e^{-t\sqrt{\Delta}} f \big|^2 \, \frac{dt}{t} \Big)^{1/2} \Big\|_{L^p(\mathbb{R}^n)} \eqsim \|f\|_{L^p(\mathbb{R}^n)}, \]
have been widely used in harmonic analysis. When dealing with functions which take values in a UMD Banach space X, such estimates have to be given an appropriate meaning. This is done through a linearisation of the square function using randomisation, which gives (see [14])
\[ \Big\| \int_0^\infty t\sqrt{\Delta}\, e^{-t\sqrt{\Delta}} f \, \frac{dW_t}{\sqrt{t}} \Big\|_{L^2(\Omega;\, L^p(\mathbb{R}^n;X))} \eqsim \|f\|_{L^p(\mathbb{R}^n;X)}, \]
where the integral is a Banach space-valued stochastic integral with respect to a standard Brownian motion W on a probability space (Ω, P) (see [25]), or, in a simpler discrete form,
\[ \Big\| \sum_{k\in\mathbb{Z}} \varepsilon_k\, 2^k\sqrt{\Delta}\, e^{-2^k\sqrt{\Delta}} f \Big\|_{L^2(\Omega;\, L^p(\mathbb{R}^n;X))} \eqsim \|f\|_{L^p(\mathbb{R}^n;X)}, \tag{1.1} \]
where (ε k ) is a sequence of independent Rademacher variables on (Ω, P). The latter was proven by Bourgain in [6], thereby starting the development of harmonic analysis for UMD-valued functions. In recent years, research in this field has accelerated as it appeared that its tools, and in particular square function estimates, are of fundamental importance in the study of the H ∞ -functional calculus (see [20]) and in stochastic analysis in UMD Banach spaces (see [24]). To some extent, even the scalar-valued theory (i.e. X = C) has benefited from this probabilistic point of view (see for instance [16,22]). However this fruitful linearisation has, so far, been limited to the above "vertical" square function estimates, leaving aside the "conical" estimates of the form
\[ \Big( \int_{\mathbb{R}^n} \Big( \iint_{|y-x|<t} \big| t\sqrt{\Delta}\, e^{-t\sqrt{\Delta}} f(y) \big|^2 \, \frac{dy\, dt}{t^{n+1}} \Big)^{p/2} dx \Big)^{1/p} \lesssim \|f\|_{L^p(\mathbb{R}^n)}, \qquad 1 < p \le 2. \tag{1.2} \]
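To make the object in (1.2) concrete, the following sketch evaluates a one-dimensional analogue numerically, realising t√∆ e^{−t√∆} as the Fourier multiplier t|ξ|e^{−t|ξ|}; the grid sizes, the test function, and the exponent p are arbitrary choices, and the snippet is only meant as an illustration of the cone integral, not of any proof in this paper.

```python
import numpy as np

# Rough numerical sketch (n = 1) of the conical square function in (1.2), with
# t*sqrt(Delta)*exp(-t*sqrt(Delta)) realized as the Fourier multiplier t|xi|e^{-t|xi|}.
N, L = 1024, 32.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)
f = np.exp(-x ** 2)                                   # arbitrary test function
Ff = np.fft.fft(f)

ts = np.geomspace(1e-2, 10.0, 80)                     # logarithmic grid in t
dlogt = np.log(ts[1] / ts[0])                         # dt/t ~ dlogt on this grid

# u(y, t) = t*sqrt(Delta) e^{-t*sqrt(Delta)} f(y)
u = np.array([np.fft.ifft(t * np.abs(xi) * np.exp(-t * np.abs(xi)) * Ff).real for t in ts])

# S f(x)^2 = int_{|y-x|<t} |u(y,t)|^2 dy dt / t^{n+1}; here n = 1, so dy dt / t^2
S2 = np.zeros(N)
for i, t in enumerate(ts):
    cone = np.abs(x[None, :] - x[:, None]) < t        # cone[x_index, y_index]
    S2 += (cone * u[i][None, :] ** 2).sum(axis=1) * dx * dlogt / t
Sf = np.sqrt(S2)

p = 1.5
print((np.sum(Sf ** p) * dx) ** (1 / p))              # || S f ||_{L^p}
print((np.sum(np.abs(f) ** p) * dx) ** (1 / p))       # || f ||_{L^p}, for comparison
```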
In the meantime, such estimates have attracted much attention as it was realised that they could be used to extend the real variable theory of Hardy spaces in a way which is suitable to treat operators beyond the Calderón-Zygmund class (see [3,9,13]). Indeed, elliptic operators of the form −divB∇, where B is a matrix with L ∞ entries, are not, in general, sectorial on L p for all 1 < p < ∞. Their study thus requires the L p -spaces to be replaced by appropriate Hardy spaces, on which they have good functional calculus properties (in the same way as L 1 has to be replaced by H 1 when dealing with the Laplacian). To define such spaces, conical square functions have to be used, since the use of vertical ones would impose severe restrictions on the class of operators under consideration (namely, L p (R-)sectoriality). The present paper gives extensions of (1.2) to the UMD-valued context. This starts with the construction of appropriate tent spaces, which is carried out in Section 4 by reinterpreting and extending [11] using the methods of stochastic analysis in Banach spaces from [19,24,25]. Relevant notions and results from this theory are recalled in Section 2, while the crucial technical estimate is proven in Section 3. Following ideas developed in [3], we then prove appropriate estimates for operators acting on these tent spaces in Section 5. After collecting some basic results on bisectorial operator in Section 6, this allows us in Section 7 to define Hardy spaces associated with bisectorial operators of the form A ⊗ I X , where A acts on L 2 (R n , H) (H being a Hilbert space and X a UMD Banach space) and satisfies suitable off-diagonal estimates. We prove that A ⊗ I X always has an H ∞functional calculus on these Hardy spaces. Finally, in Section 8, we specialise to differential operators A, and, in particular, give a conical analogue to Bourgain's square function estimate (1.1).
Specialising to the case X = C, our approach makes it possible to define Hardy spaces (associated with operators) using a class of functions which is wider than in [3]. This is due to the fact that our estimates (see Proposition 7.5) are directly obtained for a given value of p (and actually depend on the type and cotype of L p ), instead of using interpolation.
To conclude this introduction, let us now point out the possible uses of our results. First, one can deduce the boundedness of the functional calculus of an operator A⊗I X from conical square function estimates. For instance, with Theorem 8.2, we recover the well-known fact that, if X is UMD and 1 < p < ∞, ∆ ⊗ I X admits an H ∞ -calculus on L p (X). Note that this characterises the UMD spaces among all Banach spaces and thus indicates that it cannot be expected that the results presented here extend beyond the UMD setting.
Another application is to deduce conical square function estimates for functions with limited decay from such estimates for functions with good decay properties. In particular, Theorem 8.2 together with Theorem 7.10 give the following estimates. We use the notation
\[ S_\theta^+ = \{z \in \mathbb{C}\setminus\{0\} : |\arg(z)| < \theta\}, \qquad \Psi_\alpha^\beta(S_\theta^+) = \big\{ f \in H^\infty(S_\theta^+) : \exists C\ |f(z)| \le C \min(|z|^\alpha, |z|^{-\beta}) \ \forall z \in S_\theta^+ \big\}. \]
Let θ, ε > 0, and assume that either
\[ \psi \in \Psi_1^{n/2+\varepsilon}(S_\theta^+) \text{ and } 1 < p < \frac{2n}{n-2}, \qquad\text{or}\qquad \psi \in \Psi_{n/2+\varepsilon}^{1}(S_\theta^+) \text{ and } \frac{2n}{n+2} < p < \infty. \]
Then
\[ \int_{\mathbb{R}^n} \Big( \iint_{|y-x|<t} |\psi(t\Delta)u(y)|^2 \, \frac{dy\, dt}{t^{n+1}} \Big)^{p/2} dx \lesssim \|u\|_{L^p}^p. \]
This work was begun while Jan van Neerven visited the Centre for Mathematics and its Applications (CMA) at the Australian National University (ANU), and it was finished during Pierre Portal's visit to the Department of Mathematics and Statistics at the University of Helsinki. Tuomas Hytönen was supported by the Academy of Finland (SA) project 114374 "Vector-valued singular integrals", and by the CMA while in Canberra. Jan van Neerven was supported by the VIDI subsidy 639.032.201 and VICI subsidy 639.033.604 of the Netherlands Organisation for Scientific Research (NWO). Pierre Portal was supported by the CMA and the Australian Research Council as a postdoctoral fellow, and by the above-mentioned SA project while in Helsinki. He would like to thank Alan McIntosh for his guidance. The authors also wish to thank Alan McIntosh for his kind hospitality at ANU, and for many discussions which motivated and influenced this work.
Preliminaries
In this section we establish some terminology and collect auxiliary results needed in the main body of the paper.
Let X and Y be Banach spaces and let L (X, Y ) denote the space of all bounded linear operators acting from X into Y . A family of bounded operators T ⊆ L (X, Y ) is called γ-bounded if there is a constant C such that for all integers k ≥ 1, all T 1 , . . . , T k ∈ T , and all ξ 1 , . . . , ξ k ∈ X we have
\[ \mathbb{E}\Big\| \sum_{j=1}^{k} \gamma_j T_j \xi_j \Big\|^2 \le C^2\, \mathbb{E}\Big\| \sum_{j=1}^{k} \gamma_j \xi_j \Big\|^2. \tag{2.1} \]
Here, γ 1 , . . . , γ k are independent standard normal variables defined on some probability space (Ω, F , P) and E denotes the expectation with respect to P. The least admissible constant in (2.1) is denoted by γ(T ). By the Kahane-Khintchine inequality, the exponent 2 may be replaced by any exponent 1 ≤ p < ∞ at the cost of a possibly different constant.
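As a toy illustration of (2.1) (not taken from the paper), consider X = Y = R^5 with the Euclidean norm and the family of coordinatewise contractions T_j = diag(d_j) with |d_j| ≤ 1; by the contraction principle this family is γ-bounded with constant 1, which the following Monte Carlo check reproduces up to sampling error.

```python
import numpy as np

# Toy numerical check of (2.1) in X = Y = R^5: coordinatewise contractions form a
# gamma-bounded family with constant C = 1 (an assumed, illustrative example).
rng = np.random.default_rng(0)
k, dim, samples = 8, 5, 20000

xs = rng.normal(size=(k, dim))                                    # xi_1, ..., xi_k
Ts = np.stack([np.diag(rng.uniform(-1, 1, size=dim)) for _ in range(k)])

g = rng.normal(size=(samples, k))                                 # gamma_1, ..., gamma_k
lhs = np.mean(np.linalg.norm(np.einsum('sk,kij,kj->si', g, Ts, xs), axis=1) ** 2)
rhs = np.mean(np.linalg.norm(g @ xs, axis=1) ** 2)
print(lhs, rhs)   # lhs should not exceed rhs (C = 1), up to Monte Carlo error
```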
Upon replacing the standard normal variables by Rademacher variables in (2.1) one arrives at the notion of R-boundedness. Every R-bounded family is γ-bounded, and the converse holds if Y has finite cotype. Since we are primarily interested in UMD spaces Y , which have finite cotype, the distinction between γ-boundedness and R-boundedness is immaterial. We prefer the former since our techniques are Gaussian and therefore the use of Gaussian variables seems more natural.
Let H be a Hilbert space. A linear operator R :
H → X is called γ-summing if R γ ∞ (H,X) := sup E k j=1 γ j Rh j 2 1 2 < ∞,
where the supremum is taken over all integers k 1 and all finite orthonormal systems h 1 , . . . , h k in H. The space γ ∞ (H, X), endowed with the above norm, is a Banach space. The closed subspace of γ ∞ (H, X) spanned by the finite rank operators is denoted by γ(H, X). A linear operator R : H → X is said to be γ-radonifying if it belongs to γ(H, X).
A celebrated result of Hoffman-Jørgensen and Kwapień [12,21] implies that
γ ∞ (H, X) = γ(H, X)
for Banach spaces X not containing an isomorphic copy of c 0 .
If H is separable with orthonormal basis (h n ) n 1 , then an operator R : H → X is γ-radonifying if and only if the sum n 1 γ n Rh n converges in L 2 (Ω; X), in which case we have
R γ(H,X) = E j 1 γ j Rh j 2 1 2 .
The following criterion for membership of γ(H, X) will be referred to as covariance domination.
Proposition 2.1. Let S ∈ L (H, X) and T ∈ γ(H, X) satisfy
S * ξ * C T * ξ * , ξ * ∈ X * ,
with C independent of ξ * . Then S ∈ γ(H, X) and S γ(H,X) C T γ(H,X) .
For more details we refer to [19,24] and the references therein. Let (A, Σ, µ) be a σ-finite measure space, H a Hilbert space and X a Banach space. In the formulation of the next result, which is a multiplier result due to Kalton and Weis [19], we identify H ⊗ X-valued functions f ⊗ ξ, where f ∈ L 2 (A; H) and ξ ∈ X, with the operator R f ⊗ξ ∈ γ(L 2 (A; H), X) defined by
(2.2) R f ⊗ξ g := f, g ⊗ ξ, g ∈ L 2 (A; H).
where f, h denotes the scalar product on L 2 (A; H). This follows from a simple application of the Kahane-Khintchine inequality; we refer to [24,Proposition 2.6] for the details. Here, H and X are allowed to be arbitrary Hilbert spaces and Banach spaces, respectively; the norm constants in the isomorphism are independent of H.
Lemma 2.2. Let X be a Banach space, let (A, Σ, µ) be a σ-finite measure space, and let M : A → L (X) be a function such that a → M (a)ξ is strongly µ-measurable for all ξ ∈ X. If the set M = {M (a) : a ∈ A} is γ-bounded, then the mapping f (·) ⊗ ξ → f (·) ⊗ M (·)ξ,
Let γ = (γ n ) n 1 be a sequence of independent standard normal variables on a probability space (Ω, F , P). Recall that a Banach space X is called K-convex if the mapping
π γ : f → n 1 γ n E(γ n f ), f ∈ L 2 (Ω; X),
defines a bounded operator on L 2 (Ω; X). This notion is well-defined: if π γ is bounded for some sequence γ, then it is bounded for all sequences γ. A celebrated result of Pisier [26] states that X is K-convex if and only if X is B-convex if and only if X has nontrivial type. If X is K-convex, then the isometry I γ : γ(H, X) → L 2 (Ω; X) defined by
I γ R := n 1
γ n Rh n maps γ(H, X) onto a complemented subspace of L 2 (Ω; X). Indeed, for all R ∈ γ(H, X) we have
π γ I γ R = n 1 γ n Eγ n j 1 γ j Rh j = n 1 γ n Rh n = I γ R.
Hence, the range of I γ is contained in the range of π γ . Since the range of π γ is spanned by the functions γ n ⊗ ξ = I γ (h n ⊗ ξ), the range is π γ is contained in the range of I γ . We conclude that the ranges of π γ and I γ coincide and the claim is proved. As an application of this we are able to describe complex interpolation spaces of the spaces γ(H, X).
Proposition 2.3. If X 1 and X 2 are K-convex, then for all 0 < θ < 1 we have
[γ(H, X 1 ), γ(H, X 2 )] θ = γ(H, [X 1 , X 2 ] θ ) with equivalent norms.
Proof. In view of the preceding observations this follows from general results on interpolation of complemented subspaces [5, Chapter 5].
Main estimate
The main estimate of this paper is a γ-boundedness estimate for some averaging operators, which is proven below.
We start by recalling some known results. The first is Bourgain's extension to UMD spaces of Stein's inequality [6] (see [7] for a complete proof).
Lemma 3.1. Let 1 < p < ∞ and let X be a UMD space. Let (F m ) m∈Z be a filtration on a probability space (Ω, F , P). Then the family of conditional expectations
E = {E( · |F m ) : m ∈ Z} is γ-bounded on L p (Ω; X).
Let us agree that a cube in R n is any set Q of the form x + [0, ℓ) n with x ∈ R n and ℓ > 0. We denote ℓ(Q) := ℓ and call it the side-length of Q. A system of dyadic cubes is a collection ∆ = k∈Z ∆ 2 k , where ∆ 2 k is a disjoint cover of R n by cubes of side-length 2 k , and each Q ∈ ∆ 2 k is the union of 2 n cubes R ∈ ∆ 2 k−1 . We recall the following geometric lemma of Mei [23]: Lemma 3.2. There exist n + 1 systems of dyadic cubes ∆ 0 , . . . , ∆ n and a constant C < ∞ such that for any ball B ⊂ R n there is a Q ∈ n k=0 ∆ k which satisfies B ⊂ Q and |Q| ≤ C |B|.
The following results can be found in [16]:
Lemma 3.3. Let X be a UMD space and 1 < p < ∞. Let r ∈ Z n \ {0} and x Q ∈ X for all Q ∈ ∆. Then E k∈Z ε k Q∈∆ 2 k 1 Q+rℓ(Q) x Q p ≤ C(1 + log |r|)E k∈Z ε k Q∈∆ 2 k 1 Q x Q p . Lemma 3.4. Let X be a UMD space, 1 < p < ∞, and m ∈ Z + . For each Q ∈ ∆, let Q ′ , Q ′′ ∈ ∆ be subcubes of Q of side-length 2 −m Q. Then for all ℓ ∈ Z and all x Q ∈ X E k≡ℓ ε k Q∈∆ 2 k 1 Q ′′ x Q p ≤ CE k≡ℓ ε k Q∈∆ 2 k 1 Q ′ x Q p , where k ≡ ℓ is short-hand for k ≡ ℓ mod (m + 1).
The previous lemmas will now be used to prove our main estimate.
Proposition 3.5. Let X be a UMD space, 1 < p < ∞, and let L p (X) have type τ . For α 1, let A α be the family of operators
f → A α B f := 1 αB − B f dx,
where B runs over all balls in R n . Then A α is γ-bounded on L p (X) with the γ-bound at most C(1 + log α)α n/τ and C depends only on X, p, τ and n.
Proof. We have to show that
E k j=1 ε j 1 αBj − Bj f j dx p ≤ CE k j=1 ε j f j p .
By splitting all the balls B j into n + 1 subsets and considering each of them separately, we may assume by Mei's lemma that there is a system of dyadic cubes ∆ and Q 1 , . . . , Q k ∈ ∆ such that B j ⊂ Q j and |Q j | ≤ C |B j |. Let m be the integer for which 2 m−1 ≤ α < 2 m . Let Q * j ∈ ∆ be the unique cube in the dyadic system which has side-length 2 m ℓ(Q j ) and contains Q j . Then αB j is contained in the union of Q * j and at most 2 n − 1 of adjacent cubes R ∈ ∆ of the same size. Writing g j = 1 Bj f j , we observe that
− Bj f j dx = |Q j | |B j | − Qj g j dx.
Since |Q j | / |B j | ≤ C, by the contraction principle it suffices to show that
E k j=1 ε j 1 Rj − Qj g j dx p ≤ CE k j=1 ε j g j p ,
where R j = Q * j + rℓ(Q * j ) for some |r| ≤ n. Thanks to Lemma 3.3, it suffices to consider r = 0. We next write Q * j as the union M i=1 Q ji , where Q ji ∈ ∆ are the M := 2 nm subcubes of Q * j of side-length ℓ(Q j ). Let us fix the enumeration so that Q j1 = Q j . Writing x j := − Qj g j dx for short, it follows that
E k j=1 ε j 1 Q * j x j p = E M i=1 k j=1 ε j 1 Qji x j p ≤ CE ′ E M i=1 ε ′ i k j=1 ε j 1 Qji x j p ≤ C M i=1 E k j=1 ε j 1 Qji x j τ p 1/τ
where the first estimate follows from the Khintchine-Kahane inequality and the disjointness of the Q ji for each fixed j, and the second from the assumed type-τ property.
If we assume, for the moment, that all the side-lengths 2 k(j) := ℓ(Q j ) satisfy k(j) ≡ k(j ′ ) mod (m + 1), we may apply Lemma 3.4 to continue the estimate with
≤ C M i=1 E k j=1 ε j 1 Qj x j τ p 1/τ ≤ CM 1/τ E k j=1 ε j 1 Qj − Qj g j dx p ≤ CM 1/τ E k j=1 ε j g j p ,
where the last estimate applied Stein's inequality, observing that the operators g → 1 Qj − Qj g dx are conditional expectations related to the dyadic filtration induced by ∆. Since M = 2 nm ≤ 2 n α n , we obtain the assertion even without the logarithmic factor in this case.
In general, the above assumption may not be satisfied, but we can always split the indices j into m + 1 ≤ c(1 + log α) subsets which verify the assumption, and this concludes the proof.
Remark 3.6. The proof simplifies considerably in the important special case α = 1.
The vector-valued tent spaces T p,2 (X)
In order to motivate our approach we begin with a simple characterisation of tent spaces in the scalar case. We put R n+1
+ := R n × R + and denote Γ(x) = {(y, t) ∈ R n+1 + : |x − y| < t}. Thus (y, t) ∈ Γ(x) ⇔ y ∈ B(x, t), where B(x, t) = {y ∈ R n : |x − y| < t}. We shall write L p = L p (R n ), L 2 ( dy dt t n+1 ) = L 2 R n+1 + ,
dy dt t n+1 , where dy and dt denote the Lebesgue measures on R n and R + . Similar conventions will apply to their vector-valued analogues. The dimension n 1 is considered to be fixed. For 1 p, q < ∞, the tent space T p,q = T p,q (R n+1 + ) consists of all (equivalence classes of) measurable functions f : R n+1
+ → C with the property that R n Γ(x) |f (y, t)| q dy dt t n+1 p q dx is finite. With respect to the norm f T p,q (R n+1 + ) := Γ(·) |f (y, t)| q dy dt t n+1 1 q L p ,
T p,q is a Banach space. Tent spaces were introduced in the 1980's by Coifman, Meyer, and Stein [8]. Some of the principal results of that paper were simplified by Harboure, Torrea, and Viviani [11], who exploited the fact that
J : f → x → [(y, t) → 1 B(x,t) (y)f (y, t)]
maps T p,q isometrically onto a complemented subspace of L p (L q ( dy dt t n+1 )) for 1 < p, q < ∞.
We now take q = 2, H a Hilbert space, and extend the mapping J to functions in C c (H) ⊗ X by J(g ⊗ ξ) := Jg ⊗ ξ and linearity. Here, C c (H) denotes the space of H-valued continuous functions on R n+1 + with compact support. Note that by (2.2),
J(g ⊗ ξ) defines an element of L p (γ(L 2 ( dy dt t n+1 ; H), X)) in a natural way. Definition 4.1. Let 1 ≤ p < ∞. The tent space T p,2 (H; X) is defined as the completion of C c (H) ⊗ X with respect to the norm f T p,2 (H;X) := Jf L p (γ(L 2 ( dy dt t n+1
;H),X)) . T p,2 (C; X) will simply be denoted by T p,2 (X).
It is immediate from this definition that J defines an isometry from T p,2 (H; X) onto a closed subspace of L p (γ(L 2 ( dy dt t n+1 ; H), X)). In what follows we shall always identify T p,2 (H; X) with its image in L p (γ(L 2 ( dy dt t n+1 ; H), X). Using the identification γ(L 2 ( dy dt t n+1 ), C) = L 2 ( dy dt t n+1 ) we see that our definition extends the definition of tent spaces in the scalar-valued case.
Our first objective is to prove that if X is a UMD space, then T p,2 (H; X) is complemented in L p (γ(L 2 ( dy dt t n+1 ; H), X)). Proposition 4.2. Let 1 < p < ∞, H a Hilbert space, and X a UMD space. The mapping
N f (x, y, t) := 1 B(y,t) (x) |B(y, t)| B(y,t) f (z, y, t) dz,
initially defined for operators of the form (2.2), extends to a bounded projection in
L p (γ(L 2 ( dy dt t n+1 ; H), X)) whose range is T p,2 (H; X).
Proof. We follow the proof of Harboure, Torrea, and Viviani [11, Theorem 2.1] for the scalar-valued case, the main difference being that the use of maximal functions is replaced by a γ-boundedness argument using averaging operators.
First we prove that N is a bounded operator. In view of the isomorphism (2.3) it suffices to prove that N acts as a bounded operator on γ(L 2 ( dy dt t n+1 ; H), L p (X)). This will be achieved by identifying N as a pointwise multiplier on L p (X) with γ-bounded range, and then applying Lemma 2.2. In fact, putting
N (y, t) g := 1 B(y,t) |B(y, t)| B(y,t) g(z) dz, g ∈ L p (X), and f y,t (x) := f (x, y, t) := f (y, t) ⊗ g(x), we have N f (·, y, t) = f (y, t) ⊗ N (y, t)g = f (y, t) ⊗ A B(y,t) g.
The γ-boundedness of {N (y, t) : (y, t) ∈ R n+1 + } now follows from Proposition 3.5. Knowing that N is bounded on L p (γ(L 2 ( dy dt t n+1 ; H), X)), the fact that it is a projection follows from the scalar case, noting that the linear span of the functions of the form
1 B(x,t) ⊗ (f ⊗ ξ), with f ∈ C c (H), x ∈ R n , and t > 0, is dense in L p (γ(L 2 ( dy dt t n+1 ; H), X)). For α > 0 the vector-valued tent space T p,2
α (H; X) may be defined as above in terms of the norm
f T p,2 α (H;X) := J α f L p (γ(L 2 ( dy dt t n+1 ;H),X)) , where J α f := x → [(y, t) → 1 B(x,αt) (y)f (y, t)] . Theorem 4.3. Let 1 < p < ∞, H a Hilbert space and X a UMD space such that L p (H ⊗ X) has type τ . For all α > 0, a strongly measurable function f : R n+1 + → H ⊗ X belongs to T p,2 (H; X) if and only if it belongs to T p,2 α (H; X). Moreover, there exists a constant C = C(p, X) such that (4.1) f T p,2 (H;X) f T p,2 α (H;X) C(1 + log α)α n/τ f T p,2 (H;X)
for f ∈ T p,2 (H; X) and α > 1.
Proof. It suffices to prove the latter estimate in (4.1). On L p (γ(L 2 ( dy dt t n+1 ; H), X)), we consider the operator
N α f (x, y, t) := 1 B(y,αt) (x) |B(y, t)| B(y,t) f (z, y, t) dz.
Simple algebra shows that N α Jf = J α f , and hence
f T p,2 α (X) = J α f L p (γ(L 2 ( dy dt t n+1 ;H),X)) = N α Jf L p (γ(L 2 ( dy dt t n+1 ;H),X)) ≤ N α L (L p (γ(L 2 ( dy dt t n+1 ;H),X))) Jf L p (γ(L 2 ( dy dt t n+1 ;H),X))
.
By the isomorphism (2.3), we may consider the boundedness of N α on the space γ(L 2 ( dy dt t n+1 ; H), L p (X)) instead, and here this operator acts as the pointwise multiplier
N α ( f ⊗ g)(·, y, t) = f (y, t) ⊗ A α B(y,t) g.
So, its boundedness with the asserted estimate follows from Proposition 3.5. To see this, consider functions of the form f (y, t) = 1 [1,2] (t)g(y). Then
f T p,2 α = (η α * |g| 2 ) 1/2 p , where the η α are functions having pointwise bounds c1 B(0,α) ≤ η α ≤ C1 B(0,Cα) for some constants C > 1 > c > 0 depending only on n.
Let us take g = |g| 2 = 1 B(0,1) . Then (η α * |g| 2 ) 1/2 =η α , whereη α is another similar function, and hence
f T p,2 α = (η α ) 1/2 p α n/p α n/p f T p,2 .
This proves the sharpness for p ≤ 2.
Let us then choose g = g α = 1 B(0,α) . Then
η α * |g α | 2 = α n η α , η 1 * |g α | 2 = η α ,
where η α , η α are yet more similar functions as η α . Writing f α (y, t) = 1 [1,2] (t)g α (y), we have
f α T p,2 α = (α n η α ) 1/2 p = α n/2 (η α ) 1/2 p α n/2 (η α ) 1/2 p = α n/2 f α T p,2 .
This proves the sharpness for p 2.
In fact, for p = 2, a simple application of Fubini's theorem shows that we have the equality f T 2,2 α = α n/2 f T 2,2 for all f ∈ T 2,2 and α > 0, so the logarithmic factor is unnecessary in this case.
Sometimes it is useful to use tent space norms defined with a smooth cut-off instead of the sharp cut-
off 1 B(x,t) (y). Given a function φ ∈ C ∞ c (R) such that φ(w) = 1 if |w| ≤ 1 2 and φ(w) = 0 if |w| 1, we are thus led to consider the mapping J φ f := x → [(y, t) → φ( |y−x| t )f (y, t)] and f T p,2 φ (H;X) := J φ f L p (γ(L 2 ( dy dt t n+1 ;H),X))
.
Proposition 4.5. Let 1 < p < ∞, H a Hilbert space and X a UMD space. A strongly measurable function f : R n+1 + → H ⊗ X belongs to T p,2 (H; X) if and only if it belongs to T p,2 φ (H; X). Moreover, f T p,2 φ (H;X) f T p,2 (H;X) for f ∈ T p,2 (H; X).
Proof. The proof is the same as that of Theorem 4.3. Consider the operators
N φ f (x, y, t) := φ( |y−x| t ) |B(y, t)| B(y,t) f (z, y, t) dz, N 1 2 f (x, y, t) := 1 B(x, t 2 ) B(y, t 2 ) B(y, t 2 ) f (z, y, t) dz.
We have J φ = N φ J and J 1 2 = N 1 2 J φ . Moreover the operators N φ and N 1 2 act as the pointwise multipliers
N φ ( f ⊗ g)(·, y, t) = f (y, t) ⊗ M φ y,t A 1 B(y,t) g, N 1 2 ( f ⊗ g)(·, y, t) = f (y, t) ⊗ A 1 B(y, t 2 ) g. where M φ y,t g(x) := φ( |y−x| t )g(x)
. By Lemma 2.2 and Theorem 4.3 the result follows from Proposition 3.5 and Kahane's contraction principle.
If X is a UMD space, H a Hilbert space, and 1 < p, q < ∞ satisfy 1 p + 1 q = 1, we have natural isomorphisms (L p (γ(L 2 ( dy dt t n+1 ; H), X))) * L q ((γ(L 2 ( dy dt t n+1 ; H), X)) * ) L q (γ(L 2 ( dy dt t n+1 ; H), X * ))). The first of these follows from the fact that X, and therefore γ(L 2 ( dy dt t n+1 ; H), X), is reflexive, and the second follows from the K-convexity of UMD spaces. Denoting by N the projection of Proposition 4.2, it is easily verified that under the above identification the adjoint N * is given by the same formula. As a result we obtain the following representation for the dual of T p,2 (H; X): Theorem 4.6. If X is a UMD space, H a Hilbert space, and 1 < p, q < ∞ satisfy 1 p + 1 q = 1, we have a natural isomorphism (T p,2 (H; X)) * T q,2 (H; X * ).
As an immediate consequence of Proposition 2.3 we obtain the following result.
Theorem 4.7. Let 1 < p 0 p 1 < ∞, H a Hilbert space, and let X 0 and X 1 be UMD spaces. Then for all 0 < θ < 1 we have
[T p0,2 (H; X 0 ), T p1,2 (H; X 1 )] θ = T p θ ,2 (H; [X 0 , X 1 ] θ ), 1 p θ = 1 − θ p 0 + θ p 1 .
Proof. The result follows by combining (2.3) with the following facts:
(i) if X is a UMD space, then L p (X) is a UMD space for all 1 < p < ∞, (ii) UMD spaces are K-convex, (iii) for 1 p 0 p 1 < ∞ we have [L p0 (X 0 ), L p1 (X 1 )] θ = L p θ ([X 0 , X 1 ] θ )
with p θ as above.
We conclude this section with a result showing that certain singular integral operators are bounded from L p (X) to T p,2 (X). This gives a Banach space-valued extension of [11,Section 4].
Sf (t, y) = R n k t (y, z)f (z) dz
for f ∈ C c (R n ) and a measurable complex-valued function (t, y, z) → k t (y, z). Assume that
(1) S ∈ L (L 2 , T 2,2 ), (2) There exists α > 0 such that for all y, z ∈ R n and t > 0 we have
|k t (y, z)| t α (|y − z| + t) n+α ,(3)
There exists β > 0 such that for all t > 0 and all y, z, z ′ ∈ R n satisfying
|z − y| + t > 2|z − z ′ | we have |k t (y, z) − k t (y, z ′ )| t β |z − z ′ | (|y − z| + t) n+1+β ,(4)
For all t > 0 and y ∈ R n we have R n k t (y, z) dz = 0.
Let 1 < p < ∞. Then S ⊗I X extends to a bounded operator from L p (X) to T p,2 (X).
Proof. We consider the auxiliary operator T taking X-valued functions to ones with values in γ(L 2 ( dy dt t n+1 ), X), given by
T f (x) = R n K(x, z) ⊗ f (z) dz, f ∈ C c (X),
where K(x, z) is the L 2 ( dy dt t n+1 )-valued kernel defined by
K(x, z) : (y, t) → φ |y − x| t k t (y, z)
for some even φ ∈ C ∞ c (R) such that φ(w) = 1 if |w| ≤ 1 2 , φ(w) = 0 if |w| 1, and 1 0 φ(r)r n−1 dr = 0. The claim of the theorem follows if we can show that T extends to a bounded operator from L p (X) to L p (γ(L 2 ( dy dt t n+1 ); X)). This is proved by applying a version of the T (1) theorem for Hilbert space -valued kernels from [15] (which, in turn, is based on results from [17,18]). We first remark that the condition T (1) = 0 follows directly from (4), whereas the vanishing integral assumption on φ guarantees that T ′ (1) = 0, too. It remains to check the following L 2 ( dy dt t n+1 )-valued versions of the standard estimates:
(4.2) sup x,z∈R n |x − z| n K(x, z) L 2 ( dy dt t n+1 ) 1, (4.3) sup x,x ′ ,z∈R n |x−z|>2|x−x ′ | |x − z| n+1 |x − x ′ | K(x, z) − K(x ′ , z) L 2 ( dy dt t n+1 ) 1, (4.4) sup x,z,z ′ ∈R n |x−z|>2|z−z ′ | |x − z| n+1 |z − z ′ | K(x, z) − K(x, z ′ ) L 2 ( dy dt t n+1 )
1, and the weak boundedness property: for any η, η ∈ C ∞ c (B(0, 1)) which satisfy the bounds η ∞ , η ∞ , ∇η ∞ , ∇ η ∞ ≤ 1, one should have
(4.5) sup (u,r)∈R n ×R+ R n R n K(x, z)η x − u r η( z − u r ) dz dx r n L 2 ( dy dt t n+1 )
1.
Proof of (4.2): Using (2) and noting that we have φ |y−x|
t = 0 for y ∈ B(x, t), ∞ 0 R n φ |y − x| t k t (y, z) 2 dy dt t n+1 |x−z| 0 B(x,t) t α (|x − z| + t − |y − x|) n+α 2 dy dt t n+1 + ∞ |x−z| B(x,t) dy dt t 3n+1 |x−z| 0 t 2α−1 |x − z| 2n+2α dt + ∞ |x−z| dt t 2n+1 |x − z| −2n .
Proof of (4.3): Using (2) and the mean value theorem and reasoning as above, for
x, x ′ , z satisfying |x − z| > 2|x − x ′ | we have ∞ 0 R n φ |y − x| t − φ |y − x ′ | t k t (y, z) 2 dy dt t n+1 ∞ 0 B(x,t) |x − x ′ |t α t(|y − z| + t) n+α 2 dy dt t n+1 + similar |x−z| 0 B(x,t) |x − x ′ |t α t(|x − z| + t − |y − x|) n+α 2 dy dt t n+1 + ∞ |x−z| |x − x ′ | 2 dt t 2n+3 + similar |x−z| 0 t 2α−3 |x − x ′ | 2 |x − z| 2n+2α dt + |x − x ′ | 2 |x − z| 2n+2 + similar |x − x ′ | 2 |x − z| 2n+2 ,
where the words "similar" above refer to a copy of the other terms appearing in the same step, with all the occurences of x and x ′ interchanged.
Proof of (4.4):
Using (3), for x, z, z ′ satisfying |x − z| > 2|z − z ′ | we have ∞ 0 R n φ |y − x| t k t (y, z) − k t (y, z ′ ) 2 dy dt t n+1 ∞ 0 B(x,t) t β |z − z ′ | (|z − y| + t) n+1+β 2 dy dt t n+1 |x−z| 0 B(x,t) t β |z − z ′ | (|z − x| + t − |y − x|) n+1+β 2 dy dt t n+1 + ∞ |x−z| |z − z ′ | 2 t 2n+3 dt |x−z| 0 t 2β−1 |z − z ′ | 2 |z − x| 2n+2+2β dt + ∞ |x−z| |z − z ′ | 2 t 2n+3 dt |z − z ′ | 2 |x − z| 2n+2 .
Proof of (4.5): Using the Cauchy-Schwarz inequality and (1) we have
∞ 0 R n R n R n φ |y − x| t k t (y, z)η x − u r η z − u r dz dx r n 2 dy dt t n+1 1 r n ∞ 0 R n R n φ |y − x| t R n k t (y, z) η z − u r dz 2 dy dt dx t n+1 1 r n S η · − u r 2 T 2,2 η 2 L 2 1.
This concludes the proof.
Off-diagonal estimates and their consequences
We start by recalling some terminology.

Definition 5.1. Let t > 0 and M ≥ 0. An operator T ∈ L (L 2 (R n ; H)) is said to satisfy off-diagonal estimates of order M at scale t if there exists a constant C such that
\[ \| \mathbf{1}_E T f \|_{L^2(\mathbb{R}^n;H)} \le C \Big\langle \frac{d(E,F)}{t} \Big\rangle^{-M} \| f \|_{L^2(\mathbb{R}^n;H)} \]
for all Borel sets E, F ⊆ R n and all f ∈ L 2 (R n ; H) with support in F . Here ⟨a⟩ = 1 + |a| and d(E, F ) = inf{|x − y| : x ∈ E, y ∈ F }. The set of such operators is denoted by OD t (M ).

Note that a single operator belongs to OD t (M ) if and only if it belongs to OD s (M ) whenever s, t > 0. However, the related constant C will typically not be the same. The scale of the off-diagonal estimates becomes very relevant when we want uniformity in the constants for a family of bounded operators. Thus we say that (T z ) z∈Σ ⊆ L (L 2 (H)), where Σ ⊆ C, satisfies off-diagonal estimates of order M if T z ∈ OD |z| (M ) for all z ∈ Σ with the same constant C.
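A standard example to keep in mind, sketched numerically below for n = 1, is the heat semigroup at time t²: its Gaussian kernel decays faster than any power of d(E, F )/t, so it satisfies off-diagonal estimates of every order M. The grid, the sets E and F, and the time t in the snippet are arbitrary choices made for illustration.

```python
import numpy as np

# Hedged 1-d illustration of off-diagonal estimates: the heat semigroup at time t^2,
# realized as the Fourier multiplier e^{-t^2 |xi|^2}, belongs to OD_t(M) for every M,
# since its Gaussian kernel decays super-polynomially.  All parameters are arbitrary.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def heat(f, t):
    return np.fft.ifft(np.exp(-(t * xi) ** 2) * np.fft.fft(f)).real

t = 1.0
F = np.abs(x + 40.0) < 5.0                    # support set F = [-45, -35]
f = F.astype(float)
for d in [5.0, 10.0, 20.0, 40.0]:
    E = np.abs(x - (-35.0 + d)) < 1.0         # set E at distance roughly d from F
    norm_E = np.sqrt(np.sum((E * heat(f, t)) ** 2) * dx)
    print(d, norm_E)                          # decays faster than any power of d/t
```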
Theorem 5.2. Let 1 < p < ∞, H be a Hilbert space, X be a UMD Banach space, and L p (X) have type τ . Let (T t ) t>0 be a uniformly bounded family of operators on L 2 (H) satisfying off-diagonal estimates of order M for some M > n/τ . Then the operator T , defined on C c (H) ⊗ X by T (g ⊗ ξ)(y, t) := T t (g(·, t))(y) ⊗ ξ, extends uniquely to a bounded linear operator on T p,2 (H; X).
Proof. Let us consider a function
f = i g i ⊗ ξ i ∈ C c (H) ⊗ X. We define the sets C 0 (x, t) := B(x, 2t), C m (x, t) := B(x, 2 m+1 t) \ B(x, 2 m , t), m = 1, 2, . . . , so that there is a disjoint union ∞ m=0 C m (x, t) = R n . Let (u m ) ∞ m=0 be the functions u m : x → (y, t) → 1 B(x,t) (y)T t 1 Cm(x,t) f (·, t) (y) , where T t 1 Cm(x,t) f (·, t) (y) := i T t (1 Cm(x,t) g i (·, t))(y) ⊗ ξ i .
We then have the formal expansion J(T f ) = ∞ m=0 u m , and for a fixed x ∈ R n , we separately estimate the γ(L 2 ( dy dt t n+1 ; H), X)-norms of each u m (x). Fix ξ * ∈ X * , and denote by | · | the norm on H. Let us also write f (y, t), ξ * := i g i (y, t) ξ i , ξ * . For m = 0 we estimate, using the uniform boundedness of the operators T t on L 2 (H),
u 0 (x) * ξ * 2 L 2 ( dy dt t n+1 ;H) = R n+1 + 1 B(x,t) (y) T t 1 B(x,2t) f (·, t), ξ * (y) 2 dy dt t n+1 R n+1 + 1 B(x,2t) (y)| f (y, t), ξ * | 2 dy dt t n+1 .
Hence, by covariance domination (Proposition 2.1),
u 0 (x) γ(L 2 ( dy dt t n+1 ;H),X) (y, t) → 1 B(x,2t) (y)f (y, t) γ(L 2 ( dy dt t n+1 ;H),X)
, and we conclude that For m 1, the off-diagonal estimates of order M imply
u m (x) * ξ * 2 L 2 ( dy dt t n+1 ;H) = R n+1 + 1 B(x,t) (y) T t 1 Cm(x,t) f (·, t), ξ * (y) 2 dy dt t n+1 ≤ 2 −2mM R n+1 + 1 B(x,2 m+1 t) (y)| f (y, t), ξ * 2 dy dt t n+1 .
Hence, by covariance domination,
u m (x) γ(L 2 ( dy dt t n+1 ;H),X) 2 −mM (y, t) → 1 B(x,2 m+1 t) (y)f (y, t) γ(L 2 ( dy dt t n+1 ;H),X)
, and from Theorem 4.3 we conclude that
u m L p (γ(L 2 ( dy dt t n+1 ;H),X)) 2 −mM f T p,2 2 m+1 (H;X) 2 −mM · m · 2 mn/τ f T p,2 (H;X) .
Keeping in mind that M > n/τ , we may sum over m to see that the formal expansion J(T f ) = ∞ m=0 u m converges absolutely in L p (γ(L 2 ( dy dt t n+1 ; H), X)), and we obtain the desired result.
Remark 5.3. The T p,2 (H; X)-boundedness of the operator T as considered above can be seen as a (p and X dependent) property of the (parameterised) operator family (T t ) t>0 ⊂ L (L 2 (H)). Let us call this property tent-boundedness. A simple example of a tent-bounded family consists of the translations T t f (x) = f (x + ty), where y is some unit vector. Indeed, these are obviously uniformly bounded in L 2 (and in L p as well) and satisfy off-diagonal estimates of any order. In contrast to this, even when X = C, it is well known that this family is not γ-bounded in L p unless p = 2.
We next consider operators of the form
(T f ) t := ∞ 0 T t,s f s ds s , f ∈ C c (H) ⊗ X,
where T t,s ∈ L (L 2 (H)). This is first done separately for upper and lower diagonal "kernels" T t,s .
Proposition 5.4. Let 1 < p < ∞, H be a Hilbert space, X be a UMD space, and let L p (X) have type τ . Let (U t,s ) 0<t≤s<∞ be a uniformly bounded family of operators on L 2 (H) such that (U t,s ) s t ∈ OD s (M ) uniformly in t for some M > n/τ . Let further α > n/2. Then
(U F ) t = ∞ t t s α U t,s F s ds s
extends to a bounded operator on T p,2 (H; X).
Proof. Let F ∈ C c (H) ⊗ X be arbitrary and fixed. It suffices to estimate the norm of the functions u k ∈ L p (γ(L 2 ( dy dt t n+1 ; H), X)) defined by Let x ∈ R n be fixed for the moment. To estimate the relevant γ(L 2 ( dy dt t n+1 ; H), X)norm at this point, we wish to use the covariance domination. Hence let ξ * ∈ X * , write f s := F s (·), ξ * ∈ L 2 (H) for short, and consider the quantity
u k : x → (y, t) → 1 B(x,t) ∞ t t s α U t,s (1 C k (x,(u k (x))(y, t), ξ * = 1 B(x,t) ∞ t t s α U t,s (1 C k (x,s) f s )(y) ds s ∈ H. Its norm in L 2 ( dy dt t n+1 ; H) is dominated by ∞ 0 ∞ t t s α 1 B(x,t) U t,s (1 C k (x,s) f s ) L 2 (H) ds s 2 dt t n+1 1/2 ≤ ∞ 0 ∞ t t s 2ǫ ds s ∞ t t s 2(α−ǫ) 1 B(x,t) U t,s (1 C k (x,s) f s ) 2 L 2 (H) ds s dt t n+1 1/2 ∞ 0 ∞ t t s 2(α−ǫ) 2 −kM 1 B(x,2 k+1 s) f s L 2 (H) 2 ds s dt t n+1 1/2 2 −kM ∞ 0 1 B(x,2 k+1 s) f s 2 L 2 (H) ds s n+1 1/2 ,
where in the last step we exchanged the order of integration and integrated out the t variable; the convergence required that 2(α − ǫ) > n, which holds for sufficiently small ǫ > 0, since α > n/2.
The right-hand side of our computation is 2 −kM times the L 2 ( dy dt t n+1 ; H)-norm of 1 B(x,2 k+1 s) F s (y), ξ * , so that covariance domination gives us
u k (x) γ(L 2 ( dy dt t n+1 ;H),X) 2 −kN (J 2 k+1 F )(x) γ(L 2 ( dy dt t n+1 ;H),X)
.
Taking L p -norms and using Theorem 4.3 yields Recalling that M > n/τ , we find that the formal expansion J(U F ) = ∞ k=0 u k converges absolutely in L p (γ(L 2 ( dy dt t n+1 ; H), X)), and we obtain the desired estimate U F T p,2 (X) F T p,2 (X) .
Proposition 5.5. Let 1 < p < ∞, H be a Hilbert space, X be a UMD space, and let L p (X) have type τ . Let (L t,s ) 0<s≤t<∞ be a uniformly bounded family of operators on L 2 (H) such that (L t,s ) t s ∈ OD t (N ) uniformly in s for some N > n/τ . Let further β > n(1/τ − 1/2). Then
(LF ) t = t 0 s t β L t,s F s ds s
extends to a bounded operator on T p,2 (H; X).
Proof. The proof follows a similar approach as the previous one. This time, we expand J(LF ) in a double series
∞ k,m=0 v k,m , where v k,m : x → (y, t) → 2 −m t 2 −(m+1) t s t β 1 B(x,t) (y)L t,s (1 C k (x,t) F s )(y) ds s .
Again, we wish to estimate the γ(L 2 ( dy dt t n+1 ; H), X)-norm of v k,m (x) by covariance domination, for which purpose we take ξ * ∈ X * , write f s := F s (·), ξ * , and com-
pute v k,m (x), ξ * L 2 ( dy dt t n+1 ;H) ≤ ∞ 0 2 −m t 2 −(m+1) t 2 −mβ 1 B(x,t) L t,s (1 C k (x,t) F s ) L 2 (H) ds s 2 dt t n+1 1/2 2 −mβ ∞ 0 2 −m t 2 −(m+1) t 2 −kN 1 B(x,2 k+1 t) F s L 2 (H) 2 ds s dt t n+1 1/2 2 −m(β+n/2) 2 −kN ∞ 0 1 B(x,2 k+m+2 s) F s 2 L 2 (H) ds s n+1 1/2 . This is 2 −m(β+n/2) 2 −kN times the L 2 ( dy dt t n+1 ; H)-norm of 1 B(x,2 k+m+2 s) (y) F s (y), ξ * ; hence by covariance domination v k,m (x) γ(L 2 ( dy dt t n+1 ;H),X) 2 −m(β+n/2) 2 −kN (J 2 k+m+2 F )(x) γ(L 2 ( dy dt t n+1 ;H),X)
.
Taking L p -norms and using Theorem 4.3 we get v k,m L p (γ(L 2 ( dy dt
t n+1 ;H),X)) 2 −m(β+n/2) 2 −kN F T p,2 2 k+m+2 (H;X) 2 −m(β+n/2) 2 −kN (1 + k + m)2 (k+m)n/τ F T p,2 (H;X) ,
and we can sum up the series over k and m since β + n/2 > n/τ and N > n/τ .
Combining the previous two propositions with a duality argument, we finally obtain:
Theorem 5.6. Let 1 < p < ∞, H be a Hilbert space, X be a UMD space, and let L p (X) have type τ and cotype γ. Let (T t,s ) 0<t,s<∞ be a uniformly bounded family of operators on L 2 (H) such that:
(i) (T t,s ) s>t ∈ OD s (M ) uniformly in t, (ii) (T t,s ) t>s ∈ OD t (N ) uniformly in s. Then Proof. We split T into a sum U +L of upper and lower triangular parts as considered in the previous two propositions. Part (a) is an immediate consequence, since the conditions on M and α guarantee the boundedness of U and those on N and β that of L. For part (b), the boundedness of U follows as before. As for L, we observe that its (formal) adjoint on T p ′ ,2 (H; X * ) is the upper triangular operator
(T F ) t = ∞ 0 min t s α , s t β T t,(L * G) t = ∞ t t s β T * s,t G s ds s ,
where T * s,t ∈ OD s (N ) and L p ′ (X * ) = (L p (X)) * has type γ ′ = γ/(γ − 1). We know that this operator is bounded on T p ′ ,2 (H; X * ) under the conditions that N > n/γ ′ = n(1 − 1/γ) and β > n/2.
Parts (c) and (d) are proved similarly by considering U * and L, and U * and L * , respectively.
The most important case for us is when N = M , and we record this as a corollary for later reference. In this situation, the condition (b) of Theorem 5.6 becomes redundant, since it is always contained in condition (a).
Corollary 5.7. Let 1 < p < ∞, H be a Hilbert space, X be a UMD space, and let L p (X) have type τ and cotype γ. Let (T t,s ) 0<t,s<∞ be a uniformly bounded family of operators on L 2 (H) such that T t,s ∈ OD max{t,s} (M ) uniformly in t and s. Then
(5.1) (T F ) t = ∞ 0 min t s α , s t β T t,s F s ds s
extends to a bounded operator on T p,2 (H; X) if at least one of the following three conditions is satisfied: (a) M > n/τ , α > n/2, and β > n(1/τ − 1/2), (c) M > n · max{1/τ, 1 − 1/γ}, α > n(1/2 − 1/γ), and β > n(1/τ − 1/2), (d) M > n(1 − 1/γ), α > n(1/2 − 1/γ), and β > n/2.
Remark 5.8. If X = C (or more generally a Hilbert space), then one can take τ = min(2, p) and γ = max(2, p) in Corollary 5.7. For p ∈ [2, ∞) (so that τ = 2), part (a) provides the following sufficient condition for the T p,2 -boundedness of (5.1): M, α > n/2, and β > 0. For p ∈ (1, 2] (so that γ = 2), part (d) in turn gives M, β > n/2, and α > 0. This recovers the corresponding result in [3] in the Euclidean case for p ∈ (1, ∞). Note that in [3] the end-points p ∈ {1, ∞} are also considered; in fact, the proof for p ∈ (1, 2) goes via interpolating between estimates available in the atomic space T 1,2 and the Hilbert space T 2,2 . See also [1], where a weak type (1, 1) estimate is obtained.
Bisectorial operators and functional calculus
In this section we collect some generalities concerning bisectorial operators and their H ∞ -calculus. We denote by S θ the (open) bisector of angle θ, i.e. S θ = S + θ ∪ S − θ with S + θ = {z ∈ C \ {0} : | arg(z)| < θ} and S − θ = −S + θ . We denote by Γ θ the boundary of S θ , which is parameterised by arc-length and oriented anticlockwise around S θ .
A closed, densely defined, linear operator A acting in a Banach space Y is called bisectorial (of angle ω, where 0 < ω < 1 2 π) if the spectrum of A is contained in S ω and for all ω < θ < 1 2 π there exists a constant C θ such that for all nonzero z ∈ C \ S θ
‖(I + zA)^{−1}‖ ≤ C_θ |z| / d(z, S_θ).
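For orientation we note a standard example, not part of the original argument: every self-adjoint operator A on a Hilbert space is bisectorial of every angle ω ∈ (0, π/2), with C_θ = 1. Indeed, σ(A) ⊆ R ⊆ S̄_ω and, for nonzero z ∉ S_θ (so Im z ≠ 0),

```latex
\|(I+zA)^{-1}\| = |z|^{-1}\,\|(z^{-1}+A)^{-1}\|
  \le \frac{1}{|z|\, d(-z^{-1},\mathbb{R})}
  = \frac{|z|}{|\operatorname{Im} z|}
  \le \frac{|z|}{d(z,S_\theta)},
```

since R ⊂ S̄_θ gives d(z, S_θ) ≤ |Im z|.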
For α, β > 0 we set
Ψ_α(S_θ) = { f ∈ H^∞(S_θ) : ∃C  |f(z)| ≤ C min(|z|^α, 1) for all z ∈ S_θ },
Ψ^β(S_θ) = { f ∈ H^∞(S_θ) : ∃C  |f(z)| ≤ C min(1, |z|^{−β}) for all z ∈ S_θ },
Ψ^β_α(S_θ) = { f ∈ H^∞(S_θ) : ∃C  |f(z)| ≤ C min(|z|^α, |z|^{−β}) for all z ∈ S_θ },
and Ψ(S_θ) = ⋃_{α,β>0} Ψ^β_α(S_θ). Let ω < θ < π/2 be fixed. For ψ ∈ Ψ(S_θ), we define
ψ(A) = 1 2πi Γ θ ψ(z)(z − A) −1 dz.
The resolvent bounds for A imply that this integral converges absolutely in L (Y ). If one has, in addition, the quantitative estimate
‖ψ(A)‖_{L(Y)} ≲ ‖ψ‖_∞, then A is said to have an H^∞(S_θ)-calculus on Y.
Lemma 6.1. Let A be bisectorial of angle ω and let θ > ω.
(1) For φ 1 , φ 2 ∈ Ψ(S θ ) we have φ 1 (A)φ 2 (A) = (φ 1 · φ 2 )(A); this is also true if φ 2 ∈ H ∞ (S θ )
is a rational function, in which case φ 2 (A) is defined in the usual way by using the resolvents of A.
(2) For all ψ 1 ∈ Ψ(S θ ), ψ 2 ∈ H ∞ (S θ ), ψ 3 ∈ Ψ(S θ ) we have ψ 1 (A)(ψ 2 ψ 3 )(A) = (ψ 1 ψ 2 )(A)ψ 3 (A).
Proof. The first claim is the well-known homomorphism property, which in both cases can be proved by writing out the definition of φ_1(A)φ_2(A), performing a partial fraction expansion, and using Cauchy's theorem. The second claim follows from the homomorphism property for ψ_2 ∈ Ψ(S_θ), and the general case can be obtained from this by approximation (cf. [20, Theorem 9.2(i)]).
Lemma 6.2. Let A be bisectorial of angle ω and let θ > ω. Then
R(A) = R(A) ∩ D(A) = R(A(I + A)^{−2}) = ⋃_{ψ∈Ψ(S_θ)} R(ψ(A)).
Proof. If f = ψ(A)g ∈ R(ψ(A)), let f_ε := A(ε + A)^{−1} f ∈ R(A). Then
f − f_ε = ε(ε + A)^{−1} ψ(A)g = (1/2πi) ∫_Γ ε/(ε + z) ψ(z)(z − A)^{−1} g dz.
The integrand is bounded by ψ(z)z^{−1} ∈ L¹(Γ, |dz|) and tends pointwise to zero as ε → 0. Hence f_ε → f by dominated convergence.
Next we observe that f_ε = (I + εA)^{−1} f → f as ε → 0. Indeed, if f ∈ D(A), then f − f_ε = ε · (I + εA)^{−1} A f has norm at most Cε, since the second factor stays uniformly bounded. Since the operators (I + εA)^{−1} are uniformly bounded and D(A) is dense, the convergence remains true for all f. If now f ∈ R(A), then f_ε ∈ R(A) ∩ D(A).
To complete the chain, let f ∈ R(A) ∩ D(A). Then for some g ∈ D(A²) we have f = Ag = A(I + A)^{−2}(I + A)² g = ψ(A)h, where ψ(z) = z/(1 + z)² ∈ Ψ and h = (I + A)² g. This completes the proof.
We say that ψ ∈ Ψ^β_α(S_θ) is degenerate if (at least) one of the restrictions ψ|_{S^±_θ} vanishes identically; otherwise it is called non-degenerate. The following two lemmas go back to Calderón, cf. [27, Section IV.6.19]. For the convenience of the reader we include simple proofs.
Lemma 6.3 (Calderón's reproducing formula, I). Let ψ ∈ Ψ^β_α(S_θ) be non-degenerate. If α′ ≥ α and β′ ≥ β, there exists ψ̃ ∈ Ψ^{β′}_{α′}(S_θ) such that
(6.1) ∫_0^∞ ψ(tz) ψ̃(tz) dt/t = 1,  z ∈ S_θ.
Proof. Let ψ(z) := ψ(z). Let m max(α ′ − α, β ′ − β) and denote
c ± := ∞ 0 (±t) m (1 + t 2 ) m ψ(±t)ψ(±t) dt t .
By non-degeneracy, c ± > 0. Hence the function ψ(z) = c −1 ± z m (1 + z 2 ) −m ψ(z) for z ∈ S ± θ has the desired properties.
Lemma 6.4 (Calderón's reproducing formula, II). Let ψ, ψ ∈ Ψ(S θ ) satisfy (6.1).
Then ∞ 0 ψ(tA) ψ(tA)f dt t = f, f ∈ R(A),
where the left side is defined as an indefinite Riemann integral in L 2 .
Proof.
Let first f = φ(A)g for some φ ∈ Ψ(S θ ). Then ∞ 0 ψ(tA) ψ(tA)f dt t = ∞ 0 (ψ(t·) ψ(t·)φ(·))(A)g dt t = ∞ 0 1 2πi Γ θ ′ ψ(tz) ψ(tz)φ(z)(z − A) −1 g dz dt t = 1 2πi Γ θ ′ ∞ 0 ψ(tz) ψ(tz) dt t φ(z)(z − A) −1 g dz = 1 2πi Γ θ ′ φ(z)(z − A) −1 g dz = φ(A)g = f
by Lemma 6.1, absolute convergence and Fubini's theorem. To conclude, we recall from Lemma 6.2 that functions as above are dense in R(A), and notice that b a ψ(sz) ψ(sz) ds/s are uniformly in H ∞ (S θ ) so that the corresponding operators obtained by the formal substitution z := A are uniformly bounded by the functional calculus. From this the convergence of the indefinite Riemann integral to the asserted limit follows easily.
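The reproducing formula can be checked numerically in the simplest self-adjoint matrix case; the script below is only an illustrative sketch, not part of the paper, and the choice ψ = ψ̃, ψ(z) = 2z e^{−z²} (which satisfies (6.1) on the real line) is ours.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = rng.uniform(0.5, 3.0, n) * rng.choice([-1.0, 1.0], n)   # spectrum away from 0, both signs
A = Q @ np.diag(lam) @ Q.T                                    # self-adjoint, so R(A) is everything

def psi(t):
    # psi(z) = 2 z exp(-z^2) satisfies  int_0^inf psi(tz)^2 dt/t = 1  for real z != 0
    return 2 * t * A @ expm(-(t * A) @ (t * A))

x = rng.standard_normal(n)
ts = np.logspace(-3, 3, 2000)
vals = np.array([psi(t) @ (psi(t) @ x) for t in ts])
recon = np.trapz(vals, np.log(ts), axis=0)     # dt/t = d(log t)
print(np.linalg.norm(recon - x))               # close to zero: Calderon's reproducing formula
```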
Hardy spaces associated with bisectorial operators
We now move on to more specific spaces and operators. Throughout this section, we let the following assumptions be satisfied:
Assumption 7.1. The Banach space X is UMD and 1 < p < ∞. Two numbers τ ∈ [1, 2] and γ ∈ [2, ∞] are fixed in such a way that L p (X) has type τ and cotype γ.
Assumption 7.2.
H is a Hilbert space, and the operator A in L 2 (H) is bisectorial of angle ω ∈ (0, π/2). For ω < θ ′ < θ < π/2, it also has an H ∞ (S θ )-calculus on L 2 (H), and the family ((I + ζA) −1 ) ζ∈C\S θ satisfies off-diagonal estimates of order M , where M > n · min{1/τ, 1 − 1/γ}. With only the above assumptions at hand, it may well happen that A fails to be bisectorial even for H = C, and in particular to have an H ∞ -calculus, in L p for some values of p = 2. The tensor extension A⊗ I X may already fail these properties in L 2 (X). To study problems involving operators f (A) in such spaces, we are thus led to define an appropriate scale of Hardy spaces associated with A. When A is the Hodge-Dirac operator or the Hodge-de Rham Laplacian on a complete Riemannian manifold, this has been done in [3]. We build on the ideas of this paper. Lemma 7.3. For ω < θ < π/2 and ε > 0, let g ∈ H ∞ (S θ ), and let ψ ∈ Ψ ε M+ε (S θ ). Then {(g · ψ(t·))(A)} t>0 satisfies off-diagonal estimates of order M , and the offdiagonal constant has an upper bound which depends linearly on g ∞ .
Proof. Let us denote by δ := d(E, F ) the 'distance' of two Borel sets E and F as defined previously. Then, using the fact that (
I −z −1 A) −1 ∈ OD 1/|z| (M ) uniformly in z ∈ S θ , 1 E (g · ψ(t·))(A)1 F f = 1 2πi Γ θ ′ g(z)ψ(tz)1 E I − 1 z A −1 1 F f dz z Γ θ ′ min (t|z|) M+ε , (t|z|) −ε (δ|z|) −M f | dz| |z| 1/t 0 t M+ε r M+ε · δ −M r −M f dr r + ∞ 1/t t −ε r −ε · δ −M r −M f dr r t M δ −M f ,
and this proves the claim.
Lemma 7.4. Let α, β, ε > 0, and
ψ ∈ Ψ β+ε max{M−β,α}+ε (S θ ), ψ ∈ Ψ α+ε max{M−α,β}+ε (S θ ), φ ∈ C1 ⊕ Ψ(S θ ). Then ψ(tA)φ(A) ψ(sA) = min t s α , s t β S t,s ,
where (S t,s ) t,s>0 is a uniformly bounded family of operators acting on L 2 (H) such that S t,s ∈ OD max{t,s} (M ), uniformly in t and s.
Proof. We have
ψ(tA)φ(A) ψ(sA) = (t/s) α ψ 0 (tA)φ(A) ψ 0 (sA) = (s/t) β ψ 1 (tA)φ(A) ψ 1 (sA), where ψ 0 (z) := z −α ψ(z) ∈ Ψ α+β+ε ε , ψ 0 (z) := z α ψ(z) ∈ Ψ ε M+ε , ψ 1 (z) := z β ψ(z) ∈ Ψ ε M+ε , ψ 1 (z) := z −α ψ(z) ∈ Ψ α+β+ε ε .
The case s t of the claim follows from Lemma 7.3 (with s in playing the role of t in that Lemma) with g(z) = ψ 0 (tz)φ(z) and ψ 0 in place of ψ, while for the other case we take g(z) = φ(z) ψ 1 (sz) and ψ 1 in place of ψ.
Proposition 7.5. Let ψ,ψ ∈ Ψ(S θ ) and φ ∈ C1 ⊕ Ψ(S θ ). Then
(T F ) t = ∞ 0 ψ(tA)φ(A)ψ(sA)F s ds s
extends to a bounded operator on T p,2 (H; X) if at least one of the following conditions is satisfied:
(a) M > n/τ , ψ ∈ Ψ n(1/τ −1/2)+ε n/2+ε
, andψ ∈ Ψ n/2+ε
n(1/τ −1/2)+ε , (c) M > max{n/τ, n(1 − 1/γ)}, ψ ∈ Ψ n(1/τ −1/2)+ε n/2+n max{1/γ ′ −1/τ,0}+ε , andψ ∈ Ψ n(1/2−1/γ)+ε n/2+n max{1/τ −1/γ ′ ,0}+ε , (d) M > n(1 − 1/γ), ψ ∈ Ψ n/2+ε n(1/2−1/γ)+ε , andψ ∈ Ψ n(1/2−1/γ)+ε n/2+ε , where ε > 0 is arbitrary.
Proof. This is directly, if slightly tediously, verified as a corollary of Lemma 7.4 and Corollary 5.7, so that the different conditions of Proposition 7.5 correspond to those of Corollary 5.7.
Definition 7.6. We say that a pair of functions (ψ,ψ) ∈ Ψ(S θ )×Ψ(S θ ) has sufficient decay if they verify at least one of the conditions (a), (c), or (d) of Proposition 7.5.
Remark 7.7. (i) Note that the notion of sufficient decay as defined above assumes that the parameters appearing in Assumptions 7.1 and 7.2 have been fixed. Also observe that if the parameters are such that for instance n(1 − 1/γ) < M ≤ n/τ , then only the condition (d) above is applicable.
(ii) If (ψ, 0) ∈ Ψ(S θ ) × Ψ(S θ ) has sufficient decay, by Calderón's reproducing formula there exists aψ ∈ Ψ(S θ ) which satisfies (6.1) and decays as rapidly as desired; in particular, we may arrange so that the pair (ψ,ψ) also has sufficient decay. A similar remark applies if we start from aψ ∈ Ψ(S θ ) such that (0,ψ) has sufficient decay.
For f = i g i ⊗ ξ i ∈ L 2 ⊗ X and ψ ∈ Ψ(S θ ) we shall write (Q ψ f )(y, t) := i ψ(tA)g i (y) ⊗ ξ i := ψ(tA)f (y).
Definition 7.8. For 1 ≤ p < ∞ and a non-degenerate ψ ∈ Ψ(S θ ), the Hardy space H p A,ψ (X) associated with A and ψ is the completion of the space
{f ∈ R(A) ⊗ X ⊆ L 2 (H) ⊗ X : Q ψ f ∈ T p,2 (X)} with respect to the norm f H p A,ψ (X) := Q ψ f T p,2 (H;X) . It is clear that · H p A,ψ (X)
is a seminorm on R(A) ⊗ X; that it is actually a norm will be seen shortly.
By definition, the operator (Q ψ f )(·, t) := ψ(tA)f embeds the Hardy space H p A,ψ (H; X) isometrically into the tent space T p,2 (H; X). Of importance will also be another operator acting to the opposite direction. For ψ ∈ Ψ(S θ ), we define S e ψ f ∈ L 2 (R n ; H) ⊗ X by (7.1) S e ψ F := ∞ 0 ψ(sA)F (s, ·) ds s for those functions F ∈ L 1 loc (R + ; L 2 (R n ; H)) ⊗ X for which the integral exists as a limit in L 2 (H) of the finite integrals b a , where a → 0 and b → ∞. By Calderón's reproducing formula, for a given ψ ∈ Ψ(S θ ), there exists aψ ∈ Ψ(S θ ) such that the defining formula (7.1) makes sense for all F ∈ Q ψ (R(A) ⊗ X), and we have
(7.2) SψQ ψ f = f, f ∈ R(A) ⊗ X.
Hence, if f H p A,ψ (H;X) = 0 for some f ∈ R(A) ⊗ X, this means by definition that Q ψ f = 0, and the identity (7.2) yields immediately f = 0. Thus · H p A,ψ (H;X) = 0 is indeed a norm. Proposition 7.9. Let (ψ, ψ) ∈ Ψ(S θ )×Ψ(S θ ) be a pair with sufficient decay. If f ∈ T p,2 (H; X) is such that the defining formula (7.1) is valid, then S e ψ f ∈ H p A,ψ (H; X), and the mapping f → S e ψ f extends uniquely to a bounded operator from T p,2 (H; X) to H p A,ψ (H; X).
Proof. Write g := S_ψ̃ f. First we check that g ∈ R(A) ⊗ X: this is clear from the defining formula, since ψ̃(sA)f(·, s) ∈ R(A) for each s > 0 by Lemma 6.2, and Bochner integration in the Banach space L²(H) preserves the closed subspace R(A). By Proposition 7.5, (y, t) → ψ(tA) ∫_0^∞ ψ̃(sA)f(y, s) ds/s defines an element ψ(·A)g of T^{p,2}(H; X) and we have
‖S_ψ̃ f‖_{H^p_{A,ψ}(H;X)} = ‖ψ(·A)g‖_{T^{p,2}(H;X)} ≲ ‖f‖_{T^{p,2}(H;X)}.
The subspace of T^{p,2}(H; X) where the defining formula (7.1) is valid contains e.g. C_c(H) ⊗ X and is therefore dense in T^{p,2}(H; X). Hence the mapping S_ψ̃ has a unique extension to a bounded operator from T^{p,2}(H; X) to H^p_{A,ψ}(H; X).
Next we show that H p A,ψ (H; X) is independent of ψ ∈ Ψ(S θ ), provided (ψ, 0) has sufficient decay. A typical function with this property is
ψ(z) = ( √ z 2 ) n( 1 2 − 1 γ )+1 e − √ z 2 ,
where γ denotes the cotype of L^p(X). This gives the classical definition by the Poisson kernel when X = C and 1 < p ≤ 2, taking γ = 2.
Theorem 7.10. Let ψ, ψ̃ ∈ Ψ(S_θ) be two functions such that (ψ, 0) and (ψ̃, 0) have sufficient decay. Then:
(i) H^p_{A,ψ}(H; X) = H^p_{A,ψ̃}(H; X) =: H^p_A(H; X).
(ii) A has an H^∞-functional calculus on H^p_A(H; X).
Proof. Let φ ∈ C1 ⊕ Ψ(S_θ) be arbitrary and fixed. Let f ∈ R(A) ⊗ X. By Calderón's reproducing formula, there exists ψ̄ ∈ Ψ(S_θ) (with any prescribed decay) satisfying (6.1) together with ψ̃. Thus
‖φ(A)f‖_{H^p_{A,ψ}(H;X)} = ‖T Q_ψ̄ f‖_{T^{p,2}(H;X)},
where T is the operator on T^{p,2}(H; X) considered in Proposition 7.5, and
‖Q_ψ̄ f‖_{T^{p,2}(H;X)} = ‖f‖_{H^p_{A,ψ̄}(H;X)}.
Taking φ = 1, this gives (i). Taking φ ∈ Ψ(S_θ), we obtain (ii).
The following, by now quite simple result has some useful consequences: Proposition 7.11. If (0, ψ) has sufficient decay, then the bounded mapping S e ψ : T p,2 (H; X) → H p A (H; X) is surjective.
Proof. By Remark 7.7, we find a ψ̃ ∈ Ψ(S_θ) such that (7.2) is satisfied and (ψ, ψ̃) has sufficient decay. Now let f ∈ H^p_A(H; X) = H^p_{A,ψ}(H; X) be arbitrary and let lim_{n→∞} f_n = f in H^p_{A,ψ}(H; X) with f_n ∈ R(A) ⊗ X. The functions g_n := Q_ψ f_n belong to T^{p,2}(H; X) and ‖g_n − g_m‖_{T^{p,2}(H;X)} = ‖f_n − f_m‖_{H^p_{A,ψ}(H;X)} for all m, n. It follows that the sequence (g_n) is Cauchy in T^{p,2}(H; X) and therefore converges to some g ∈ T^{p,2}(H; X). From f_n = S_ψ̃ g_n and the continuity of S_ψ̃ it follows that f = S_ψ̃ g. As a further consequence we deduce an interpolation result for Hardy spaces from the following general principle (see Theorem 1.2.4 in [28]): Let X_0, X_1 and Y_0, Y_1 be two interpolation couples such that there exist operators S ∈ L(Y_i, X_i) and Q ∈ L(X_i, Y_i) with SQx = x for all x ∈ X_i and i = 0, 1. Then [X_0, X_1]_θ = S[Y_0, Y_1]_θ. Here we take (ψ, ψ̃) as in the Calderón reproducing formula with sufficient decay, S = S_ψ̃ and Q = Q_ψ. Corollary 7.13. Let H be a Hilbert space and X be a UMD space. For all 1 < p_0 < p_1 < ∞ and 0 < θ < 1 we have
[H p0
A (H; X), H p1 A (H; X)] θ = H p θ A (H; X),
1 p θ = 1 − θ p 0 + θ p 1 .
Hardy spaces associated with differential operators
The construction described in Section 7 is particularly relevant when dealing with differential operators A = D B in L 2 (C ⊕ C n ), where
D B = 0 −divB ∇ 0
with B a multiplication operator on L 2 (C n ) given by an (n × n)-matrix with L ∞ entries. Such operators have been considered in connection with the celebrated square root problem of Kato, which was originally solved in [2]. A new proof based on first order methods was devised in [4], where it was shown that D B bisectorial on L 2 (C ⊕ C n ) and satisfies off-diagonal estimates of any order.
In [16], the H ∞ -functional calculus of D B ⊗ I X in L p (X ⊕ X n ) is described in terms of R-boundedness of the resolvents. Although these resolvent conditions, and hence the functional calculus, may fail on L p (X ⊕ X n ) in general, it follows from Section 7 that these operators do have an H ∞ -functional calculus on H p DB (C ⊕ C n ; X), which in particular implies Kato type estimates in this space.
To express these estimates, observe first that R(D B ) = R(divB) ⊕ R(∇). Let us hence write a function f ∈ R(D B ) ⊗ X as (f 0 , f 1 ), where
f 0 ∈ R(divB) ⊗ X ⊆ L 2 (C) ⊗ X, f 1 ∈ R(∇) ⊗ X ⊆ L 2 (C n ) ⊗ X
extends to a bounded operator M̃ on γ(L²(A; H), X) of norm ‖M̃‖ ≲ γ(M). Let us also recall that for 1 ≤ p < ∞, the mapping f → [h → f(·)h] defines an isomorphism of Banach spaces
(2.3) L^p(A; γ(H, X)) ≃ γ(H, L^p(A; X)).
Remark 4.4. If X = C, then one can take τ = min(2, p) in Theorem 4.3. Except possibly for the logarithmic factor, (4.1) gives the correct order of growth of ‖f‖_{T^{p,2}_α} in terms of the angle α ≥ 1.
Theorem 4.8. Let X be a UMD space. Consider the singular integral operator defined by
Definition 5.1. Let M, t > 0 and H a Hilbert space. An operator T ∈ L(L²(R^n, H)) is said to have off-diagonal estimates of order M at the scale of t if there is a constant C such that
‖T f‖_{L²(E;H)} ≤ C (d(E, F)/t)^{−M} ‖f‖_{L²(F;H)}
‖u_0‖_{L^p(γ(L²(dy dt/t^{n+1}; H), X))} ≲ ‖f‖_{T^{p,2}_2(H;X)} ≲ ‖f‖_{T^{p,2}(H;X)}.
… (1_{C_k(x,s)} F_s)(y) ds/s, k = 0, 1, …, where C_0(x, s) := B(x, 2s), and C_k(x, s) := B(x, 2^{k+1}s) \ B(x, 2^k s) for k ≥ 1.
Corollary 7.12. Let (0, ψ̃) have sufficient decay. An equivalent description of the Hardy space is H^p_A(H; X) = H^p_{A,ψ̃}(H; X) := {S_ψ̃ F : F ∈ T^{p,2}(H; X)}, and an equivalent norm is given by ‖f‖_{H^p_{A,ψ̃}(H;X)} := inf{ ‖F‖_{T^{p,2}(H;X)} : f = S_ψ̃ F }.
Acknowledgments. This paper was started while Tuomas Hytönen and Jan vanand hence φ(t 2 D 2 B ), is diagonal with respect to the splitting f = (f 0 , f 1 ). In particular this shows that(C⊕C n ;X) .Hence also the full space H p DB (C⊕C n ; X) (constructed as the completion of R(D B )⊗ X with respect to the above-given norm) has the natural direct sum splitting into "X-valued" and "X n -valued" components. Let us denote these components by H p DB (C; X) and H p DB (C n ; X), so that (C⊕C n ;X) . Then we are ready to state: Theorem 8.1. Let X be a UMD space, 1 < p < ∞, and D B be as above. ThenProof. We know from[4]that (I + zD B ) −1 satisfies off-diagonal estimates of arbitrary order and that D B has an. By the boundedness of the H ∞ -calculus and the identity 1/φ(z) = φ(z),Observing thatand writing (8.1) for f = (f 0 , 0) givesLet then u ∈ D(∇) ⊗ X. By the solution of Kato's problem we have D(∇) = D( √ −divB∇). Substituting, we obtain the assertion. We used above the inclusion R(A 1/2 ) ⊆ R(A), which is true for all sectorial operators (see[10], Corollary 3.1.11).Let D be the unperturbed operator D I . Observe that D 2 (f, 0) = (∆f, 0) and then, whenever ψ is even, ψ(tD)(f, 0) = (ψ(t √ ∆)f, 0). The space H p D (C, X) is then the classical Hardy space.Proof. Let us denote by N the smallest integer greater than n 2 and, for functionsand p(w)is thus a Fourier multiplier with symbol m t (ξ) = (t|ξ|) N e −t|ξ| . This implies assumptions (1) and (4) in Theorem 4.8. Assumptions(2)and(3), with α = β = 1, follow from direct computations of the N -th derivative of t → t −n p( |x| t ) and the mean value theorem. Now, for f ∈ L p (X), letting for all f ∈ L p (X). Now let f ∈ L p (X) and g ∈ L p ′ (X * ), and denote by f, g their duality product. By Calderón's reproducing formula there exists ψ (with arbitrary decay) such that f, g = ∞ 0 ψ(t∆)f, ψ(t∆) * g dt t .Therefore, and hence f L p (X) f H p D (C,X) .
P. Auscher, X.T. Duong, and A. McIntosh. Boundedness of Banach space valued singular integral operators and Hardy spaces. Preprint, 2004.
P. Auscher, S. Hofmann, M. Lacey, A. McIntosh, and Ph. Tchamitchian. The solution of the Kato square root problem for second order elliptic operators on R^n. Ann. of Math. (2), 156(2):633-654, 2002.
P. Auscher, A. McIntosh, and E. Russ. Hardy spaces of differential forms and Riesz transforms on Riemannian manifolds. C. R. Math. Acad. Sci. Paris, Ser. I, 344(2):103-108, 2007. Expanded version at arXiv:math.DG/0611334v2.
A. Axelsson, S. Keith, and A. McIntosh. Quadratic estimates and functional calculi of perturbed Dirac operators. Invent. Math., 163(3):455-497, 2006.
Interpolation spaces. An introduction. J Bergh, J Löfström, Grundlehren der Mathematischen Wissenschaften. 223Springer-VerlagJ. Bergh and J. Löfström. Interpolation spaces. An introduction. Springer-Verlag, Berlin, 1976. Grundlehren der Mathematischen Wissenschaften, No. 223.
Vector-valued singular integrals and the H 1 -BMO duality. J Bourgain, Probability theory and harmonic analysis. Cleveland, Ohio; New YorkDekker98J. Bourgain. Vector-valued singular integrals and the H 1 -BMO duality. In Probability theory and harmonic analysis (Cleveland, Ohio, 1983), volume 98 of Monogr. Textbooks Pure Appl. Math., pages 1-19. Dekker, New York, 1986.
Schauder decompositions and multiplier theorems. P Clément, B Pagter, F A Sukochev, H Witvliet, Studia Math. 1382P. Clément, B. de Pagter, F.A. Sukochev, and H. Witvliet. Schauder decompositions and multiplier theorems. Studia Math., 138(2):135-163, 2000.
Some new function spaces and their applications to harmonic analysis. R R Coifman, Y Meyer, E M Stein, J. Funct. Anal. 622R.R. Coifman, Y. Meyer, and E.M. Stein. Some new function spaces and their applications to harmonic analysis. J. Funct. Anal., 62(2):304-335, 1985.
Duality of Hardy and BMO spaces associated with operators with heat kernel bounds. X T Duong, L Yan, J. Amer. Math. Soc. 184X.T. Duong and L. Yan. Duality of Hardy and BMO spaces associated with operators with heat kernel bounds. J. Amer. Math. Soc., 18(4):943-973, 2005.
The functional calculus for sectorial operators. M Haase, Advances and Applications. BaselBirkhäuser Verlag169M. Haase. The functional calculus for sectorial operators, volume 169 of Operator Theory: Advances and Applications. Birkhäuser Verlag, Basel, 2006.
A vector-valued approach to tent spaces. E Harboure, J L Torrea, B E Viviani, J. Analyse Math. 56E. Harboure, J.L. Torrea, and B.E. Viviani. A vector-valued approach to tent spaces. J. Analyse Math., 56:125-140, 1991.
Sums of independent Banach space valued random variables. J Hoffmann-Jørgensen, Studia Math. 52J. Hoffmann-Jørgensen. Sums of independent Banach space valued random variables. Studia Math., 52:159-186, 1974.
Hardy and BMO spaces associated to divergence form elliptic operators. S Hofmann, S Mayboroda, arXiv:math.AP/0611804v2PreprintS. Hofmann and S. Mayboroda. Hardy and BMO spaces associated to divergence form elliptic operators. Preprint, arXiv:math.AP/0611804v2.
Littlewood-Paley-Stein theory for semigroups in UMD spaces. T Hytönen, Rev. Matem. Iberoam. To appearT. Hytönen. Littlewood-Paley-Stein theory for semigroups in UMD spaces. Rev. Matem. Iberoam., 2007. To appear.
Square function estimates for families of Calderón-Zygmund operators. T Hytönen, C Kaiser, In preparationT. Hytönen and C. Kaiser. Square function estimates for families of Calderón-Zygmund op- erators. In preparation.
Kato's square root problem in Banach spaces. T Hytönen, A Intosh, P Portal, arXiv:math.FA/0703012v1PreprintT. Hytönen, A. M c Intosh, and P. Portal. Kato's square root problem in Banach spaces. Preprint, arXiv:math.FA/0703012v1.
A T (1) theorem for integral transforms with operator-valued kernel. T Hytönen, L Weis, J. Reine Angew. Math. 599T. Hytönen and L. Weis. A T (1) theorem for integral transforms with operator-valued kernel. J. Reine Angew. Math., 599:155-200, 2006.
Wavelet transform for functions with values in UMD spaces. C Kaiser, L Weis, SubmittedC. Kaiser and L. Weis. Wavelet transform for functions with values in UMD spaces. Submit- ted.
The H ∞ -functional calculus and square function estimates. N J Kalton, L Weis, In preparationN.J. Kalton and L. Weis. The H ∞ -functional calculus and square function estimates. In preparation.
Maximal Lp-regularity for parabolic equations, Fourier multiplier theorems and H ∞ -functional calculus. P C Kunstmann, L Weis, Functional analytic methods for evolution equations. BerlinSpringer1855P.C. Kunstmann and L. Weis. Maximal Lp-regularity for parabolic equations, Fourier mul- tiplier theorems and H ∞ -functional calculus. In Functional analytic methods for evolution equations, volume 1855 of Lecture Notes in Math., pages 65-311. Springer, Berlin, 2004.
Sums of independent Banach space valued random variables. S Kwapień, Studia Math. J. Hoffmann-Jørgensen52Studia Math.S. Kwapień. On Banach spaces containing c 0 . Studia Math., 52:187-188, 1974. A supplement to the paper by J. Hoffmann-Jørgensen: "Sums of independent Banach space valued random variables" (Studia Math. 52 (1974), 159-186).
On square functions associated to sectorial operators. C , Le Merdy, Bull. Soc. Math. France. 132C. Le Merdy. On square functions associated to sectorial operators. Bull. Soc. Math. France, 132:137-156, 2004.
BMO is the intersection of two translates of dyadic BMO. T Mei, C. R. Acad. Sci. Paris, Ser. I. 33612T. Mei. BMO is the intersection of two translates of dyadic BMO. C. R. Acad. Sci. Paris, Ser. I, 336(12):1003-1006, 2003.
Stochastic integration in UMD Banach spaces. J M A M Van Neerven, M C Veraar, L Weis, Ann. Prob. 35J.M.A.M. van Neerven, M.C. Veraar, and L. Weis. Stochastic integration in UMD Banach spaces. Ann. Prob., 35:1438-1478, 2007.
Stochastic integration of functions with values in a Banach space. J M A M Van Neerven, L Weis, Studia Math. 166J.M.A.M. van Neerven and L. Weis. Stochastic integration of functions with values in a Banach space. Studia Math., 166:131-170, 2005.
Holomorphic semigroups and the geometry of Banach spaces. G Pisier, Ann. of Math. 1152G. Pisier. Holomorphic semigroups and the geometry of Banach spaces. Ann. of Math. (2), 115(2):375-392, 1982.
Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals. E M Stein, Princeton Mathematical Series. 43Princeton University PressE.M. Stein. Harmonic analysis: real-variable methods, orthogonality, and oscillatory inte- grals, volume 43 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1993.
Interpolation theory, function spaces, differential operators. H Triebel, North HollandH. Triebel. Interpolation theory, function spaces, differential operators. North Holland, 1978.
| []
|
[
"BANACH SPACES OF GLT SEQUENCES AND FUNCTION SPACES",
"BANACH SPACES OF GLT SEQUENCES AND FUNCTION SPACES"
]
| [
"V B Kiran Kumar ",
"ANDRahul Rajan ",
"N S Sarath Kumar "
]
| []
| []
| The Generalized Locally Toeplitz (GLT) sequences of matrices have been originated from the study of certain partial differential equations. To be more precise, such matrix sequences arise when we numerically approximate some partial differential equations by discretization. The study of the asymptotic spectral behaviour of GLT sequence is very important in analysing the solution of corresponding partial differential equations. The approximating classes of sequences (a.c.s) and the spectral symbols are important notions in this connection. Recently, G. Barbarino obtained some additional results regarding the theoretical aspects of such notions. He obtained the completeness of the space of matrix sequences with respect to pseudo metric a.c.s. Also, he identified the space of GLT sequences with the space of measurable functions. In this article, we follow the same research line and obtain various results connecting the sub-algebras of matrix sequence spaces and sub-algebras of function spaces. In some cases, these are identifications as Banach spaces and some of them are Banach algebra identifications. In the process, we also prove that the convergence notions in the sense of eigenvalue/singular value clustering are equivalent to the convergence with respect to the metrics introduced here. These convergence notions are related to the study of preconditioners in the case of matrix/operator sequences. Finally, as an application of our main results, we establish a Korovkin-type result in the setting of GLT sequences. | 10.13001/ela.2022.6693 | [
"https://arxiv.org/pdf/2112.15054v1.pdf"
]
| 245,634,199 | 2112.15054 | 989aee0b4626a093294787599edd1914fbaed1f7 |
BANACH SPACES OF GLT SEQUENCES AND FUNCTION SPACES
V B Kiran Kumar
ANDRahul Rajan
N S Sarath Kumar
BANACH SPACES OF GLT SEQUENCES AND FUNCTION SPACES
arXiv:2112.15054v1 [math.FA] 30 Dec 2021
The Generalized Locally Toeplitz (GLT) sequences of matrices have been originated from the study of certain partial differential equations. To be more precise, such matrix sequences arise when we numerically approximate some partial differential equations by discretization. The study of the asymptotic spectral behaviour of GLT sequence is very important in analysing the solution of corresponding partial differential equations. The approximating classes of sequences (a.c.s) and the spectral symbols are important notions in this connection. Recently, G. Barbarino obtained some additional results regarding the theoretical aspects of such notions. He obtained the completeness of the space of matrix sequences with respect to pseudo metric a.c.s. Also, he identified the space of GLT sequences with the space of measurable functions. In this article, we follow the same research line and obtain various results connecting the sub-algebras of matrix sequence spaces and sub-algebras of function spaces. In some cases, these are identifications as Banach spaces and some of them are Banach algebra identifications. In the process, we also prove that the convergence notions in the sense of eigenvalue/singular value clustering are equivalent to the convergence with respect to the metrics introduced here. These convergence notions are related to the study of preconditioners in the case of matrix/operator sequences. Finally, as an application of our main results, we establish a Korovkin-type result in the setting of GLT sequences.
Introduction and Preliminaries
The correspondence between matrix sequences and measurable functions is very natural in many important examples such as the Toeplitz matrices. Here the spectral information of the operator/matrix sequence is stored in the corresponding symbol-function (recall the celebrated Szegö distribution theorem [8]). Also, such matrix sequences arise naturally in the study of partial differential equations with certain boundary conditions, using the finite difference approximation.
For example, if we consider the Schrödinger operator that maps f → −f ′′ + v.f , where v is a real-valued periodic potential function, then the corresponding finite difference approximation leads to a sequence of block Toeplitz matrices. If we consider more general PDE's like those arising from the diffusion problem (f → (−af ′ ) ′ + v.f ) or convection-diffusion-reaction (f → (−af ′ ) ′ + bf ′ + v.f ), we end up with Locally Toeplitz (LT) or Generalized Locally Toeplitz (GLT) sequences [7]. In most of the cases, we see that the sequence of discretization matrices {A n } n enjoys an asymptotic spectral distribution. This is somehow related to the spectrum of the differential operator associated with the considered PDE. The asymptotic singular value distribution is defined below. Definition 1.1. We say that {A n } n has an asymptotic singular value distribution with symbol f and write {A n } n ∼ σ f , if, for all F ∈ C_c(R)
lim n→∞ 1 n n i=1 F (σ i (A n )) = 1 2π D F (|f (x)|)dx.
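This definition can be tested directly on a banded Toeplitz matrix; the following script is only an illustrative sketch (the symbol f(θ) = 2 − 2cos θ and the test function F are our choices, and F = e^{−s} is admissible here in practice because the singular values are bounded).

```python
import numpy as np
from scipy.linalg import toeplitz

n = 400
col = np.zeros(n); col[0], col[1] = 2.0, -1.0
Tn = toeplitz(col)                              # T_n(f) for f(theta) = 2 - 2*cos(theta)
sv = np.linalg.svd(Tn, compute_uv=False)

F = lambda s: np.exp(-s)                        # illustrative test function
lhs = np.mean(F(sv))                            # (1/n) sum F(sigma_i(A_n))
theta = np.linspace(-np.pi, np.pi, 20001)
rhs = np.trapz(F(np.abs(2 - 2 * np.cos(theta))), theta) / (2 * np.pi)
print(lhs, rhs)                                 # the two numbers agree up to O(1/n)
```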
Because of this inherent connection between the matrix sequences and the corresponding symbol-functions, many researchers explored the possible generalizations of such results. They tried to obtain symbol-functions or function spaces corresponding to some class of matrix sequences to understand the spectral asymptotic. Recently, such studies have been initiated in the setting of GLT sequences by researchers like Albrecht Böttcher, Stefano Serra-Capizzano, G. Barbarino, C. Garoni, etc. [1,2,3,4,7]. An equivalence between GLT sequences and measurable functions was obtained in [1]. In this article, we follow the same research line and obtain various results connecting the subalgebras of the space of all matrix sequences and the sub-algebras of the space of measurable functions. In some cases, these are identifications as Banach spaces and some of them are Banach algebra identifications. These spaces of matrix sequences are defined using various pseudo-metric functions introduced in this article. These notions are motivated from the pseudo-metric induced by the notion of approximating class of sequences (a.c.s) used in [7]. As in [7], we also obtain characterizations of convergence notions in the sense of eigenvalue/singular value clustering here (these notions originated from the preconditioning problems in numerical linear algebra). Now we list down some definitions which are useful throughout this article. Definition 1.2. Let {A n } n be a matrix-sequence and {{B n,m } n } m a sequence of matrix-sequences. We say that {{B n,m } n } m is an approximating class of sequences (a.c.s) for {A n } n if the following condition is met: for every m there exists an n m such that, for n ≥ n m ,
A_n = B_{n,m} + R_{n,m} + N_{n,m},  rank(R_{n,m}) ≤ c(m)n,  ‖N_{n,m}‖ ≤ ω(m),
where ‖·‖ is the spectral norm, and n_m, c(m) and ω(m) depend only on m and lim_{m→∞} c(m) = lim_{m→∞} ω(m) = 0. The notion of approximating classes of sequences is a powerful tool in the numerical linear algebra literature. Using this, we can replace a complicated matrix sequence {A n } by some simpler matrix sequences {{B n,m } n } m . The asymptotic distribution of singular values/eigenvalues of {{B n,m } n } m can be used to compute the asymptotic distribution of singular values/eigenvalues of {A n } n . Definition 1.3. Let a : [0, 1] → C be a Riemann-integrable function and f ∈ L 1 ([−π, π]). We say that a matrix sequence {A n } n is a Locally Toeplitz (LT) sequence with symbol a ⊗ f , and we write
{A_n}_n ∼_LT a ⊗ f, if {{LT_n^m(a, f)}_n}_{m∈N} is an a.c.s for {A_n}_n, where
LT_n^m(a, f) = [D_m(a) ⊗ T_{⌊n/m⌋}(f)] ⊕ O_{n (mod m)} = diag_{i=1,...,m} [a(i/m) T_{⌊n/m⌋}(f)] ⊕ O_{n (mod m)}.
Here D_n(a) is the n × n diagonal matrix associated with a, given by
D_n(a) = diag_{i=1,...,n} a(i/n).
We say that {A_n}_n is a Generalized Locally Toeplitz (GLT) sequence with symbol κ, and we write {A_n}_n ∼_GLT κ, if for every m ∈ N there exist LT sequences {A_n^{(i,m)}}_n ∼_LT a_{i,m} ⊗ f_{i,m}, i = 1, ..., k_m, such that:
• Σ_{i=1}^{k_m} a_{i,m} ⊗ f_{i,m} → κ in measure over [0, 1] × [−π, π] when m → ∞;
• {{Σ_{i=1}^{k_m} A_n^{(i,m)}}_n}_m is an a.c.s for {A_n}_n.
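The building block LT_n^m(a, f) is easy to assemble explicitly. The short script below is an illustrative sketch only (the helper functions and the numerical computation of the Fourier coefficients of f are our choices, not part of the paper).

```python
import numpy as np
from scipy.linalg import toeplitz

def T(f, k):
    # k x k Toeplitz matrix [f_{i-j}] from numerically approximated Fourier coefficients of f
    theta = np.linspace(-np.pi, np.pi, 4097)
    c = np.array([np.trapz(f(theta) * np.exp(-1j * j * theta), theta) / (2 * np.pi)
                  for j in range(k)])
    return toeplitz(c, c.conj())

def LT(n, m, a, f):
    # LT_n^m(a, f) = [D_m(a) (x) T_{floor(n/m)}(f)] (+) O_{n mod m}
    k = n // m
    Dm = np.diag([a(i / m) for i in range(1, m + 1)])
    out = np.zeros((n, n), dtype=complex)
    out[:m * k, :m * k] = np.kron(Dm, T(f, k))
    return out

A = LT(50, 4, a=lambda x: x ** 2, f=lambda th: 2 - 2 * np.cos(th))
```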
In this article, the results presented are for D = [0, 1] × [−π, π]. All these results carry over directly to the multilevel case,
D = [0, 1] d × [−π, π] d .
The major outcome of this article is the identification of certain subspaces G̃_p of GLT sequences (we will introduce these spaces later) with subspaces of measurable functions. We state the result below.
Theorem 1.4. Let D = [0, 1] × [−π, π]. The Banach spaces G̃_p and L^p(D), 1 ≤ p ≤ ∞, are isometrically isomorphic.
In particular, G̃_∞ and L^∞(D) are isomorphic as C*-algebras. (The proof will be given in Section 3.) The completeness of Ẽ, the set of all equivalence classes of matrix sequences of increasing order, with respect to the a.c.s metric was proved in [1]. Also it is known that the class of GLT sequences with respect to the a.c.s metric forms a complete *-algebra [7]. This space is isometrically isomorphic to the class of measurable functions on D [1].
In this article, we introduce seminorms q w and q w p in connection with these convergence notions and obtain the Banach spacesà w andà w p with respect to the norms induced by these seminorms . In particular we prove thatà w is a C * -algebra. The spaceG p is the collection of GLT sequences inà w p , for 1 ≤ p ≤ ∞. It turns out thatG p are Banach spaces with respect to the norms induced by these seminorms. Our main result stated above says thatG p are isometrically isomorphic to L p (D).
The article is organized as follows. In the next section, we introduce the seminorms q w , q w p and obtain its relation with Type 2 weak cluster convergence. Also, we identify the Banach spacesà w andà w p of matrix sequences. In the third section, we prove our main results; obtain the equivalence between these Banach spaces with subspaces of measurable functions. In the fourth section, as an application of our main results, we obtain a Korovkintype approximation theorem for GLT sequences analogous to the result for Toepliz sequences. The article ends with a concluding section, mentioning some further possibilities.
Banach Spaces of Matrix Sequences
Motivated from the notion of a.c.s, we introduced certain seminorms on the space of all sequences of matrices of increasing size. Let E = {{A n } n : A n is a matrix of finite order}. For A n ∈ M n (C), let
P (A n ) = inf rank(R n ) n + N n : R n + N n = A n , R n , N n ∈ M n (C) ,
where infimum is taken over all decomposition of A n = R n + N n . Let {A n } n ∈ E, we can define
p({A n } n ) = lim sup n→∞ P (A n ). For {A n } n , {B n } n ∈ E, define d acs ({A n } n , {B n } n ) = p({A n − B n } n ).
It was proved in [1,6] that d acs is a pseudo metric on E which turns E into a complete pseudometric space (E, d acs ) and the convergence of {{B n,m } n } m to {A n } n is called a.c.s. convergence, denoted by {{B n,m } n } m a.c.s.
− −− → {A n } n as m → ∞. Let L = {{A n } n ∈ E : p({A n } n ) = 0}.
Then the quotient spaceẼ = E/L will be a metric space with respect to the metricd acs :Ẽ ×Ẽ → R defined bỹ
d acs ({A n } n + L, {B n } n + L) = d acs ({A n } n , {B n } n )
The following Theorem in [6] gives an equivalent definition for P (A).
Theorem 2.1. (Theorem 5 of [6]) For any matrix A n ∈ M n (C),
P (A n ) = min i=1,2,...,n i n + σ i+1 (A n ) ,
where σ_i(A_n) is the i-th singular value of A_n arranged in non-increasing order and we assume by convention that σ_{n+1}(A_n) = 0.
Remark 2.2. d_acs in Ẽ is not induced from any norm. For if {A_n}_n = {I_n}_n, the sequence of identity matrices, and {B_n}_n = {O_n}_n, the sequence of zero matrices, then d_acs({A_n}_n, {B_n}_n) = d_acs({I_n}_n, {O_n}_n) = 1, while d_acs({2A_n}_n, {2B_n}_n) = d_acs({2I_n}_n, {O_n}_n) = 1. Therefore d_acs({2A_n}_n, {2B_n}_n) ≠ 2 d_acs({A_n}_n, {B_n}_n).
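The formula of Theorem 2.1 is convenient to evaluate numerically; the snippet below is an illustrative sketch (not part of the paper) that also reproduces the computation in Remark 2.2.

```python
import numpy as np

def P(A):
    # P(A_n) = min_i ( i/n + sigma_{i+1}(A_n) ), with the convention sigma_{n+1} = 0 (Theorem 2.1)
    n = A.shape[0]
    sv = np.sort(np.linalg.svd(A, compute_uv=False))[::-1]
    sv = np.append(sv, 0.0)
    return np.min(np.arange(1, n + 1) / n + sv[1:])

n = 500
I = np.eye(n)
print(P(I), P(2 * I))                 # both equal 1, as in Remark 2.2
R = np.zeros((n, n)); R[0, 0] = 10.0  # rank-one term
N = 1e-2 * np.eye(n)                  # small-norm term
print(P(R + N))                       # about 1/n + 1e-2, so {R_n + N_n}_n is a.c.s.-close to {O_n}_n
```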
Definition 2.3. Let {A n } n be a matrix sequence and the functions q w , q w p : E → R defined as
q w ({A n } n ) = inf lim sup n→∞ N n : R n + N n = A n , lim n→∞ rankR n n = 0 , q w p ({A n } n ) = inf lim sup n→∞ N n sp n 1/p : R n + N n = A n , lim n→∞ rankR n n = 0 , 1 ≤ p < ∞.
Here the infimum is taken over all such decompositions of A n and N n sp denotes schatten p−norm.
The subspaces A w and A w p of E are defined as follows:
A w = {{A n } n ∈ E : q w ({A n } n ) < ∞} , A w p = {{A n } n ∈ E : q w p ({A n } n ) < ∞} .
Now we recall the notion of strong cluster convergence, weak cluster convergence and uniform cluster convergence used in [10,13]. Definition 2.4. Let {A n } n and {B n } n be two sequences of matrices of increasing size. We say that {A n − B n } n converges to constant sequence {O n } n (sequence of zero-matrices) in Type 2 weak cluster sense if for any ǫ > 0, there exists integers n 1,ǫ , n 2,ǫ , such that for n > n 2,ǫ , A n − B n = R n + N n with rank R n ≤ n 1,ǫ and N n < ǫ. Also n 1,ǫ depends on both n and ǫ and is of o(n). The convergence is in the Type 2 uniform cluster sense if n 1,ǫ is independent of ǫ and in the Type 2 strong cluster sense if n 1,ǫ depends only on ǫ.
Remark 2.5. ( [10]) {A n −B n } n converges to {O n } n in Type 2 weak cluster sense if and only if for any ǫ > 0, there exist integers n 1,ǫ , n 2,ǫ such that for all n 2,ǫ , except at most possibly n 1,ǫ (dependent of size n and is of o(n)) singular values, all singular values {A n − B n } n lie in the interval [0, ǫ). The Type 2 convergence is equivalent to the singular value clustering. There is a notion of Type 1 convergence that is equivalent to eigenvalue clustering. Both originated from the study of preconditioners in numerical linear algebra problems (see [13] for eg.).
The following lemma is a consequence of the results in [15] which provides a criterion to establish the convergence notions defined above.
Lemma 2.6. Let {A n } n and {B n } n be two sequences of n × n matrices of growing order.
If A n − B n 2 F = o(n) ,
then we have the convergence in the Type 2 weak cluster sense. If
A n − B n 2 F = O(1)
, then the convergence is in the Type 2 strong cluster sense. In [7] Carlo Garoni and Stefano Serra-Capizzano proved that a.c.s convergence and Type 2 weak convergence are equivalent. We state the result below.
Theorem 2.7 (Theorem 4.1 of [7]). Let {A_n}_n and {B_n}_n be two sequences of matrices of increasing size. Then {A_n − B_n}_n converges to {O_n}_n in the Type 2 weak cluster sense if and only if d_acs({A_n}_n, {B_n}_n) = 0.
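The mechanism behind Lemma 2.6 is a Chebyshev-type count of the singular values (a short verification, not spelled out in the source): for every ǫ > 0,
#{ i : σ_i(A_n − B_n) ≥ ǫ } ≤ ǫ^{-2} Σ_i σ_i(A_n − B_n)² = ǫ^{-2} ‖A_n − B_n‖²_F,
which is o(n) (respectively O(1)) under the stated hypotheses, so by Remark 2.5 the convergence is in the Type 2 weak (respectively strong) cluster sense.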
Theorem 2.8. Let {A n } n be a matrix sequence and σ i (A n ) be the i th singular value of the matrix A n arranged in non-increasing order. Then,
q w ({A n } n ) = inf α ∈ [0, ∞) : lim n→∞ #(σ(A n ) ≥ α) n = 0 .
Proof. Let {R n } n and {N n } n be any matrix sequence such that {R n } n + {N n } n = {A n } n and lim n→∞ rank Rn n
= 0. Let σ 1 (A n ) ≥ σ 2 (A n ) ≥ . . . ≥ σ n (A n ) be the singular values of A n arranged in non increasing order. We know that σ i (A n ) ≤ σ i (R n ) + N n . Setting r 1 = rank R n , then σ i (A n ) ≤ N n , for all i > r 1 . Let r 2 be the smallest integer such that σ i (A n ) ≤ N n for all i > r 2 .
Then r 1 ≥ r 2 , and σ r 2 (A n ) > N n ≥ σ r 2 +1 (A n ). Let A n = U n Σ n V * n be a Singular Value Decomposition (SVD) of A n and set,
R n = U n diag(σ 1 (A n ), . . . , σ r 2 (A n ), 0, . . . , 0)V * ñ N n = U n diag(0, . . . , 0, σ r 2 +1 (A n ), . . . , σ n (A n ))V * n .
Then, A n =R n +Ñ n , let lim sup n→∞ Ñ n = α.
rank(R n ) = r 1 ≥ r 2 = rank(R n ), N n ≥ σ r 2 +1 (A n ) = Ñ n .
Then, lim
n→∞ #(σ(A n ) > α) n = lim n→∞ r 2 n ≤ lim n→∞ r 1 n = 0 and lim sup n→∞ N n ≥ lim sup n→∞ Ñ n = α. Therefore, q w ({A n } n ) ≥ inf α ∈ [0, ∞) : lim n→∞ #(σ(An)≥α) n = 0 . To prove the other inequality, let A n = U n Σ n V * n be a SVD of A n . Let α ∈ [0, ∞) such that lim n→∞ #(σ(An)≥α) n = 0. Let R n = U nΣn V * n , N n = U nΣn V * n ,
whereΣ n is the diagonal matrix obtained from Σ n by setting 0 to all the singular values of A n that are less than or equal to α, andΣ n = Σ −Σ n . Hence lim n→∞ rank(Σn) n = 0. Then, A n = R n + N n , rank(R n ) = #{σ(A n ) ≥ α} and lim sup n→∞ N n ≤ α. By taking infimum over all such α, we get
q w ({A n } n ) ≤ inf α ∈ [0, ∞) : lim sup n→∞ N n ≤ α ≤ inf α ∈ [0, ∞) : lim n→∞ #(σ(A n ) ≥ α) n = 0
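The characterization of Theorem 2.8 can be observed numerically: the fraction of singular values above a level α stabilizes to a positive value below the symbol's supremum and vanishes above it. The script is an illustrative sketch only (the choice of Toeplitz test matrices is ours).

```python
import numpy as np
from scipy.linalg import toeplitz

def frac_ge(A, alpha):
    # fraction of singular values of A that are >= alpha
    sv = np.linalg.svd(A, compute_uv=False)
    return np.mean(sv >= alpha)

for n in [200, 400, 800]:
    col = np.zeros(n); col[0], col[1] = 2.0, -1.0
    Tn = toeplitz(col)          # T_n(f) with f(theta) = 2 - 2*cos(theta), sup|f| = 4
    print(n, frac_ge(Tn, 3.9), frac_ge(Tn, 4.0))
# the fraction stays bounded away from 0 for alpha < 4 and is 0 for alpha >= 4,
# suggesting q^w({T_n(f)}_n) = 4
```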
Using the characterization of q^w and Remark 2.5, we obtain the following corollaries.
Corollary 2.9. {A_n − B_n}_n converges to {O_n}_n in the Type 2 weak cluster sense if and only if q^w({A_n − B_n}_n) = 0.
Corollary 2.10. If {A_n − B_n}_n converges to {O_n}_n in the Type 2 weak cluster sense, then q^w_p({A_n − B_n}_n) = 0, 1 ≤ p < ∞.
Proof. The result follows from ‖N_n‖_{S_p}/n^{1/p} ≤ ‖N_n‖ and Corollary 2.9.
Let L^w = {{A_n}_n ∈ A^w : q^w({A_n}_n) = 0} and L^w_p = {{A_n}_n ∈ A^w_p : q^w_p({A_n}_n) = 0}. Then Ã^w = A^w/L^w and Ã^w_p = A^w_p/L^w_p
are the quotient spaces of A w and A w p respectively. Now we prove thatà w andà w p are Banach spaces.
Theorem 2.11.à w andà w p are Banach spaces with respect to the norms induced by q w and q w p respectively. In particular,à w forms a C * -algebra andà w 2 is a Hilbert space.
Proof. Here we prove only the case ofà w and the other case is similar. First we fix some
notations. Let q_A = q^w({A_n}_n), q_B = q^w({B_n}_n), q_{A+B} = q^w({A_n + B_n}_n) and q_{AB} = q^w({A_n B_n}_n). For each m, choose decompositions A_n = R^A_{n,m} + N^A_{n,m} and B_n = R^B_{n,m} + N^B_{n,m} with lim_{n→∞} rank(R^A_{n,m})/n = lim_{n→∞} rank(R^B_{n,m})/n = 0, lim sup_{n→∞} ‖N^A_{n,m}‖ ≤ q_A + 1/m and lim sup_{n→∞} ‖N^B_{n,m}‖ ≤ q_B + 1/m. Then A_n + B_n = (R^A_{n,m} + R^B_{n,m}) + (N^A_{n,m} + N^B_{n,m}), where the rank of the first summand is still o(n), so q_{A+B} ≤ q_A + q_B + 2/m. Thus, q_{A+B} ≤ q_A + q_B. That q^w({αA_n}_n) = |α| q_A for α ∈ C is straightforward.
Hence q^w is a seminorm on A^w. Then the function q̃^w : Ã^w → R defined as
q w ({A n } n + L w ) = q w ({A n } n ),
becomes a norm onà w . For the Banach algebra inequality, we consider
{A n B n } n = {R A n,m R B n,m + R A n,m N B n,m + N A n,m R B n,m } n + {N A n,m N B n,m } n . Here lim n→∞ 1 n rank(R A n,m R B n,m + R A n,m N B n,m + N A n,m R B n,m ) = 0. Then, q w ({A n B n } n ) ≤ lim sup n→∞ N A n,m N B n,m ≤ lim sup n→∞ N A n,m lim sup n→∞ N B n,m ≤ (q w ({A n } n ) + 1 m )(q w ({B n } n ) + 1 m ).
Thus, q^w({A_n B_n}_n) ≤ q^w({A_n}_n) q^w({B_n}_n) (note that this submultiplicativity is not available for Ã^w_p). Finally, we prove the completeness of Ã^w. Let {{B_{n,m}}_n + L^w}_m be a Cauchy sequence in Ã^w. It suffices to show the convergence of a subsequence. We can extract a subsequence, again denoted {{B_{n,m}}_n + L^w}_m, such that q^w({B_{n,m+1} − B_{n,m}}_n) ≤ 2^{−m}, m = 1, 2, 3, .... Then q^w({B_{n,m+i} − B_{n,m}}_n) ≤ Σ_{j=m}^{m+i−1} 2^{−j} ≤ 2^{1−m}, so for each m and i we may write B_{n,m+i} − B_{n,m} = R^m_{n,i} + N^m_{n,i} with lim_{n→∞} rank(R^m_{n,i})/n = 0 and ‖N^m_{n,i}‖ ≤ 2^{1−m} for all sufficiently large n. We can find a strictly increasing sequence of positive integers {n_{i,m}}_i such that for all n ≥ n_{i,m}, rank(R^m_{n,i})/n < 1/i and ‖N^m_{n,i}‖ ≤ 2^{1−m}. Also we choose {{n_{i,m}}_m}_i such that
(2.1) n_{i,m+1} > n_{i+1,m}.
This inequality helps us to obtain the required estimate. Since n i,m+1 > n i+1,m > n i,m , for a fixed i, {n i,m } m is an increasing sequence. Now consider {n 2,m } m and construct a matrix sequence {A n } n in a such a way that A n = B n,j+1 , whenever n 2,j−1 ≤ n < n 2,j . Consider A n − B n,m ; for n 2,m+i−1 ≤ n < n 2,m+i ,
A n − B n,m = B n,m+i+1 − B n,m = R m n,i+1 + N n,i+1 , where N m n,i+1 ≤ 2 (1−m) and rank(R m n,i+1 ) n < 1 i + 1
, for all n ≥ n i+1,m .
Here by inequality 2.1, n ≥ n 2,m+i−1 > n i+1,m , then
q w ({A n } n − {B n,m } n ) < 2 (1−m) . Hence lim m→∞q w ({A n } n − {B n,m } n + L w ) = lim m→∞ q w ({A n } n − {B n,m } n ) = 0.
Thus, Ã^w and Ã^w_p are Banach spaces. Ã^w forms a C*-algebra with the usual complex conjugate transpose of the matrix as the involution, that is, {A_n}*_n = {A_n*}_n.
The convergence of a sequence {{B_{n,m}}_n}_m to {A_n}_n in the topology induced by q^w and q^w_p is denoted by {{B_{n,m}}_n}_m −q^w→ {A_n}_n and {{B_{n,m}}_n}_m −q^w_p→ {A_n}_n respectively. Now we recall a lemma from [7] that is useful to obtain the relation between a.c.s. convergence and the convergence with respect to q^w and q^w_p.
Lemma 2.12. Let {A_n}_n be a matrix sequence and {{B_{n,m}}_n}_m a sequence of matrix sequences. The following are equivalent:
(1) {{B_{n,m}}_n}_m is an a.c.s. for {A_n}_n;
(2) p({A_n − B_{n,m}}_n) → 0 as m → ∞.
The next theorem gives the comparison of these convergence notions.
Theorem 2.13. q w convergence =⇒ q w p convergence =⇒ a.c.s. convergence.
Proof. The proof is immediate from the definition and Lemma 2.12.
Remark 2.14. The reverse implications are false, as the following example shows. Also, q^w_q convergence implies q^w_p convergence whenever 1 ≤ p < q < ∞, since ‖N‖_{S_p}/n^{1/p} ≤ ‖N‖_{S_q}/n^{1/q}. Now we give an example of a sequence of matrix sequences for which the converse of Theorem 2.13 is false. For each m, let B_{n,m} be the diagonal matrix whose first ⌊n/m⌋ diagonal entries are 1 and whose remaining entries are 0. By Theorem 2.1, P(B_{n,m}) ≤ ⌊n/m⌋/n ≤ 1/m, so {{B_{n,m}}_n}_m is an a.c.s. for {O_n}_n, and q^w_p({B_{n,m}}_n) = (1/m)^{1/p} → 0 as m → ∞, so there is convergence in q^w_p as well. On the other hand, by Theorem 2.8, q^w({B_{n,m}}_n) = 1 for every m. Hence neither a.c.s. convergence nor q^w_p convergence implies q^w convergence.
Main Results: GLT sequences and L p spaces
In this section, we prove our main result, Theorem 1.4. Recall the definition of GLT sequences (see definition 1.3). Let G ∞ and G p be the spaces defined as follows.
G ∞ = {{A n } n ∈ A w : {A n } n ∼ GLT f }, G p = {{A n } n ∈ A w p : {A n } n ∼ GLT f }. Let Z = {{A n } n ∈ G ∞ : q w ({A n } n ) = 0}, Z p = {{A n } n ∈ G p : q w p ({A n } n ) = 0},
andG ∞ = G ∞ /Z,G p = G p /Z p be the quotient spaces of G ∞ and G p respectively. Following is an example of a matrix sequence that belongs toG 1 but not toG 2 .
Example 3.1. Let a : [0, 1] → R be given by a(x) = 1/√x if 0 < x ≤ 1 and a(0) = 0,
let g : [−π, π] → C be the constant function 1, and let {A_n}_n be the matrix sequence given by
A_n = diag( √(n/1), √(n/2), √(n/3), ..., √(n/n) ) = diag_{i=1,...,n} a(i/n).
The matrix sequence {A n } n belongs toG 1 but notG 2 and its symbol is the function f :
[0, 1] × [−π, π] → R defined by f = a ⊗ g.
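The growth rates behind this example can be checked directly; the script below is an illustrative sketch only, not part of the paper.

```python
import numpy as np

for n in [10**2, 10**3, 10**4, 10**5]:
    i = np.arange(1, n + 1)
    sv = np.sqrt(n / i)                      # singular values of A_n
    s1 = sv.sum() / n                        # ||A_n||_{S_1} / n       : stays bounded (about 2)
    s2 = np.sqrt((sv ** 2).sum() / n)        # ||A_n||_{S_2} / n^{1/2} : grows like sqrt(log n)
    print(n, round(s1, 3), round(s2, 3))
```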
We recall some algebraic properties of the space of GLT sequences from [7];
(1) Suppose that {A n } n ∼ GLT f and {B n } n ∼ GLT g. Then, (a) {A * n } n ∼ GLTf , (b) {αA n + βB n } n ∼ GLT αf + βg, for all α, β ∈ C,
(c) {A_n B_n}_n ∼_GLT f g,
(d) if {A_n}_n ∼_GLT h then f = h a.e.
(2) For every measurable function f defined on D := [0, 1] × [−π, π], there exists a matrix sequence {A_n}_n such that {A_n}_n ∼_GLT f.
Define a function φ_p : G̃_p → L^p(D), 1 ≤ p ≤ ∞, such that whenever {A_n}_n ∼_GLT f,
φ_p({A_n}_n) = f if p = ∞, and φ_p({A_n}_n) = (2π)^{−1/p} f if p ≠ ∞.
φ p is well defined by the property (d) of (1). Now we are in a position to prove our main result Theorem 1.4. In fact we prove that φ p is an isometric isomorphism betweenG p and L p (D), 1 ≤ p ≤ ∞. This is a consequence of the above listed properties and the following couple of lemmas.
Lemma 3.2. Let {A n } n ∈ A w and {A n } n ∼ σ f , then q w ({A n } n ) = f ∞ . Proof. Suppose f ∞ = ess sup x∈D |f (x)| = l.
Then by the definition of essential supremum, for any ǫ > 0,
µ{x : |f (x)| ≥ l + ǫ} = 0,
where µ is the Lebesgue measure. Also it is given that {A n } n ∼ σ f , then
lim n→∞ 1 n n i=1 F (σ i (A n )) = 1 2π D F (|f (x)|)dx
for every F : R → C continuous function with compact support and singular values σ i (A n ) arranged in non increasing order; σ 1 (A n ) ≥ σ 2 (A n ) ≥ · · · ≥ σ n (A n ). Consider a real valued continuous function F with compact support, such that
χ [ǫ,l+2ǫ] ≥ F (x) ≥ χ [0,l+ǫ] , then lim n→∞ 1 n n i=1 F (σ i (A n )) ≤ lim inf n→∞ 1 n #(σ i (A n ) ≤ l + 2ǫ) and D F (|f (x)|)dx ≥ µ{x : |f (x)| ≤ l + ǫ}.
Then
lim inf_{n→∞} (1/n) #(σ_i(A_n) ≤ l + 2ǫ) ≥ (1/2π) µ{x : |f(x)| ≤ l + ǫ},
that is,
1 − lim sup_{n→∞} (1/n) #(σ_i(A_n) > l + 2ǫ) ≥ 1 − (1/2π) µ{x : |f(x)| > l + ǫ} = 1,
so that lim_{n→∞} (1/n) #(σ_i(A_n) > l + 2ǫ) = 0. By Theorem 2.8, q^w({A_n}_n) ≤ l + 2ǫ, and since ǫ > 0 is arbitrary, q^w({A_n}_n) ≤ ‖f‖_∞.
For the reverse inequality, let q^w({A_n}_n) = k, so that by Theorem 2.8, lim_{n→∞} #(σ(A_n) > k + ǫ)/n = 0 for every ǫ > 0. Given that {A_n}_n ∼_σ f,
lim_{n→∞} (1/n) Σ_{i=1}^n F(σ_i(A_n)) = (1/2π) ∫_D F(|f(x)|) dx
for every F : R → C continuous function with compact support. Consider such a function
F : R → C such that χ [−ǫ,k+2ǫ] ≥ F ≥ χ [0,k+ǫ] . lim inf n→∞ #(σ(A n ) ≤ k + ǫ) n ≤ lim n→∞ 1 n n i=1 F (σ i (A n )), D F (|f (x)|)dx ≤ µ{x : |f (x)| ≤ k + 2ǫ}, lim inf n→∞ 1 n #(σ i (A n ) ≤ k + ǫ) ≤ 1 2π µ{x : |f (x)| ≤ k + 2ǫ}, 1 2π µ{x : |f (x)| > k + 2ǫ} ≤ lim sup n→∞ 1 n #(σ i (A n ) > k + ǫ) = 0.
Thus, f ∞ ≤ k + 2ǫ and f ∞ ≤ q w ({A n } n ).
Corollary 3.3. Let {A n } n ∼ σ f . Then {A n } n ∈ A w if and only if f ∈ L ∞ (D). Lemma 3.4. Let {A n } n ∈ A w p and {A n } n ∼ σ f . Then q w p ({A n } n ) = 1 (2π) (1/p) f p , 1 ≤ p < ∞.
Proof. The proof is similar to Lemma 3.2.
Proof of Theorem 1.4: φ_p is an injective *-homomorphism, as follows readily from properties (1)(b) and (1)(d) of GLT sequences. The surjectivity follows from the definition of G̃_p and property (2) of GLT. Hence it is a *-isomorphism. From Lemma 3.2 and Lemma 3.4, it follows that φ_p is an isometry. In particular, φ_∞ is a C*-isomorphism. Theorem 1.4 yields a natural isometry between the spaces G̃_p and L^p(D), for 1 ≤ p ≤ ∞, which is analogous to the isometry identified in [1] between the space of GLT sequences and the space of measurable functions. Notice that in [1] the author derived a metric space isometry; here we achieve a Banach space isometric isomorphism.
Korovkin-Type Theorem
P. P Korovkin proved a classical approximation theorem in 1953, which unified several approximation process. Korovkin-type theorems in the setting of Toeplitz operators acting on Hardy spaces and Fock spaces were obtained in [10,12]. Type 2 strong/weak cluster sense convergence was considered there. Here we obtain an analogous result for GLT sequences.
Consider M_n(C) with the Frobenius norm induced from the inner product ⟨A, B⟩ = trace(B*A). Let {U_n}_n be a sequence of unitary matrices such that each U_n is of order n. For each n, define the subalgebra M_{U_n} of M_n(C) as M_{U_n} = {A ∈ M_n(C) : U_n* A U_n is diagonal}. M_{U_n} is a closed subspace of M_n(C). We denote the orthogonal projection of M_n(C) onto M_{U_n} by P_{U_n}(·). It is known that ‖P_{U_n}‖ = 1 when we consider M_n(C) as a Banach space under the usual operator norm (see [12] for details). For A ∈ M_n(C), P_{U_n}(A) is called a preconditioner for A.
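In coordinates, P_{U_n} has a simple closed form: keep only the diagonal of U_n* A U_n. The following sketch is illustrative only (the function names are ours) and checks the Frobenius-optimality numerically.

```python
import numpy as np

def P_U(A, U):
    # Frobenius-orthogonal projection of A onto M_U = {U D U* : D diagonal}
    D = np.diag(np.diag(U.conj().T @ A @ U))
    return U @ D @ U.conj().T

rng = np.random.default_rng(0)
n = 8
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = P_U(A, U)
C = U @ np.diag(rng.standard_normal(n)) @ U.conj().T       # an arbitrary element of M_U
print(np.linalg.norm(A - B, 'fro') <= np.linalg.norm(A - C, 'fro'))   # True: B is the nearest element
```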
Preconditioners play a crucial role in solving linear systems by iterative techniques. They help to increase the convergence rate of iteration. For instance, consider the linear system with Toeplitz structure,
T n (f )x = b n . For a fixed f , we can consider a sequence of Toeplitz matrices {T n (f )} n . If we can find a sequence of matrices {C n (f )} n such that {C n (f ) − T n (f )} n converges to {O n } n in Type 2 strong/weak cluster sense, {C n (f )} n can be considered as an efficient preconditioner [11]. In this case, the eigenvalues of C n (f ) −1 T n (f ) will be clustered at 1. This will help to improve the stability of the corresponding linear system. In [5] R. H Chan and M. C yeung proved that when U n = F n , the Fourier matrix of order n and f is a continuous function, then {P Un (T n (f )) − T n (f )} n converges to {O n } n in Type 2 strong cluster sense (corresponding preconditioners are known as circulant preconditioner). Depending on the choice of U n , we can obtain other efficient preconditioners such as Hartley [9], Tau [14] etc. for Toeplitz matrices.
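For U_n = F_n this projection is often called the optimal circulant preconditioner, and the clustering described above is easy to observe numerically. The script is an illustrative sketch only (the symbol and the threshold are our choices).

```python
import numpy as np
from scipy.linalg import toeplitz, dft

n = 256
col = np.zeros(n); col[0], col[1] = 2.0, -1.0
T = toeplitz(col)                              # T_n(f), f(theta) = 2 - 2*cos(theta)
F = dft(n, scale='sqrtn')                      # unitary Fourier matrix of order n
C = F @ np.diag(np.diag(F.conj().T @ T @ F)) @ F.conj().T   # P_{F_n}(T_n(f))
sv = np.linalg.svd(C - T, compute_uv=False)
print(np.sum(sv > 1e-1))                       # only a handful of singular values exceed 0.1
```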
Since linear systems involving GLT sequences appear at various situations, finding efficient preconditioners for GLT sequences is also an important problem. Following is an example of an efficient preconditioner for GLT sequences.
Consider a LT sequence {A n } n ∼ LT a ⊗ f , where a is a Riemann integrable function on [0, 1] and f is a continuous function on [−π, π]. We give an example of preconditioner for this LT sequence and also obtain a preconditioner for a GLT sequence. Let
U n = F ⌊ n m ⌋ 0 0 · · · 0 0 F ⌊ n m ⌋ 0 · · · 0 0 0 F ⌊ n m ⌋ · · · 0 . . . . . . . . . . . . . . . 0 0 0 · · · F n(mod m) , where F n = 1 √ n e 2πijl n n−1 j,l=0
is the Fourier matrix of order n. Consider
LT m n (a, f ) = D m (a) ⊗ T ⌊ n m ⌋ (f ) ⊕ O n(mod m)
. We can construct a matrix sequence {à n } n which is an a.c.s limit for the sequence {{LT m n (a, f )} n } m such thatà n = LT m n (a, f ) for some m and n ≥ m 2 . Now consider
P Un (Ã n ) −Ã n = a( 1 m )P F k (T k (f )) 0 0 · · · 0 0 a( 2 m )P F k (T k (f )) 0 · · · 0 . . . . . . . . . · · · . . . 0 0 · · · a(1)P F k T k (f )) 0 0 0 · · · 0 O n(mod m) − a( 1 m )T k (f ) 0 0 · · · 0 0 a( 2 m )T k (f ) 0 · · · 0 . . . . . . . . . · · · . . . 0 0 · · · a(1)T k (f ) 0 0 0 · · · 0 O n(mod m) , where k = ⌊ n m ⌋. Since {P F k (T k (f )) −
T_k(f)}_n converges to {O_n}_n in the Type 2 strong cluster sense, we can show that {P_{U_n}(Ã_n) − Ã_n}_n converges to {O_n}_n in the Type 2 weak cluster sense. Since q^w({A_n − Ã_n}_n) = 0, {P_{U_n}(Ã_n) − A_n}_n converges to {O_n}_n in the Type 2 weak cluster sense. Since the convergence of {P_{U_n}(T_n(f)) − T_n(f)}_n to {O_n}_n in the Type 2 strong/weak cluster sense leads to efficient preconditioners, it is important to know when this convergence holds. The Korovkin-type theorems obtained in [12] reduce this task to a finite subset of the class of symbols. Here we obtain a similar result in the setting of GLT sequences. First, we prove a couple of lemmas.
Lemma 4.2. Let {A_n}_n be a sequence of matrices such that ‖A_n‖ ≤ M < ∞ for each n. If q^w({A_n}_n) = 0, then ‖A_n‖²_F = o(n).
Proof. Since q^w({A_n}_n) = 0, we also have lim_{n→∞} P(A_n) = 0, so for ǫ > 0 there exists a positive integer n_ǫ such that for all n ≥ n_ǫ, P(A_n) < ǫ. Hence we have for n ≥ n_ǫ,
min i=1,2,...,n i n + σ i+1 (A n ) < ǫ.
Then there exist a j, such that j n + σ j+1 (A n ) < ǫ. Now
(1/n) Σ_{i=1}^j σ_i² < M²ǫ and Σ_{i=j+1}^n σ_i² < ǫ²(n − j). Then ‖A_n‖²_F = Σ_{i=1}^n σ_i²(A_n) < nM²ǫ + (n − j)ǫ², and ‖A_n‖²_F / n < M²ǫ + ǫ² for all n ≥ n_ǫ.
Hence A n 2 F = o(n). Lemma 4.3. Let {A n } n be a sequence of matrices such that for each n, A n ≤ M < ∞. If q w ({A n } n ) = 0, then q w ({P Un (A n )} n ) = 0.
Proof. Given q w ({A n } n ) = 0. Then by Lemma 4.2, A n 2 F = o(n). Since P Un F = 1,
P Un (A n ) 2 F ≤ A n 2 F = o(n).
Thus q w ({P Un (A n )} n ) = 0. Now we present the Korovkin-type theorem in the setting of GLT sequences. Here we obtain preconditioners for the norm bounded GLT sequences. For arbitrary GLT sequences, see the corollary 4.5.
Theorem 4.4. Let {f 1 , f 2 ,. . . , f k } ⊆ L ∞ (D) and {A n (f i )} n ∼ GLT f i is norm bounded for each f i . Suppose that {P Un (A n (g)) − A n (g)} converges to {O n } in Type 2 weak cluster sense for g ∈ {f 1 , f 2 ,. . . , f k , k i=1 f i f * i } .
Then for every f in the C * -algebra generated by
{f 1 , f 2 , . . . , f k }, with {A n (f )} n ∼ GLT f is norm bounded, {P Un (A n (f )) − A n (f )} converges to {O n } in Type 2 weak cluster sense.
Proof. There is a standard procedure to obtain convergence of {P Un (A n (f )) − A n (f )} n with f in the *-algebra generated by {f 1 , f 2 ,. . . , f k }. We can follow the same procedure here (see the proof Theorem 3.4 in [10]). From the *-algebra to reach C * -algebra (that is the closure in the C * -algebra norm), we proceed as follows.
Let g belongs to C * -algebra generated by {f 1 , f 2 , · · · , f k } and {g m } be a sequence which converges to g. Then by Theorem 1.4 there exist norm bounded GLT sequences {A n (g m )}n and {A n (g)} n corresponding to each g m and g respectively, such that {{A n (g m )} n } m converges to {A n (g)} n inG ∞ .
Therefore, for ǫ > 0, there exists a positive integer t such that q w ({A n (g t )−A n (g)} n ) < ǫ/2. Then there exist two norm bounded sequences {R n } n,t and {N n } n,t such that A n (g t ) − A n (g) = R n,t + N n,t , lim n→∞ rank(R n,t ) n = 0, N n,t < ǫ/2.
Now consider
q w ({P Un (A n (g)) − A n (g)} n ) = q w ({P Un (A n (g)) − P Un (A n (g t )) + P Un (A n (g t )) − A n (g t ) + A n (g t ) − A n (g)} n ) ≤ q w ({P Un (A n (g) − A n (g t ))} n ) + q w ({P Un (A n (g t )) − A n (g t )} n ) + q w ({A n (g t ) − A n (g)} n ).
The second term on the right hand side is zero and q w ({A n (g t ) − A n (g)} n ) < ǫ/2. Now consider the first term on the right hand side:
q w ({P Un (A n (g) − A n (g t ))} n ) = q w ({P Un (R n,t + N n,t )} n ) ≤ q w ({P Un (R n,t )} n ) + q w ({P Un (N n,t )} n ).
Since q w ({R n,t } n ) = 0, by Lemma 4.3, q w ({P Un (R n,t )} n ) = 0. Also we know that ∥P Un ∥ = 1, so ∥P Un (N n,t )∥ ≤ ∥N n,t ∥ and hence q w ({P Un (N n,t )} n ) < ǫ/2. Thus q w ({P Un (A n (g)) − A n (g)} n ) < ǫ. Hence {P Un (A n (g)) − A n (g)} n converges to {O n } n in Type 2 weak cluster sense.
Corollary 4.5. Under the conditions of Theorem 4.4, if $s_m := \sum_{i=1}^{k_m} g_{i,m}$ converges to g in measure, where each g i,m belongs to the C * -algebra generated by {f 1 , f 2 , . . . , f k }, then we can extract a preconditioner sequence {P Un (A n (h i ))} n for {A n (g)} n .
Proof. We have that $s_m$ belongs to C * {f 1 , f 2 , . . . , f k }. Hence {P Un (A n (s m )) − A n (s m )} n converges to {O n } n in Type 2 weak cluster sense if A n (s m ) is of bounded norm and A n (s m ) ∼ GLT s m . Since $s_m$ converges to g in measure, A n (s m ) and {P Un (A n (s m ))} are a.c.s. for A n (g). We can construct a sequence of the form {P Un (A n (h j ))} n which is an a.c.s. limit for {P Un (A n (s m ))} n . Hence the result follows.
Remark 4.6. Theorem 4.4 need not hold if {A n (f )} is not norm bounded. Let f ∈ L ∞ (D) and consider a GLT sequence {A n (f )} n such that ∥A n (f )∥ ≤ M < ∞ for all n. Suppose q w ({P Un (A n (f )) − A n (f )} n ) = 0, and let {B n } n be another sequence such that q w ({B n − A n (f )} n ) = 0. But if B n is unbounded, then q w ({P Un (B n ) − B n } n ) need not be zero. For instance, consider B n = A n (f ) + Z n , where U * n Z n U n = (a ij ) n i,j=1 with a ij = 1 for all 1 ≤ i, j ≤ n. Clearly q w ({P Un (B n ) − B n } n ) ≠ 0, but q w ({P Un (A n (f )) − B n } n ) = 0, so we can treat P Un (A n (f )) as a preconditioner for {B n } n . Also note that the function g in Corollary 4.5 need not be essentially bounded.
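To make the counterexample in Remark 4.6 more explicit, here is a short verification sketch of our own (not part of the original argument), under the assumption that $P_{U_n}(A) = U_n \operatorname{diag}(U_n^* A U_n) U_n^*$ is the Frobenius-optimal approximation from the algebra of matrices diagonalized by $U_n$. Writing $J_n$ for the all-ones matrix, so that $Z_n = U_n J_n U_n^*$,
$$P_{U_n}(Z_n) - Z_n = U_n (I_n - J_n) U_n^*, \qquad \sigma_1(I_n - J_n) = n - 1, \quad \sigma_2 = \cdots = \sigma_n = 1.$$
For any splitting $I_n - J_n = R_n + N_n$ with $\operatorname{rank}(R_n) = o(n)$ we have $\|N_n\| \ge \sigma_{\operatorname{rank}(R_n)+1}(I_n - J_n) = 1$, so $q_w(\{P_{U_n}(B_n) - B_n\}_n) \ge 1$, whereas $B_n - A_n(f) = Z_n$ is a rank-one perturbation and hence $q_w(\{P_{U_n}(A_n(f)) - B_n\}_n) = 0$.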
Concluding Remarks
As we know, the theory of Toeplitz matrix sequences has a rich operator theoretic analogue on the Hardy space via the symbol function, with variations on the Bergman space, Fock space, etc. We expect such versions in the case of GLT sequences as well, and the development must proceed through the identification of the corresponding symbols. The major achievement of this article is that we are able to identify the connection between the space of symbols and the subspaces of GLT sequences. We hope that these identifications will be useful in establishing the operator theoretic analogue of the spectral distributional results for such matrix sequences. The Korovkin-type result we obtained in this article makes use of a topology on the space of GLT sequences. The connection between the topologies on B(H) and the topologies introduced in this article would be another interesting point to pursue. Obtaining the convergence in eigenvalue clustering as convergence with respect to some topology on B(H) is the main goal of our future research.
A n = B n,m + R n,m + N n,m , rank(R n,m ) ≤ c(m)n, ∥N n,m ∥ ≤ ω(m), where ∥·∥ is the spectral norm, and n m , c(m) and ω(m) depend only on m with lim m→∞ c(m) = lim m→∞ ω(m) = 0.
Remark 2.2. d acs in Ẽ is not induced from any norm. For if {A n } n = {I n } n , the sequence of identity matrices, and {B n } n = {O n } n is the sequence of zero matrices, then d acs ({A n } n , {B n } n ) = d acs ({I n } n , {O n } n ) = 1 and
d acs ({2A n } n , {2B n } n ) = d acs ({2I n } n , {O n } n ) = 1.
Theorem 2.7. (Theorem 4.1 of [7]) Let {A n } n and {B n } n be two sequences of matrices of increasing size. Then {A n − B n } n converges to {O n } n in type 2 weak cluster sense if and only if d acs ({A n } n , {B n } n ) = 0.
Corollary 2.9. Let {A n } n and {B n } n be two sequences of matrices of increasing size. Then {A n − B n } n converges to {O n } n in type 2 weak cluster sense if and only if q w ({A n − B n } n ) = 0.
Corollary 2.10. Let {A n } n and {B n } n be two sequences of matrices of increasing size. Then {A n − B n } n converges to {O n } n in type 2 weak cluster sense if and only if q w p ({A n − B n } n ) = 0.
({B n,m+i − B n,m } n + L w ) = q w ({B n,m+i − B n,m } n ) ≤ 2^(1−m) ; i = 1, 2, 3, . . . Also we can construct two matrix sequences {R m n,i } n and {N m n,i } n such that B n,m+i − B n,m = R m n,i + N m n,i ,
Lemma 2.12. (Theorem 4.1 [7]) Let {A n } n be a matrix sequence and let {{B n,m } n } m be a sequence of matrix sequences. Then the following conditions are equivalent:
Example 2.15. Let B n,m be the diagonal matrix with its first ⌊n/m⌋ diagonal entries 1 and the others 0, and let p({B n,m } n ) = inf lim sup n→∞ { rank R n + ∥N∥ : R + N = B n,m }. Then by Lemma 2.12, {{B n,m } n } m a.c.s.−→ {O n } n . But q w ({B n,m } n ) is 1 for all m. Hence {{B n,m } n } m does not converge to {O n } n in Ã w .
Example 3.1. Let a : [0, 1] → R be a function defined by
σ i (A n ) > l + 2ǫ}) ≤ (1/2π) µ{x : |f (x)| > l + ǫ} = 0. By Theorem 2.8, q w ({A n } n ) ≤ l + 2ǫ. Thus, q w ({A n } n ) ≤ ∥f∥ ∞ . To prove the other inequality, suppose q w ({A n }) = k. By Theorem 2.8, for ǫ >
Remark 4.1. Note that {A n } n in the above example need not be norm bounded. If {A n } n is norm bounded, then {P Un (A n ) − A n } n converges to {O n } n in Type 2 weak cluster sense (see Lemma 4.3). Now consider a GLT sequence {B n } n with symbol κ, such that $\sum_{i=1}^{k_m} a_{i,m} \otimes f_{i,m}$ converges to κ in essential supremum norm (or in measure), where each a i,m is a Riemann integrable function on [0, 1] and f i,m is a continuous function on [−π, π]. Then {{$\sum_{i=1}^{k_m} P_{U_n}(D_n(a_{i,m})T_n(f_{i,m}))$} n } m converges to {B n } n in G ∞ (or is an a.c.s. for {B n } n ). Then, as in the proof of Theorem 2.11, we can construct a sequence whose nth term is $P_{U_n}(\sum_{i=1}^{k_m} D_n(a_{i,m})T_n(f_{i,m}))$ for some m (depending on n), and whose q w -distance to {B n } n is 0. Thus this sequence is a good preconditioner for {B n } n .
Lemma 4.2. Suppose {A n } n is a sequence of matrices of growing order such that ∥A n ∥ ≤ M < ∞, for some M > 0. Then q w ({A n } n ) = 0 if and only if $\|A_n\|_F^2 = o(n)$.
Proof. If $\|A_n\|_F^2 = o(n)$, then by Lemma 2.6, {A n } n converges to {O n } n in Type 2 weak cluster sense, so q w ({A n } n ) = 0. Conversely, assume that q w ({A n } n ) = 0. Then {A n } n converges to {O n } n in Type 2 weak cluster sense. Using Theorem 2.7, we get lim n→∞ P (A n ) = 0. Also, by Theorem 2.1, we have
$$P(A_n) = \min_{i=1,2,\ldots,n} \left\{ \frac{i}{n} + \sigma_{i+1}(A_n) \right\}.$$
Let κ : [0, 1] × [−π, π] → C be a measurable function. We say that a matrix sequence {A n } n is a Generalized Locally Toeplitz (GLT) sequence with symbol κ, and we write {A n } n ∼ GLT κ, if the following condition is met. For every m varying in some infinite subset of N there exists a finite number of LT sequences {A (i,m) n } n ...
). From the definition of $q_w$, for every $m \in \mathbb{N}$, there exist four matrix sequences $\{R^A_{n,m}\}$, $\{N^A_{n,m}\}$, $\{R^B_{n,m}\}$, $\{N^B_{n,m}\}$ such that $\{R^A_{n,m}\} + \{N^A_{n,m}\} = \{A_n\}_n$, $\{R^B_{n,m}\} + \{N^B_{n,m}\} = \{B_n\}_n$, and
$$\limsup_{n\to\infty} \|N^A_{n,m}\| \le q_A + \frac{1}{m}, \qquad \limsup_{n\to\infty} \|N^B_{n,m}\| \le q_B + \frac{1}{m}.$$
Also,
$$\lim_{n\to\infty} \frac{\operatorname{rank}(R^A_{n,m})}{n} = \lim_{n\to\infty} \frac{\operatorname{rank}(R^B_{n,m})}{n} = 0.$$
Now we verify the axioms of a seminorm; non-negativity and $q_w(\{O_n\}_n) = 0$ are trivial. For the triangle inequality,
$$q_{A+B} \le \limsup_{n\to\infty} \|N^A_{n,m} + N^B_{n,m}\| \le \limsup_{n\to\infty} \|N^A_{n,m}\| + \limsup_{n\to\infty} \|N^B_{n,m}\|$$
Equivalence between GLT sequences and measurable functions. G Barbarino, Linear Algebra Appl. 529G. Barbarino, Equivalence between GLT sequences and measurable functions, Linear Algebra Appl, 529:397-412, 2017.
From convergence in measure to convergence of matrix-sequences through concave functions and singular values. G Barbarino, C Garoni, Electron. J. Linear Algebra. 32G. Barbarino and C. Garoni, From convergence in measure to convergence of matrix-sequences through concave functions and singular values, Electron. J. Linear Algebra , 32:500-513, 2017.
Block generalized locally Toeplitz sequences: Theory and applications in the unidimensional case. G Barbarino, C Garoni, S Serra-Capizzano, Electron. Trans. Numer. Anal. 53G. Barbarino, C. Garoni, S. Serra-Capizzano, et al, Block generalized locally Toeplitz sequences: Theory and applications in the unidimensional case, Electron. Trans. Numer. Anal., 53:28-112, 2020.
Exploration of Toeplitz-like matrices with unbounded symbols is not a purely academic journey. A Böttcher, C Garoni, S Serra-Capizzano, Sb. Math. 208111602A. Böttcher, C. Garoni, and S. Serra-Capizzano, Exploration of Toeplitz-like matrices with unbounded symbols is not a purely academic journey, Sb. Math., 208(11):1602, 2017.
Circulant preconditioners for Toeplitz matrices with positive continuous generating functions. R H Chan, M C Yeung, Math. Comp. 58R. H. Chan and M. C. Yeung, Circulant preconditioners for Toeplitz matrices with positive continuous generating functions, Math. Comp., 58(197):233-240, 1992.
Topological foundations of an asymptotic approximation theory for sequences of matrices with increasing size. C Garoni, Linear Algebra Appl. 513C. Garoni, Topological foundations of an asymptotic approximation theory for sequences of matrices with increasing size, Linear Algebra Appl, 513:324-341, 2017.
Generalized Locally Toeplitz sequences: Theory and applications. C Garoni, S Serra-Capizzano, Springer1C. Garoni and S. Serra-Capizzano, Generalized Locally Toeplitz sequences: Theory and applications, volume 1. Springer, 2017.
Toeplitz Forms and Their Applications 2nd edn. U Grenander, G Szegö, New York: ChelseaU. Grenander and G. Szegö, Toeplitz Forms and Their Applications 2nd edn (New York: Chelsea). 1984.
Hartley preconditioners for Toeplitz systems generated by positive continuous functions. X.-Q Jin, BIT. 343X.-Q. Jin, Hartley preconditioners for Toeplitz systems generated by positive continuous functions, BIT, 34(3):367-371, 1994.
A Korovkin-type theory for non self-adjoint Toeplitz operators. V B Kumar, M N N Namboodiri, R Rajan, Linear Algebra Appl. 543V. B. Kiran Kumar, M.N.N. Namboodiri, and R. Rajan, A Korovkin-type theory for non self-adjoint Toeplitz operators, Linear Algebra Appl, 543:140-161, 2018.
A short survey on preconditioners and Korovkin-type theorems. V B Kumar, M N N Namboodiri, R Rajan, J. Anal. 292V. B. Kiran Kumar, M.N.N. Namboodiri, and R. Rajan, A short survey on preconditioners and Korovkin-type theorems, J. Anal., 29(2):425-447, 2021.
Preconditioners and Korovkin-type theorems for infinite-dimensional bounded linear operators via completely positive maps. V B Kumar, M N N Namboodiri, S Serra-Capizzano, Studia Math. 2182V. B. Kiran Kumar, M.N.N. Namboodiri, and S. Serra-Capizzano, Preconditioners and Korovkin-type theorems for infinite-dimensional bounded linear operators via completely positive maps, Studia Math, 218(2):95-118, 2013.
A Korovkin-type theory for finite Toeplitz operators via matrix algebras. S Serra-Capizzano, Numer. Math. 821S. Serra-Capizzano, A Korovkin-type theory for finite Toeplitz operators via matrix algebras, Numer. Math., 82(1):117-142, 1999.
Superlinear PCG methods for symmetric Toeplitz systems. S Serra-Capizzano, Math. Comp. 68226S. Serra-Capizzano, Superlinear PCG methods for symmetric Toeplitz systems, Math. Comp., 68(226):793-803, 1999.
A unifying approach to some old and new theorems on distribution and clustering. E E Tyrtyshnikov, Linear Algebra Appl. 232E. E. Tyrtyshnikov, A unifying approach to some old and new theorems on distribution and clustering, Linear Algebra Appl, 232:1-43, 1996.
Email address: [email protected]
| []
|
[
"DISCOVERY OF GAMMA-RAY ORBITAL MODULATION IN THE BLACK WIDOW PSR J1311−3430",
"DISCOVERY OF GAMMA-RAY ORBITAL MODULATION IN THE BLACK WIDOW PSR J1311−3430"
]
| [
"Yi Xing \nKey Laboratory for Research in Galaxies and Cosmology\nShanghai Astronomical Observatory\nChinese Academy of Sciences\n80 Nandan Road200030ShanghaiChina\n",
"Zhongxiang Wang \nKey Laboratory for Research in Galaxies and Cosmology\nShanghai Astronomical Observatory\nChinese Academy of Sciences\n80 Nandan Road200030ShanghaiChina\n"
]
| [
"Key Laboratory for Research in Galaxies and Cosmology\nShanghai Astronomical Observatory\nChinese Academy of Sciences\n80 Nandan Road200030ShanghaiChina",
"Key Laboratory for Research in Galaxies and Cosmology\nShanghai Astronomical Observatory\nChinese Academy of Sciences\n80 Nandan Road200030ShanghaiChina"
]
| []
| We report our discovery of orbitally modulated γ-ray emission from the black widow system PSR J1311−3430. We analyze the Fermi Large Area Telescope data during the offpulse phase interval of the pulsar, and find the orbital modulation signal at a ∼3σ confidence level. Further spectral analysis shows no significant differences for the spectra obtained during the bright and faint orbital phase ranges. A simple sinusoid-like function can describe the modulation. Given these properties, we suggest that the intrabinary γ-ray emission arises from the region close to the companion and the modulation is caused by the occultation of the emitting region by the companion, similar to that is seen in the transitional millisecond pulsar binary (MSP) PSR J1023+0038. Considering the X-ray detection of intrabinary shock emission from eclipsing MSP binaries recently reported, this discovery further suggests the general existence of intrabinary γ-ray emission from them. | 10.1088/2041-8205/804/2/l33 | [
"https://arxiv.org/pdf/1502.04783v2.pdf"
]
| 119,182,645 | 1502.04783 | 38ea3bcadf8602c9aef8aa5813f9962afa0082db |
DISCOVERY OF GAMMA-RAY ORBITAL MODULATION IN THE BLACK WIDOW PSR J1311−3430
15 Apr 2015 Draft version April 16, 2015
Yi Xing
Key Laboratory for Research in Galaxies and Cosmology
Shanghai Astronomical Observatory
Chinese Academy of Sciences
80 Nandan Road200030ShanghaiChina
Zhongxiang Wang
Key Laboratory for Research in Galaxies and Cosmology
Shanghai Astronomical Observatory
Chinese Academy of Sciences
80 Nandan Road200030ShanghaiChina
DISCOVERY OF GAMMA-RAY ORBITAL MODULATION IN THE BLACK WIDOW PSR J1311−3430
15 Apr 2015. Draft version April 16, 2015. arXiv:1502.04783v2 [astro-ph.HE]. Preprint typeset using LaTeX style emulateapj v. 5/2/11. Subject headings: binaries: close - pulsars: individual (PSR J1311−3430) - gamma rays: stars
We report our discovery of orbitally modulated γ-ray emission from the black widow system PSR J1311−3430. We analyze the Fermi Large Area Telescope data during the offpulse phase interval of the pulsar, and find the orbital modulation signal at a ∼3σ confidence level. Further spectral analysis shows no significant differences for the spectra obtained during the bright and faint orbital phase ranges. A simple sinusoid-like function can describe the modulation. Given these properties, we suggest that the intrabinary γ-ray emission arises from the region close to the companion and the modulation is caused by the occultation of the emitting region by the companion, similar to that is seen in the transitional millisecond pulsar binary (MSP) PSR J1023+0038. Considering the X-ray detection of intrabinary shock emission from eclipsing MSP binaries recently reported, this discovery further suggests the general existence of intrabinary γ-ray emission from them.
INTRODUCTION
Millisecond pulsars (MSPs) are widely accepted to be old neutron stars that were spun up through mass accretion from the companions when they were at the low-mass X-ray binary phase (Alpar et al. 1982;Radhakrishnan & Srinivasan 1982).
Not surprisingly, >60% of known MSPs are in binaries (e.g., Manchester et al. 2005). A sub-class of them, so-called 'black widow' pulsar systems (Fruchter et al. 1988), have very low-mass, ∼0.02 M ⊙ companions. To form isolated MSPs, one possible channel is through ablation of the companions by the pulsar wind. This possibility likely occurs in the black widows since they are eclipsing systems at radio frequencies, indicating the interaction between the pulsar wind and the companions. X-ray observations of them revealed orbital flux variations, also supporting the presence of the intrabinary interaction (Gentile et al. 2014). In addition, recent extensive studies of the so-called 'redback' systems (Roberts 2013) have provided clear evidence for the interaction. These redbacks are also eclipsing MSP binaries, but contain relatively massive, ∼0.1-0.6 M ⊙ companions. X-ray observations of the prototypical redback PSR J1023+0038 detected significant orbital flux variations (Archibald et al. 2010; Bogdanov et al. 2011), and the variations can be explained by the existence of an intrabinary shock region (Bogdanov et al. 2011). Similar features were also clearly seen in the redback XSS J12270−4859 (Bogdanov et al. 2014 and references therein).
Owing to its all-sky monitoring and high sensitivity capabilities, the Fermi Gamma-ray Space Telescope, launched in 2008, has greatly improved our studies of pulsars. For MSPs, the number of known black widows and redbacks has grown more than six-fold with the help of Fermi (e.g., Roberts 2013). In Fermi's 100 MeV to 300 GeV energy range, marginal evidence for the intrabinary interaction in the eclipsing systems has also been seen. For the first discovered black widow PSR B1957+20 (Fruchter et al. 1988), an orbital modulation signal was detected at a ∼2.3σ confidence level (Wu et al. 2012). In addition, possible signals were also reported for XSS J12270−4859 (Xing & Wang 2014) and a candidate redback 2FGL J0523.3−2530 (Xing et al. 2014). Theoretical studies have long predicted the intrabinary interaction and related high-energy emission from black widows (e.g., Arons & Tavani 1993). Studies of the γ-ray emission from the intrabinary region allow us to explore the detailed physical processes within such a binary (e.g., Roberts et al. 2014). In this paper we report the detection of orbitally modulated γ-ray emission from a recently discovered black widow, PSR J1311−3430, which thus indicates an intrabinary origin for part of its emission.
PSR J1311−3430 was initially listed as an unassociated source in the Fermi Large Area Telescope (LAT) source catalog (2FGL J1311.7−3429; Nolan et al. 2012). It is the only γ-ray selected MSP with γ-ray pulsed emission discovered via a direct blind search in the Fermi data. The pulsed radio emission was soon detected too (Ray et al. 2013), but the signal was visible only during <10% of the observation time, suggesting strong variations in the intrabinary medium. Before the discovery, the source was found to have orbital modulation with a short period of ≃94 minutes through optical imaging and spectroscopy (Romani 2012; Romani et al. 2012). Considering its properties of weak X-ray emission, sinusoid-like optical modulation, and large modulation amplitude, it was already suggested to be a black widow system, with the optical modulation caused by irradiation of the companion by the pulsar wind (Romani 2012). Marginally modulated X-ray emission possibly related to the intrabinary shock has also been detected (Romani 2012; Kataoka et al. 2012). The γ-ray discovery of the 2.56 ms spin signal thus confirmed its black widow nature. Analyzing the Fermi data, we searched for and found an orbital modulation signal in the source's offpulse emission. Below we present the data analysis and results in Section 2, and discuss the results in Section 3.
DATA ANALYSIS AND RESULTS
Fermi LAT data
We selected 0.1-300 GeV LAT events from the Fermi Pass 7 Reprocessed (P7REP) database inside a 20° × 20° region centered at the position of PSR J1311−3430, during the time period from 2008-08-04 15:43:36 to 2015-01-06 21:19:57 (UTC). Only events with zenith angle less than 100 deg and within good time intervals were kept. The former cut prevents contamination from the Earth's limb, and the latter ensures that the data quality was not affected by spacecraft events.
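As a rough illustration of these selection cuts (this is our own sketch, not the actual Fermi Science Tools commands used; the array names and GTI format are hypothetical), the filtering can be written as:

import numpy as np

def select_events(times, zenith_angles, gtis, zmax=100.0):
    # Keep photons below the zenith-angle cut to suppress Earth-limb contamination.
    keep = zenith_angles < zmax
    # Keep only photons falling inside the good time intervals (start/stop pairs).
    in_gti = np.zeros(times.shape, dtype=bool)
    for start, stop in gtis:
        in_gti |= (times >= start) & (times <= stop)
    return times[keep & in_gti]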
Timing Analysis
We performed timing analysis of the 0.1-300 GeV LAT data of the PSR J1311−3430 region to update the γ-ray ephemeris given in Pletsch et al. (2012). An aperture radius of 1.°0 was used. We determined the pulse times of arrival (TOAs) by obtaining the pulse profiles of 40 evenly divided segments using the known ephemeris, and cross-correlating them with a template profile created with data during the time period of MJD 54682-56119 (the same time range as that in Pletsch et al. 2012), following the algorithm described in Taylor (1992). We used TEMPO2 (Hobbs et al. 2006; Edwards et al. 2006) to fit the TOAs. Only the pulse frequency f and frequency derivative ḟ were fitted, and the other timing parameters were fixed to their known values. We obtained f = 390.56839326403(7) Hz and ḟ = −3.193(1) × 10⁻¹⁵ s⁻², consistent with the values given in Pletsch et al. (2012) within ∼0.5σ and ∼2.2σ uncertainties, respectively. The folded pulse profile and two-dimensional phaseogram are shown in Figure 1. We defined phase 0.16-0.66 and 0.66-1.16 as the onpulse and offpulse phase intervals, respectively.
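The folding itself can be illustrated with a short sketch of our own (the timing solution is applied to barycentre-corrected photon times; all variable names are placeholders):

import numpy as np

def pulse_phases(t, t0, f, fdot):
    # Rotational phase from a frequency/frequency-derivative timing solution,
    # phi(t) = f (t - t0) + 0.5 fdot (t - t0)^2, reduced to the interval [0, 1).
    dt = t - t0
    return np.mod(f * dt + 0.5 * fdot * dt**2, 1.0)

def folded_profile(phases, nbins=32):
    # Histogram of phases, e.g. the 32-bin pulse profile shown in Figure 1.
    counts, edges = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return counts, edges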
Maximum Likelihood Analysis
We selected LAT events in the 0.1-300 GeV energy range for the likelihood analysis, and included all sources of the Fermi third source catalog (The Fermi-LAT Collaboration 2015) within 20 deg of the position of PSR J1311−3430 to make the source model. The spectral function forms of the sources are provided in the catalog. The spectral parameters of the sources within 5 deg from PSR J1311−3430 were set free, and all other parameters of the sources were fixed at their catalog values. The γ-ray counterpart of PSR J1311−3430 was modeled with an exponentially cutoff power law, characteristic of pulsars (Abdo et al. 2013), and with a simple power law for comparison. In addition, we used the spectrum model gll_iem_v05_rev1.fits and the spectrum file iso_source_v05.txt to account for the Galactic and extragalactic diffuse emission, respectively.
Using the LAT science tools software package v9r33p0, we performed standard binned likelihood analysis of the LAT data. The γ-ray emission during the total pulse phase interval was detected with a Test Statistic (TS) value of 6279, while that during the onpulse and offpulse phase intervals was detected with TS values of 7723 and 499, respectively. The TS value at a specific position is calculated from TS = −2 log(L_0/L_1), where L_0 and L_1 are the maximum likelihood values for a model without and with an additional source respectively, and is approximately the square of the detection significance for the additional source (Abdo et al. 2010). We found that during the total pulse phase, onpulse phase, and offpulse phase intervals, the emission is better modeled by an exponentially cutoff power law, with the low energy cutoff detected with >13σ, >14σ, and >5σ significance (estimated from −2 log(L_pl/L_exp), where L_exp and L_pl are the maximum likelihood values for the exponentially cutoff power-law model and power-law model, respectively; Abdo et al. 2013). The resulting exponentially cutoff power-law fits are summarized in Table 1.
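A toy version of this likelihood-ratio bookkeeping (ours, independent of the actual science tools) is:

import numpy as np

def test_statistic(logL0, logL1):
    # TS = -2 log(L0 / L1) = 2 (log L1 - log L0), comparing models without/with the source.
    return 2.0 * (logL1 - logL0)

def cutoff_significance(logL_powerlaw, logL_expcutoff):
    # -2 log(L_pl / L_exp); with one extra free parameter, the square root of this
    # quantity is roughly the significance (in sigma) of the exponential cutoff.
    return np.sqrt(2.0 * (logL_expcutoff - logL_powerlaw))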
Orbital Variability
We folded the LAT events of the PSR J1311−3430 region at its orbital period to study its possible orbital modulations. The source position given in Pletsch et al. (2012) was used for the barycentric corrections to photon arrival times, and photons within R_max (with R_max ranging from 0.°1 to 1.°0 in steps of 0.°1) of the position were collected. Different energy ranges (0.1-300, 0.2-300, 0.3-300, 0.5-300, 1-300 GeV) were tested in the folding. No significant modulations were detected using the whole data set (i.e., during the total pulse phase), as the largest H-test value was 6 (corresponding to <2σ detection significance; de Jager et al. 1989). However, a significant orbital signal was best revealed using the offpulse data in the >0.2 GeV energy range within 0.°4 of PSR J1311−3430. The folded light curve, which has an H-test value of ∼22 (corresponding to ∼4σ significance, or ∼3σ post-trial significance when 50 trials on the energy range and aperture radius are considered), is shown in Figure 2. Phase zero is at the ascending node of the pulsar. The folded light curve has a brightness peak around the superior conjunction (when the companion is behind the pulsar), the same as the modest X-ray one reported in Romani (2012). The similarity helps strengthen the γ-ray modulation detection. Using the LAT tool gtexposure, we checked the summed exposures over the 10 orbital phase bins (e.g., Johnson et al. 2015); they had only <1% differences, too small to cause any artificial orbital modulations.
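The H-test statistic of de Jager et al. (1989) used above can be computed directly from the folded phases; a small sketch of our own follows.

import numpy as np

def h_test(phases, m_max=20):
    # H = max over m of (Z^2_m - 4m + 4), where Z^2_m sums the Rayleigh powers of the
    # first m harmonics of the phase distribution (phases are in [0, 1)).
    n = len(phases)
    z2 = 0.0
    h = -np.inf
    for m in range(1, m_max + 1):
        c = np.cos(2.0 * np.pi * m * phases).sum()
        s = np.sin(2.0 * np.pi * m * phases).sum()
        z2 += (2.0 / n) * (c * c + s * s)
        h = max(h, z2 - 4.0 * m + 4.0)
    return h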
We performed likelihood analysis of the >0.2 GeV offpulse LAT events during the orbital phase ranges of 0.2-0.5 (named Phase I) and 0.7-1.0 (named Phase II), which were approximately defined for the bottom and peak of the orbital modulation, respectively. We found that the emission during both phase ranges is better modeled by an exponentially cutoff power law, with the low energy cutoff detected with >3σ and >2σ significance. The exponentially cutoff power-law fits are summarized in Table 1. The TS values during Phase I and II are ∼120 and ∼320 (see Figure 3), respectively, which indicate that the source during the latter is more significantly detected than during the former, confirming the detection of orbital modulation from photon folding.
Possible contamination from a nearby catalog source, 3FGL J1316.0−3338, which is identified as the counterpart to the flat spectrum radio quasar (FSRQ) PKS 1313−333 (Ackerman et al. 2015), was investigated. This source, being only ∼1.°2 away from PSR J1311−3430 and relatively bright (TS ≃ 625 in the catalog), exhibited flaring events in the past 1 . Using the offpulse data and performing likelihood analysis, we extracted its 30-day interval light curve, and found that for five time bins it had fluxes >2σ above the value obtained from the total offpulse data. We repeated the analysis by excluding the data of those time bins. We found that the folded light curve is nearly the same, still having an H-test value of ≃22.
Spectral Analysis
We further investigated the orbital-phase-dependent spectral variability during the offpulse phase interval. Spectra of PSR J1311−3430 during the whole offpulse phase interval, Phase I, and Phase II were obtained, and the spectrum during the onpulse phase interval was also obtained for comparison. We extracted the spectra by performing maximum likelihood analysis of the LAT data in 10 energy bins evenly divided in logarithm from 0.1 to 300 GeV, with the emission of the source being modeled with a power law in each energy bin. We only kept spectral points with TS ≥ 4, and derived the 95% upper limits in the other energy bins. The spectra extracted by this method are less model-dependent and provide a detailed description of the γ-ray emission of the source.
The obtained spectra are shown in Figure 4. The onpulse emission appears to have a 3-times higher cutoff energy E c than the offpulse one (see also Table 1). In addition, comparing the two offpulse spectra, the source was brighter across the >0.2 GeV energy range during Phase II than during Phase I.
We also repeated the analysis by excluding the data when the nearby source 3FGL J1316.0−3338 had possible flares (see § 2.4). The obtained spectral parameters of PSR J1311−3430 during the two orbital-phase ranges are consistent with the values obtained above (within 1σ uncertainties). We concluded that the flares do not have any significant effect on our spectral analysis. However, we note that in the lowest energy bin, the two upper limits of the Phase I and II spectra (see the right panel of Figure 4) are lower than the exponentially cutoff power-law fits. This problem might be due to possible contamination from 3FGL J1316.0−3338 in the low-energy range.
DISCUSSION
From our analysis of the Fermi data of PSR J1311−3430, we have detected its γ-ray orbital modulation during the offpulse phase interval of the pulsar, where the magnetospheric emission from the pulsar was likely effectively removed. In both optical and X-ray observations, flares were detected (Romani 2012), indicating the strong interaction between the pulsar wind and the companion. γ-rays are therefore likely also produced by the intrabinary interaction. However, different from what is seen in PSR B1957+20, which has an extra component above 2.7 GeV at its inferior conjunction (when the companion is in front of the pulsar), the light curve peak is near the superior conjunction for PSR J1311−3430. The difference suggests that the intrabinary γ-ray emission model proposed in Wu et al. (2012), which explains the extra component as the result of viewing an inverse Compton process as a head-on collision (see also Bednarek 2014), does not apply here.
For PSR J1311−3430, no significant spectral changes were found from the offpulse orbital-phase-resolved spectra (Figure 4). The source appeared brighter across the >0.2 GeV energy range during the peak range (Phase II) than during the bottom range (Phase I). Although the uncertainties from our fits with the exponentially cutoff power law are relatively large, the two spectra are generally similar to each other. The similarity suggests a geometric origin for the orbital modulation, such as that used to explain the X-ray orbital modulation of PSR J1023+0038 (Bogdanov et al. 2011). The binary likely has an inclination angle of i ∼60°, and the companion has a very small Roche lobe radius R L = 0.068 R ⊙ (a canonical neutron star mass M n = 1.35 M ⊙ is assumed). Therefore, as in PSR J1023+0038, the intrabinary emission region must be very close to the companion, and the companion can thus block part of the region, causing the observed orbital modulation. Here we simply assume that the region is at the inner surface of the companion, and use a function m h [1 + sin(2πφ − π) sin i]/2 + m c to describe the orbital modulation, where φ is the orbital phase, i = 60° is fixed, and m h and m c are the modulation amplitude (in units of counts) and constant counts, respectively. Fitting the folded light curve, we found that this function can describe the modulation (see Figure 2), with minimum χ 2 = 9.7 (for 8 degrees of freedom) and m h = 27 ± 7, m c = 27 ± 3. However, examining the light curve, the minimum and maximum may have a 0.1 phase shift (i.e., they occur at phase 0.35 and 0.85, respectively). If we force such a shift, the results are χ 2 = 7.9 (for 8 degrees of freedom) and m h = 28 ± 7, m c = 26 ± 3, indeed slightly better. This shift may suggest that the emission is not isotropic. The constant part m c may represent the intrabinary emission unblocked by the companion over the whole orbital phase, while emission from the pulsar during its offpulse phase interval could also contribute a small fraction.
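The occultation fit quoted above can be reproduced schematically as follows (our own sketch; the binned light curve, its uncertainties, and the initial guesses are placeholders, with the inclination fixed at 60° as in the text).

import numpy as np
from scipy.optimize import curve_fit

def occultation_model(phi, m_h, m_c, inc_deg=60.0):
    # m_h [1 + sin(2 pi phi - pi) sin i] / 2 + m_c, with i held at its default value.
    return m_h * (1.0 + np.sin(2.0 * np.pi * phi - np.pi) * np.sin(np.radians(inc_deg))) / 2.0 + m_c

def fit_orbital_modulation(phase_bins, counts, errors):
    # Least-squares fit of (m_h, m_c); a two-element p0 keeps inc_deg at its default.
    popt, pcov = curve_fit(occultation_model, phase_bins, counts, sigma=errors, p0=(20.0, 20.0))
    return popt, np.sqrt(np.diag(pcov))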
It is not clear why PSR J1311−3430 has an orbital modulation different from that of PSR B1957+20. We note that if the size of the interaction region is proportional to that of a companion, similar fractions of an isotropic pulsar wind (0.0021 vs. 0.0024) would be intercepted for PSR J1311−3430 and PSR B1957+20 respectively (estimated from (R 2 /D b ) 2 /4, where R 2 is the radius of the companion and D b is the separation distance of the binary). The notable differences are that the spin-down luminosity of PSR J1311−3430 is approximately 1/3 of that of PSR B1957+20, and that the companion in PSR J1311−3430 nearly fills its Roche lobe.
Our discovery of the orbital γ-ray modulation in PSR J1311−3430 and the analysis results have provided clear evidence for γ-ray production due to intrabinary interaction between a pulsar and its companion in a black widow system, and thus have confirmed the general physical picture that has long been proposed theoretically. Since X-ray observations have revealed the general existence of intrabinary shock emission in eclipsing MSP binaries, similar work can be carried out to search for and study related γ-ray emission from these recently identified systems.
Fig. 1.— Folded pulse profile and two-dimensional phaseogram in 32 phase bins obtained for PSR J1311−3430. The gray scale represents the number of photons in each bin, and the dashed lines mark the onpulse and offpulse phase intervals.
Fig. 2.— 0.2-300 GeV light curve folded at the orbital period using the offpulse data. The bottom (Phase I) and peak (Phase II) ranges of the modulation are marked. Two simple sinusoid fits are displayed as dashed and dotted curves; for the latter, a 0.1 phase shift is forced (see the text in Discussion).
Fig. 3.— 0.2-300 GeV TS maps of a 2° × 2° region centered at the position of PSR J1311−3430 during Phase I (left) and Phase II (right). The image scale of the maps is 0.04° pixel⁻¹. The color bars indicate the TS value range. All sources in the source model except PSR J1311−3430 were considered and removed. The white (left) and dark (right) crosses mark the position of PSR J1311−3430, and the green cross marks the position of the nearby catalog source 3FGL J1316.0−3338.
Fig. 4.— Left panel: γ-ray spectra of PSR J1311−3430 obtained with the onpulse (green triangles) and offpulse data (dark circles). Right panel: γ-ray spectra of PSR J1311−3430 obtained during offpulse Phase I (blue diamonds) and Phase II (red squares). The exponentially cutoff power-law fits for the onpulse and offpulse data are shown as green solid and dark dot-dashed curves, respectively. The same model fits for the offpulse Phase I and Phase II data are shown as blue dotted and red dashed curves, respectively.
TABLE 1
Exponentially cutoff power-law fits for PSR J1311−3430.

Data set            >0.1 GeV Flux (10⁻⁸ photon cm⁻² s⁻¹)   Γ             Ec (GeV)    TS
Total data          8.9 ± 0.4                               1.80 ± 0.04   4.0 ± 0.5   6279
Onpulse data        13.9 ± 0.6                              1.71 ± 0.04   3.9 ± 0.4   7723
Offpulse data       4.6 ± 0.6                               1.9 ± 0.2     1.3 ± 0.4   499
Offpulse Phase I    3 ± 1                                   1.2 ± 0.7     0.6 ± 0.3   120
Offpulse Phase II   7 ± 2                                   1.8 ± 0.5     1.2 ± 0.9   320

Note. — Columns 3 and 4 list the photon index and cutoff energy of the exponentially cutoff power-law model.
http://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr catalog/ap lcs.php
We thank the anonymous referee for helpful suggestions. This research made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Shanghai Astronomical Observatory. This research was supported by the Shanghai Natural Science Foundation for Youth (13ZR1464400), the National Natural Science Foundation of China for Youth (11403075), the National Natural Science Foundation of China (11373055), and the Strategic Priority Research Program "The Emergence of Cosmological Structures" of the Chinese Academy of Sciences (Grant No. XDB09000000). Z.W. is a Research Fellow of the One-Hundred-Talents project of Chinese Academy of Sciences.
. A A Abdo, ApJS. 18817ApJSAbdo, A. A., et al. 2010, ApJS, 188, 405 -. 2013, ApJS, 208, 17
. M Ackerman, M Ajello, W Atwood, arXiv:1501.06054Ackerman, M., Ajello, M., Atwood, W., et al. 2015, arXiv:1501.06054
. M A Alpar, A F Cheng, M A Ruderman, J Shaham, Nature. 300728Alpar, M. A., Cheng, A. F., Ruderman, M. A., & Shaham, J. 1982, Nature, 300, 728
. A M Archibald, V M Kaspi, S Bogdanov, J W T Hessels, I H Stairs, S M Ransom, M A Mclaughlin, ApJ. 72288Archibald, A. M., Kaspi, V. M., Bogdanov, S., Hessels, J. W. T., Stairs, I. H., Ransom, S. M., & McLaughlin, M. A. 2010, ApJ, 722, 88
. J Arons, M Tavani, ApJ. 403249Arons, J., & Tavani, M. 1993, ApJ, 403, 249
. W Bednarek, A&A. 561116Bednarek, W. 2014, A&A, 561, A116
. S Bogdanov, A M Archibald, J W T Hessels, V M Kaspi, D Lorimer, M A Mclaughlin, S M Ransom, I H Stairs, ApJ. 74297Bogdanov, S., Archibald, A. M., Hessels, J. W. T., Kaspi, V. M., Lorimer, D., McLaughlin, M. A., Ransom, S. M., & Stairs, I. H. 2011, ApJ, 742, 97
. S Bogdanov, A Patruno, A M Archibald, C Bassa, J W T Hessels, G H Janssen, B W Stappers, ApJ. 78940Bogdanov, S., Patruno, A., Archibald, A. M., Bassa, C., Hessels, J. W. T., Janssen, G. H., & Stappers, B. W. 2014, ApJ, 789, 40
. O C De Jager, B C Raubenheimer, J W H Swanepoel, A&A. 221180de Jager, O. C., Raubenheimer, B. C., & Swanepoel, J. W. H. 1989, A&A, 221, 180
. R T Edwards, G B Hobbs, R N Manchester, MNRAS. 3721549Edwards, R. T., Hobbs, G. B., & Manchester, R. N. 2006, MNRAS, 372, 1549
. A S Fruchter, D R Stinebring, J H Taylor, Nature. 333237Fruchter, A. S., Stinebring, D. R., & Taylor, J. H. 1988, Nature, 333, 237
. P A Gentile, ApJ. 78369Gentile, P. A., et al. 2014, ApJ, 783, 69
. G B Hobbs, R T Edwards, R N Manchester, MNRAS. 369655Hobbs, G. B., Edwards, R. T., & Manchester, R. N. 2006, MNRAS, 369, 655
. R H H Huang, A K H Kong, J Takata, C Y Hui, L C C Lin, K S Cheng, ApJ. 76092Huang, R. H. H., Kong, A. K. H., Takata, J., Hui, C. Y., Lin, L. C. C., & Cheng, K. S. 2012, ApJ, 760, 92
. J Kataoka, ApJ. 757176Kataoka, J., et al. 2012, ApJ, 757, 176
. R N Manchester, G B Hobbs, A Teoh, M Hobbs, AJ. 129Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993
. P L Nolan, ApJS. 19931Nolan, P. L., et al. 2012, ApJS, 199, 31
. H J Pletsch, Science. 3381314Pletsch, H. J., et al. 2012, Science, 338, 1314
. V Radhakrishnan, G Srinivasan, Current Science. 511096Radhakrishnan, V., & Srinivasan, G. 1982, Current Science, 51, 1096
. P S Ray, ApJ. 76313Ray, P. S., et al. 2013, ApJ, 763, L13
. M T Reynolds, P J Callanan, A S Fruchter, M A P Torres, M E Beer, R A Gibbons, MNRAS. 3791117Reynolds, M. T., Callanan, P. J., Fruchter, A. S., Torres, M. A. P., Beer, M. E., & Gibbons, R. A. 2007, MNRAS, 379, 1117
M S E Roberts, IAU Symposium. IAU Symposium, ed. J. van Leeuwen291Roberts, M. S. E. 2013, in IAU Symposium, Vol. 291, IAU Symposium, ed. J. van Leeuwen, 127-132
. M S E Roberts, M A Mclaughlin, P Gentile, E Aliu, J W T Hessels, S M Ransom, P S Ray, Astronomische Nachrichten. 335313Roberts, M. S. E., Mclaughlin, M. A., Gentile, P., Aliu, E., Hessels, J. W. T., Ransom, S. M., & Ray, P. S. 2014, Astronomische Nachrichten, 335, 313
. R W Romani, ApJ. 75425Romani, R. W. 2012, ApJ, 754, L25
. R W Romani, A V Filippenko, J M Silverman, S B Cenko, J Greiner, A Rau, J Elliott, H J Pletsch, ApJ. 76036Romani, R. W., Filippenko, A. V., Silverman, J. M., Cenko, S. B., Greiner, J., Rau, A., Elliott, J., & Pletsch, H. J. 2012, ApJ, 760, L36
. T J Johnson, P S Ray, J Roy, C C Cheung, A K Harding, H J Pletsch, S Fort, F Camilo, J Deneva, B Bhattacharyya, B W Stappers, arXiv:1502.06862Johnson, T. J., Ray, P. S., Roy, J., Cheung, C. C., Harding, A. K., Pletsch, H. J., Fort, S., Camilo, F., Deneva, J., Bhattacharyya, B., Stappers, B. W. 2015, arXiv:1502.06862
. J H Taylor, Royal Society of London Philosophical Transactions Series A. 341117Taylor, J. H. 1992, Royal Society of London Philosophical Transactions Series A, 341, 117
. E M H Wu, J Takata, K S Cheng, R H H Huang, C Y Hui, A K H Kong, P H T Tam, J H K Wu, ApJ. 761181Wu, E. M. H., Takata, J., Cheng, K. S., Huang, R. H. H., Hui, C. Y., Kong, A. K. H., Tam, P. H. T., & Wu, J. H. K. 2012, ApJ, 761, 181
. Y Xing, Z Wang, Y Xing, Z Wang, C.-Y Ng, ApJ. 79588Xing, Y., & Wang, Z. 2014, ArXiv e-prints Xing, Y., Wang, Z., & Ng, C.-Y. 2014, ApJ, 795, 88
| []
|
[
"Product set growth in mapping class groups",
"Product set growth in mapping class groups"
]
| [
"Alice Kerr "
]
| []
| []
| We study product set growth in groups with acylindrical actions on quasi-trees and hyperbolic spaces. As a consequence, we show that for every surface S of finite type, there exist α, β > 0 such that for any finite symmetric subset U of the mapping class group M CG(S) we have |U n | (α|U |) βn , so long as no finite index subgroup of U has an infinite order central element. This gives us a dichotomy for the finitely generated subgroups of mapping class groups.As right-angled Artin groups embed as subgroups of mapping class groups, this result also applies to them. We separately prove that we can quickly generate loxodromic elements in right-angled Artin groups, which by a result of Fujiwara[Fuj21]shows that the set of growth rates for many of their subgroups are well-ordered. | null | [
"https://arxiv.org/pdf/2103.12643v5.pdf"
]
| 243,734,952 | 2103.12643 | 6bed3141e2fedb1a34be41a3bbcf1c2e3bce7f4e |
Product set growth in mapping class groups
Alice Kerr
Product set growth in mapping class groups
We study product set growth in groups with acylindrical actions on quasi-trees and hyperbolic spaces. As a consequence, we show that for every surface S of finite type, there exist α, β > 0 such that for any finite symmetric subset U of the mapping class group M CG(S) we have |U n | (α|U |) βn , so long as no finite index subgroup of U has an infinite order central element. This gives us a dichotomy for the finitely generated subgroups of mapping class groups.As right-angled Artin groups embed as subgroups of mapping class groups, this result also applies to them. We separately prove that we can quickly generate loxodromic elements in right-angled Artin groups, which by a result of Fujiwara[Fuj21]shows that the set of growth rates for many of their subgroups are well-ordered.
Introduction
For a finite subset U of a group G, we define its nth product set to be U n = {u 1 · · · u n : u 1 , . . . , u n ∈ U }
The study of growth in groups is the study of how |U n | behaves as n varies. For infinite groups, the usual notion of group growth is when G is taken to be finitely generated, and U is taken to be a ball in G with respect to the word metric induced by some finite generating set S. The usual question that is asked is whether |U n | grows like a polynomial function as n → ∞, an exponential function, or if it lies somewhere between the two. This is known to have links to the algebraic properties of the group, the most famous example being Gromov's proof that the groups of polynomial growth are exactly the virtually nilpotent groups [Gro81].
In finite groups, on the other hand, there is little interest in estimating |U n | for large powers of n, as this is bounded above by the size of the group, and so will eventually become constant. Instead, questions about group growth in this setting tend to focus on getting precise bounds on low powers, such as |U 2 | or |U 3 |.
The question we are interested in here combines aspects of these two cases. Our focus will be on infinite groups, and for U a general finite subset we would like to estimate |U n | for every n ∈ N. On the other hand, instead of being concerned with the asymptotic profile of |U n |, we will be interested in getting a lower bound on |U n |, with this lower bound dependent on the size of U . More specifically, we would like to answer the following question. Question 1. For a group G, do there exist constants α, β > 0 and a class of subgroups H such that for every finite (symmetric) U ⊂ G where ⟨U⟩ ∉ H, we have that |U n | ≥ (α|U |) βn for every n ∈ N?
If all of the generating sets of a group satisfy this inequality, then that group has uniform product set growth. The aim here however is to not just prove this for the group itself, but to find a dichotomy of finitely generated subgroups, so every such subgroup either belongs to H, or has uniform product set growth for the same α and β. The main result of this paper is that we can find such a dichotomy in mapping class groups. Theorem 1.0.1 (Corollary 4.4.3). Let M CG(S) be a mapping class group. There exist α, β > 0 such that for every finite symmetric U ⊂ M CG(S) at least one of the following must hold:
1. ⟨U⟩ has a finite index pure subgroup with non-trivial centre.
2. |U n | ≥ (α|U |) βn for every n ∈ N.
Remark 1.0.2. Here we use a broad definition of mapping class groups, where punctures in the surface are allowed to permute, but boundary components are fixed pointwise. A brief discussion of the variation in definitions of the mapping class group is given in Remark 4.1.19.
Answering our question completely for a group also naturally answers it for all of its subgroups. One class of groups that is known to embed in mapping class groups is the right-angled Artin groups (see [CLM12], for example). In particular, they embed as pure subgroups, so the dichotomy is somewhat simplified in this case. Theorem 1.0.3 (Theorem 4.3.4). Let Γ be a finite simple graph, and A(Γ) the right-angled Artin group associated to Γ. There exist constants α, β > 0 such that for every finite symmetric U ⊂ A(Γ) at least one of the following must hold:
1. The centre of ⟨U⟩ is non-trivial.
2. |U n | ≥ (α|U |) βn for every n ∈ N.
As pure subgroups of mapping class groups are torsion-free, the following proposition tells us that the subgroups we have ruled out in Theorem 1.0.3 cannot have uniform product set growth for any α and β, which justifies that this result is indeed a dichotomy. The proof that Theorem 1.0.1 also represents a dichotomy of subgroups is more complicated, and is shown in Proposition 4.4.4. Proposition 1.0.4 (Corollary 2.2.5). Let G be a finitely generated group, and suppose that the centre of G is infinite. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U n | < (α|U |) βn for some n ∈ N.
The intuitive reason as to why such subgroups cannot have this type of growth is that we can add central elements to any generating set U . As these elements are contained in an abelian subgroup, which does not have exponential growth, this has minimal impact on the long term growth of U n . This means that we cannot link the growth of U n with the size of U in a meaningful way. In particular, this means that if our group contains subgroups with exponential growth but infinite centres, we cannot expect the dichotomy of subgroups we get in answer to Question 1 to be the same as the dichotomy given by the Tits alternative.
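To see this obstruction concretely, here is a short worked estimate of our own (not taken from the paper). Take G = H × Z with H generated by a finite symmetric set S, and enlarge the generating set by central elements:
$$U_m = (S \times \{0\}) \cup \{(e, j) : 1 \le |j| \le m\}, \qquad |U_m| \ge 2m,$$
$$U_m^{\,n} \subseteq B_S(n) \times [-mn, mn], \qquad |U_m^{\,n}| \le |B_S(n)|\,(2mn + 1).$$
For a fixed n with βn > 1, the upper bound grows only linearly in m, while (α|U_m|)^{βn} ≥ (2αm)^{βn} grows superlinearly, so the inequality of Question 1 must fail for U_m once m is large enough.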
Question 1 has strong links to the more commonly studied notion of uniform exponential growth. Specifically, any finitely generated group such that every (symmetric) generating set satisfies the given inequality will necessarily have uniform exponential growth. Moreover, the finitely generated subgroups that do not lie in H will have uniform uniform exponential growth, in the sense that they will have a common lower bound on their exponential growth rates. Such properties have already been shown for a wide range of groups, including mapping class groups and right-angled Artin groups [Man10], although the question of uniform exponential growth is still open for acylindrically hyperbolic groups in general.
On the other hand, Proposition 1.0.4 tells us that uniform product set growth is a strictly stronger property than uniform exponential growth. If H has uniform exponential growth then so does H ×Z, however Proposition 1.0.4 tells us that this product does not have uniform product set growth. The key difference is that Question 1 asks for the lower bound on the growth to be given in terms of the size of the set, rather than just asking for some common lower bound. Linking growth rates with the size of the generating set in question is a key step in proofs about growth rates being well-ordered, see [FS20] and [Fuj21].
A positive answer to Question 1 has consequences for other types of product set growth as well, namely Helfgott type growth, as observed by Jack Button [But13]. Uniform product set growth also has links with approximate groups, as any approximate group that satisfies the relevant inequality must have bounded size. More details on these applications are given in Section 2.1. Proposition 1.0.5 (Proposition 2.1.8). Let G be a group, and let k ≥ 1, α, β > 0. Suppose U is a k-approximate group in G satisfying |U n | ≥ (α|U |) βn for every n ∈ N. Then |U | ≤ k^(2/β−1) /α 2 .
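A sketch of where such a bound comes from (our own, assuming the standard covering property that a k-approximate group satisfies U^n ⊆ X^{n−1}U for some X with |X| ≤ k, so that |U^n| ≤ k^{n−1}|U|):
$$(\alpha|U|)^{\beta n} \le |U^n| \le k^{\,n-1}|U| \quad\Longrightarrow\quad |U|^{\beta n - 1} \le k^{\,n-1}\alpha^{-\beta n}.$$
Choosing n so that βn = 2 (for instance n = 2/β when this is an integer; a nearby integer gives a comparable bound) yields |U| ≤ k^{2/β − 1} α^{−2}.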
There are several known results in the area of product set growth in infinite groups, including complete answers to Question 1 for free groups [Saf11], and more recently hyperbolic groups [DS20]. A more thorough history is given in Section 2.3. Here we will be interested in this second paper, by Delzant and Steenbock, which also contains a result concerning product set growth in acylindrically hyperbolic groups. This theorem included logarithm terms that were not present in the other results, due to the use of Gromov's Tree Approximation Lemma. In [Ker20], it was shown that tree approximation is uniform if the space in question is a quasi-tree, rather than a hyperbolic space. We will use this to improve Delzant and Steenbock's bounds in the quasi-tree case by removing these logarithm terms.
Here if (X, d) is a hyperbolic space that a group G acts on by isometries, and U ⊂ G is finite, we pick x 0 ∈ X such that the sum $\sum_{u \in U} d(x_0, ux_0)$ is (almost) minimised; the displacement of U is then λ 0 (U ) = max u∈U d(x 0 , ux 0 ). For a more detailed definition, see Definition 3.2.1.
Theorem 1.0.6 (Theorem 3.2.12). Let G be a group that acts acylindrically on a quasi-tree. Then there exist constants K > 0 and α > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is virtually Z.
2. λ 0 (U ) < K.
3. |U n | ≥ (α|U |)^((n+1)/2) for every n ∈ N.
A result of Balasubramanya tells us that every acylindrically hyperbolic group admits some acylindrical action on a quasi-tree [Bal17], so the above theorem applies to all such groups. The class of acylindrically hyperbolic groups is a wide one, including the hyperbolic groups and relatively hyperbolic groups that are not virtually cyclic, as well as many cubical groups, rightangled Artin groups, and most mapping class groups of surfaces without boundary [Osi16]. The above statement is also interesting because it is not restricted to symmetric sets, and instead holds for all finite subsets.
It is not possible to immediately apply Theorem 1.0.6 to answer Question 1 for certain classes of groups, as the displacement condition is a condition on sets, rather than on the subgroups they generate. In Section 3.2 we will show that we can overcome this by finding loxodromic elements uniformly quickly in the generating sets of a subgroup. Proposition 1.0.7 (Proposition 3.2.17). Let G be a group acting acylindrically on a quasi-tree. There exist α, β > 0 such that for every finite U ⊂ G such that U k contains a loxodromic element for k ∈ N, and U is not contained in a virtually cyclic group, we have that |U n | (α|U |) βn k for every n ∈ N.
It is also possible to show a very similar result for actions on hyperbolic spaces, using an alternative method of generating free subgroups of large enough rank. This comes at the cost of restricting to symmetric sets; however, in most cases this restriction is likely to be necessary anyway, as the common methods of finding loxodromic elements tend to assume symmetry. The same result was independently proved in [Fuj21]. Proposition 1.0.8 (Corollary 3.3.5). Let G be a group acting acylindrically on a hyperbolic space. There exist α, β > 0 such that for every finite symmetric U ⊂ G such that U k contains a loxodromic element for k ∈ N, and U is not contained in a virtually cyclic group, we have that |U n | ≥ (α|U |)^(βn/k) for every n ∈ N.
One example where it is already known that we can generate loxodromic elements uniformly quickly is in mapping class groups, which is what allows us to prove Theorem 1.0.1. In the following result, finding an element with the same active subsurface as the subgroup can be thought of as being roughly the same as finding a loxodromic element. Theorem 1.0.9. [Man13] Consider a mapping class group M CG(S), where S is a connected surface without boundary. There exists a constant N = N (S) ∈ N such that for any finite symmetric U ⊂ M CG(S) there exists n ≤ N and f ∈ U n such that f has the same active subsurface as U .
As mentioned previously, the result for right-angled Artin groups follows directly from the fact that they embed as subgroups of mapping class groups. It is however also interesting to consider the right-angled Artin group case separately, as doing so allows us to prove an analogous result to Theorem 1.0.9 for right-angled Artin groups. In other words, we can show that we can quickly generate loxodromic elements in the action of the group on the associated extension graph, which is a quasi-tree.
In the theorem below, for a finite graph Γ = (V, E) we have that A(Γ) is the associated right-angled Artin group, Γ e is the associated extension graph, and for a subset V′ ⊂ V we have that Γ(V′) is the induced subgraph on those vertices. For more detailed definitions see Section 4.1.
Proposition 1.0.10 (Proposition 4.2.11). Suppose Γ is a finite graph. For U ⊂ A(Γ), let V U ⊂ V be minimal under inclusion such that U is conjugate into A(Γ(V U )). There exists N = N (Γ) ∈ N such that for every finite symmetric U ⊂ A(Γ), where Γ(V U ) is connected and ⟨U⟩ is neither cyclic nor contained non-trivially in a direct product, there exists n ≤ N such that U n contains a loxodromic element on Γ(V U ) e .
A recent result of Fujiwara states that if a group G has an acylindrical and non-elementary action on a hyperbolic graph, and some bounded power of any finite generating set of G contains a loxodromic element, then the set of exponential growth rates of G (with respect to its generating sets) will be well-ordered, so long as G is equationally Noetherian [Fuj21]. Linear groups are equationally Noetherian [BM99], and right-angled Artin groups are linear [HW99], so we get the following corollary. Corollary 1.0.11 (Corollary 4.2.13). Suppose Γ is a finite graph. For every G ≤ A(Γ) such that G is finitely generated, and is neither cyclic nor contained non-trivially in a direct product, the set of exponential growth rates of G (with respect to its finite generating sets) is well-ordered.
Structure of the paper: In Section 2 we give some of the history of product set growth, as well as a few of its applications. We also give some fundamental results about which groups can and cannot have uniform product set growth, which will be vital in later sections. In Section 3 we combine uniform tree approximation for quasi-trees from [Ker20] with Delzant and Steenbock's methods in [DS20], which allows us to get a new result for product set growth in acylindrically hyperbolic groups. We note how being able to quickly find loxodromic isometries in a group allows us to overcome the displacement condition in this result, and show that we can get a similar statement for actions on hyperbolic spaces. In Section 4 we apply the results of the previous sections to answer Question 1 for mapping class groups. We also prove a result about quickly generating loxodromic elements in right-angled Artin groups.
Acknowledgements: The author would like to thank Cornelia Druţu for the many questions, suggestions, and discussions which helped shape this work, and for taking the time to read multiple drafts of this paper. The author is also grateful to Thomas Delzant for pointing out the relevance of loxodromic elements in the context of applying Theorem 1.0.6, and for the initial suggestion that inspired the extension of this work to actions on hyperbolic spaces. Thanks also go to many others for their generosity in sharing their time and knowledge, and for giving valuable comments and feedback on earlier drafts, in particular Ric Wade, Jack Button, Johanna Mangahas, Sahana Balasubramanya, Nicolaus Heuer, Elia Fioravanti, Ashot Minasyan, Alessandro Sisto, and Markus Steenbock.
Product set growth
Recall that for a finite subset U of a group G, its nth product set is U n = {u 1 · · · u n : u 1 , . . . , u n ∈ U }. We are interested in estimating the size of U n as n increases, specifically in the case that G is an infinite group.
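As a toy illustration of these definitions (our own computation, not part of the paper), the sizes |U^n| can be computed by brute force for a small set of matrices; the two matrices below are well known to generate a free subgroup of SL(2, Z), so the product sets grow exponentially.

import numpy as np

def product_set_sizes(U, n_max):
    # Brute-force computation of |U^n| for n = 1, ..., n_max, for a finite set U of
    # integer matrices; dictionary keys deduplicate equal products.
    sizes = []
    current = {m.tobytes(): m for m in U}
    sizes.append(len(current))
    for _ in range(2, n_max + 1):
        nxt = {}
        for a in current.values():
            for u in U:
                p = a @ u
                nxt[p.tobytes()] = p
        current = nxt
        sizes.append(len(current))
    return sizes

a = np.array([[1, 2], [0, 1]])
b = np.array([[1, 0], [2, 1]])
a_inv = np.array([[1, -2], [0, 1]])
b_inv = np.array([[1, 0], [-2, 1]])
print(product_set_sizes([a, a_inv, b, b_inv], 6))  # grows roughly like 3^n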
As mentioned in the introduction, when considering infinite groups the most commonly studied form of group growth is the asymptotic growth of balls in a finitely generated group. Let G be a finitely generated group, and let S be a finite generating set of G. We use B S (n) to denote the closed ball of radius n centred at the identity e ∈ G under the word metric induced by S. In the notation of product sets, it is easy to note that when U = B S (1) we will have that U n = B S (n). The growth of G is then characterised by the large scale growth of these balls as n increases. Definition 2.0.1. Let G be a finitely generated group, and let S be a finite generating set of G. Let B S (n) denote the ball of radius n under the word metric induced by S. The exponential growth rate of G with respect to S is
ω(G, S) = lim n→∞ |B S (n)|^(1/n) .
If ω(G, S) > 1 for some (equivalently any) finite generating set S, then G has exponential growth.
The fact that ω(G, S) > 1 for some finite generating set if and only if it is true for every generating set is a standard result, and is not hard to check. As a result of only needing to prove this inequality for a single generating set, in many cases exponential growth tends to be a relatively easy property to prove. On the other hand, although this tells us that all finite generating sets will exhibit some kind of exponential growth, there is not necessarily any control over their rates of growth, which may be arbitrarily close to one. If we wish to have this type of control, then we will need to prove something stronger. Definition 2.0.2. Let G be a finitely generated group, and let 𝒮 be the collection of all finite generating sets of G. The exponential growth rate of G is
ω(G) = inf_{S ∈ 𝒮} ω(G, S).
If ω(G) > 1 then G has uniform exponential growth.
Clearly uniform exponential growth implies exponential growth. Most groups that are known to have exponential growth also have uniform exponential growth, including free groups, hyperbolic groups, and relatively hyperbolic groups [Xie07]. Away from non-positively curved groups, it is known that every elementary amenable group that has exponential growth also has uniform exponential growth [Osi04], and the same is true for any finitely generated subgroup of GL n (C) [EMO05]. This is not true in general however [Wil04], and there are other cases where this equivalence is still open, such as acylindrically hyperbolic groups. We will discuss these groups in Section 3.1.
If we do understand the growth of a group, then it makes sense to investigate the growth of its subgroups. One possible question is to ask if there is a common lower bound on the exponential growth rates of the subgroups with uniform exponential growth.
Definition 2.0.3. Let G be a group, and let H be a collection of finitely generated subgroups of G. We say that the collection H has uniform uniform exponential growth if inf H∈H ω(H) > 1.
For most of this paper, however, we will not just be considering the growth of balls, but rather the case that U is a general finite subset of our group. We also recall from the introduction that we are interested in getting a specific lower bound on |U n |, where this lower bound is dependent on the size of U .
Definition 2.0.4. Let G be an infinite finitely generated group. We say that G has uniform product set growth if there exist α, β > 0 such that for every finite (symmetric) generating set U of G we have that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
Remark 2.0.5. If the group G in question was finite, then this would be trivially satisfied by letting α = 1/|G|. For this reason we will generally only be interested in determining this property for infinite groups.
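For a non-trivial example, Theorem 2.3.1 below implies that non-abelian free groups have uniform product set growth with α = 1/372 and β = 1/2: a finite generating set U of such a group satisfies |U^n| ⩾ ((1/372)|U|)^{(n+1)/2}, and since (n+1)/2 ⩾ n/2 when (1/372)|U| ⩾ 1, while ((1/372)|U|)^{n/2} < 1 ⩽ |U^n| otherwise, the inequality |U^n| ⩾ ((1/372)|U|)^{n/2} holds in either case.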
The aim of Question 1 is to find out which of the finitely generated subgroups of a group G have uniform product set growth, ideally for the same α and β. In Section 2.1 we will prove some consequences of a group or subgroup having this property. Section 2.2 will then be spent discussing some cases in which we can determine whether a subgroup has uniform product set growth or not. This will be important in Section 4, as we try to answer Question 1 for subgroups of mapping class groups. In Section 2.3 we will give a brief survey of previous results relating to Question 1. Some of these will be the starting point of the work in Section 3.
Applications of uniform product set growth
In this section we will give some properties that are implied by uniform product set growth.
Uniform exponential growth
Here we check that uniform product set growth implies uniform exponential growth, even in the weaker form where we only ask that the inequality holds for symmetric generating sets. Lemma 2.1.1. Let G be a finitely generated group. Suppose there exist constants α, β > 0 such that for every finite symmetric generating set U of G we have that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N. Then either G is finite with |G| ⩽ 1/α, or G has uniform exponential growth.
Proof. Suppose that G is finite, then G is a finite generating set of itself, and G^n = G for every n ∈ N. Therefore |G| ⩾ (α|G|)^{βn} for every n ∈ N, so |G| ⩽ 1/α. Now suppose that G is infinite, and let S be a finite generating set of G. We want to calculate a lower bound on ω(G, S) = lim_{n→∞} |B_S(n)|^{1/n}. We begin by noting that B_S(n) = B_S(1)^n, so for every n ∈ N we have that |B_S(n)| ⩾ (α|B_S(1)|)^{βn}. If |B_S(1)| ⩽ 1/α then this inequality would tell us nothing about |B_S(n)|, however to calculate the limit it is sufficient to consider a subsequence, so we just have to start this subsequence with a large enough set.
For any m ∈ N we also have that B_S(m) is also a finite generating set of G, and B_S(m)^n = B_S(mn) for all n ∈ N, so |B_S(mn)| ⩾ (α|B_S(m)|)^{βn}. We can also see that |B_S(m)| ⩾ m − 1 + |S|, so choosing m = ⌈1/α⌉ + 1 we get that
α|B_S(m)| ⩾ α(⌈1/α⌉ + |S|) ⩾ 1 + α|S| > 1.
With this choice of m we now get that
|B_S(mn)|^{1/(mn)} ⩾ (α|B_S(m)|)^{β/m} ⩾ (1 + α|S|)^{β/(⌈1/α⌉+1)}.
Hence we have that
ω(G, S) = lim_{n→∞} |B_S(n)|^{1/n} = lim_{n→∞} |B_S(mn)|^{1/(mn)} ⩾ (1 + α|S|)^{β/(⌈1/α⌉+1)},
so as S was arbitrary, and |S| ⩾ 1, we get that
ω(G) = inf_{S ∈ 𝒮} ω(G, S) ⩾ (1 + α)^{β/(⌈1/α⌉+1)} > 1.
Remark 2.1.2. Given α, β > 0, a generating set U of a group with exponential growth, but not uniform exponential growth, could still satisfy |U^n| ⩾ (α|U|)^{βn} for every n ∈ N. Lemma 2.1.1 simply tells us that not every generating set can satisfy this inequality. On the other hand, a set U contained in a group with subexponential growth will only satisfy this inequality in the trivial case that |U| ⩽ 1/α. A natural corollary of this is that if we have a positive answer to Question 1, that is we have uniform product set growth for a collection of subgroups with the same α and β, then that collection has uniform uniform exponential growth. In this context we can think of Question 1 as being analogous to the question of uniform uniform exponential growth for a collection of subgroups that have uniform exponential growth.
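To make the last point concrete, take U = {t^{−N}, . . . , t^N} in the infinite cyclic group ⟨t⟩. Then |U^n| = 2nN + 1 grows only linearly in n, while (α|U|)^{βn} grows exponentially in n whenever α|U| > 1, so the inequality can hold for every n ∈ N only if |U| ⩽ 1/α.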
In the other direction, it is not hard to see that uniform product set growth is actually a strictly stronger property than uniform exponential growth. This is simply because uniform exponential growth only requires a common lower bound on the exponential growth rates of generating sets, whereas uniform product set growth requires that the lower bound is given in terms of the size of the set. An example of this difference will be given in Section 2.2, where we consider a direct product where one of the factors has uniform exponential growth and another does not.
Helfgott type growth
As mentioned in the introduction, the study of product set growth has often previously focussed on estimating the size of double and triple products of sets. Here we give a link between uniform product set growth and the growth of triple products.
Lemma 2.1.4. Let G be a group, and let U ⊂ G be finite with |U^3| ⩽ K|U| for some K ⩾ 1. Then |U^n| ⩽ (8K^3)^{n−2}|U| for every n ⩾ 3.
Proof. Here we use that UU^{−1}U^{−1} = (UUU^{−1})^{−1}, so they have the same cardinality. Similar logic applies to the other products of three factors taken from U and U^{−1}. In [Hel08] it is shown that the cardinality of each of these products is bounded above by |U|(|U^3|/|U|)^3, and therefore
|(U ∪ U^{−1})^3| ⩽ 8|U|(|U^3|/|U|)^3.
Combining these inequalities, we see that
|U^n| ⩽ (8(|U^3|/|U|)^3)^{n−2}|U|.
If |U^3| ⩽ K|U| for some K ⩾ 1, then substituting this into our above inequality gives us the result that we were looking for.
When uniform product set growth is given in the form |U^n| ⩾ (α|U|)^{(n+1)/2} for every n ∈ N, as with many of the results we will see in Section 2.3, we automatically get Helfgott type growth by simply taking c = α^2 and δ = 1. However Lemma 2.1.4 allows us to also obtain Helfgott type growth in the case of the more general type of uniform product set growth, that is when |U^n| ⩾ (α|U|)^{βn} for every n ∈ N. This observation, and the following proof, were given by Jack Button.
Proposition 2.1.5. Let G be a group, and let U ⊂ G be finite. If there exist α, β > 0 such that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N, then |U^3| ⩾ (α^β/8)^{1/3}|U|^{1+β/6}.
Proof. This is trivially true if |U| = 1, so suppose that |U| ⩾ 2. We assume that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N, but |U^3| < (α^β/8)^{1/3}|U|^{1+β/6}. We can now apply Lemma 2.1.4 with K = (α^β/8)^{1/3}|U|^{β/6}, as then |U^3| < K|U|, and obtain (α|U|)^{βn} ⩽ |U^n| ⩽ (8K^3)^{n−2}|U| = (α^β|U|^{β/2})^{n−2}|U| for every n ⩾ 3. We therefore have that |U|^{βn/2} ⩽ |U|/(α^{2β}|U|^β) for every n ⩾ 3. However, as |U| ⩾ 2, we also have that |U|^{βn/2} tends to infinity as n → ∞. This contradicts the existence of a fixed upper bound, as given in the above inequality, and hence we must have that |U^3| ⩾ (α^β/8)^{1/3}|U|^{1+β/6}.
Approximate groups
One other possible motivation for studying product set growth is its link with approximate groups, as defined in [Tao08].
Definition 2.1.6. Let G be a group, and let k ⩾ 1. A finite symmetric subset U ⊂ G is a k-approximate group if there exists a finite subset X ⊂ G such that |X| ⩽ k and U^2 ⊂ XU.
It is standard that approximate groups have small tripling, which leads us to the following observation of Button [But13].
Proposition 2.1.7. Let G be a group, and let k ⩾ 1. Suppose U is a k-approximate group in G satisfying |U^3| ⩾ (α|U|)^2 for some α > 0. Then |U| ⩽ (k/α)^2. Proof. Let X ⊂ G be such that |X| ⩽ k and U^2 ⊂ XU. Then U^3 ⊂ XU^2 ⊂ X^2U, so |U^3| ⩽ |X^2U| ⩽ |X|^2|U| ⩽ k^2|U|.
We therefore have that
(α|U|)^2 ⩽ |U^3| ⩽ k^2|U| ⟹ |U| ⩽ (k/α)^2.
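To see this bound in action, note that in the infinite cyclic group ⟨t⟩ the symmetric set U = {t^{−N}, . . . , t^N} is a 2-approximate group, as U^2 ⊂ {t^{−N}, t^N}U. Here |U^3| = 6N + 1, which drops below (α|U|)^2 = α^2(2N + 1)^2 as soon as 2N + 1 > (2/α)^2, so sets of this form with |U| > (k/α)^2 never satisfy the hypothesis of Proposition 2.1.7, in line with its conclusion.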
Using the same reasoning as Button, we can obtain a slightly different version of this result for sets satisfying the type of uniform product set growth inequality in Question 1.
Proposition 2.1.8. Let G be a group, and let k ⩾ 1, α, β > 0. Suppose U is a k-approximate group in G satisfying |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
Then |U| ⩽ k^{⌈2/β⌉−1}/α^2.
Proof. Let X ⊂ G be such that |X| ⩽ k and U^2 ⊂ XU. Then U^n ⊂ X^{n−1}U for every n ∈ N, so |U^n| ⩽ |X^{n−1}U| ⩽ |X|^{n−1}|U| ⩽ k^{n−1}|U| for every n ∈ N.
Let n = ⌈2/β⌉, then we have that
(α|U|)^{β⌈2/β⌉} ⩽ |U^{⌈2/β⌉}| ⩽ k^{⌈2/β⌉−1}|U|.
Note that (α|U|)^2 ⩽ |U^{⌈2/β⌉}|. If α|U| ⩾ 1 then this follows from the above inequality and the fact that 2 ⩽ β⌈2/β⌉. If α|U| < 1 then (α|U|)^2 < 1 ⩽ |U^{⌈2/β⌉}| follows immediately. We can therefore conclude that
(α|U|)^2 ⩽ k^{⌈2/β⌉−1}|U| ⟹ |U| ⩽ k^{⌈2/β⌉−1}/α^2.
Obstructions and sufficient conditions
Here we will prove a collection of results about when certain classes of subgroups can and cannot have uniform product set growth. We will begin by showing that any group with infinite centre cannot have uniform product set growth. Our focus will then switch to subgroups of direct products, as there are cases where the uniform product set growth property can pass from the factors to the subgroup, and cases where certain factors make uniform product set growth impossible. We will also show that uniform product set growth passes to finite index supergroups. These tools will be vital in Section 4, when attempting to answer Question 1 for right-angled Artin groups and mapping class groups.
Infinite order central subgroups
So far the only subgroups we have completely ruled out when attempting to find those with uniform product set growth have been those that do not have uniform exponential growth (see Lemma 2.1.1). This may be enough to get us a dichotomy in certain cases, if the subgroups that are not shown to have uniform product set growth are exactly those that also do not have uniform exponential growth, however this will not be sufficient in general. Our main example of this will be when our group has an infinite central subgroup.
We first note a couple of easy results about the growth of |U n |.
Lemma 2.2.1. Let U be a finite subset of a group G. Then |U^n| ⩽ |U^{n+1}| for every n ∈ N.
Proof. For any u ∈ U and v, w ∈ U n , we have that uv = uw if and only if v = w.
Lemma 2.2.2. Let U be a finite subset of a group G. Then |U^{n+m}| ⩽ |U^n||U^m| for every n, m ∈ N.
Proof. Any u ∈ U^{n+m} can be written as u = vw for some v ∈ U^n, w ∈ U^m. Remark 2.2.3. An easy consequence of this is that |U^{nm}| ⩽ |U^n|^m.
We can in fact show a slightly more general result than just for infinite central subgroups. We first consider any infinite normal subgroup without exponential growth, where this subgroup is a union of finite conjugacy classes. Proposition 2.2.4. Let G be a finitely generated group, and suppose that it has an infinite normal subgroup H, where H does not have exponential growth, and the conjugacy class {ghg^{−1} : g ∈ G} of every h ∈ H is finite. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U^n| < (α|U|)^{βn} for some n ∈ N.
Proof. Let G and H be as above, and note that G must be infinite as it has an infinite subgroup. Fix constants α, β > 0, and let U be some finite generating set of G. As H is infinite, we can pick a finite set V′ ⊂ H such that |V′| > (1/α)|U|^{1/β}. Let V be the union of the conjugacy classes
{gvg^{−1} : v ∈ V′, g ∈ G}.
As H is a normal subgroup of G, and conjugacy classes are finite, we have that V is a finite subset of H, and |V| ⩾ |V′| > (1/α)|U|^{1/β}. Let W = U ∪ V.
Then W is a finite generating set of G. Suppose that |W^n| ⩾ (α|W|)^{βn} for every n ∈ N. Note that |W| ⩾ |V|, so (α|W|)^{βn} ⩾ (α|V|)^{βn}.
We can also note that, as V is closed under conjugation by elements of G, we have that W^n = U^n ∪ U^{n−1}V ∪ · · · ∪ UV^{n−1} ∪ V^n. We can see this by considering w_1, . . . , w_n ∈ W, and supposing that w_k ∉ U. Then w_k ∈ V, and w_1 · · · w_n = w_1 · · · w_{k−1}w_{k+1} · · · w_n w′_k, where w′_k = (w_{k+1} · · · w_n)^{−1}w_k(w_{k+1} · · · w_n). As this is a conjugate of w_k ∈ V, we have that w′_k ∈ V. We can therefore say that |W^n| ⩽ (n + 1)|U^n||V^n| ⩽ (n + 1)|U|^n|V^n|.
By our assumption, we therefore have that |V^n| ⩾ (α|V|)^{βn}/((n + 1)|U|^n). Hence
ω(⟨V⟩, V) = lim_{n→∞} |B_V(n)|^{1/n} ⩾ lim_{n→∞} |V^n|^{1/n} ⩾ lim_{n→∞} (α|V|)^β/((n + 1)^{1/n}|U|) = (α|V|)^β/|U| > 1.
Therefore ⟨V⟩ has exponential growth. However H does not have exponential growth, and ⟨V⟩ ⩽ H, so this is a contradiction. Hence |W^n| < (α|W|)^{βn} for some n ∈ N.
Corollary 2.2.5. Let G be a finitely generated group, and suppose that the centre of G, denoted by Z(G), is infinite. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U n | < (α|U |) βn for some n ∈ N.
Proof. Any infinite finitely generated subgroup of Z(G) will satisfy the requirements of Proposition 2.2.4, noting that the conjugacy classes have size one, and Z(G) is an abelian group, so does not have exponential growth.
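As a concrete illustration of the difference discussed after Lemma 2.1.1, consider F_2 × Z. Its centre contains the infinite subgroup {e} × Z, so by Corollary 2.2.5 it does not have uniform product set growth. It does, however, have uniform exponential growth: any finite generating set S projects to a generating set S̄ of the quotient F_2, balls map onto balls under this quotient, and so ω(F_2 × Z, S) ⩾ ω(F_2, S̄) ⩾ ω(F_2) > 1.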
Corollary 2.2.6. Let G be a finitely generated subgroup of H × Z, where the projective map from G to H is not injective. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U n | < (α|U |) βn for some n ∈ N.
Proof. As the projective map is not injective, we can find g ∈ H, and n, m ∈ Z such that n ≠ m and (g, n), (g, m) ∈ G. Hence (1, n − m) ∈ G, and so {(1, k(n − m)) : k ∈ Z} is an infinite subgroup of Z(G). The conclusion then follows by applying Corollary 2.2.5.
Direct products with non-exponential factors
In Corollary 2.2.6 we showed that certain subgroups of direct products with Z do not have uniform product set growth. This is clearly not necessarily true for all subgroups of direct products.
Example 2.2.7. Consider F_2 × Z, where F_2 = ⟨a, b⟩. Let H = ⟨(a, 1), (b, 1)⟩. Then H is a subgroup of F_2 × Z, however for each g ∈ F_2 there exists a unique n ∈ Z such that (g, n) ∈ H.
In other words H ∼ = F 2 , and so H has uniform product set growth if and only if F 2 does.
We would like to understand when direct products and their subgroups have uniform product set growth, and when they do not. We begin by considering a direct product where one of the factors does not have uniform exponential growth. Proposition 2.2.8. Let H 1 × H 2 be a direct product of groups such that H 2 is infinite and does not have uniform exponential growth. Then for any constants α, β > 0 we can find a finite generating set U of H 1 × H 2 such that |U n | < (α|U |) βn for some n ∈ N.
Proof. Fix constants α, β > 0. Suppose that for every finite generating set U of H_1 × H_2, we have that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N. We will show that this gives a contradiction to H_2 not having uniform exponential growth.
We fix a finite V_1 ⊂ H_1 such that ⟨V_1⟩ = H_1, and then let V_2 ⊂ H_2 be finite with ⟨V_2⟩ = H_2. Let B = B_{V_1}(1) × B_{V_2}(1), then we have that ⟨B⟩ = H_1 × H_2. We note that B^n = B_{V_1}(n) × B_{V_2}(n) for every n ∈ N, so |B^n| = |B_{V_1}(n)||B_{V_2}(n)|.
As B generates H_1 × H_2, by our assumption |B^n| ⩾ (α|B|)^{βn} for every n ∈ N, and every possible choice of V_2. This means that for every n ∈ N we have that
|B^n| ⩾ (α|B|)^{βn} ⟹ |B_{V_1}(n)||B_{V_2}(n)| ⩾ (α|B_{V_1}(1)||B_{V_2}(1)|)^{βn} ⟹ lim_{n→∞}(|B_{V_1}(n)||B_{V_2}(n)|)^{1/n} ⩾ (α|B_{V_1}(1)||B_{V_2}(1)|)^β ⟹ ω(H_1, V_1)ω(H_2, V_2) ⩾ (α|B_{V_2}(1)|)^β ⟹ ω(H_2, V_2) ⩾ (α|B_{V_2}(1)|)^β / ω(H_1, V_1).
In particular, this means that if |B_{V_2}(1)| ⩾ (2ω(H_1, V_1))^{1/β}/α, then ω(H_2, V_2) ⩾ 2. We let k = ⌈(2ω(H_1, V_1))^{1/β}/α⌉, recalling that V_1 was fixed. For an arbitrary finite V_2 ⊂ H_2 such that ⟨V_2⟩ = H_2 we let V′_2 = B_{V_2}(k). Note that this is also a finite generating set of H_2, with the additional property that |B_{V′_2}(1)| = |B_{V_2}(k)| ⩾ k. We therefore have that ω(H_2, V′_2) ⩾ 2, and so
2 ⩽ ω(H_2, V′_2) = lim_{n→∞} |B_{V′_2}(n)|^{1/n} = lim_{n→∞} |B_{V_2}(nk)|^{1/n} = ω(H_2, V_2)^k, so ω(H_2, V_2) ⩾ 2^{1/k}.
Recalling that k was fixed, we therefore have that ω(H_2) ⩾ 2^{1/k} > 1, so H_2 has uniform exponential growth. This is a contradiction, so there must exist some finite generating set U of H_1 × H_2 that does not satisfy |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
The key to the above proof was being able to say something about the growth of every ball in H 2 . This will usually not be possible using a subgroup of such a direct product, as it may not be able to give us sufficient information about H 2 to determine whether it has uniform exponential growth or not. On the other hand, if we instead consider a direct product where one of the factors does not have exponential growth, a similar idea can still work.
For a direct product of groups H_1 × H_2, consider the projection map ϕ : H_1 × H_2 → H_1. For U ⊂ H_1 × H_2, we can also consider the restriction ϕ|_{⟨U⟩}. This map is a group homomorphism. In Example 2.2.7, this homomorphism is injective, allowing us to see ⟨U⟩ as a subgroup of H_1, and effectively allowing us to ignore the direct product.
More generally, the fact that it is a homomorphism means that such a projection will always be k-to-1, where k is the cardinality of ker(ϕ|_{⟨U⟩}). So long as this cardinality is finite, it turns out that uniform product set growth for ⟨U⟩ can be inherited from its projection to H_1.
Lemma 2.2.9. Let H_1 and H_2 be groups, and let U ⊂ H_1 × H_2 be finite. Suppose that the projection U_1 of U to H_1 satisfies |U_1^n| ⩾ (α|U_1|)^{βn} for α, β > 0, and that the projective map from U to H_1 is k-to-1 for some k ∈ N. Then |U^n| ⩾ ((α/k)|U|)^{βn}. Proof. As the projective map is k-to-1 we have that |U| = k|U_1|, so combined with the fact that |U^n| ⩾ |U_1^n|, we obtain our inequality.
Remark 2.2.10. Note that if one of the factors in a direct product is finite, then the projection to the other factors will always be finite-to-1, and so this case is covered by the above lemma. This is why the assumption of H 2 being infinite was necessary in Proposition 2.2.8.
On the other hand, if the cardinality of the projection is infinite, then this can form an obstruction to uniform product set growth. Proposition 2.2.11. Let H_1 and H_2 be groups, where H_2 does not have exponential growth. Let G ⩽ H_1 × H_2 be finitely generated, such that the projective map from G to H_1 is infinite-to-1. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U^n| < (α|U|)^{βn} for some n ∈ N.
Proof. Fix constants α, β > 0. Suppose that for every finite generating set U of G, we have that |U^n| ⩾ (α|U|)^{βn} for every n ∈ N. We will show that this gives a contradiction to H_2 not having exponential growth.
Let G_{H_1} and G_{H_2} be the projections of G to H_1 and H_2 respectively. Note that as the projection of G to G_{H_1} is infinite-to-1, for every g ∈ G_{H_1} there exist infinitely many h ∈ H_2 such that (g, h) ∈ G.
We fix a finite V ⊂ G such that ⟨V⟩ = G. Let V_{H_1} and V_{H_2} be its projections to H_1 and H_2 respectively. As V_{H_2} is finite we can write it as V_{H_2} = {v_1, . . . , v_m}. Pick g ∈ V_{H_1}, then for some k ∈ N choose {v′_1, . . . , v′_k} ⊂ H_2 such that {v_1, . . . , v_m} ∩ {v′_1, . . . , v′_k} = ∅ and (g, v′_i) ∈ G for every i ∈ {1, . . . , k}. Let V′ = V ∪ {(g, v′_i) : i ∈ {1, . . . , k}}. Then ⟨V′⟩ = ⟨V⟩ = G, where V′_{H_1} = V_{H_1} and |V′_{H_2}| ⩾ k.
Note that the choice of k here was arbitrary.
As V′ generates G, by our assumption |V′^n| ⩾ (α|V′|)^{βn} for every n ∈ N. Note that
|V′^n| ⩽ |(V′_{H_1})^n||(V′_{H_2})^n| = |(V_{H_1})^n||(V′_{H_2})^n| ⩽ |B_{V_{H_1}}(n)||B_{V′_{H_2}}(n)|.
This means that for every n ∈ N we have that
|V′^n| ⩾ (α|V′|)^{βn} ⟹ |B_{V_{H_1}}(n)||B_{V′_{H_2}}(n)| ⩾ (α|V′_{H_2}|)^{βn} ⟹ lim_{n→∞}(|B_{V_{H_1}}(n)||B_{V′_{H_2}}(n)|)^{1/n} ⩾ (α|V′_{H_2}|)^β ⟹ ω(G_{H_1}, V_{H_1})ω(G_{H_2}, V′_{H_2}) ⩾ (α|V′_{H_2}|)^β ⟹ ω(G_{H_2}, V′_{H_2}) ⩾ (α|V′_{H_2}|)^β / ω(G_{H_1}, V_{H_1}).
In particular, if we picked V′ such that |V′_{H_2}| ⩾ (2ω(G_{H_1}, V_{H_1}))^{1/β}/α and V′_{H_1} = V_{H_1}, then we would get that ω(G_{H_2}, V′_{H_2}) ⩾ 2. We already showed that such a choice of V′ is possible, so this tells us that G_{H_2} has exponential growth, which is a contradiction as it is a subgroup of H_2, which does not have exponential growth. We conclude that there must exist some finite generating set U of G that does not satisfy |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
One potential issue with using these two results to answer Question 1 is that the constants we get from Lemma 2.2.9 are dependent on the cardinality of the projection. Fortunately for us, in the case that H_2 has a finite index torsion-free subgroup, we find that the cardinality of this projection is bounded by the index of this subgroup. This can be useful when the overall group we are considering has a finite index torsion-free subgroup, as then every subgroup will have a finite index torsion-free subgroup, with index at most that of the original.
Lemma 2.2.12. Let H_1 and H_2 be groups, where H_2 has a torsion-free subgroup H′_2 of index d ∈ N. Let G ⊂ H_1 × H_2 be finitely generated, such that the projective map from G to H_1 is k-to-1. Then either k ⩽ d or k is infinite.
Proof. Let G H1 and G H2 be the projections of G to H 1 and H 2 respectively. Suppose that the projective map is k-to-1 for some k such that k > d, or is infinite. We will show that it must be infinite.
The kernel of the projective map has cardinality k, so by the pigeonhole principle there must exist a, b ∈ G_{H_2} such that a ≠ b, (e, a), (e, b) ∈ G, and a, b ∈ hH′_2 ⊂ H_2 for some h ∈ H_2. That is, they are in the same coset of H′_2 in H_2.
This means that there exist n_a, n_b ∈ H′_2 such that n_a ≠ n_b and a = hn_a, b = hn_b, so a^{−1}b = n_a^{−1}n_b ≠ e. Therefore (e, n_a^{−1}n_b) ∈ G, and has infinite order as H′_2 is torsion-free. In particular this is in the kernel of the projective map, so the kernel must have infinite cardinality.
In the case that H 2 itself is torsion-free, this statement can be simplified slightly.
Corollary 2.2.13. Let H 1 and H 2 be groups, where H 2 is torsion-free. Let G ⊂ H 1 × H 2 be finitely generated. Then the projective map from G to H 1 is either injective or infinite-to-1.
We are therefore also able to get a simpler version of Proposition 2.2.11 in this case.
Corollary 2.2.14. Let H_1, H_2 be groups where H_2 is torsion-free and does not have exponential growth. Let G ⩽ H_1 × H_2 be such that G is finitely generated, and the projection of G to H_1 is not injective. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U^n| < (α|U|)^{βn} for some n ∈ N.
This allows us to re-obtain Corollary 2.2.6 as a particular case of Corollary 2.2.14.
Direct products of groups with uniform product set growth
In the previous subsection we dealt with the case where we have a subgroup of a direct product where one of the factor groups does not have exponential growth. We will now consider a very different direct product case, which is a subgroup of a direct product where all of the factors have uniform product set growth. We will show that the growth of the factors passes to the growth of the subgroup, so long as the number of factors is bounded. The first thing to note here is that if we have a subset of a direct product, we can always relate the size of the original set to the size of one of its projections.
Lemma 2.2.15. Let G = G_1 × · · · × G_m be a group, and let U ⊂ G be finite, with U_i the projection of U to the factor G_i. Then max{|U_1|, . . . , |U_m|} ⩾ |U|^{1/m}.
Proof. We proceed by induction, noting that the base case where m = 1 is trivial. We now assume that it is true for m = k, and consider the case where m = k + 1.
Let u, v ∈ U, and let u_i, v_i ∈ U_i be their projections to the factor G_i. Let ∼ be an equivalence relation on U such that u ∼ v if and only if u_{k+1} = v_{k+1}. Let p be the number of such equivalence classes, and note that p = |U_{k+1}|. There must be an equivalence class of size at least |U|/p, call it V.
Let V′ be the projection of V to G_1 × · · · × G_k, and note that |V′| = |V| and V′_i = V_i for every i ∈ {1, . . . , k}. By assumption we therefore have that max{|V_1|, . . . , |V_k|} = max{|V′_1|, . . . , |V′_k|} ⩾ |V′|^{1/k} = |V|^{1/k}.
We now observe that
max{|U_1|, . . . , |U_{k+1}|} = max{max{|U_1|, . . . , |U_k|}, |U_{k+1}|} ⩾ max{max{|V_1|, . . . , |V_k|}, |U_{k+1}|} ⩾ max{|V|^{1/k}, |U_{k+1}|} ⩾ max{(|U|/p)^{1/k}, p} ⩾ |U|^{1/(k+1)},
which concludes our induction.
This almost immediately gives us that if all of the projections of a subset U of a direct product satisfy the uniform product set growth inequality, then so will U , with a change of constants that depends on the number of factors in the direct product.
Corollary 2.2.16. Let G = G_1 × · · · × G_m be a group, and let U ⊂ G be finite, with U_i the projection of U to the factor G_i. Suppose there exist α, β > 0 such that |U_i^n| ⩾ (α|U_i|)^{βn} for every n ∈ N and i ∈ {1, . . . , m}. Then |U^n| ⩾ (α^m|U|)^{βn/m} for every n ∈ N. Proof. By Lemma 2.2.15 we choose U_i such that |U_i| ⩾ |U|^{1/m}. We can then conclude that |U^n| ⩾ |U_i^n| ⩾ (α|U_i|)^{βn} ⩾ (α|U|^{1/m})^{βn} = (α^m|U|)^{βn/m} for every n ∈ N.
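For instance, with m = 2, α = 1/10 and β = 1/2, a finite U ⊂ G_1 × G_2 whose two projections both satisfy |U_i^n| ⩾ ((1/10)|U_i|)^{n/2} must itself satisfy |U^n| ⩾ ((1/100)|U|)^{n/4}: the constants degrade with the number of factors, but the form of the bound survives.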
The above result tells us that if we know about the growth of the projections of a specific subset, then we can say something about the growth of the whole subset. This naturally means that if we can say something general about the growth of subsets of the factor groups, then we can say something general about the growth of subsets of the direct product.
Definition 2.2.17. We say a group is contained non-trivially in a direct product when it is a subgroup of a direct product, and none of its projections to any of the factor groups are trivial.
Corollary 2.2.18. Let G = G_1 × · · · × G_m be a group. Suppose there exist α, β > 0 such that for every i ∈ {1, . . . , m}, and every finite V_i ⊂ G_i such that ⟨V_i⟩ has (uniform) exponential growth, we have that |V_i^n| ⩾ (α|V_i|)^{βn} for every n ∈ N. Then for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ does not have (uniform) exponential growth.
2. ⟨U⟩ is contained non-trivially in a direct product of the form H_1 × H_2, where H_2 does not have (uniform) exponential growth.
3. |U^n| ⩾ (α^m|U|)^{βn/m} for every n ∈ N.
Proof. Suppose U ⊂ G is such that ⟨U⟩ has (uniform) exponential growth, and is not contained non-trivially in a direct product H_1 × H_2 where H_2 does not have (uniform) exponential growth. Then every projection U_i to G_i is either trivial, or ⟨U_i⟩ has (uniform) exponential growth. Suppose that k of the projections are non-trivial, and note that k ⩾ 1. Then we can regard U as a subset of G_{i_1} × · · · × G_{i_k}, where each projection is non-trivial, and hence each projection generates a subgroup with (uniform) exponential growth. Therefore |U^n| ⩾ (α^k|U|)^{βn/k} for every n ∈ N by Corollary 2.2.16, so as k ⩽ m we get that |U^n| ⩾ (α^m|U|)^{βn/m} for every n ∈ N.
Remark 2.2.19. For simplicity's sake we did not include the case where factors may be finite if they are of bounded size, however we note here that if any G_i is finite such that |G_i| ⩽ 1/α, then we automatically have that any V_i ⊂ G_i satisfies |V_i^n| ⩾ (α|V_i|)^{βn} for every n ∈ N.
Theorem 2.3.3 in the following section will tell us that the assumptions of Corollary 2.2.18 are satisfied if the G i are hyperbolic groups.
Corollary 2.2.20. Let G = G 1 × · · · × G m be a group such that each G i is hyperbolic. There exists a constant α > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is virtually cyclic. 2. ⟨U⟩ is contained non-trivially in a direct product of the form H_1 × H_2, where H_2 is virtually cyclic. 3. |U^n| ⩾ (α|U|)^{n/(2m)} for every n ∈ N.
Proof. The statement of Theorem 2.3.3 says that for each hyperbolic group G_i there exists α_i > 0 such that for every finite V_i ⊂ G_i that is not contained in a virtually cyclic subgroup, we have that |V_i^n| ⩾ (α_i|V_i|)^{n/2} for every n ∈ N. Let α = min{α_1, . . . , α_m}^m, then the conclusion follows from Corollary 2.2.18.
This also suggests a possibly more general technique for finding uniform product set growth in groups that contain direct products with a bounded number of factors. If we can prove something about subsets that are not contained non-trivially in direct products, then we can carry this over to subsets that are, so long as all of the projections have the correct properties.
It is important, however, to remark here that Corollary 2.2.18 and Corollary 2.2.20 do not give us a dichotomy of subgroups, as demonstrated by Example 2.2.7. We might be tempted to try to use the work in the previous subsection to get such a dichotomy, however in the general case this is not possible, as using finite-to-1 projections to get down to the factors with uniform product set growth will only work if we have some control over the size of the projections.
On the other hand, we know that we do have some control over these projections in the case of virtually torsion-free groups by Lemma 2.2.12, and so in this case we can get a dichotomy.
Proposition 2.2.21. Let G = G_1 × · · · × G_m be a direct product of virtually torsion-free groups. Suppose there exist α, β > 0 such that for every i ∈ {1, . . . , m}, and every finite V_i ⊂ G_i such that ⟨V_i⟩ has exponential growth, we have that |V_i^n| ⩾ (α|V_i|)^{βn} for every n ∈ N. Then there exist α′, β′ > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is contained in a direct product of the form H_1 × H_2, where H_2 does not have exponential growth and the projection of ⟨U⟩ to H_1 is infinite-to-1.
2. |U^n| ⩾ (α′|U|)^{β′n} for every n ∈ N.
Proof. For every G_i, there exists a torsion-free subgroup H_i of finite index. Let that index be d_i ∈ N. Then H_1 × · · · × H_m is a torsion-free subgroup of G with index d = d_1 · · · d_m.
Hence every subgroup of G has a torsion-free subgroup with index at most d. Note that d ⩾ d_i for every i.
Let U ⊂ G be finite. Assume that if ⟨U⟩ is contained in a direct product of the form H_1 × H_2, where H_2 does not have exponential growth, then the projection of ⟨U⟩ to H_1 is finite-to-1. By Lemma 2.2.12, this projection would then be at most d-to-1. We note that this allows us to rule out the case that ⟨U⟩ is infinite and does not have exponential growth, as otherwise we could see ⟨U⟩ as a subgroup of {e} × ⟨U⟩, where the projection to {e} is infinite-to-1.
Suppose that ⟨U⟩ is finite. As it has a torsion-free subgroup of index at most d, this means that the trivial subgroup has index at most d in ⟨U⟩. Hence |U| ⩽ |⟨U⟩| ⩽ d, so as long as we ensure that α′ ⩽ 1/d we will have that U satisfies |U^n| ⩾ (α′|U|)^{β′n} for every n ∈ N.
We can therefore assume that ⟨U⟩ is infinite. Let U_i be the projection of U to G_i, so ⟨U⟩ ⩽ ⟨U_1⟩ × · · · × ⟨U_m⟩. We rewrite the U_i as V_1, . . . , V_k, W_1, . . . , W_{m−k}, where the ⟨V_i⟩ have exponential growth, and the ⟨W_i⟩ do not. Note that {V_1, . . . , V_k} is non-empty, as otherwise ⟨U⟩ would not have exponential growth. On the other hand, {W_1, . . . , W_{m−k}} may be empty.
Let U′ be the projection of U to ⟨V_1⟩ × · · · × ⟨V_k⟩. The projection of U′ to each of these factors is simply V_i, and as each ⟨V_i⟩ has exponential growth, the properties of the G_i tell us that |V_i^n| ⩾ (α|V_i|)^{βn} for every n ∈ N. Noting that k ⩽ m, we have that |(U′)^n| ⩾ (α^m|U′|)^{βn/m} for every n ∈ N by Corollary 2.2.16.
We know that ⟨W_1⟩ × · · · × ⟨W_{m−k}⟩ has a torsion-free subgroup of index at most d, so the projection of U to ⟨V_1⟩ × · · · × ⟨V_k⟩ is at most d-to-1. It therefore follows that |U^n| ⩾ ((α^m/d)|U|)^{βn/m} for every n ∈ N by Lemma 2.2.9.
Combining this with the finite case, we see that if we let α′ = min{α^m/d, 1/d} and β′ = β/m, then any finite U ⊂ G that satisfies our initial assumption will also satisfy |U^n| ⩾ (α′|U|)^{β′n} for every n ∈ N.
Remark 2.2.22. This is a dichotomy of subgroups by Proposition 2.2.11.
The method used in the above proof is very similar to the methods we will use in Section 4, where we will be considering right-angled Artin groups and mapping class groups, which are virtually torsion-free and have bounds on the number of factors in any subgroup that is a direct product.
We note here that, as with Corollary 2.2.18, the above result has a corollary for products of hyperbolic groups. We again use the fact that Theorem 2.3.3 tells us that hyperbolic groups satisfy the hypotheses of Proposition 2.2.21, although this time we need to add the assumption that these hyperbolic groups are virtually torsion-free.
Corollary 2.2.23. Let G = G 1 × · · · × G m
be a direct product of virtually torsion-free hyperbolic groups. There exist α, β > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is contained in a direct product of the form H_1 × H_2, where H_2 is virtually cyclic and the projection of ⟨U⟩ to H_1 is infinite-to-1.
2. |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
Remark 2.2.24. Whether all hyperbolic groups are virtually torsion-free or not is an open question, and is equivalent to the problem of whether all hyperbolic groups are residually finite [KW00].
Passing to finite index supergroups
We now move away from direct products, and consider another way that uniform product set growth can be inherited. It is well known that uniform exponential growth passes to finite index supergroups [SW92,Proposition 3.3]. This can be shown by checking that for any finite generating set of our supergroup, there exists a finite generating set of our finite index subgroup where each generator has bounded length. We can adapt this method to show that uniform product set growth also passes to finite index supergroups.
Lemma 2.2.25. Let G be a finitely generated group, and let U be a finite generating set of G.
Let H ⩽ G such that [G : H] = d. There exists a set of representatives of the right cosets of H such that each representative has length at most d! in the word metric induced by U.
Proof. We can consider the action of G on the right cosets, given by g · (Hg′) = Hg′g. This is a transitive action, so it induces a homomorphism ϕ : G → S_d, where S_d is the symmetric group of degree d, whose image acts transitively on the d cosets. As U generates G, we have that ϕ(U) generates ϕ(G), and therefore
ϕ(B_U(|S_d|)) = B_{ϕ(U)}(|S_d|) = ϕ(G).
Hence for any coset Hg there exists g′ ∈ B_U(|S_d|) such that Hg = g′ · (H) = Hg′. We can therefore find representatives of every coset in
B_U(|S_d|) = B_U(d!).
Proposition 2.2.26. Let G be a finitely generated group, and let U be a finite generating set
of G. Let H ⩽ G such that [G : H] = d. There exists a finite generating set V of H such that |V| ⩾ (1/d)|U| and V ⊂ B_U(2d! + 1). Proof. Let {g_1, . . . , g_d} ⊂ G be a set of representatives of the right cosets of H such that each representative has length at most d! in the word metric induced by U, which we know exists by Lemma 2.2.25. Proposition 2.1 in [Man12] tells us that the set of elements g_iug_j^{−1} such that u ∈ U or u^{−1} ∈ U, and g_j is the right coset representative of g_iu, is a finite generating set for H. We call this set V, and note that for u, v ∈ U such that u ≠ v, we have that
g_1ug_i^{−1} = g_1vg_j^{−1} implies that i ≠ j. Therefore |V| ⩾ (1/d)|U|.
We also note that each g_iug_j^{−1} has length at most 2d! + 1 in the word metric induced by U, so V ⊂ B_U(2d! + 1).
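As a toy illustration of this construction, take G = ⟨t⟩ ≅ Z, H = ⟨t^2⟩ of index d = 2, and U = {t, t^{−1}}, with coset representatives g_1 = e and g_2 = t, both of length at most d! = 2. The resulting set of elements g_iug_j^{−1} is {e, t^2, t^{−2}}, which indeed generates H, has cardinality at least (1/d)|U| = 1, and is contained in B_U(2d! + 1) = B_U(5).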
Proposition 2.2.27. Let G be a finitely generated group, and let U be a finite symmetric generating set of G. Let H ⩽ G such that [G : H] = d. Suppose that there exist α, β > 0 such that for any finite symmetric generating set V of H, we have that |V^n| ⩾ (α|V|)^{βn} for every n ∈ N.
Let m = 2d! + 1. Then |U^n| ⩾ ((α/(2^{m/β}d))|U|)^{βn/m} for every n ∈ N.
Proof. We know from Proposition 2.2.26 that there exists a finite symmetric generating set V of H such that |V| ⩾ (1/d)|U| and V ⊂ B_U(m). Let n ∈ N. As B_U(n) = {e} ∪ U ∪ · · · ∪ U^n, we have that |B_U(n)| ⩽ (n + 1)|U^n|.
For every n ∈ N we have that
((n + 1)|U^n|)^m ⩾ |B_U(n)|^m ⩾ |B_U(n)^m| = |B_U(m)^n| ⩾ |V^n| ⩾ (α|V|)^{βn} ⩾ ((α/d)|U|)^{βn}. Therefore |U^n| ⩾ (1/(n + 1))((α/d)|U|)^{βn/m} = (α/(d(n + 1)^{m/(βn)}))^{βn/m}|U|^{βn/m}.
It now only remains to note that (n + 1)^{1/n} ⩽ 2 for every n ∈ N, and the conclusion follows.
Known results for product set growth
Here we will give some of the history of product set growth in infinite groups, including those groups for which the answer to Question 1 is already known.
The origins of Question 1 come from the study of growth in finite and abelian groups, where there is often interest in estimating the size of |U^2| and |U^3|. One example of this is given in [Hel08], where it is shown that there exist α, ε > 0 such that for certain U ⊂ SL_2(Z/pZ) with p prime we have that |U^3| ⩾ α|U|^{1+ε}. Some of the techniques used in this paper were adapted to infinite groups by Chang, who showed the same result for finite U ⊂ SL_2(C) such that ⟨U⟩ is not finite or metabelian [Cha08, But13]. In particular, as F_2 ⩽ SL_2(C), the same result applies to U ⊂ F_2 such that ⟨U⟩ is not cyclic.
Chang believed that for free groups there should be a direct combinatorial proof of this fact, with a better estimate for ε. The first proof in this direction was given by Razborov, in the wider case of virtually free groups. He was able to show that for a virtually free group G, any finite U ⊂ G satisfies |U^3| ⩾ |U|^2/log(|U|)^{O(1)}, so long as ⟨U⟩ is not virtually cyclic [Raz14].
Both of these results were then further improved on by Safin in the case of free groups. In particular, Safin was able to completely answer Question 1 in this case.
Theorem 2.3.1. [Saf11]
Let G be a free group. Then for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is isomorphic to Z. 2. |U^n| ⩾ ((1/372)|U|)^{(n+1)/2} for every n ∈ N.
Remark 2.3.2. In Safin's paper this result is stated slightly differently. The first difference is that in [Saf11], all of the cyclic subgroups are ruled out. However as any free group is torsion-free, these cyclic subgroups are only the trivial group and the subgroups isomorphic to Z, and the trivial case is taken care of by the fact that any set containing a single element trivially satisfies the second inequality. The other difference is that the inequality in [Saf11] is expressed in the form |U^n| ⩾ c_n|U|^{(n+1)/2} for some constants c_n > 0, however a simple inspection of the argument tells us that we can rewrite it in the above form.
This result gives us the dichotomy of subgroups that we are looking for, as Lemma 2.1.1 proved that Z cannot have uniform product set growth. Moreover, Safin showed that (n+1)/2 is the best possible exponent for such a lower bound in free groups, and so this will also be the best possible exponent for any wider class of groups. Another notable feature of this paper is the method used to prove this result, as it turned out to be highly generalisable to other types of groups.
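To see why the exponent (n+1)/2 cannot be improved, one can consider, for instance, U = {a^j : |j| ⩽ N} ∪ {b} in F_2 = ⟨a, b⟩. An element of U^n with exactly j occurrences of b has its a-powers merged into at most min(j + 1, n − j) ⩽ (n+1)/2 blocks, each equal to some a^m with |m| ⩽ nN, so a crude count gives |U^n| ⩽ C_n|U|^{(n+1)/2} for a constant C_n depending only on n. Since |U| can be made arbitrarily large while ⟨U⟩ = F_2, no exponent larger than (n+1)/2 is possible in a lower bound of this form.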
The first generalisation of Safin's method was by Button, who was able to give a tripling result for free products. In particular, he showed that there exists α > 0 such that for any finite U ⊂ G * H we have that |U^3| ⩾ α|U|^2, so long as ⟨U⟩ is not infinite cyclic or dihedral, and ⟨U⟩ is not conjugate into one of the factors [But13].
Button asked if this type of tripling applies to hyperbolic groups. This question was answered recently by Delzant and Steenbock, who were able to greatly generalise the methods of Safin and Button to extend Safin's result to hyperbolic groups.
Theorem 2.3.3. [DS20]
Let G be a hyperbolic group. Then there exists a constant α > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is virtually Z. 2. |U^n| ⩾ (α|U|)^{(n+1)/2} for every n ∈ N.
Remark 2.3.4. The original version of this theorem ruled out all virtually cyclic subgroups, however it is known that the size of finite subgroups in hyperbolic groups is bounded, see [BG95] for example. This means that the size of finite subsets generating finite groups is bounded, so as long as this is at most 1/α the above statement holds. In fact, Delzant and Steenbock generalised these methods even further, and were able to apply them to groups acting acylindrically on hyperbolic spaces. The subgroup structure of an acylindrically hyperbolic group may be much more complicated than the subgroup structure of a hyperbolic group, so it is not generally sufficient to simply rule out the virtually Z subgroups when looking for uniform product set growth. In particular, it is hard to say anything about subsets whose image under the orbit map has small diameter.
Delzant and Steenbock ruled out such subsets by using a certain notion of displacement. If (X, d) is a hyperbolic space that a group G acts on by isometries, and U ⊂ G is finite,
pick x_0 ∈ X such that Σ_{u∈U} d(x_0, ux_0) is (almost) minimised, then the displacement of U is λ_0(U) = max_{u∈U} d(x_0, ux_0)
. For a more detailed definition, see Definition 3.2.1. The statement for acylindrically hyperbolic groups is then as follows.
Theorem 2.3.5. [DS20]
Let G be a group that acts acylindrically on a hyperbolic space. Then there exist constants K > 0 and α > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is virtually Z. 2. λ_0(U) < K log_2(2|U|). 3. |U^n| ⩾ ((α/log_2^6(2|U|))|U|)^{(n+1)/2} for every n ∈ N.
Remark 2.3.6. The original version of this theorem also ruled out all virtually cyclic subgroups, however Proposition 4.4 in [DS20] tells us that any set that generates a finite subgroup must satisfy at least one of |U| ⩽ 1/α or λ_0(U) < K log_2(2|U|). In the case that the space being acted on is a simplicial tree, Delzant and Steenbock obtained another version of this theorem, where the logarithm terms do not appear. In Section 3.2 we will show that this also holds when the space being acted on is a quasi-tree. We will also show that the displacement condition in these theorems can be overcome by considering the sets U such that U^k contains a loxodromic element, for some bounded k ∈ N.
We end this section by referring the reader to a couple of recent papers on this topic. The first is by Coulon and Steenbock [CS21], which contains results on product set growth in Burnside groups. They were also able to obtain the following result for groups with acylindrical actions on hyperbolic spaces, which can be compared with Theorem 2.3.5. Here instead of displacement they use the l^∞-energy of a set U of isometries of a space X, which is given by λ(U) = inf_{x∈X} sup_{u∈U} d(x, ux).
Theorem 2.3.7. [CS21]
Let G be a group that acts acylindrically on a hyperbolic space. Then there exists a constant C > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. λ(U) ⩽ C. 2. |U^n| ⩾ ((1/(Cλ(U)))|U|)^{(n+1)/2} for every n ∈ N.
The other recent paper is by Cui, Jiang, and Yang, who proved uniform uniform exponential growth for certain subgroups of relatively hyperbolic groups. The lower bound they got on the exponential growth rate depended on the size of the generating set, which came from a product set growth type result. We leave the definition of an elementary subgroup in this scenario to their paper. Theorem 2.3.8. [CJY21] Let G be a non-elementary relatively hyperbolic group. Then there exist constants α, β > 0 such that for every finite symmetric U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is elementary.
2. |U^n| ⩾ (α|U|)^{βn} for every n ∈ N.
Remark 2.3.9. This is not explicitly stated in their main theorem, however it can be seen in the final stages of their proof, where they show that for some k, N_0 > 0 there exists T ⊂ B_U(nk) such that |T| ⩾ (1/(2N_0))|U|, and ⟨T⟩ is a free group of rank |T|. An inequality of the form |B_U(n)| ⩾ (α′|U|)^{βn} can be used to get an inequality for symmetric sets by using the fact that (n + 1)|U^n| ⩾ |B_U(n)|, and that (n + 1)^{1/n} ⩽ 2 for every n ∈ N, as in the proof of Proposition 2.2.27.
Growth in acylindrically hyperbolic groups
The focus of this section will be to prove some results about product set growth for certain subsets of acylindrically hyperbolic groups. This will provide us with some of the tools that we will need in Section 4 when trying to answer Question 1 for subgroups of mapping class groups.
In Section 3.1 we will give some necessary background information regarding quasi-trees, hyperbolic spaces, and acylindrically hyperbolic groups. In Section 3.2, we will use the fact that tree approximation is uniform in quasi-trees [Ker20] to get a generalisation of Delzant and Steenbock's result for groups acting acylindrically on trees, to groups acting acylindrically on quasi-trees. As noted below in Theorem 3.1.12, it was shown by Balasubramanya that every acylindrically hyperbolic group has an acylindrical action on a quasi-tree. Hence this generalisation to quasi-trees can be seen as being parallel to Delzant and Steenbock's statement for groups acting acylindrically on hyperbolic spaces, as stated in Theorem 2.3.5.
We will also use Section 3.2 to give a more concrete method by which we can use this generalisation to answer Question 1 for certain subgroups of acylindrically hyperbolic groups, namely by quickly finding loxodromic elements in the action on a quasi-tree. In Section 3.3 we will show that this method can also be used when the acylindrical action is on a hyperbolic space, which is a more practical setting for many groups.
Acylindrically hyperbolic groups
Definition 3.1.1. Let (X, d) be a metric space, and let x, y, x_0 ∈ X. The Gromov product of x and y with respect to x_0 is
(x, y)_{x_0} = (1/2)(d(x_0, x) + d(x_0, y) − d(x, y)).
The Gromov product can be used to define a hyperbolic space, however as we will be working exclusively in geodesic spaces we will give the equivalent definition that uses slim triangles. For x, y in a geodesic metric space, we will use the notation [x, y] to represent any geodesic from x to y. Definition 3.1.2. Let (X, d) be a geodesic metric space. We say that X is δ-hyperbolic, for some δ 0, if for every x, y, z ∈ X any choice of geodesic triangle [
x, y] ∪ [y, z] ∪ [z, x] is δ-slim, meaning that if p ∈ [x, y] then there exists q ∈ [y, z] ∪ [z, x] such that d(p, q) ⩽ δ.
Convention. Every hyperbolic space we work with in this paper will be assumed to be geodesic.
We now wish to define a quasi-tree. To do this we first need to define what we mean by a tree.
Definition 3.1.3. A metric space (X, d) is an R-tree if for every x, y ∈ X there exists a unique (up to reparametrisation) topological embedding α : [0, r] → X with α(0) = x and α(r) = y, and this embedding is a geodesic, so d(x, y) = r.
A quasi-tree is most commonly defined as being a geodesic metric space that is quasi-isometric to a simplicial tree, which is equivalent to being quasi-isometric to an R-tree [Ker20]. Here, however, we will use Manning's characterisation of these spaces. A key property of hyperbolic spaces is the following result of Gromov, which essentially states that every finite subset of a hyperbolic space can be approximated by an R-tree, with error dependent on the number of points being approximated.
For all
x, y ∈ Y, we have that d(x, y) − 2δ(log_2(n) + 1) ⩽ d′(f(x), f(y)) ⩽ d(x, y).
It turns out that in quasi-trees this approximation is uniform, in the sense that the error is no longer dependent on the subset being approximated. 2. For all x, y ∈ Y, we have that d(x, y) − 2(∆ + 2δ) ⩽ d*(f(x), f(y)) ⩽ d(x, y).
Acylindrical actions and loxodromic elements
When we look at groups acting on quasi-trees and hyperbolic spaces, we will be specifically interested in the cases where that action is acylindrical. The following terminology comes from [DGO17]. Definition 3.1.8. We say that an isometric action of a group G on a δ-hyperbolic geodesic metric space is acylindrical, or (κ 0 , N 0 )-acylindrical for κ 0 δ and N 0 1, if for every x, y ∈ X with d(x, y) κ 0 we have that |{g ∈ G : d(x, gx) 100δ and d(y, gy) 100δ}| N 0 .
Remark 3.1.9. By a result in [DGO17], if δ > 0 then this is equivalent to the more common definition of acylindricity which can be found in papers such as [Osi16].
It turns out that every group admits an acylindrical action on a hyperbolic space, simply by considering the trivial action on a point. To have a meaningful class of groups to consider we therefore need another condition on the action: we ask that the action is non-elementary, meaning that it has unbounded orbits and the group is not virtually cyclic, and we say that a group is acylindrically hyperbolic if it admits a non-elementary acylindrical action on a hyperbolic space [Osi16]. This is a wide class of groups, which includes hyperbolic groups (that are not virtually cyclic), most mapping class groups, most right-angled Artin groups, and Out(F_n). See the appendix of [Osi16] for a more complete list.
In Section 3.2 we will be interested in the acylindrically hyperbolic groups that have an acylindrical action on a quasi-tree. By a result of Balasubramanya (Theorem 3.1.12), this is in fact the entire class of acylindrically hyperbolic groups: every acylindrically hyperbolic group admits a non-elementary acylindrical action on a quasi-tree. This theorem means that any result for groups acting acylindrically on quasi-trees can automatically be applied to every acylindrically hyperbolic group. This helps motivate Section 3.2.
In Sections 3.2 and 3.3 we will show that we can say something about the growth of a subset of an acylindrically hyperbolic group if that subset contains a certain type of element. By type of element we mean that if a group has an acylindrical action on a hyperbolic space, we can classify the group elements by how they behave in this action.
Definition 3.1.13. Let G be a group acting by isometries on a metric space X. For g ∈ G, the stable translation length of g is
τ(g) = lim_{n→∞} d(x, g^n x)/n,
where x ∈ X.
Remark 3.1.14. It is clear from the triangle inequality that τ (g) is not dependent on the choice of x ∈ X.
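For example, if g acts on the real line by the translation x ↦ x + c with c > 0, then d(x, g^nx) = nc for every x, so τ(g) = c, whereas any isometry with a bounded orbit has τ(g) = 0.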
Definition 3.1.15. Let G be a group acting by isometries on a metric space X. We say that g ∈ G is an elliptic element if some (equivalently any) orbit of g in X has bounded diameter.
Definition 3.1.16. Let G be a group acting by isometries on a metric space X. We say that g ∈ G is a loxodromic element if τ (g) > 0.
These two types of element are naturally mutually exclusive, as for an elliptic element g we have that τ (g) = 0. A result of Bowditch tells us that these two types classify all elements in an acylindrical action on a hyperbolic space.
Proposition 3.1.17. [Bow08] Let G be a group acting acylindrically on a hyperbolic space X. Every element g ∈ G is either elliptic or loxodromic.
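In particular this rules out parabolic behaviour: the isometry z ↦ z + 1 of the hyperbolic plane, for example, is neither elliptic (its orbits are unbounded) nor loxodromic (d(i, i + n) grows like 2 log n, so τ = 0), and so by Proposition 3.1.17 no element of a group acting acylindrically on a hyperbolic space can behave in this way.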
Another important concept that will be used later in Section 3 is that of independent loxodromic elements. To define this, it is easier to use an equivalent definition of loxodromic elements in the case of an action on a hyperbolic space.
Definition 3.1.18. Let G be a group acting by isometries on a metric space X. We say that g ∈ G is a loxodromic element if it fixes exactly two points on the Gromov boundary ∂X. We call these points Fix(g).
For the definition of, and facts about, the Gromov boundary of a hyperbolic space we refer to [KB02].
We can now define what it means for two loxodromic elements to be independent.
Definition 3.1.19. Two loxodromic elements g and h are independent if Fix(g)∩Fix(h)= ∅.
We finish this subsection with the notation for the maximal virtually cyclic subgroup containing a given loxodromic element.
Definition 3.1.20. Let G be a group acting acylindrically on a hyperbolic space X, and let g be a loxodromic element in this action. Denote by E(g) the maximal virtually cyclic subgroup containing g.
Remark 3.1.21. This maximal virtually cyclic subgroup exists by Lemma 6.5 in [DGO17], and is exactly the stabiliser of Fix(g) in G.
Growth from actions on quasi-trees
In this section we will be interested in a theorem of Delzant and Steenbock, previously stated as Theorem 2.3.5, where they give a lower bound on the growth of U n for certain subsets of acylindrically hyperbolic groups [DS20]. As a direct result of the fact that tree approximation is uniform in quasi-trees [Ker20] we will be able to improve this theorem in the case of groups acting acylindrically on quasi-trees. By Theorem 3.1.12 this improvement will apply to every acylindrically hyperbolic group, as every such group admits an acylindrical action on a quasi-tree.
Before we can state the result that we wish to improve, we must first give a few definitions. Definition 3.2.1. Let U be a finite set of isometries of a δ-hyperbolic geodesic metric space X. The energy of U is E(U) = inf_{x∈X} (1/|U|)Σ_{u∈U} d(x, ux). We then fix a base point x_0 ∈ X such that it minimises E(U) up to δ, so
(1/|U|)Σ_{u∈U} d(x_0, ux_0) ⩽ E(U) + δ.
The displacement of U is then defined to be
λ_0(U) = max_{u∈U} d(x_0, ux_0).
Remark 3.2.2. The more usual notion of the displacement of U is to instead take the infimum of max u∈U d(x, ux) over any x ∈ X, which is clearly a lower bound for λ 0 (U ). For a comparison of such quantities see [BF18].
The result that Delzant and Steenbock obtained for groups acting acylindrically on hyperbolic spaces is the following.
Theorem 3.2.3. [DS20] Let G be a group acting (κ_0, N_0)-acylindrically on a δ-hyperbolic geodesic metric space. Assume κ_0 ⩾ δ and N_0 ⩾ 1. There exist constants α = α(δ, κ_0, N_0) > 0 and K = K(κ_0) > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. ⟨U⟩ is virtually Z.
2. λ_0(U) < K log_2(2|U|).
3. |U^n| ⩾ ((α/log_2^6(2|U|))|U|)^{(n+1)/2} for every n ∈ N.
Remark 3.2.4. We note here that the action in Theorem 3.2.3 is not assumed to be non-elementary, and as stated before every group G acts acylindrically on some hyperbolic space. However, if the action in question is elementary, then any finite subset U of G must satisfy that ⟨U⟩ is virtually Z, or λ_0(U) < K log_2(2|U|), or be such that |U| ⩽ 1/α, by Proposition 4.4 in [DS20].
We would like to re-state this theorem for groups acting acylindrically on quasi-trees, but with the logarithm terms removed. To do this we will first check that the step in Delzant and Steenbock's proof in which the logarithm terms are introduced can be improved in the case of quasi-trees, and then combine this with the rest of their results. We will then consider the theorem we obtain in the context of Question 1, which means that we would like to understand which subsets have large enough displacement that our theorem will allow us to say something about their product set growth.
The reduction lemma for quasi-trees
The logarithm terms in Theorem 3.2.3 are introduced in a key step that Delzant and Steenbock call the reduction lemma, which we state below. The notation (Ux_0, Vx_0)_{x_0} ⩽ R is simply used to mean that (ux_0, vx_0)_{x_0} ⩽ R for every u ∈ U, v ∈ V. The constant κ_0 is chosen such that κ_0 ⩾ δ, and will generally be taken from the acylindrical action on the space, as in Theorem 3.2.3.
Lemma 3.2.5 (Reduction lemma). [DS20]
Let U be a finite set of isometries of a δ-hyperbolic geodesic metric space X, with δ > 0. If at most 1/4 of the isometries u ∈ U have displacement d(x_0, ux_0) ⩽ 10^{10}κ_0 log_2(2|U|), then there exist U_0, U_1 ⊂ U with cardinalities at least (1/100)|U| such that (U_0^{−1}x_0, U_1x_0)_{x_0} ⩽ 1000 log_2(2|U|)δ and (U_0x_0, U_1^{−1}x_0)_{x_0} ⩽ 1000 log_2(2|U|)δ. In addition, for all u_0 ∈ U_0 and u_1 ∈ U_1 we have that d(x_0, u_0x_0) ⩾ 10^{10}κ_0 log_2(2|U|) and d(x_0, u_1x_0) ⩾ 10^{10}κ_0 log_2(2|U|). Figure 1 gives a representation of how such sets U_0 and U_1 may look when applied to x_0. Every element displaces x_0 by a large distance, while U_0^{−1} and U_1 (respectively U_0 and U_1^{−1}) translate x_0 in opposite directions in the space. Once such sets are found, Delzant and Steenbock estimate |U^{2n+1}| by instead considering |(U_0U_1)^nU_0|, and showing that a lower bound on this can be given in terms of |U_0|^n. As |U_0| ⩾ (1/100)|U|, a lower bound can then be given in terms of |U|^n.
The logarithm terms in the reduction lemma are introduced because the proof makes use of Gromov's Tree Approximation Lemma. In their paper Delzant and Steenbock remark that if the space being acted on is a quasi-tree, then the logarithm terms in Theorem 3.2.3 should disappear (see [DS20, Remark 1.18]). We are able to check that this is indeed the case. We include the proofs here for completeness, however the method and ideas are the same as those used in [DS20], with the substitution of Proposition 3.1.7 in place of Gromov's Tree Approximation Lemma.
In everything that follows, we will assume that x_0 has been chosen to minimise E(U) for our given set of isometries U, as in Definition 3.2.1. For the purposes of the reduction lemma the constant κ_0 only needs to satisfy κ_0 ⩾ ∆, where ∆ is the hyperbolicity or bottleneck constant for the space, however when this lemma is applied to an acylindrical action it will be taken to be the expected acylindricity constant from Definition 3.1.8.
Remark 3.2.6. To reduce the number of constants, we recall from Remark 3.1.5 that a quasi-tree with bottleneck constant ∆ 0 is ∆-hyperbolic, so we will use this ∆ for both.
We will first consider subsets of U of the form
U y,z = {u ∈ U : d(x 0 , ux 0 ) 10 10 κ 0 , (x 0 , ux 0 ) y ∆, and (x 0 , u −1 x 0 ) z ∆},
where y, z ∈ S(x 0 , 1000∆), the sphere of radius 1000∆ centred at x 0 in X. Note that some of these sets may be empty.
The following result is key to the proof of the reduction lemma. The idea of the minimal energy lemma is that if too many of the isometries in U and U −1 send x 0 in the same direction, then this would contradict our choice of x 0 as minimising E(U ) = 1 |U | u∈U d(x, ux). In particular, y 0 would be a better choice. The statement of the reduction lemma for quasi-trees is as follows.
Lemma 3.2.8 (Reduction lemma). Let U be a finite set of isometries of a quasi-tree X with bottleneck constant ∆ > 0. If at most 1 4 of the isometries u ∈ U have displacement d(x 0 , ux 0 ) 10 10 κ 0 , then there exist U 0 , U 1 ⊂ U with cardinalities at least 1 100 |U | such that
(U −1 0 x 0 , U 1 x 0 ) x0 1000∆ and (U 0 x 0 , U −1 1 x 0 ) x0 1000∆.
In addition, for all u 0 ∈ U 0 and u 1 ∈ U 1 we have that d(x 0 , u 0 x 0 ) 10 10 κ 0 and d(x 0 , u 1 x 0 ) 10 10 κ 0 .
To prove the reduction lemma we need two preliminary lemmas, for which we will assume that X and U are as above. As with everything in this subsection, their proofs follow their analogues in [DS20], with the only difference being the substitution of constants to match the uniform tree approximation lemma.
We first set up our notation. Let r = 1000∆ and S = S(x 0 , r). Consider
V = x∈U x0∪U −1 x0 [x 0 , x].
As U is finite, by Proposition 3.1.7 there exists a metric tree (T, d * ) and a map f : (V, d) → (T, d * ) such that:
1. For all x ∈ U x 0 ∪ U −1 x 0 , the restriction of f to the geodesic segment [x 0 , x] is an isometry.
2. For all y, z ∈ V , we have that d(y, z) − 6∆ d * (f (y), f (z)) d(y, z).
Let S be the sphere of radius r centred at f (x 0 ) in T . As f restricted to any geodesic segment [x 0 , x] is an isometry, we have that S = f (S ∩ V ). For any P, Q ⊂ S, we let k = 10 10 κ 0 and define
U P,Q = {u ∈ U : d(x 0 , ux 0 ) k, and there exists p ∈ P, q ∈ Q such that (f (x 0 ), f (ux 0 )) p = 0 and (f (x 0 ), f (u −1 x 0 )) q = 0}.
This is equivalent to saying that u ∈ U with displacement d(
x 0 , ux 0 ) k is in U P,Q if and only if [f (x 0 ), f (ux 0 )] intersects S at P , and [f (x 0 ), f (u −1 x 0 )] intersects S at Q.
Lemma 3.2.9. Let P ⊂ S and Q ⊂ S\P . Then
(U ,Q −1 x 0 , U P, x 0 ) x0 r,
where and are used as placeholders for any subsets of S.
Proof. Let u 0 ∈ U P, , and u 1 ∈ U ,Q . This means that [f (x 0 ), f (u 0 x 0 )] intersects S at P , and [f (x 0 ), f (u −1 1 x 0 )] intersects S at Q. As P ∩ Q = ∅, and we are in a tree, this means that
(f (u 0 x 0 ), f (u −1 1 x 0 )) * f (x0)
r.
As f is an isometry on [x 0 , u 0 x 0 ] and [x 0 , u −1
1 x 0 ], and d * (f (u 0 x 0 ), f (u −1 1 x 0 )) d(u 0 x 0 , u −1 1 x 0 ), we get that (f (u 0 x 0 ), f (u −1 1 x 0 )) * f (x0) = 1 2 (d * (f (x 0 ), f (u 0 x 0 )) + d * (f (x 0 ), f (u −1 1 x 0 )) − d * (f (u 0 x 0 ), f (u −1 1 x 0 ))) 1 2 (d(x 0 , u 0 x 0 ) + d(x 0 , u −1 1 x 0 ) − d(u 0 x 0 , u −1 1 x 0 )) = (u 0 x 0 , u −1 1 x 0 ) x0 , and therefore (u 0 x 0 , u −1 1 x 0 ) x0 r.
Sets of the form U P, and U ,Q therefore have the required Gromov product, and have the required displacement by definition. If we can find such sets that have cardinalities at least 1 100 |U |, then we will be done.
Let P ⊂ S, and Q = S\P . Fix p ∈ P . Let P = P \{p}, and Q = Q ∪ {p}. Note that |U P ,P | |U P,P |, and |U Q ,Q | |U Q,Q |. Then |U P ,P | > 1 100 |U |.
Proof. Suppose that |U P ,P | 1 100 |U |. We want to get a lower bound on |U p,p |, and use this to get a contradiction with the minimal energy lemma.
First note that as Q = S\P and T is a tree, the sets U P,P , U P,Q , U Q,P , and U Q,Q are disjoint and {u ∈ U : d(x 0 , ux 0 ) k} = U P,P U P,Q U Q,P U Q,Q .
We know that |{u ∈ U : d(x 0 , ux 0 ) k}| 75 100 , so |U P,P | 72 100 , and therefore |U P,P \U P ,P | 71 100 . We also have that |U P,P \U P ,P | = |U P ,p | + |U p,P | + |U p,p | |U P ,Q | + |U Q ,P | + |U p,p | 2 100 |U | + |U p,p |.
Hence |U p,p | 69 100 . To get a contradiction with the minimal energy lemma, we want to show that U p,p is a subset of y,z∈B(y0,6∆)∩S U y,z for some y 0 ∈ S.
Consider the set f −1 (p). As p ∈ S, and f maps geodesics from x 0 isometrically, we have that
f −1 (p) ⊂ S. Fix y 0 ∈ f −1 (p). For any z ∈ f −1 (p), we get that d(y 0 , z) d * (f (y 0 ), f (z)) + 6∆ = 6∆. Hence f −1 (p) ⊂ B(y 0 , 6∆) ∩ S.
It therefore suffices to show that U p,p is a subset of y,z∈f −1 (p) U y,z . Let u ∈ U p,p . The function f maps [x 0 , ux 0 ] isometrically onto [f (x 0 ), f (ux 0 )], and p lies on [f (x 0 ), f (ux 0 )], so there exists y ∈ f −1 (p) such that y ∈ [x 0 , ux 0 ]. This means that (x 0 , ux 0 ) y = 0 < ∆. Similarly, we can pick z ∈ f −1 (p) such that (x 0 , u −1 x 0 ) z = 0 < ∆. Hence u ∈ U y,z , so 69 100 |U p,p | y,z∈f −1 (p) U y,z y,z∈B(y0,6∆)∩S U y,z .
This contradicts the minimal energy lemma, so |U P ,P | > 1 100 |U |. We will now use these results to prove the reduction lemma.
Proof of reduction lemma. We want to find two sets U 0 , U 1 of the form U P, and U ,Q , with P ⊂ S and Q ⊂ S\P , such that |U P, | > 1 100 and |U ,Q | > 1 100 . Let P (0) = S and Q (0) = ∅. Note that S is finite, so we can write P (0) as P (0) = {p 1 , . . . , p N } for some N ∈ N. We now recursively define P (n) = P (n−1) \{p n }, and Q (n) = Q (n−1) ∪ {p n }.
If for any n we have that |U P (n) ,Q (n) | > 1 100 or |U Q (n) ,P (n) | > 1 100 , then we can set U 0 = U 1 to be this set, and we are done. Suppose otherwise, so |U P (n) ,Q (n) | 1 100 and |U Q (n) ,P (n) | 1 100 for all n. We can see that |U Q (n) ,Q (n) | is an increasing sequence, with |U Q (0) ,Q (0) | = 0 and
|U Q (N ) ,Q (N ) | = |U S,S | = |{u ∈ U : d(x 0 , ux 0 ) k}| 75 100 .
Hence there exists n such that |U Q (n−1) ,Q (n−1) | 1 100 and |U Q (n) ,Q (n) | > 1 100 . By Lemma 3.2.10 we also get that |U P (n) ,P (n) | > 1 100 , so we can set U 0 = U P (n) ,P (n) and U 1 = U Q (n) ,Q (n) . The subsets used in [DS20] actually take a slightly stronger form, where the displacement of x 0 by one subset is great than that of the other, however as in [DS20] such sets are easily obtainable as a corollary of the reduction lemma.
Corollary 3.2.11. Let U be a finite set of isometries of a quasi-tree X with bottleneck constant ∆ > 0. If at most 1 4 of the isometries u ∈ U have displacement d(x 0 , ux 0 ) 10 10 κ 0 , then there exist U 0 , U 1 ⊂ U with cardinalities at least 1 200 |U | such that
(U −1 0 x 0 , U 1 x 0 ) x0 1000∆ and (U 0 x 0 , U −1 1 x 0 ) x0 1000∆.
In addition, for all u 0 ∈ U 0 and u 1 ∈ U 1 we have that
10 10 κ 0 d(x 0 , u 0 x 0 ) d(x 0 , u 1 x 0 ).
Proof. Let U 0 and U 1 be the sets chosen in the reduction lemma. Let m 0 be the median of {d(x 0 , ux 0 ) : u ∈ U 0 }, and m 1 be the median of {d(x 0 , ux 0 ) : u ∈ U 1 }.
If m 0 m 1 , then let U 0 = {u ∈ U 0 : d(x 0 , ux 0 ) m 0 } and U 1 = {u ∈ U 1 : d(x 0 , ux 0 ) m 1 }. If m 1 < m 0 then let U 0 = {u ∈ U 1 : d(x 0 , ux 0 ) m 1 } and U 1 = {u ∈ U 0 : d(x 0 , ux 0 ) m 0 }.
In both cases note that U 0 and U 1 have cardinalities at least half of that of the original sets.
Growth of sets with large displacement
When combined with the rest of the work in [DS20], Corollary 3.2.11 gives us the desired result for acylindrical actions on quasi-trees.
Theorem 3.2.12. Let G be a group acting (κ 0 , N 0 )-acylindrically on a quasi-tree X with bottleneck constant ∆ > 0. Assume κ 0 ∆ and N 0 1. There exist constants α = α(∆, κ 0 , N 0 ) > 0 and K = K(κ 0 ) > 0 such that for every finite U ⊂ G at least one of the following must hold:
1. U is virtually Z. 2. λ 0 (U ) < K. 3. |U n | (α|U |) n+1 2
for every n ∈ N.
In particular, we can take α = ∆ 2 10 52 N 6 0 κ 2 0 and K = 10 14 κ 0 .
Proof. This is a combination of Corollary 3.2.11 from this paper along with Corollary 5.6 and Proposition 6.18 from [DS20], using d = 1, b = 10, and δ = ∆.
Remark 3.2.13. This is a generalisation of Delzant and Steenbock's result for groups acting acylindrically on trees [DS20].
We now recall Theorem 3.1.12, which stated that every acylindrically hyperbolic group admits a non-elementary acylindrical action on a quasi-tree. This allows us to apply Theorem 3.2.12 to the entire class of acylindrically hyperbolic groups.
Corollary 3.2.14. Let G be an acylindrically hyperbolic group. Then there exists a quasi-tree with bottleneck constant ∆ > 0 on which G has a non-elementary (κ 0 , N 0 )-acylindrical action, with and κ 0 ∆ and N 0 1, and constants α = α(∆, κ 0 , N 0 ) > 0 and K = K(κ 0 ) > 0 such that for every finite U ⊂ G at least one of the following must hold: for every n ∈ N.
Remark 3.2.15. We emphasise here that this is not a direct improvement of Theorem 3.2.3, as in both statements the displacement condition is dependent on the action under consideration. In particular, for a certain acylindrically hyperbolic group G, and finite subset U of G, we may have that the displacement of U is large under the acylindrical action of G on some general hyperbolic space, but small under the acylindrical action of G on a quasi-tree.
Loxodromic elements and displacement
To use Theorem 3.2.12 to say anything about the growth of finite subsets of a specific acylindrically hyperbolic group, we must first be able to say something about which finite subsets will have large displacement under some action on a quasi-tree.
One difficulty with this is that the displacement λ 0 is dependent on the basepoint x 0 , which is dependent on the finite set U . It is easier instead to try to find sets U such that max u∈U d(x, ux) is large for every point x in the space being acted on. As is often the case with exponential growth questions, this can be done by finding loxodromic elements.
Proposition 3.2.16. [Bow08]
Let G be a group acting acylindrically on a δ-hyperbolic space. There exists ν > 0, dependent only on δ and the acylindricity constants, such that if g ∈ G is loxodromic then τ (g) ν.
The following application of this to Theorem 3.2.12 was pointed out by Thomas Delzant.
Proposition 3.2.17. Let G be a group acting acylindrically on a quasi-tree. Suppose that U ⊂ G is finite and not contained in a virtually cyclic subgroup, and that U k contains a loxodromic element for some k ∈ N. Then there exist constants α, β > 0 such that |U n | (α|U |) βn for all n ∈ N. The constant α is dependent only on the bottleneck constant of the quasi-tree and the acylindricity constants, and β is additionally inversely proportional to k.
Proof. Let X be the quasi-tree on which G acts acylindrically. Let u ∈ U k be our given loxodromic element. By Proposition 3.2.16 we know that there exists ν > 0, dependent only on the bottleneck constant of X and the acylindricity constants, such that τ (u) ν. Let K > 0 be as in Theorem 3.2.12, and let m = K ν . It is clear from the definition of the stable translation length that τ (u m ) = mτ (u) K.
Let x ∈ X be arbitrary, and note that for n ∈ N we have that d(x, u nm x) nd(x, u m x), so τ (u m ) d(x, u m x). Therefore, as u m ∈ U km , we conclude that λ 0 (U km ) K. Let p = km. We assumed that U is not contained in a virtually cyclic subgroup, and the existence of a loxodromic element means that it is not contained in a finite subgroup, so we can apply Theorem 3.2.12 to say that
|U np | (α|U p |) n+1 2 (α|U |) n+1 2
, for all n ∈ N, where α > 0 is dependent only on the bottleneck constant of X and the acylindricity constants. Suppose i ∈ N is such that i p. Then there exists n ∈ N such that np i < (n + 1)p. Therefore
|U i | |U np | (α|U |) n+1 2 (α|U |) i 4p .
Now suppose i < p. If α|U | < 1 then it is trivial that |U i | (α|U |) i 4p , as |U i | 1. If we instead have that α|U | 1 then Theorem 3.2.12 tells us that α < 1, so |U i | |U | α|U | (α|U |) i 4p . Recalling that p = k K ν , this gives us our result. This growth is dependent on k, however if we find an upper bound on k then we will have a lower bound for the growth. In other words, given a class of finite subsets of G, if we can always generate a loxodromic element within a bounded number of steps, then each set U in this class will satisfy |U n | (α|U |) βn for some uniform constants α, β > 0.
We note here that these loxodromic elements do not have to be from a single acylindrical action on a quasi-tree, so long as the collection of quasi-trees and associated acylindrical actions admit an upper bound on the bottleneck constants and acylindricity constants. In particular, this will automatically be satisfied if the loxodromic elements can be found from a finite collection of quasi-trees and acylindrical actions, as then the associated constants will automatically be bounded.
Growth from actions on hyperbolic spaces
In the previous section we showed that being able to quickly generate loxodromic elements in an acylindrical action on a quasi-tree is enough to get us uniform product set growth. Theorem 3.1.12 tells us that every acylindrically hyperbolic group admits an acylindrical action on a quasi-tree, however this construction does not give us an easy way to find loxodromic elements. There are groups with more natural actions on quasi-trees where this is possible to do, as we will see in Section 4, however for many acylindrically hyperbolic groups their most natural acylindrical action is on a hyperbolic space that is not a quasi-tree.
In this section we will show that the statements we proved in Subsection 3.2.3 also hold for finite symmetric subsets of groups that act acylindrically on hyperbolic spaces. This was independently shown as Proposition 2.9 in [Fuj21], using similar ideas. Note that this does not replace the generality of Theorem 3.2.12, as here we only consider symmetric subsets, and do not allow for other possible ways of finding sets of large displacement. On the other hand we do obtain a practical way of answering Question 1 for a wider variety of acylindrically hyperbolic groups.
We will first need the following, which is not written explicitly in [DGO17], but follows directly from various statements in that paper. The proof here uses terminology defined in [DGO17].
Proposition 3.3.1. [DGO17]
Let G be a group acting acylindrically on a hyperbolic space X. There exists N ∈ N such that for any loxodromic g ∈ G, whenever g 1 , . . . , g k are conjugates of g N such that they are all independent loxodromic elements, the group g 1 , . . . , g k is a free group of rank k.
Proof. By Proposition 6.34(b) in [DGO17], for any α > 0 there exists N = N (α) such that for any loxodromic g ∈ G the group g N is what they call α-rotating with respect to the action of G on a cone-off C(X). Theorem 5.3 in [DGO17] tells us that the normal closure of g N is g N = * a∈A a −1 g N a for some (possibly infinite) A ⊂ G. This means that any set of conjugates of g N will generate a free group.
Let g 1 , . . . , g k be conjugates of g N such that they are all independent loxodromic elements in the action of G on X. By the way C(X) is constructed, g is elliptic in the action of G on C(X), so every g i is also elliptic. Therefore Theorem 5.3 in [DGO17] tells us that g i a −1 i g N a i for some a i ∈ A. If a i = a ± j for i = j then that would contradict g i and g j being independent, hence g 1 , . . . , g k is a free group of rank k.
We will want to consider an equivalence relation on G, for the purposes of which we will fix a loxodromic element g ∈ G. We then define the equivalence relation by a ∼ b if and only if ba −1 ∈ E(g). The following result is well known. Proof. It is clear that these are loxodromic elements, so we need to check that they are independent.
We first prove that if a / ∈ E(g) then a −1 ga / ∈ E(g). Let Fix(g) = {γ − , γ + }. Suppose a −1 ga ∈ E(g), then (a −1 ga) 2 = a −1 g 2 a stabilises γ + and γ − . In particular, this means that g 2 aγ + = aγ + and g 2 aγ − = aγ − , so as Fix(g 2 ) = Fix(g) we have that {γ + , γ − } = {aγ + , aγ − }, and therefore a ∈ E(g).
Now suppose that a ∼ b, so ba −1 / ∈ E(g). Then ab −1 gba −1 / ∈ E(g), so b −1 gb / ∈ a −1 E(g)a.
We can see that a −1 E(g)a E(a −1 ga), and similarly aE(a −1 ga)a −1 E(g), by considering the points these groups fix on the boundary. Therefore a −1 E(g)a = E(a −1 ga), and so b −1 gb / ∈ E(a −1 ga). Hence a −1 ga and b −1 gb are independent.
We can now use this equivalence relation to partition the set that we are interested in, and then consider various cases that could arise from this. The following method of proof was suggested by Thomas Delzant. Proposition 3.3.3. Let G be a group acting acylindrically on a hyperbolic space. There exist α, β > 0 such that for every finite symmetric U ⊂ G such that U contains a loxodromic element and is not contained in a virtually cyclic group, we have that |U n | (α|U |) βn for every n ∈ N.
Proof. Let g be a loxodromic element in U . We consider the equivalence classes of U under the equivalence relation ∼, which we previously defined such that a ∼ b if ba −1 ∈ E(g). We will consider three different cases.
The first case is that no equivalence class contains more than |U | 1 2 elements, so there must be at least |U | 1 2 classes. Take one element a i from each, then in U 3 we have elements of the form a −1 i ga i . These are pairwise independent loxodromic elements by Lemma 3.3.2. By Proposition 3.3.1 we know that there exists N ∈ N, which is not dependent on the choice of g or U , such that the a −1 i g N a i generate a free group of rank equal to the number of generators. Call this generating set S, then |S n | = |S| n and S ⊂ U N +2 , so as |S| |U | 1 2 we get that |U n | N +2 |U n(N +2) | |S n | = |S| n |U | n 2 , and hence |U n | |U | n 2(N +2) . The second case is that some equivalence class S ⊂ U contains at least |U | 1 2 elements, and this is not the class of g. Let a be in S, then a ∼ g, so a / ∈ E(g). We therefore have that a ∈ E(g)a = E(g). For every b ∈ S we have that b ∼ a, so ba −1 ∈ E(g), which means that b ∈ E(g)a. Hence S can be written as S = V a for some V ⊂ E(g). Lemma 3.9 from [DS20] then tells us that there exists γ > 0, which is not dependent on the choice of U , such that
|S n | = |(V a) n | ( 1 γ |V |) n = ( 1 γ |S|) n .
Hence
|U n | |S n | ( 1 γ |S|) n ( 1 γ 2 |U |) n 2 .
The third and final case is that some equivalence class S ⊂ U contains at least |U | 1 2 elements, and this is the class of g. This means that S ⊂ E(g), so as E(g) is a virtually cyclic group, and U is not virtually cyclic, there must exist a ∈ U \S.
Consider Sa ⊂ U 2 . Using the same γ as before, we can see that
|U n | 2 |U 2n | |(Sa) n | ( 1 γ |S|) n ( 1 γ 2 |U |) n 2 ,
and hence |U n | ( 1 γ 2 |U |) n 4 . We can therefore pick α = min{1, 1 γ 2 } and β = 1 2(N +2) to conclude.
Remark 3.3.4. The constants α and β are dependent on the acylindricity constants of the action, and the hyperbolicity constant of the space being acted on, so two different acylindrical actions on hyperbolic spaces by the same group will give different constants.
We can now obtain our new version of Proposition 3.2.17, for finite symmetric subsets of groups with acylindrical actions on hyperbolic spaces. See [Fuj21, Proposition 2.9] for a very similar version of this statement. Corollary 3.3.5. Let G be a group acting acylindrically on a hyperbolic space. There exist α, β > 0 such that for every finite symmetric U ⊂ G such that U k contains a loxodromic element for k ∈ N, and U is not contained in a virtually cyclic group, we have that |U n | (α|U |) βn k for every n ∈ N.
Proof. The set U k satisfies the conditions of Proposition 3.3.3, so we have that
|U n | |U nk | 1 k (α|U k |) βn k (α|U |) βn k .
As in the quasi-tree case, this means that if we have a collection of subgroups that each act acylindrically on one of a collection of hyperbolic spaces, such that all the constants involved are bounded, and for each relevant subset we can quickly generate a loxodromic element in one of these actions, then this will give us our desired uniform product set growth.
Applications to subgroups of mapping class groups
In this section we will try to apply the results of Section 3 to answer Question 1 for mapping class groups. That is, we would like to find a dichotomy for the finitely generated subgroups of our mapping class group, where either |U n | (α|U |) βn for every symmetric generating set of our subgroup, where α, β > 0 are dependent only on the mapping class group in question, or our subgroup cannot satisfy this property for any α, β > 0.
Answering this question for mapping class groups also answers it for any group that embeds as a subgroup of a mapping class group, which includes right-angled Artin groups. We will however deal with the right-angled Artin group case separately here. The reason for doing this is that right-angled Artin groups have a natural acylindrical action on an associated quasi-tree called the extension graph, and so in proving something about the growth of these groups we have to prove that we can quickly generate loxodromic elements on these extension graphs. This is analogous to an already known result for mapping class groups [Man13]. In addition, the existence of these loxodromics has an application to the set of exponential growth rates of a right-angled Artin group.
The other reason for treating right-angled Artin group separately is that the methods used are slightly simpler than those for mapping class groups, partly due to right-angled Artin groups being torsion-free, and so this being included should hopefully make the mapping class group section easier to follow.
In Section 4.1 we will give the necessary background on right-angled Artin groups and mapping class groups. In Section 4.2 we will prove our result for the quick generation of loxodromics in right-angled Artin groups, and in Section 4.3 we use this to obtain our product set growth result. We then finish by extending this result to mapping class groups in Section 4.4.
Right-angled Artin groups and mapping class groups
Here we give the definitions of right-angled Artin groups and mapping class groups, along with some known results which will be useful later.
A(Γ) = V | [v, w] = 1 if {v, w} ∈ E .
Example 4.1.2. If Γ does not contain any edges, then A(Γ) will be the free group of rank |V |. If Γ is a complete graph, that is every pair of vertices are joined by an edge, then A(Γ) is the free abelian group of rank |V |. Every right-angled Artin group has another associated graph called the extension graph, denoted Γ e (for a definition see [KK13]). So long as the defining graph Γ is connected, we get the following two results. In Section 4.2 we will be interested in generating loxodromic elements in the action of A(Γ) on Γ e . This is made easier by the fact that there exists a characterisation of these elements, given in [KK14a]. To state this characterisation, we first need to give some terminology.
Remark 4.1.6. With the exception of the extension graph, all other graphs in the context of right-angled Artin groups will be assumed to be finite and simple. Unless stated otherwise, we will denote the vertex set of a graph Γ by V , and the edge set by E.
Notation 4.1.7. We denote the complement of a graph Γ by Γ c .
Definition 4.1.8. The join of two graphs Γ 1 and Γ 2 is Γ 1 * Γ 2 = (Γ c 1 Γ c 2 ) c . Alternatively, we can see that if Γ 1 = (V 1 , E 1 ) and
Γ 2 = (V 2 , E 2 ) then Γ 1 * Γ 2 = (V 1 ∪ V 2 , E), where E = E 1 ∪ E 2 ∪ {{s 1 , s 2 } : s 1 ∈ V 1 and s 2 ∈ V 2 }.
Definition 4.1.9. A graph Γ is said to split as a nontrivial join if Γ = Γ 1 * Γ 2 , where Γ 1 and Γ 2 are nonempty graphs. A subjoin of a graph is a subgraph that splits as a nontrivial join.
Example 4.1.10. If Γ = Γ 1 * Γ 2 , then A(Γ) = A(Γ 1 ) × A(Γ 2 ).
Let Γ = (V, E) be a graph, and let V = {s 1 , . . . , s k }. Recall that A(Γ) is generated by V , so g ∈ A(Γ) can be represented by a word w written in the alphabet V ∪ V −1 .
Definition 4.1.11. A word w representing g ∈ A(Γ) in the alphabet V ∪ V −1 is reduced if its length is the same as the word length of g. In other words, it is a minimal length representative of g. We will often identify g with a reduced word representing it, and say that g is reduced.
The word w is cyclically reduced if every cyclic permutation of w is also reduced. Equivalently, it is a minimal length representative of the conjugation class of g.
Remark 4.1.12. This is very similar to the notions of reduced and cyclically reduced words in a free group. The difference is that in the free group every element has a unique reduced word representing it, whereas in a right-angled Artin group each reduced representative is only unique up to being able to swap the order of vertices when they commute. See Section 2 of [AM15] for a more detailed explanation of this.
Remark 4.1.13. Every reduced word w can be written in the form w = uw u −1 , where w is a cyclically reduced word, and u is a possibly empty word. This w is a minimal length representative of the conjugation class of w.
Definition 4.1.14. The support of a word w is the set of vertices s ∈ V such that s or s −1 is a letter of w. The support of g ∈ A(Γ) is the support of a reduced word that represents g, and this does not depend on the reduced word chosen.
The characterisation of elliptic and loxodromic elements in the action of A(Γ) on Γ e is then as follows.
Theorem 4.1.15. [KK14a] Let Γ be a connected graph that is not an isolated vertex, and suppose 1 = g ∈ A(Γ) is cyclically reduced. Then g is elliptic in the action of A(Γ) on Γ e if and only if the support of g is contained in a subjoin of Γ.
We now move on to giving some basic definitions and facts about mapping class groups.
Definition 4.1.16. Let S be an oriented surface with finite genus and finitely many boundary components, punctures, and connected components. The mapping class group of S, denoted M CG(S), is the group of orientation preserving isotopy classes of homeomorphisms of S that restrict to the identity on the boundary ∂S, where the isotopies fix components of the boundary pointwise. These isotopy classes are called mapping classes.
The relevance of acylindricity to the mapping class group is given by the following result. Remark 4.1.19. A mapping class group of a surface with nonempty boundary would have infinite centre [FM11], and so would not be acylindrically hyperbolic [Osi16]. For this reason, in many places the mapping class group is defined either for surfaces without boundary, or the boundary is fixed setwise rather than pointwise. In the latter case the boundary components can effectively be seen as punctures that are not allowed to permute, and the mapping class group is a finite index subgroup of the version where the punctures are allowed to permute. Sometimes the mapping class group is defined in such a way as to allow the boundary components themselves to permute, as in [Iva92] for example, in which case this is exactly the same group as considering the boundary components as punctures. Although in our definition we fix our boundary pointwise, we will later see that we can easily reduce to the case where we only consider punctures.
The curve complex is defined in many places, including [MM99], where its hyperbolicity was first proved. As with the extension graph for right-angled Artin groups, we will not need the exact definition of it here. The important facts are the ones given by the above theorem, and that, again as with the extension graph, we have a characterisation of the loxodromic elements in this action.
Definition 4.1.20. A simple closed curve on a surface S is essential if it is not homotopic to a point, a puncture, or to any component of the boundary ∂S.
Definition 4.1.21. A mapping class f ∈ M CG(S) is a pseudo-Anosov if there is no essential simple closed curve in S that is fixed by a power of f (up to isotopy). A mapping class f is said to be pseudo-Anosov on a connected subsurface S if there exists a representative of f such that its restriction to S gives a pseudo-Anosov mapping class in M CG(S ). When we consider mapping class groups, we will therefore not talk about loxodromic elements, but refer to pseudo-Anosovs instead. In other words, to apply our results from Section 3 to mapping class groups, we will need the ability to quickly generate pseudo-Anosovs, which is provided by the following theorem. The problem with this strategy is that many subgroups of mapping class groups may not contain a pseudo-Anosov on the whole surface. The standard way to deal with this situation is to cut our surface in some canonical way, and consider the groups induced by the restrictions of our mapping classes to the remaining subsurfaces. We refer elsewhere for the definition of a Dehn twist around a curve γ, for example [FM11]. The important facts for us to know here is that Dehn twists are infinite order elements, and are central in any group that fixes γ. In particular, Dehn twists around disjoint and homotopically distinct simple closed curves commute with each other, so in Proposition 4.1.24 we have that T γ1 , . . . , T γ k ∼ = Z k . Recall from Section 2.2.1 that if a group has an infinite order element in its centre, then it does not have uniform product set growth. This means that if we work with subgroups of M CG(S) that fix individual curves (rather than possibly permute a set of them), then ruling out the subgroups that have such infinite order central elements will mean that the restriction of ϕ in Proposition 4.1.24 will give us an injective homomorphism into M CG(S\ Not only are pure subgroups easier to work with, it turns out that every subgroup of mapping class group has a pure subgroup of finite index. A common approach to studying subgroups is therefore to first work with the pure subgroups, and then extend what we find to their supergroups. This is the approach we will take when trying to prove uniform product set growth for certain subgroups of mapping class groups, with the extension to supergroups allowed by Proposition 2.2.27. Given a subgroup of a mapping class group, we can take a pure finite index subgroup, cut along a set of curves that are fixed by this pure subgroup, and consider the restriction to subsurfaces on which our group is not the identity. There may however be many curves that are fixed, some of which may intersect each other. Fortunately there is a canonical way of choosing a multicurve that allows us to cut the surface such that the restriction of our pure subgroup to every component is either the identity, or contains a pseudo-Anosov.
Definition 4.1.29. For a pure subgroup G M CG(S), the canonical reduction multicurve σ is the (possibly empty) union of all homotopically distinct essential curves γ in S such that G fixes γ, but if a curve ξ intersects γ then G does not fix ξ. Remark 4.1.31. It follows from Corollary 4.1.26 that T γ1 , . . . , T γ k ∩ G is central in G, as g(γ i ) = γ i for every g ∈ G. We also note here that if G M CG(S) has G as a finite index subgroup, then although G does not fix each γ i , it does fix the set {γ 1 , . . . , γ k } (see [Iva92]). Hence by Lemma 4.1.25 we have that T γ1 , . . . , T γ k ∩ G is normal in G .
To apply our results from Section 2.2.3 to the direct product in Theorem 4.1.30, we need a bound on the number of factors. The following result allows us to do this. Mangahas' result that any finite symmetric U such that U contains a pseudo-Anosov actually contains a pseudo-Anosov in of U n for some bounded n (see Theorem 4.1.23), was in fact a particular case of a more powerful result that she proved. The rough idea is that if we take the union of all subsurfaces on which some element of our group acts as a pseudo-Anosov, then we can find an element that acts as a pseudo-Anosov on this entire union within some bounded product of our generating set. The language defined here will be useful in our proofs about right-angled Artin groups.
Definition 4.1.33. The active subsurface A(G) of a pure subgroup G M CG(S), with canonical reduction multicurve σ, is the union of the connected components of S\σ such that some mapping class in G is pseudo-Anosov on that connected component, plus the annuli that are the neighbourhood of any γ ∈ σ that are not in the boundary of an already selected component. The active subsurface of a pure mapping class f is the active subsurface of f .
Remark 4.1.34. We note here that if G M CG(S) is a pure subgroup generated by a finite set U , then the union of the active subsurfaces of generators in U is a subset of the active subsurface of G. Moreover, if the active subsurface of G is disconnected then the union of the active subsurfaces of generators will also be disconnected, as if every generator fixes a subsurface then so does G.
The fact that every subgroup of a mapping class group has a pure finite index subgroup allows the extension of the definition of the active subsurface to all subgroups of the mapping class group. This subsurface does not depend on the choice of finite index subgroup [Iva92]. With these definitions we can now state Mangahas' full result. Many of the tools that we have introduced here require that the surface in question has no boundary. As noted in Remark 4.1.19, if the surface had nonempty boundary then the mapping class group would have infinite centre, namely the subgroup generated by a Dehn twist around a curve parallel to the boundary. This means that it would not be acylindrically hyperbolic, so many of our tools would not apply, however Corollary 2.2.5 tells us that it would also not have uniform product set growth.
To deal with subgroups of such groups, we will employ the capping homomorphism, which effectively says that either our subgroup has infinite centre, or we can view it as a subgroup of a mapping class group of a surface without boundary.
Short loxodromics in right-angled Artin groups
We would first like to apply Proposition 3.2.17 to say something about product set growth in right-angled Artin groups. To do so we need to show that we can quickly generate loxodromic elements in the action of our group on its extension graph. We already have a characterisation of these loxodromic elements, as given by Theorem 4.1.15, and we also have a result about the quick generation of pseudo-Anosovs in subgroups of mapping class groups from Theorem 4.1.36.
In this section we will use these facts to get an analogous result for right-angled Artin groups, using the existence of a natural embedding of right-angled Artin groups into mapping class groups [CLM12]. The idea is that we embed our right-angled Artin group into a mapping class group, then quickly generate a pseudo-Anosov within this embedding. Under certain circumstances this pseudo-Anosov will correspond to a loxodromic element in the original right-angled Artin group.
There are several ways to see right-angled Artin groups as subgroups of mapping class groups, see for example Section 7.3 of [Kob12]. The construction we use here is taken from Section 2.4 of [CLM12], and for a more detailed picture of it we refer to there.
Given a finite simple graph Γ, with vertex set V = {s 1 , . . . , s k }, there exists a surface S and a collection of nonannular connected subsurfaces X = {X 1 , . . . , X k } such that:
• X i ∩ X j = ∅ if and only if {s i , s j } is an edge in Γ.
• X i ∩ X j = ∅ if and only if X i and X j cannot be isotoped to be disjoint.
• If X i ∩ X j = ∅ then this intersection is homeomorphic to a disc.
• X i and X j cannot be isotoped such that X i ⊂ X j when i = j.
• Each X i is a twice punctured torus.
A picture of such a surface can be found in Figure 4 of [CLM12]. For each X i we can pick an f i in M CG(S) that is pseudo-Anosov on X i , and the identity elsewhere. This statement allows us to see A(Γ) as a subgroup of M CG(S). In particular, every element of A(Γ) will be sent to a mapping class that is pseudo-Anosov on some collection of connected subsurfaces of S, and is the identity elsewhere. These subsurfaces can be constructed explicitly. The following notation is taken directly from [CLM12].
Let {X 1 , . . . , X r } ⊂ X. We denote by Fill(X 1 , . . . , X r ) the minimal essential subsurface of S that contains X 1 ∪ · · · ∪ X r . That is, if X 1 ∪ · · · ∪ X r has any discs in its complement then we add these discs to get Fill(X 1 , . . . , X r ). Note that Fill(X 1 , . . . , X r ) is connected if and only if X 1 ∪ · · · ∪ X r is connected, and that Fill(X i ) ∩ Fill(X j ) = ∅ if and only if X i ∩ X j = ∅.
Suppose that 1 = g ∈ A(Γ) is cyclically reduced, and that r is the minimal number such that g is a word in the first r generators s 1 , . . . , s r , changing the indices if necessary. Then we say that Fill(g) = Fill(X 1 , . . . , X r ). If g = hg h −1 for some h, g ∈ A(Γ) then Fill(g) = φ(h)Fill(g ). If g = 1, we say that Fill(g) = ∅. We can relate this construction to the characterisation of loxodromic elements given in Theorem 4.1.15.
Lemma 4.2.4. Suppose Γ is a connected graph that is not an isolated vertex, and let 1 = g ∈ A(Γ) be cyclically reduced. Then g is a loxodromic on Γ e if and only if the subsurface Fill(g) is connected, and Fill(g) ∩ Fill(s) = ∅ for all s ∈ V .
Proof. Let {s 1 , . . . , s r } be the vertices in the support of g. By Theorem 4.1.15 we have that g is elliptic on Γ e if and only if {s 1 , . . . , s r } is contained in a subjoin of Γ. This is the case if and only if {s 1 , . . . , s r } induces a subjoin, or there exists s ∈ V such that {s 1 , . . . , s r } is contained in the 1-neighbourhood of s. Now {s 1 , . . . , s r } induces a subjoin if and only if we can write it as a disjoint partition A B such that A and B are nonempty and for every a ∈ A, b ∈ B, we have that {a, b} is an edge in Γ. By construction of φ, this is true if and only if the corresponding partition A B of {X 1 , . . . , X r } is such that for every X ∈ A , Y ∈ B , we have that X ∩ Y = ∅. This correspondence gives us that {s 1 , . . . , s r } induces a subjoin if and only if X 1 ∪ · · · ∪ X r is disconnected, which is the case if and only if Fill(g) = Fill(X 1 , . . . , X r ) is disconnected.
For the other case, there exists s ∈ V such that {s 1 , . . . , s r } lies in the 1-neighbourhood of s if and only if there exists s ∈ V such that Fill(s i ) ∩ Fill(s) = ∅ for every s i . This is equivalent to saying that Fill(g) ∩ Fill(s) = ∅. We therefore conclude that g is elliptic if and only if Fill(g) is disconnected, or there exists s ∈ V such that Fill(g) ∩ Fill(s) = ∅.
Our strategy now is that given a finite symmetric U ⊂ A(Γ), we can consider φ(U ) ⊂ M CG(S), and use Theorem 4.1.36 to find g ∈ U n such that φ(g) has the same active subsurface as φ( U ). By Lemma 4.2.4, if we want this g to be a loxodromic on the extension graph Γ e then we need the active subsurface of φ( U ) to be connected. We will give a sufficient condition for this to be the case.
1 ) ∪ · · · ∪ Fill(u n ) is disconnected. Write u i ∈ U as u i = g i s i,1 · · · s i,ri g −1 i
with the s i,j being vertices of Γ and s i,1 · · · s i,ri being cyclically reduced. We then have that Fill(u i ) = φ(g i )Fill(s i,1 · · · s i,ri ) = φ(g i )Fill(X i,1 , . . . , X i,ri ) ⊃ φ(g i )(Fill(X i,1 ) ∪ · · · ∪ Fill(X i,ri )) = Fill(g i s i,1 g −1 i ) ∪ · · · ∪ Fill(g i s 1,ri g −1 i ) Let S be a connected component of Fill(u i ), then for some j we have that Fill(g i s i,j g −1 i ) ⊂ S , as otherwise every φ(g i s i,j g −1 i ) would be the identity on S , which would contradict φ(u i ) being pseudo-Anosov on S . In particular, this means that n i=1 ri j=1 Fill(g i s i,j g −1 i ) is disconnected. As each Fill(g i s i,j g −1 i ) is connected, we partition these subsurfaces into A B, where A = ∅, B = ∅, and all subsurfaces in A are disjoint from those in B. We denote the corresponding partition of the elements g i s i,j g −1 i by A B , and note that for a ∈ A , b ∈ B , we have that [a, b] = 1.
We therefore have that
U A B = A × B .
We conclude by noting that we cannot have U A as then φ( U ) would act trivially on the subsurfaces in B, which would contradict the fact that these are contained in the active subsurface of φ( U ). Similarly, we do not have U B , so U is contained non-trivially in a direct product.
We now need to address the second condition in Lemma 4.2.4, which says that for a cyclically reduced element to be loxodromic in our right-angled Artin group then we need its active subsurface to intersect every subsurface associated to a vertex. To give a sufficient condition for this to be the case, we need the following two lemmas. The method used in the proof of Lemma 4.2.8 was suggested by Ric Wade.
Definition 4.2.6. For a vertex v in a graph Γ, the link of v, denoted link(v), is the set of vertices adjacent to v in Γ.
Notation 4.2.7. If V ⊂ V , we denote the subgraph of Γ induced by V as Γ(V ).
Lemma 4.2.8. Let Γ be a finite graph, and let U ⊂ A(Γ) be finite, where U = {1}. Suppose V ⊂ V is minimal under inclusion such that U is conjugate into A(Γ(V )). Then for every s ∈ V there exists g ∈ U 2 such that when we write g in the form g = hg h −1 , where h, g ∈ A(Γ) and g is cyclically reduced, we have that s is in the support of g .
Proof. Suppose that V ⊂ V is such a minimal set. The conclusion of this lemma is true of U if and only if it is true of any conjugate of U , so we will assume that U ⊂ A(Γ(V )).
For every g ∈ U , we can write g in the form g = hg h −1 , where h, g ∈ A(Γ) and g is cyclically reduced. Note that this g is unique up to cyclic permutation and reordering letters if they commute. Let s ∈ V . Suppose that for some g ∈ U we have that s in the support of g . Then s is in the support of (g ) 2 = (g 2 ) , so we are done. We therefore suppose that for every g ∈ U we have that s is not in the support of g .
Let V = V \{s}. Consider H = A(Γ(V ∩ link(s))) A(Γ(V )), and let ϕ be the identity map on H. From the definition of a HNN extension we can see that this homomorphism gives
us that A(Γ(V )) = A(Γ(V )) * H . That is, if A(Γ(V )) = V |R then A(Γ(V )) = V ∪ {s} | R ∪ {svs −1 = v | v ∈ V ∩ link(s)} .
Consider the action of U A(Γ(V )) on the Bass-Serre tree associated with this HNN extension. As every g ∈ U is conjugate into A(Γ(V )), every g has a fixed point in this action. If the same was true of every g ∈ U 2 , then by [Ser80, p. 64] we would have that U has a fixed point in this action, and so U is conjugate into A(Γ(V )). This contradicts the minimality of V , therefore for some g ∈ U 2 we have that s is in the support of g .
Lemma 4.2.9. For any s ∈ V and g ∈ A(Γ), we have that Fill(s) and φ(g)Fill(s) cannot be isotoped to be disjoint.
Proof. Consider g ∈ A(Γ) written as a reduced word g = g 1 · · · g n , where each g i is a vertex or its inverse. We will prove our statement by induction on how many of the g i 's are not equal to s or s −1 , so let µ(g) = |{g i : g i = s ±1 }|.
We will show that for every g ∈ A(Γ) there exists a genus one subsurface T (g) of Fill(s) such that φ(g)T (g) ⊂ Fill(s). From this we will be able to conclude that T (g) and φ(g)T (g) cannot be isotoped to be disjoint, as both subsurfaces have genus one, as does Fill(s). This will give us that Fill(s) and φ(g)Fill(s) cannot be isotoped to be disjoint.
We begin with the base case, µ(g) = 0. Then as each g i = s ±1 we have that φ(g i )Fill(s) = Fill(s), so φ(g)Fill(s) = Fill(s). We therefore have that T (g) = Fill(s) is our required subsurface.
Suppose that the statement is true for every g ∈ A(Γ) such that µ(g) = k. Now consider g ∈ A(Γ) such that µ(g) = k + 1. Let g i be the first letter in g 1 · · · g n such that g i = s ±1 . Then µ(g i+1 · · · g n ) = k, so there exists a genus one subsurface T (gi+1···gn) of Fill(s) such that φ(g i+1 · · · g n )T (gi+1···gn) ⊂ Fill(s).
Hence we can apply Proposition 4.2.10 to show that there exists N (Γ(V U )) ∈ N such that there exists n N (Γ(V U )) such that gU n g −1 contains a loxodromic element on Γ(V U ) e . The action of gU g −1 on Γ(V U ) e translates to an action of U on Γ(V U ) e , so we can view this as U n containing a loxodromic element.
Let U be the set of all possible finite symmetric U ⊂ A(Γ) such that Γ(V U ) is connected and U is neither cyclic nor contained non-trivially in a direct product. As Γ is a finite graph, there are only finitely many possibilities for V U . We therefore let N = max{N (Γ(V U )) : U ∈ U}, and the result follows.
As mentioned in the introduction, the ability to quickly generate these loxodromic elements has an application to ξ(G) = {ω(G, S) : S is a finite generating set of G}, the set of exponential growth rates of G with respect to its finite generating sets.
Theorem 4.2.12. [Fuj21] Let G be a group with an acylindrical and non-elementary action on a hyperbolic graph X. Suppose that there exists a constant M such that for any finite symmetric generating set S of G, the product set S M contains a loxodromic element on X. Assume that G is equationally Noetherian. Then, ξ(G) is a well-ordered set.
Corollary 4.2.13. Suppose Γ is a finite graph. For every G A(Γ) such that G is finitely generated, and is neither cyclic nor contained non-trivially in a direct product, ξ(G) is a wellordered set.
Proof. Note that G is a subgroup of a right-angled Artin group, so is linear [HW99], and therefore is equationally Noetherian [BM99]. Let V G ⊂ V be minimal under inclusion such that G is conjugate into A(Γ(V G )). If Γ(V G ) is connected, then we conclude using Proposition 4.2.11 and Theorem 4.2.12. If Γ(V G ) is not connected, we instead use its action on the associated Bass-Serre tree (see Example 1.7 in [Fuj21]), and again conclude using Theorem 4.2.12.
Product set growth in right-angled Artin groups
We can now use the results of the previous section to answer Question 1 for right-angled Artin groups. For a given finite symmetric U ⊂ A(Γ), we will use either Proposition 4.2.11 or the action of a free product on its Bass-Serre tree to quickly generate a loxodromic element in the action of U on one of a finite collection of quasi-trees, which will allow us to apply Proposition 3.2.17.
This will not give us our final result, as we will then be able to break down the direct product case created by Lemma 4.2.5 using some of our observations from Section 2.2, however it does get us very close to the best possible result.
We want to show that there exists N ∈ N such that for every finite symmetric U ⊂ A(Γ), where U is not contained non-trivially in a direct product and is not isomorphic to Z, there exists n N such that U n contains a loxodromic element for an acylindrical action of U on one of a finite collection of quasi-trees.
Let V U ⊂ V be minimal under inclusion such that U is conjugate into A(Γ(V U )). We will technically be proving our result for some conjugate gU g −1 ⊂ A(Γ(V U )), however as noted previously we can see any action of gU g −1 as being an action of U , and product set growth is preserved by conjugation. There are now only two cases for us to consider.
The first case is that Γ(V U ) is connected, which means that we can apply Proposition 4.2.11. Let N = N (Γ) ∈ N be the constant that we get from this proposition. Note that the fact that there are only finitely many such V U ⊂ V means that there are also only finitely many quasi-trees Γ(V U ) e , and finitely many associated acylindrical actions.
The second case is that Γ(V U ) is disconnected. Write Γ(V U ) = Γ 1 Γ 2 , and note that Γ 1 and Γ 2 are both induced subgraphs of Γ. Then A(Γ(V U )) = A(Γ 1 ) * A(Γ 2 ), and we recall that any free product acts acylindrically on its associated Bass-Serre tree.
If U fixes a vertex in the tree, then U is contained in the stabiliser of that vertex. This implies U is conjugate into A(Γ 1 ) or A(Γ 2 ), which contradicts the minimality of V U . Therefore U does not fix a vertex, so U 2 must contain a loxodromic element in this action, as this is an action on a tree [Ser80,p. 64]. Note that as Γ is a finite graph, there are only finitely many possible choices for A(Γ 1 ) * A(Γ 2 ) A(Γ), so only finitely many acylindrical actions to consider on the associated Bass-Serre trees.
We now combine the cases by letting U ⊂ A(Γ) be any finite symmetric subset such that U is not contained non-trivially in a direct product and is not isomorphic to Z, and letting N = max{2, N }. We can conclude that there exists n N such that U n contains a loxodromic element for some acylindrical action on one of a finite collection of quasi-trees. Therefore by Proposition 3.2.17 there exist constants α, β > 0 such that |U n | (α|U |) βn for all n ∈ N.
As noted in Section 2.2, it is unsurprising that the presence of direct products is a problem when trying to link the size of U n with the size of U . However we also showed in that section that we can still sometimes say something in the case of direct products, so long as all of the factors exhibit uniform product set growth. We can use this observation to give an improved version of Proposition 4.3.2.
Recall that in Corollary 2.2.16 the growth was linked to the number of factors in the direct product. It is a well known fact that the maximal rank of a free abelian subgroup of a rightangled Artin group A(Γ) is equal to the number of vertices in a maximal complete subgraph of Γ, see for example [CV09], [KK14b], or [Kob21]. The fact that the maximal rank is bounded is also a corollary of Theorem 4.1.32 and the fact that right-angled Artin groups are subgroups of mapping class groups, however it is nice to have a specific bound linked to the properties of Γ. The following lemma is a natural consequence of this fact.
Lemma 4.3.3. Let Γ be a finite graph, and let N ∈ N be the number of vertices in the maximal complete subgraph of Γ. If G 1 × · · · × G n A(Γ) with each G i non-trivial, then n N .
Proof. As A(Γ) is torsion free, if we take arbitrary 1 = g i ∈ G i we get that Z n g 1 ×· · ·× g n G 1 × · · · × G n A(Γ), so we must have that n N .
We can now combine this with Corollary 2.2.16 to improve Proposition 4.3.2.
Theorem 4.3.4. Let Γ be a finite graph. There exist constants α, β > 0 such that for every finite symmetric U ⊂ A(Γ) at least one of the following must hold:
Proof. Let U ⊂ A(Γ) be finite and symmetric, and suppose that U has trivial centre. In particular, U cannot be Z. Note that, by the same reasoning as in Corollary 2.2.6, if U is a subgroup of a direct product of the form H × Z, then the projection of U to H is injective.
If U is not contained non-trivially in a direct product, then we get that |U n | (α|U |) βn for every n ∈ N, with the α and β coming from Proposition 4.3.2. Now suppose that U is contained non-trivially in a direct product G 1 × · · · × G m A(Γ), and let U i be the projection of U to G i .
Suppose that some U i G i is contained non-trivially in a direct product G i,1 × G i,2 G i . Then U is contained in G 1 × · · · × G i−1 × G i,1 × G i,2 × G i+1 × · · · × G m . Lemma 4.3.3 tells us that there exists N ∈ N that bounds the number of factors in a direct product in A(Γ), so we can assume that each U i is not contained non-trivially in a direct product, and that m N .
Suppose that some U i is Z. Then, by our assumption, we must have that the projection of U to G 1 × · · · × G i−1 × G i+1 × · · · × G m is injective, so U can instead be viewed as a subgroup of this direct product. We can therefore assume that no U i is Z, without loss of generality. We also note that each U i is finite and symmetric. Hence Proposition 4.3.2 tells us that |U n i | (α|U i |) βn for every n ∈ N and i ∈ {1, . . . , m}. We can now apply Corollary 2.2.16 to get that |U n | (α|U | Let α = min{α, α N } and β = β N . Combining the two cases, we have shown that for every finite symmetric U ⊂ A(Γ), where U has trivial centre, we have that |U n | (α |U |) β n for all n ∈ N.
We note here that, given the conclusion of Corollary 2.2.5, this is the best result we can hope for in any group that contains direct products, so this answers Question 1 for right-angled Artin groups. On the other hand, the use of Theorem 4.1.36 in the proof of Proposition 3.2.17 has necessarily restricted us to symmetric subsets of the group, so the question as to whether the same result holds for non-symmetric sets remains open.
Product set growth in mapping class groups
We now move on to the more general case of pure subgroups of mapping class groups. The general structure of the proof is the same as in the previous section. That is, we consider our subgroup as a subgroup of a direct product. With the exception of certain cases, we show that each of the factor groups in the direct product satisfy uniform product set growth. We can then apply Corollary 2.2.16 to get growth for the whole subgroup.
Theorem 4.4.1. Let G M CG(S) be a pure subgroup of a mapping class group. There exist α, β > 0 such that for every finite symmetric U ⊂ G at least one of the following must hold: component by a once punctured disc. By Proposition 4.1.37 we have a natural homomorphism ψ : U → M CG(S ), with kernel T ξ1 , . . . , T ξ l ∩ U .
Recall from Corollary 4.1.26 that the group T ξ1 , . . . , T ξ l ∩ U is central in U , and therefore by our initial assumption T ξ1 , . . . , T ξ l ∩ U is trivial. This means that ψ is injective, and we can therefore suppose without loss of generality that S has no boundary. Now let {γ 1 , . . . , γ k } be the (possibly empty) set of curves in the canonical reduction multicurve for U . Let S be the set of connected components of S\ k i=1 γ i . By Theorem 4.1.30 we have a natural homomorphism ϕ : U → Π Σ∈S M CG(Σ), with kernel T γ1 , . . . , T γ k ∩ U .
Recall from Corollary 4.1.26 that the group T γ1 , . . . , T γ k ∩ U is central in U , and therefore by our initial assumption T γ1 , . . . , T γ k ∩ U is trivial. This means that ϕ is injective, and it therefore makes sense to talk about the projection of U to a factor of Π Σ∈S M CG(Σ).
Consider the projection of U to such a factor M CG(Σ). By Theorem 4.1.30, this projection is either trivial, or contains a pseudo-Anosov on Σ. If the projection is trivial, we can ignore this factor, and consider U as a subgroup of the direct product of the remaining factors. If the projection is Z, then we can similarly ignore this factor, as the proof of Corollary 2.2.6 tells us that either the projection to the remaining factors is injective, or U has infinite centre.
Let S be the surfaces Σ in S such that the projection of U to M CG(Σ) is neither trivial nor Z. The above reasoning tells us that we can see U as a subgroup of Π Σ∈S M CG(Σ). Let U Σ be the projection of U to M CG(Σ) for some Σ ∈ S . As U Σ contains a pseudo-Anosov on Σ, Theorem 4.1.23 tells us that there is some N ∈ N, dependent only on S, such that for some k N we have that U k Σ contains a pseudo-Anosov on Σ. Recall that the action of U Σ on the curve complex associated to Σ is acylindrical, and this curve complex is hyperbolic. Therefore, as U Σ is not Z, Corollary 3.3.5 tells us that there exist α Σ , β Σ > 0 determined by Σ such that |U n Σ | (α Σ |U Σ |) βΣn for every n ∈ N. There are only finitely many homeomorphism classes of essential subsurfaces of S, that is the subsurfaces with boundary components that are either in ∂S or are essential in S, so we can take α = inf α Σ and β = inf β Σ over all possible subsurfaces, and we will get that α, β > 0. Now let g be the genus of S, p be the number of punctures and c be the number of connected components, recalling that if our original surface had b boundary components and p punctures, then by our use of Proposition 4.1.37 the surface that we are now considering has b+p punctures. By Theorem 4.1.32, the maximal rank of a free abelian subgroup of M CG(S) is 3g + p − 3c. Therefore as each factor of Π Σ∈S M CG(Σ) contains an infinite order element, we must have that |S | 3g + p − 3c. Let α = α 3g+p−3c and let β = β 3g + p − 3c .
Then by Corollary 2.2.16 we have that |U n | (α |U |) β n for every n ∈ N.
Combining this with the case that U is finite, we see that if we let α = min{α , 1}, then any set U that satisfies our initial hypotheses will also satisfy |U n | (α |U |) β n for every n ∈ N.
By Corollary 2.2.5, this is a dichotomy of subgroups, and so answers Question 1 for the pure subgroups of mapping class groups. Theorem 4.1.28 tells us that every subgroup of a mapping class group has a pure subgroup of finite index. We can therefore combine Theorem 4.4.1 with Proposition 2.2.27 to get the following.
1. U has a finite index pure subgroup with non-trivial centre.
2. |U n | (α|U |) βn for every n ∈ N. It does not immediately follow that this is a dichotomy of subgroups, as we have not yet shown that the groups in the first case do not have uniform product set growth. We prove that this is in fact a dichotomy below.
Proposition 4.4.4. Let M CG(S) be a mapping class group, and let G ≤ M CG(S) be finitely generated. Suppose that G has a finite index pure subgroup with non-trivial centre. Then for any constants α, β > 0 we can find a finite generating set U of G such that |U n | < (α|U |) βn for some n ∈ N.
Proof. First, let {ξ 1 , . . . , ξ l } be the (possibly empty) set of curves in the boundary of S. Recall from the proof of Theorem 4.4.1 that either T ξ1 , . . . , T ξ l ∩ G is non-trivial, in which case as this group is central in G we have that G does not have uniform product set growth by Corollary 2.2.5, or T ξ1 , . . . , T ξ l ∩ G is trivial, and we can assume without loss of generality that S has empty boundary. Here it does not matter that G might not be a pure subgroup, as no element in G permutes the boundary curves. Now let P be a finite index pure subgroup of G with non-trivial centre. Let {γ 1 , . . . , γ k } be the (possibly empty) set of curves in the canonical reduction multicurve for P . Recall from Corollary 4.1.26 that the group T γ1 , . . . , T γ k ∩P is central in P . Suppose that T γ1 , . . . , T γ k ∩P is non-trivial, and therefore infinite.
Note that P ⊴ G, and by Remark 4.1.31 we have that T γ1 , . . . , T γ k ∩ G ⊴ G, so T γ1 , . . . , T γ k ∩ P ⊴ G. In particular, recall that G acts on {γ 1 , . . . , γ k } by permuting the curves, and so g(γ i ) = γ j for every g ∈ G. Hence {gT γi g −1 : g ∈ G} ⊂ {T γ1 , . . . , T γ k }, which tells us that the orbit of any element of T γ1 , . . . , T γ k ∩ P under the conjugation action of G is finite. As T γ1 , . . . , T γ k ∩ P is an abelian group, it does not have exponential growth, so we can apply Proposition 2.2.4 to show that G does not have uniform product set growth. Now suppose that T γ1 , . . . , T γ k ∩ P is trivial. This has finite index in T γ1 , . . . , T γ k ∩ G, and so as T γ1 , . . . , T γ k is torsion-free we must also have that T γ1 , . . . , T γ k ∩ G is trivial. Recall from Proposition 4.1.24 that there is a natural homomorphism ϕ : M CG(S, {γ 1 , . . . , γ k }) → M CG(S\⋃ k i=1 γ i ), with kernel T γ1 , . . . , T γ k , where M CG(S, {γ 1 , . . . , γ k }) is the subgroup of M CG(S) that fixes the set of curves {γ 1 , . . . , γ k }. We have that the restriction of this homomorphism to ϕ : G → M CG(S\⋃ k i=1 γ i ) is injective, so we can view G as a subgroup of M CG(S\⋃ k i=1 γ i ).
Let S′ be the set of connected surfaces Σ in S such that the projection of P to M CG(Σ) is neither trivial nor Z. Note that an element g ∈ G may permute the surfaces in S, however g will stabilise the sets of subsurfaces S′ and S\S′. This is because for any g ∈ G the projection of P to any subsurface will be a subgroup of the projection of P to the image of that subsurface under g, as P is a normal subgroup, and the projection of a pure element to some Σ ∈ S is just the restriction to that subsurface.
We can therefore write g ∈ G as g = (a g , b g ), where a g is the restriction of g to ⋃ Σ∈S′ Σ, and b g is the restriction of g to ⋃ Σ∈S\S′ Σ. Let A = ⟨a g | g ∈ G⟩ and B = ⟨b g | g ∈ G⟩, and note that elements in these two groups commute with each other. Therefore G ≤ A × B.
It follows that P is also a subgroup of A × B. Let B′ be the projection of P to B. As we know that the projection of P to at least one Σ ∈ S\S′ is Z, we get that B′ is an infinite subgroup of a free abelian group, and so is itself free abelian.
We would like to show that B′ is a finite index subgroup of B. Let u 1 , . . . , u d be coset representatives of P in G, so u i = (a i , b i ). This means that for every g ∈ G there exists g′ ∈ P and u i such that (a i a g′ , b i b g′ ) = (a g , b g ). It follows that b 1 , . . . , b d are coset representatives of B′ in B, so we have that B is virtually free abelian.
Suppose that the projection of G to A is finite-to-1, then as P ≤ G the projection of P to A is finite-to-1. We can view P as a subgroup of A × B′, and as B′ is torsion-free, Corollary 2.2.13 tells us that the projection of P to A is in fact injective. By our construction of A, this would mean that P is isomorphic to a pure subgroup of Π Σ∈S′ M CG(Σ), where the projection of P to each factor contains a pseudo-Anosov, and is not Z.
Following the proof of Theorem 4.4.1, this would mean that P has uniform product set growth, and therefore cannot have non-trivial centre. This is a contradiction, so the projection of G to A is infinite-to-1. As B is virtually free abelian, it does not have exponential growth, and so by Proposition 2.2.11 we have that G does not have uniform product set growth. In other words, for any constants α, β > 0 we can find a finite generating set U of G such that |U n | < (α|U |) βn for some n ∈ N.
Let G be a finitely generated group, and let S = {S ⊂ G : S is a finite generating set of G}.
Let (X, d) be a metric space. Let x 0 , x, y ∈ X. The Gromov product of x and y at x 0 is (x · y) x0 = (d(x, x 0 ) + d(y, x 0 ) − d(x, y))/2.
Theorem 3.1.4 (Manning's bottleneck criterion). A geodesic metric space (X, d) is a quasi-tree if and only if there exists a constant ∆ ≥ 0 (the bottleneck constant) such that for every geodesic [x, y] in X, and every z ∈ [x, y], every path between x and y intersects the closed ball B(z, ∆).
Remark 3.1.5. It follows from Theorem 3.1.4 that a quasi-tree with bottleneck constant ∆ ≥ 0 is ∆-hyperbolic.
Proposition 3.1.6 (Gromov's Tree Approximation Lemma). [Gro87, p. 155-157] Let X be a δ-hyperbolic geodesic metric space. Let x 0 , z 1 , . . . , z n ∈ X, and let Y be a union of geodesic segments ⋃ n i=1 [x 0 , z i ]. Then there is an R-tree T and a map f : (Y, d) → (T, d′) such that: 1. For all 1 ≤ i ≤ n, the restriction of f to the geodesic segment [x 0 , z i ] is an isometry.
Proposition 3.1.7. [Ker20] Let (X, d) be a δ-hyperbolic quasi-tree, with bottleneck constant ∆ ≥ 0. Let x 0 ∈ X, and let Z ⊂ X. Let Y be a union of geodesic segments ⋃ z∈Z [x 0 , z]. Then there is an R-tree T and a map f : (Y, d) → (T, d * ) such that: 1. For all z ∈ Z, the restriction of f to the geodesic segment [x 0 , z] is an isometry.
An acylindrical action by a group G on a hyperbolic geodesic metric space X is non-elementary if G is not virtually cyclic and all orbits are unbounded. Definition 3.1.11.[Osi16] A group is acylindrically hyperbolic if it admits a non-elementary acylindrical action on a hyperbolic geodesic metric space.
Theorem 3.1.12.[Bal17] Every acylindrically hyperbolic group admits a non-elementary acylindrical action on a quasi-tree.
Definition 3.2.1. [DS20]Let U be a finite set of isometries of a δ-hyperbolic geodesic metric space (X, d), with δ > 0. The normalised 1 -energy of U is
Figure 1: The reduction lemma
Lemma 3.2.7 (Minimal energy). [DS20] Let U be a finite set of isometries of a ∆-hyperbolic geodesic metric space X, with ∆ > 0. Let y 0 ∈ S(x 0 , 1000∆), and Y = B(y 0 , 100∆) ∩ S(x 0 , 1000∆) [...], and |U Q′,Q | > (1/100)|U |.
If a ∼ b, then a −1 ga and b −1 gb are independent loxodromic elements.
For a finite simple graph Γ = (V, E), its right-angled Artin group is A(Γ) = ⟨V | [u, v] = 1 for all {u, v} ∈ E⟩.
Remark 4.1.3. Right-angled Artin groups are always torsion-free.
Lemma 4.1.4.[KK13] Let Γ be a finite connected graph. The extension graph Γ e is a quasi-tree.
Theorem 4.1.5.[KK14a] Let Γ be a finite connected graph. The action of A(Γ) on Γ e is acylindrical.
Theorem 4.1.17. [Bow08] Let S be a surface without boundary. The mapping class group M CG(S) acts acylindrically on the curve complex associated to the surface S, which is a hyperbolic space. Remark 4.1.18. The hyperbolicity constant of the curve complex is independent of S [Aou12, CRS14, HPW15].
Proposition 4.1.22.[MM99] Let S be a surface without boundary. A mapping class f ∈ M CG(S) is a loxodromic element in the acylindrical action on the curve complex associated to S if and only if it is a pseudo-Anosov.
Theorem 4.1.23. [Man13] Consider a mapping class group M CG(S), with S a connected surface without boundary. There exists a constant N = N (S) ∈ N such that for any finite symmetric U ⊂ M CG(S) where U contains a pseudo-Anosov, there exists n ≤ N such that U n contains a pseudo-Anosov.
Proposition 4.1.24 (Cutting homomorphism). [FM11, Proposition 3.20] Let M CG(S) be a mapping class group, and let {γ 1 , . . . , γ k } be a set of disjoint and homotopically distinct essential simple closed curves in S. Let M CG(S, {γ 1 , . . . , γ k }) be the subgroup of M CG(S) containing the mapping classes that fix this set of curves. Then there exists a natural homomorphism ϕ : M CG(S, {γ 1 , . . . , γ k }) → M CG(S\⋃ k i=1 γ i ), with kernel T γ1 , . . . , T γ k , where T γi is a Dehn twist around γ i .
Lemma 4.1.25.[FM11, Fact 3.7] Let γ be a simple closed curve in a surface S. Let T γ be a Dehn twist around γ, then for every g ∈ M CG(S) we have that gT γ g −1 = T g(γ) .
Corollary 4.1.26. [FM11, Fact 3.8] Let γ be a simple closed curve in a surface S. The Dehn twist T γ commutes with every mapping class in M CG(S, {γ}).
A mapping class f ∈ M CG(S) is said to be pure if there exists a set {γ 1 , . . . , γ k } of disjoint and homotopically distinct essential simple closed curves in S such that f fixes each γ i , does not permute the connected components of S\⋃ k i=1 γ i , and there exists a representative of f such that the restriction of this representative to any connected component of S\⋃ k i=1 γ i is either pseudo-Anosov or the identity. A subgroup of M CG(S) is pure if every element is a pure mapping class.
Theorem 4.1.28.[Iva92] Every mapping class group has a pure normal subgroup of finite index.
Theorem 4.1.30. [Iva92] Let G ≤ M CG(S) be a pure subgroup of a mapping class group. Let {γ 1 , . . . , γ k } be the set of curves in the canonical reduction multicurve for G. Let S be the set of connected components of S\⋃ k i=1 γ i . The homomorphism in Proposition 4.1.24 restricts to a homomorphism ϕ : G → Π Σ∈S M CG(Σ), with kernel T γ1 , . . . , T γ k ∩ G. Moreover, the projection of G to some M CG(Σ) is either trivial, or contains a pseudo-Anosov on Σ.
Theorem 4.1.32. [BLM83] Let S be a surface of genus g with p punctures, and c connected components. A free abelian subgroup of M CG(S) has rank at most 3g + p − 3c.
The active subsurface A(G) of an arbitrary subgroup G ≤ M CG(S), with pure finite index subgroup H, is given by A(G) = A(H).
Theorem 4.1.36. [Man13] Consider a mapping class group M CG(S), with S a connected surface without boundary. There exists a constant N = N (S) ∈ N such that for any finite symmetric U ⊂ M CG(S) there exists n ≤ N and f ∈ U n such that f has the same active subsurface as U .
Proposition 4.1.37 (Capping homomorphism). [FM11, Theorem 3.18] Let M CG(S) be a mapping class group of a surface S with boundary components {ξ 1 , . . . , ξ l }. Let S′ be the surface obtained from S by capping each boundary component by a once punctured disc. Then there exists a natural homomorphism ψ : M CG(S) → M CG(S′), with kernel T ξ1 , . . . , T ξ l , where T ξi is a Dehn twist around ξ i . As every element of M CG(S) certainly fixes the boundary components, we have by Corollary 4.1.26 that T ξ1 , . . . , T ξ l is central in M CG(S).
Theorem 4.2.1. [CLM12] Given the collection F = {f 1 , . . . , f k }, there exists M ∈ N such that the map φ : A(Γ) → M CG(S) defined by φ(s i ) = f i^M is an injective homomorphism.
Theorem 4.2.2.[CLM12] For any g ∈ A(Γ) we have that φ(g) is pseudo-Anosov on each connected component of Fill(g), and is the identity elsewhere.
Remark 4.2.3. In other words, Theorem 4.2.2 tells us that A(Γ) can be seen as a pure subgroup of M CG(S), and for any g ∈ A(Γ) the active subsurface of φ(g) is exactly Fill(g).
Let U ⊂ A(Γ) be finite. If the active subsurface of φ( U ) is disconnected then U ≤ A(Γ) is contained non-trivially in a direct product. Proof. Let U = {u 1 , . . . , u n }. Suppose the active subsurface of φ( U ) is disconnected. As noted in Remark 4.1.34, this means that Fill(u
Remark 4.4.2. As right-angled Artin groups embed as pure subgroups of mapping class groups, Theorem 4.4.1 reproves Theorem 4.3.4.
Proof. Let U ⊂ M CG(S) be finite and symmetric. Let P be a fixed finite index pure subgroup of M CG(S), where [M CG(S) : P ] = d. Then H = P ∩ U is a finite index pure subgroup of U . Suppose that H has trivial centre. Let V ⊂ H be finite and symmetric. Then by Theorem 4.4.1 there exist uniform α, β > 0 such that |V n | ≥ (α|V |) βn for every n ∈ N. Let m = 2d!
⋃ k i=1 γ i ). If we further assume that our subgroup does not permute the connected components of S\⋃ k i=1 γ i , then this will be a map into a direct product, which Section 2.2 tells us how to deal with. For these reasons, it makes sense for us to restrict ourselves to thinking about the pure subgroups of mapping class groups.
|U n | ≥ (α|U |) βn for every n ∈ N.
Let Y = Fill(s)\⋃ s′∈V \{s} Fill(s′), so the part of Fill(s) that is fixed under the action of any other φ(s′). Consider Y ∩ φ(g i+1 · · · g n )T (gi+1···gn) , and note that some connected component must have genus one, as each intersection that we remove is a disc. Let T be this component. We then have that φ(g i )T = T , and hence φ(g 1 · · · g i )T ⊂ Fill(s), as for each j < i we have that g j = s ±1 . Let T (g) = φ(g i+1 · · · g n ) −1 T . As T ⊂ φ(g i+1 · · · g n )T (gi+1···gn) we get that T (g) ⊂ T (gi+1···gn) ⊂ Fill(s). We also have that φ(g)T (g) = φ(g 1 · · · g i )T ⊂ Fill(s), so this is our required genus one subsurface. This concludes our induction, and as a consequence the statement is proved. We can now combine these results to show that under certain circumstances we can quickly generate loxodromic elements in A(Γ).
Proposition 4.2.10. Suppose Γ is a connected graph that is not an isolated vertex. There exists N = N (Γ) ∈ N such that for every finite symmetric U ⊂ A(Γ), where U is not contained non-trivially in a direct product and U is not conjugate into A(Γ′) for any induced subgraph Γ′ ⊂ Γ, there exists n ≤ N such that U n contains a loxodromic element on Γ e .
Proof. Let S = S(Γ) be the surface constructed such that φ : A(Γ) → M CG(S) is an injective homomorphism, in the sense of Theorem 4.2.1. As U is not contained non-trivially in a direct product, Lemma 4.2.5 tells us that the active subsurface of φ( U ) is connected. By Theorem 4.1.36, there exists a constant N ∈ N, dependent only on S, such that for some n ≤ N
Let g = hg′h −1 where g′ is cyclically reduced, and note that g is loxodromic if and only if g′ is loxodromic. Recall that Fill(g) = φ(h)Fill(g′). We have already noted that Fill(g) is connected, so we must have that Fill(g′) is connected. We now want to show that Fill(g′) has non-empty intersection with Fill(s) for every s ∈ V . s is in the support of f . Therefore as Fill(f ) ⊂ Fill(g) we have that φ(a)Fill(s) ⊂ φ(a)Fill(f ) ⊂ φ(h)Fill(g′),
Recall from Lemma 4.2.9 that Fill(s) and φ(h −1 a)Fill(s) cannot be isotoped to be disjoint, so Fill(s) ∩ Fill(g′) ≠ ∅. As Γ is not an isolated vertex, we can therefore conclude that g′ is a loxodromic element on Γ e by Lemma 4.2.4, so g ∈ U n is also a loxodromic element. We can rephrase this result to give it in a more similar format to Theorem 4.1.36.
Proposition 4.2.11. Suppose Γ is a finite graph. For U ⊂ A(Γ), let V U ⊂ V be minimal under inclusion such that U is conjugate into A(Γ(V U )). There exists N = N (Γ) ∈ N such that for every finite symmetric U ⊂ A(Γ), where Γ(V U ) is connected and U is neither cyclic nor contained non-trivially in a direct product, there exists n ≤ N such that U n contains a loxodromic element on Γ(V U ) e .
Proof. It was shown in Section 3 of [AM15] that for any U ⊂ A(Γ) there exists such a V U ⊂ V . Let U ⊂ A(Γ) be finite and symmetric, and suppose that Γ(V U ) is connected and U is neither cyclic nor contained non-trivially in a direct product. Let g ∈ A(Γ) be such that gU g −1 ⊂ A(Γ(V U )).
As gU g −1 is not cyclic, we have that Γ(V U ) is not an isolated vertex, and we also have that
U is isomorphic to Z 2. U is contained non-trivially in a direct product.
|U n | ≥ (α|U |) βn for every n ∈ N.
Proof. Note first that if U is finite, then it is the trivial group, and so will satisfy the inequality |U n | ≥ (α|U |) βn as long as α ≤ 1. We will therefore assume that U is infinite. 1. The centre of U is non-trivial.
|U n | ≥ (α|U |) βn for every n ∈ N.
Proof. Suppose U ⊂ G is symmetric and finite, and that the centre Z(U) is trivial. Suppose that U is finite. As pure subgroups are torsion-free, this means that U is trivial, so as long as we ensure that α ≤ 1 we will have that U satisfies |U n | ≥ (α|U |) βn for every n ∈ N. We can now assume that U is infinite. Let {ξ 1 , . . . , ξ l } be the (possibly empty) set of curves in the boundary of S. Let S′ be the surface obtained from S by capping each boundary
Corollary 4.4.3. Let M CG(S) be a mapping class group. There exist α, β > 0 such that for every finite symmetric U ⊂ M CG(S) at least one of the following must hold:
References
Yago Antolín and Ashot Minasyan. Tits alternatives for graph products. Journal für die reine und angewandte Mathematik, 704:55-83, 2015.
Tarik Aougab. Uniform hyperbolicity of the graphs of curves. Geometry & Topology, 17:2855-2775, 2012.
Sahana H Balasubramanya. Acylindrical group actions on quasi-trees. Algebraic & Geometric Topology, 17:2145-2176, 2017.
Emmanuel Breuillard and Koji Fujiwara. On the joint spectral radius for isometries of non-positively curved spaces and uniform growth. Preprint, arXiv:1804.00748, 2018.
O. V. Bogopolskii and V. N. Gerasimov. Finite subgroups of hyperbolic groups. Algebra and Logic, 34(6):343-345, 1995.
Joan S. Birman, Alex Lubotzky, and John McCarthy. Abelian and solvable subgroups of the mapping class group. Duke Mathematical Journal, 50(4):1107-1120, 1983.
Gilbert Baumslag and Alexei Myasnikov. Algebraic geometry over groups I. Algebraic sets and ideal theory. Journal of Algebra, 219:16-79, 1999.
Brian H. Bowditch. Tight geodesics in the curve complex. Inventiones mathematicae, 171:281-300, 2008.
J. O. Button. Explicit Helfgott type growth in free products and in limit groups. Journal of Algebra, 389:61-77, 2013.
Mei-Chu Chang. Product theorems in SL 2 and SL 3. Journal of the Institute of Mathematics of Jussieu, 7:1-25, 2008.
Yu-miao Cui, Yue-ping Jiang, and Wen-yuan Yang. Lower bound on growth of non-elementary subgroups in relatively hyperbolic groups. Preprint, arXiv:2103.02304, 2021.
Matt T. Clay, Christopher J. Leininger, and Johanna Mangahas. The geometry of right angled Artin subgroups of mapping class groups. Groups, Geometry, and Dynamics, 6(2):249-278, 2012.
Matt Clay, Kasra Rafi, and Saul Schleimer. Uniform hyperbolicity of the curve graph via surgery sequences. Algebraic & Geometric Topology, 14:3325-3344, 2014.
Rémi Coulon and Markus Steenbock. Product set growth in Burnside groups. Preprint, arXiv:2102.10885, 2021.
Ruth Charney and Karen Vogtmann. Finiteness properties of automorphism groups of right-angled Artin groups. Bulletin of the London Mathematical Society, 41:94-102, 2009.
François Dahmani, Vincent Guirardel, and Denis Osin. Hyperbolically embedded subgroups and rotating families in groups acting on hyperbolic spaces. Memoirs of the American Mathematical Society, 245(1156), 2017.
Thomas Delzant and Markus Steenbock. Product set growth in groups and hyperbolic geometry. Journal of Topology, 13:1183-1215, 2020.
Alex Eskin, Shahar Mozes, and Hee Oh. On uniform exponential growth for linear groups. Inventiones mathematicae, 160:1-30, 2005.
Benson Farb and Dan Margalit. A Primer on Mapping Class Groups. Princeton University Press, Princeton, NJ, 2011.
Koji Fujiwara and Zlil Sela. The rates of growth in a hyperbolic group. Preprint, arXiv:2002.10278, 2020.
Koji Fujiwara. The rates of growth in an acylindrically hyperbolic group. Preprint, arXiv:2103.01430, 2021.
Michael Gromov. Groups of polynomial growth and expanding maps. Publications Mathématiques de l'IHÉS, 53:53-78, 1981.
Mikhail Gromov. Hyperbolic groups. In Essays in Group Theory, pages 75-263. Springer-Verlag, New York, NY, 1987.
H. A. Helfgott. Growth and generation in SL 2 (Z/pZ). Annals of Mathematics, 167:601-623, 2008.
Sebastian Hensel, Piotr Przytycki, and Richard C.H. Webb. 1-slim triangles and uniform hyperbolicity for arc graphs and curve graphs. Journal of the European Mathematical Society, 17:755-762, 2015.
Tim Hsu and Daniel T. Wise. On linear and residual properties of graph products. The Michigan Mathematical Journal, 46:251-259, 1999.
Nikolai V. Ivanov. Subgroups of Teichmüller Modular Groups, volume 115 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1992.
Ilya Kapovich and Nadia Benakli. Boundaries of hyperbolic groups. In Combinatorial and Geometric Group Theory, volume 296 of Contemporary Mathematics. American Mathematical Society, Providence, RI, 2002.
Alice Kerr. Tree approximation in quasi-trees. Preprint, arXiv:2012.10741, 2020.
Sang-Hyun Kim and Thomas Koberda. Embedability between right-angled Artin groups. Geometry & Topology, 17(1):493-530, 2013.
Sang-Hyun Kim and Thomas Koberda. The geometry of the curve graph of a right-angled Artin group. International Journal of Algebra and Computation, 24(2):121-169, 2014.
Sang-Hyun Kim and Thomas Koberda. An obstruction to embedding right-angled Artin groups in mapping class groups. International Mathematics Research Notices, 2014(14):3912-3918, 2014.
Thomas Koberda. Right-angled Artin groups and a generalized isomorphism problem for finitely generated subgroups of mapping class groups. Geometric and Functional Analysis, 22:1541-1590, 2012.
Thomas Koberda. Geometry and combinatorics via right-angled Artin groups. Preprint, arXiv:2103.09342, 2021.
Ilya Kapovich and Daniel T. Wise. The equivalence of some residual properties of word-hyperbolic groups. Journal of Algebra, 223:562-583, 2000.
Johanna Mangahas. Uniform uniform exponential growth of subgroups of the mapping class group. Geometric and Functional Analysis, 19:1468-1480, 2010.
Avinoam Mann. How Groups Grow, volume 395 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, UK, 2012.
Johanna Mangahas. A recipe for short-word pseudo-Anosovs. American Journal of Mathematics, 135(4):1087-1116, 2013.
Howard A. Masur and Yair N. Minsky. Geometry of the complex of curves I: hyperbolicity. Inventiones mathematicae, 138:103-149, 1999.
Denis Osin. Algebraic entropy of elementary amenable groups. Geometriae Dedicata, 107:133-151, 2004.
Denis Osin. Acylindrically hyperbolic groups. Transactions of the American Mathematical Society, 368:851-888, 2016.
Alexander A. Razborov. A product theorem in free groups. Annals of Mathematics, 179(2):405-429, 2014.
Stanislav R. Safin. Powers of sets in free groups. Sbornik: Mathematics, 202(11):1661-1666, 2011.
Jean-Pierre Serre. Trees. Springer-Verlag, Berlin, Germany, 1980.
John Stallings. Groups of dimension 1 are locally free. Bulletin of the American Mathematical Society, 74(2):361-364, 1968.
Peter B. Shalen and Philip Wagreich. Growth rates, Z p -homology, and volumes of hyperbolic 3-manifolds. Transactions of the American Mathematical Society, 331(2):895-917, 1992.
Terence Tao. Product set estimates for non-commutative groups. Combinatorica, 28:547-594, 2008.
John S. Wilson. On exponential growth and uniformly exponential growth for groups. Inventiones mathematicae, 155:287-303, 2004.
Xiangdong Xie. Growth of relatively hyperbolic groups. Proceedings of the American Mathematical Society, 135(3):695-704, 2007.
| []
|
[
"Exact Gap Computation for Code Coverage Metrics in ISO-C",
"Exact Gap Computation for Code Coverage Metrics in ISO-C"
]
| [
"Dirk Richter \nMartin-Luther\nMartin-Luther-University of Halle-Wittenberg\nGermany\n",
"Christian Berg [email protected] \nUniversity of Halle-Wittenberg\nGermany\n"
]
| [
"Martin-Luther\nMartin-Luther-University of Halle-Wittenberg\nGermany",
"University of Halle-Wittenberg\nGermany"
]
| [
"Workshop on Model"
]
| Test generation and test data selection are difficult tasks for model based testing. Tests for a program can be meld to a test suite. A lot of research is done to quantify the quality and improve a test suite. Code coverage metrics estimate the quality of a test suite. This quality is fine, if the code coverage value is high or 100%. Unfortunately it might be impossible to achieve 100% code coverage because of dead code for example. There is a gap between the feasible and theoretical maximal possible code coverage value. Our review of the research indicates, none of current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly in an ISO-C compatible semantic and similar languages. We describe an efficient approximation of the gap in all the other cases. Thus, a tester can decide if more tests might be able or necessary to achieve better coverage. | 10.4204/eptcs.80.4 | [
"https://arxiv.org/pdf/1202.6121v1.pdf"
]
| 15,714,269 | 1202.6121 | 959052233f39f4eaaa9c80891fdf00e0802fb1e3 |
Exact Gap Computation for Code Coverage Metrics in ISO-C
2012
Dirk Richter
Martin-Luther
Martin-Luther-University of Halle-Wittenberg
Germany
Christian Berg [email protected]
University of Halle-Wittenberg
Germany
Exact Gap Computation for Code Coverage Metrics in ISO-C
Workshop on Model
201210.4204/EPTCS.80.4
Test generation and test data selection are difficult tasks for model based testing. Tests for a program can be meld to a test suite. A lot of research is done to quantify the quality and improve a test suite. Code coverage metrics estimate the quality of a test suite. This quality is fine, if the code coverage value is high or 100%. Unfortunately it might be impossible to achieve 100% code coverage because of dead code for example. There is a gap between the feasible and theoretical maximal possible code coverage value. Our review of the research indicates, none of current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly in an ISO-C compatible semantic and similar languages. We describe an efficient approximation of the gap in all the other cases. Thus, a tester can decide if more tests might be able or necessary to achieve better coverage.
Introduction
Tests are used in model-based testing to identify software defects. High-quality test generation and test data selection can be difficult when a test has to satisfy many requirements or cannot be created automatically because of the undecidability of the halting problem in Turing-powerful languages. Requirements on a test suite (a set of tests) are functional or non-functional (e.g. execution times, runtime, memory usage, correctness, or a minimum value of a code coverage metric). Code coverage metrics quantify the quality of a test suite only imprecisely and can merely guide testers. There is a gap between the feasible and the theoretical maximal possible code coverage value. Because of such gaps, demanded requirements are sometimes unsatisfiable: additional tests are generated unnecessarily while not all requirements can be satisfied, which enlarges the test suite and introduces redundancy. Fortunately, these problems (caused by metric imprecision) can be solved by computing the gaps, which is not possible for Turing-powerful languages in general. Therefore this paper presents suitable models in a new C-like syntax. These models allow an ISO-C compatible semantic to be used. We show how to compute such gaps exactly for these models using formal verification techniques, i.e. software model checking ideas. The paper is organized as follows: first we clarify basics and notation; we then present our framework, apply it to some common coverage metrics, and illustrate this on some examples. Finally we discuss related work and present a summary and conclusions.
Basics
Code Coverage Metrics γ
Let T P = 2^tests be the set of all possible sets of tests and P a program written in a common programming language such as C, C++ or Java. Each t = {α 1 , α 2 , ...} ∈ T P is a test suite with tests α i for program P. The function γ P : T P → [0, 1] is a code coverage metric, if γ P is monotonically increasing. The program P can be omitted, if it is well-defined by the context. In this paper some common code coverage metrics for functions, statements, decisions, branches and conditions will be considered as examples. Other ones (e.g. linear code sequence and jump coverage, jj-path coverage, path coverage, entry/exit coverage or loop coverage) can be adapted in a similar way.
The function coverage metric γ P f (t) := | f unc(t)|/| f unc(P)| is the ratio of functions f unc(t) that have been called in the test suite t, to all functions f unc(P) in P [12].
The statement coverage metric γ P s (t) := |stats(t)|/|stats(P)| is the ratio of statements stats(t) that has been executed in the test suite t, to all statements stats(P) in P [12]. To distinguish the same statement s on different program points l 1 and l 2 , we annotate each statement s with unique labels l 1 and l 2 from program P, so that l 1 : s ∈ stats(P) and l 2 : s ∈ stats(P). Let blocks(P) be all basic blocks [1] in P. Every program point has a surrounding basic block. The basic block inter-procedural control flow graph BBICFG P = (blocks(P), edges(P)) (see Fig. 1) consists of the basic blocks blocks(P) as nodes and edges edges(P) ⊆ blocks(P) 2 , where (b 1 , b 2 ) ∈ edges(P) iff there is an execution path of length 1 from the end of block b 1 to the entry of block b 2 (execution of the last statement of block b 1 ). The decision coverage metric γ P d (t) := |edges(t)|/|edges(P)| is the ratio of executed edges edges(t) of the control flow graph BBICFG P for t, to all edges edges(P) in P.
The branch coverage metric γ P b (t) := |blocks(t)|/|blocks(P)| is the ratio of basic code blocks blocks(t) executed during test suite t, to all basic blocks blocks(P) in P [13]. Even if all basic blocks are covered by test suite t and γ P b (t) = 1, there can be uncovered branching edges in the basic block inter-procedural control flow graph BBICFG P . Thus γ P d (t) < 1 is possible in this case.
Let bExpr(l) be the set of all Boolean sub-expressions on label l of program P and BExpr(P) := {(l, e) • l ∈ labels(P), e ∈ bExpr(l)}.
The condition or predicate coverage metric γ P c (t) := |exval(t, P)|/(2 · |BExpr(P)|) is the ratio of evaluations of boolean sub-expressions exval(t, P) ⊆ BExpr(P) × {true, false} of the test suite t, to all evaluations of boolean sub-expressions in P [12]. The relation exval(t, P) describes the evaluations of sub-expressions e on label l under test suite t, such that ((l, e), true) ∈ exval(t, P) iff there is a test α ∈ t where e can be evaluated to true on label l under test α. When boolean operations are not short circuited, condition coverage does not necessarily imply decision coverage.
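To make the difference between decision and condition coverage tangible, here is a small constructed C example (not taken from the paper); the function both_positive and the chosen tests are our own illustration.

```c
#include <stdio.h>

/* Constructed example: one decision built from two conditions. */
int both_positive(int a, int b) {
    if (a > 0 && b > 0) {  /* decision: whole test; conditions: a > 0, b > 0 */
        return 1;
    }
    return 0;
}

int main(void) {
    /* These two calls execute every statement and both outcomes of the
     * decision, so statement, branch and decision coverage are 100%.
     * The condition b > 0 is never evaluated to false, because C
     * short-circuits && when a > 0 already fails, so full condition
     * coverage would additionally need a test like both_positive(1, -1). */
    printf("%d\n", both_positive(1, 1));   /* decision evaluates to true  */
    printf("%d\n", both_positive(-1, 1));  /* decision evaluates to false */
    return 0;
}
```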
Code Coverage Metric Gap δ
Let γ : T P → [0, 1] be a code coverage metric for a program P. The code coverage metric gap δ γ (P) ∈ [0, 1] is the smallest difference between the coverage ratio of a test suite t ∈ T P and the theoretical maximal value 1:
δ γ (P) := inf t∈T P (1 − γ(t)).    (2)
Let δ x (P) denote δ γx (P), where x ∈ {c, d, s, f , b}. Obviously dead code can cause δ γ (P) > 0. If some evaluations of boolean (sub-)expressions cannot be realized, δ γ (P) > 0 is possible even without dead code (e.g. for condition or decision coverage). In Turing-powerful programming languages the gap δ γ (P) cannot be computed in general, because the halting problem is undecidable. In this case the gap δ γ (P) can only be approximated. We show how to compute the exact gap δ γ (P) for an ISO-C compatible semantic by adequate modeling.
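For instance, a single piece of dead code already forces δ s (P) > 0 and δ b (P) > 0; the following constructed C function (our own example) has a branch that no input can reach.

```c
/* Constructed example of dead code: the last branch can never execute,
 * so no test suite reaches 100% statement or branch coverage and the
 * corresponding gap is strictly positive. */
int classify(unsigned int x) {
    if (x % 2u == 0u) {
        return 0;      /* reachable for even inputs */
    } else if (x % 2u == 1u) {
        return 1;      /* reachable for odd inputs */
    } else {
        return -1;     /* dead: x % 2 is always 0 or 1 */
    }
}
```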
Suitable Models
A more expressive model describing the program behaviour allows for a more accurate approximation of the gap δ γ (P). If the model is not Turing powerful and the model behaviour is equivalent to the program behaviour, the gap δ γ (P) can be exactly computed. Therefore we defined an ISO-C compatible semantic using pushdown systems (PDS) [9]. The split of the ISO-C language definition into platform-independent semantics and platform-specific semantics has a serious implication for deciding the halting problem of ISO-C programs: whether a C program halts or not depends on the platform-specific semantic. Thus, even though the halting problem for a C program is decidable for a platform-specific semantic, the halting property can become undefined if no specific platform is assumed [9]. Now we present an extension of the PDS used in [9] to symbolic pushdown systems (SPDS) using an ISO-C like syntax. SPDS use a more compact representation and define the PDS configurations and transitions symbolically. A SPDS is a tuple S = (vgbl, f unc), where vgbl is a finite set of variables (global variables in ISO-C) and f unc is a set of functions (pairwise different names) with an initial function main ∈ f unc. Each variable v has an integer type 1 bits(v) ∈ N ≥1 and a fixed length len(v) ∈ N ≥1 . Every variable v is an array. A function is a tuple ( f , param, vlcl, stats), where ( f , param) is a function signature with a unique function name f and a finite list of parameter variables param. The set vlcl is a finite set of variables (local variables in ISO-C), such that param ⊆ vlcl. The body of f is a finite list of statements stats. Each statement l : s ∈ stats has a unique label l ∈ labels( f ), and f st( f ) ∈ labels( f ) is the label of the first statement in the list stats.
An access v[i] to a variable v ∈ vars with index i ∈ Z is evaluated as

[[ v[i] ]] cfg :=
  g(v, i)   if v ∈ vgbl and 0 ≤ i < len(v),
  c(v, i)   if v ∈ vlcl( f ) and 0 ≤ i < len(v),
  ⊥         otherwise.    (4)
For e ∈ Expr and a statement l : s ∈ stats( f ), s has one of the following forms:
• v = e; corresponds to an assignment of an expression e ∈ Expr to a variable v ∈ vars.
• f (v 1 , . . . , v n ); corresponds to a function call (call by value), iff ( f , param) is a signature, where param = [p 1 , p 2 , . . . , p n ], v i ∈ vars and bits(v i ) ≤ bits(p i ) for all 1 ≤ i ≤ n.
• return; corresponds to a function return.
• if (e) goto l′; corresponds to a conditional jump to label l′ ∈ labels( f ).
• v = rand(e); corresponds to assigning a random value in [0, [[e]]] to v, for e ∈ Expr, whereby rand(⊥) = ⊥.
Further ISO-C statements and variations for other languages can be mapped to these basic statements in the modeling phase. All variables (global and local) are uninitialized and have initially a random value. A test α for S is a subset of global variables with predefined values for label f st(main). A configuration s = (g, [(l n , c n ), (l n−1 , c n−1 ), . . . , (l 1 , c 1 )]) of S represents a state of the underlying Kripke structure with the current execution label l n ∈ labels, the valuation g : vgbl × Z → Z of global variables and the stack content. The stack content consists of a list of function calls with current execution labels l i ∈ labels(S) and valuations for local variables c i : vlcl( f unc(l i )) × Z → Z. The head of s is head(s) = (g, (l n , c n )). The set of all possible configurations is con f (S). A run of S is a sequence of consecutive configurations beginning with an initial configuration (g init , [( f st(main), c init )]) ∈ con f (S). SPDS are (like PDS) not Turing powerful and can be used to model the behaviour of (embedded) ISO-C programs. There is no restriction on the recursion depth.
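To give a feel for the shape of such a model, the following fragment sketches a tiny SPDS in the C-like syntax described above; it is our own illustrative model, and details such as the parameter declaration syntax are assumptions rather than the paper's exact notation.

```c
/* Illustrative SPDS model (our own sketch, not from the paper). */
int flag(1)[1];                  /* global: 1-bit variable of length 1     */

void check(int x(8)[1]) {        /* parameter: 8-bit value (call by value) */
  l0: if (x[0] > 0) goto l2;     /* conditional jump                       */
  l1: flag[0] = 0;               /* assignment                             */
      return;
  l2: flag[0] = 1;
      return;
}

void main() {
  int y(8)[1];                   /* local variable, initially random       */
  m0: check(y);                  /* function call                          */
  m1: return;
}
```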
Exact Gap Computation Framework
Let γ : T P → [0, 1] be a code coverage metric for a program P. Our framework to compute the gap δ γ (P) consists of the following steps:
1. If necessary, create a SPDS model S with ISO-C compatible semantic for program P.
2. Modify the model S to a SPDS model S′ to enable gap analysis for the code coverage metric γ.
3. Compute exact variable ranges for some new variables in S′.
4. Conclude the exact size of the gap δ γ (S) in S for the code coverage metric γ.
5. Conclude the size of the gap δ γ (P) in P.
SPDS Modeling (step 1)
If the given program P is not written in ISO-C (e.g. Java) or P has another platform-specific semantic, we create a SPDS model S for P by abstraction. Otherwise the behaviour of S is the same of P by mapping all the ISO-C statements to the basic SPDS-statements of section 2.3 using abbreviations (described in this section). Java can be handled using the tool JMoped [7]. Often other languages and corresponding statements can be mapped to the basic SPDS-statements in a similar fashion. For simplification we present some common mappings, which are abbreviations for previously defined basic SPDS-statements. We sketch the ideas only, because of limited space. Fig. 1 shows the SPDS example P 1 in ISO-C syntax, where "char x" in line 1 is an abbreviation for "int x(8) [1]" to declare an integer array of type bits(x) = 8 and len(x) = 1. Omitted Returns and Labels: If there is no return statement at the end of a function body, its existence is assumed during interpretation of the symbolical description of S. The same holds for statements without labels, such that each statement in the SPDS has a unique label after interpretation. Parameter Expressions: Basic SPDS-statements allow variables to be passed as parameters in function calls. We can simulate to pass expressions by temporary local variables. Let "l : f (e 1 , e 2 , . . . , e n );" be a function call with expressions e i ∈ Expr, where ( f , param) is a signature with param = [p 1 , p 2 , . . . , p n ]. We introduce new local SPDS-Variables pe i / ∈ vars with type bits(pe i ) = bits(p i ) and len(pe i ) = 1. These variables are used to evaluate the expressions before the function call: pe i = e i . Instead of e i now pe i is passed to f using the basic SPDS-statement f (pe 1 , pe 2 , . . . , pe n ). The function call "l : f (e 1 , e 2 , . . . , e n )" is interpreted as "l : pe 1 = e 1 ; pe 2 = e 2 ; . . . pe n = e n ; f (pe 1 , pe 2 , . . . , pe n )". Now only basic SPDSstatements are used. The code coverage metrics are adapted accordingly. For example the statements pe i = e i ; are ignored for the statement coverage metric. Return Values: A function can return a value. This value can be used to set a variable "v = f (...);". If a function returns an expression e via "return e", a new global variable ret f / ∈ vars is introduced. The type of ret f equals the return type of function f and len(ret f ) = 1. The statement "return e;" is interpreted as "ret f = e; return;". On the other hand the assignment "v = f (...);" is interpreted as " f (...); v = ret f ;" to store the return value of f in v. Function Calls in Expressions: If there is a function call f (..) in an expression e, this function is evaluated in a temporary local variable. Boolean operations in SPDS are strict and not short circuited. Short circuited expressions (also increments i++ and decrements i--) can be mapped to strict expressions without side effects by several conditional statements. Thus every function call f (..) in an expression e will be definitively evaluated during the evaluation of e. Accordingly it is safe to do every call before the evaluation of e. Sometimes the order of this evaluation is implementation defined (as in ISO-C) and depends on the source language (e.g. i++*i++). Thus we use priority and associativity for calculating this order. The intermediate representation of a compiler can be used, too, to achieve a mapping to SPDS. Unconditional Jump: The statement "goto l;" is mapped to "i f (1) goto l;". 
The dead branching edge to the following statement is ignored by the decision coverage etc. Skip Statement: Particularly low level languages have often a "no operation" statement. We use the statement skip, which does not change variable settings. The statement "l : skip;" can be interpreted using the conditional branch "l : i f (0) goto l;". If there is a global variable v ∈ vars, this can also be interpreted as "l : v = v;". The former needs no consideration for the coverage metrics. Random Numbers: In ISO-C the function rand() returns a pseudo random value between 0 and the constant RAND MAX, where RAND MAX depends on the system. We can map this behaviour using the SPDS function rand(RAND MAX). A similar mapping for random numbers is possible in other languages. Conditional Statements: Let s1 and s2 be lists of statements. The conditional statement "i f (e) s1 else s2;" is interpreted as "i f (e) goto l 1 ; s2; goto l ; l 1 : s1; l : skip;", where l 1 , l / ∈ labels. "i f (e) s;" is an abbreviation for "i f (e) s else skip;". Local Variable Definitions: A local variable can be defined during an assignment of a basic block or a loop header. Such local variable definitions are mapped to local variables of the surrounding function.
Renaming can be done easily if necessary. Loops: A for-loop of the form "for (init; cond; inc) body;" is interpreted as "init; l : body; inc; if (cond) goto l;", where l ∉ labels. The do and while loops are interpreted in a similar way. Modular Arithmetic and Integer Overflow: The ISO-C standard says that an integer overflow causes "undefined behaviour", meaning that compilers conforming to the standard can generate any code: from completely ignoring the overflow to aborting the program. Our solution of terminating the system conforms to the ISO-C standard. Evaluations of expressions in SPDS are not restricted to arithmetic bounds, but dynamic type mismatches are possible for assignments v = e. In the case of modeling non-terminating modular arithmetic, the modulo operator % can be used to shrink the expression e to fit the size bits(v). Hence, a dynamic type mismatch does not occur. Dynamic Memory and Pointers: In ISO-C a certain amount of the heap can be reserved using the function malloc(int). It returns an address on the heap. The heap is finite, because the number of addresses is finite. This behaviour is simulated using a global array heap of type bits(heap) = 8 with length len(heap) = m and a global variable ptr with type bits(ptr) = log 2 (m) and len(ptr) = 1, which points to the next free space in the heap array. The function malloc(int) can be implemented as shown in Listing 1 with 1024 heap elements, which needs a 10 bit variable ptr for accessing. A memory exception occurs (label memout) if there is not enough memory left. Dynamic Arrays: Array semantics in ISO-C is defined by pointers, and access to array elements is defined by pointer arithmetic. Thus malloc(int) can be used for this purpose.
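Listing 1 itself is not reproduced at this point in the text, so the following plain-C sketch only illustrates the described construction: a fixed heap array, a ptr index into it, and a memout case when not enough memory is left. All names and details here are our own assumptions.

```c
#include <stdint.h>

#define HEAP_SIZE 1024u             /* m = 1024 heap cells of 8 bits each */

static uint8_t  heap[HEAP_SIZE];    /* models the global SPDS array heap  */
static uint16_t ptr = 0;            /* models the 10-bit index variable   */

/* Returns the start index of a freshly reserved block, or -1 in the
 * memout case; the SPDS model would jump to the label memout instead. */
static int my_malloc(unsigned int size) {
    if (size > HEAP_SIZE - (unsigned int)ptr) {
        return -1;                  /* memout: not enough memory left     */
    }
    int start = (int)ptr;
    ptr = (uint16_t)(ptr + size);   /* bump allocation, no free() needed  */
    return start;
}
```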
Other constructs and statements from other languages (e.g. classes, structs, objects, dynamic parameter lists, etc.) can be mapped in a similar way. If an arithmetic exception occurs, the SPDS ends and the corresponding ISO-C program P can have undefined behaviour according to the language specification. P can terminate, which is a complying behaviour. Therefore this behaviour is used for our modeling process. Other implementation defined behaviour can be modeled similarly.
Extraction of Exact Variable Ranges (step 3)
In step 2 a SPDS S′ is created from the SPDS S by slightly modifying S (explained in the next section). For a PDS B an automaton Post * (B) can be computed, which accepts all reachable configurations of B [18]. Thus for the SPDS S′ a similar automaton Post * (S′) can be created, because S′ is just a symbolical PDS. This is a basic step in symbolic model checking using Moped [14]. We use the Post * algorithm of the model checker Moped for our implementation by mapping our SPDS definitions to the input language Remopla [17]. The proof is a consequence of the definitions. The computation of exact variable ranges is more time-consuming than model-checking the reachability in S′ [8]. Fortunately range l (v) can be approximated using static data flow analyses and test suites. This is the case when focusing on efficiency, or for unbounded recursion depth in combination with unbounded parallelism. For further comparisons, explanations, and proofs see [8].
SPDS Supplementation (step 2) and Exact Gap Inference (step 4)
Now we show, by way of example, how to apply our framework to common code coverage metrics.
Function Coverage Gap δ f (S)
We supplement S with a new global variable v ∉ vgbl(S) of type bits(v) = 1 and len(v) = 1, without any assignment to or reading usage of v, to ensure the existence of at least one global variable in S′. By construction this variable v has a random undefined value [[v]] ∈ {0, 1} on each label l ∈ labels(S′), i.e. on each program point. The exact function coverage gap δ f (S) can be concluded from the exact ranges of variables in S′ as follows:
Lemma 3.2 δ f (S) = 1 − |{ f ∈ f unc(S) • range S′ f st( f ) (v) ≠ ∅}| / | f unc(S)|    (5)
Proof (sketch): Choose a test suite t′ ∈ T S such that | f unc(t′)| is maximal. With the maximality of t′ we have

| f unc(t′)| ≥ |{ f ∈ f unc(S) • range S′ f st( f ) (v) ≠ ∅}|.    (6)

Every function f ∈ f unc(t′) has a cover witness test α ∈ t′, so that the label f st( f ) is reachable under test α. Thus it is range S′ f st( f ) (v) ≠ ∅. On the other hand we obtain

| f unc(t′)| ≤ |{ f ∈ f unc(S) • range S′ f st( f ) (v) ≠ ∅}|,    (7)

because each f ∈ f unc(S) with range S′ f st( f ) (v) ≠ ∅ has at least one test α (not necessarily ∈ t′) to cover the function f , which can be detected by an evaluation of v. Accordingly it is

sup t∈T S | f unc(t)| = |{ f ∈ f unc(S) • range S′ f st( f ) (v) ≠ ∅}|,    (8)

which is equivalent to

inf t∈T S (1 − γ f (t)) = 1 − |{ f ∈ f unc(S) • range S′ f st( f ) (v) ≠ ∅}| / | f unc(S)|.    (9)
The exact branch coverage gap δ b (S) can be computed similarly. Instead of function entry points, just the block entry points are considered.
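As a small worked example of this construction (ours, not the paper's): in the following C program the function g can never be called, so the entry label of g is unreachable, the range of the auxiliary variable v there is empty, and the exact function coverage gap is δ f = 1 − 2/3 = 1/3.

```c
/* Constructed example: g() has no call site, so no test suite can cover it. */
int f(int x) { return x + 1; }   /* reachable from main                      */
int g(int x) { return x - 1; }   /* dead: never called anywhere              */

int main(void) {
    return f(41) == 42 ? 0 : 1;  /* a single test already covers main and f  */
}
```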
Statement Coverage Gap δ s (S)
The SPDS S will be supplemented with a new variable v ∉ vgbl(S) with type bits(v) = 1 and len(v) = 1, similarly to the function coverage gap. The exact statement coverage gap δ s (S) can be computed:
Lemma 3.3 δ s (S) = 1 − |{l : s ∈ stats(S) • range S′ l (v) ≠ ∅}| / |stats(S)|    (10)
Proof (sketch): Similar to Lemma 3.2, prove sup t∈T S |stats(t)| = |{l : s ∈ stats(S) • range S′ l (v) ≠ ∅}|.
Decision Coverage Gap δ d (S)
The branch coverage uses the nodes of the control flow graph BBICFG S and the decision coverage uses the edges. The execution of an edge (b 1 , b 2 ) ∈ edges(S) in the control flow graph BBICFG P depends on several conditions such as arithmetic overflow, division-by-zero or boolean expressions for conditional branches. To compute the exact decision coverage gap, we introduce a new global variable v in ∉ vars(S) into S′ with type bits(v in ) = 1 + log 2 (|blocks(S)|) and len(v in ) = 1. Each label l belongs to a basic block b l , which can be identified by a unique number n bl ∈ N ≥0 . This number is assigned to the variable v in to detect the past basic block for a statement. The type bits(v in ) is big enough to store every unique identifier n bl . Each statement "l : s" ∈ stats(S) is modified to "l : v in = n bl ; l′ : s" in S′, where l′ ∉ labels(S) is unique. So it is possible to determine the past basic block on label l using the exact range of the SPDS variable v in ∈ vgbl(S′). The exact decision coverage gap δ d (S) can be computed:
Lemma 3.4
$\delta_d(S) = 1 - \frac{|\{\, (a,b) \in edges(S) \bullet n_a \in range^{S'}_{fst(b)}(v_{in}) \,\}|}{|edges(S)|}$   (11)
Proof (sketch): Let a, b ∈ blocks(S) be basic blocks. By construction it is n_a ∈ range^{S′}_{fst(b)}(v_in) iff there is an execution path from the end of basic block a directly to the first label fst(b) of basic block b in S. This is equivalent to the existence of a test α such that (a, b) ∈ edges(α). Thus we have
$\exists \alpha \in t' : (a,b) \in edges(\alpha) \iff n_a \in range^{S'}_{fst(b)}(v_{in})$   (12)
for a chosen t′ ∈ T_S, where |edges(t′)| is maximal. Hence it is
$\sup_{t \in T_S} |edges(t)| = |\{\, (a,b) \in edges(S) \bullet n_a \in range^{S'}_{fst(b)}(v_{in}) \,\}|,$   (13)
which shows (11) similarly to Lemma 3.2.
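To make the role of v_in more tangible, the following runnable ISO-C toy program is our own, simplified variant of the supplementation (it records the identifier of the block that was executed last at the end of each block instead of via fresh labels): observing which values v_in can take at the entry of a block tells us which BBICFG edges are executable.

#include <stdio.h>

static int v_in = -1;            /* supplementation variable v_in */

int main(void)
{
    int x = 0, y = 1;

    /* basic block b0 */
    v_in = 0;                    /* hypothetical identifier n_{b0} = 0 */

    if (y < x) {                 /* decision: edge (b0,b1) or (b0,b2) */
        /* basic block b1 */
        puts("then-branch");
        v_in = 1;                /* hypothetical identifier n_{b1} = 1 */
    }

    /* basic block b2: the set of values v_in can hold here corresponds to
     * { n_a | edge (a,b2) is executable }; with x = 0 and y = 1 only 0 occurs,
     * so the edge (b1,b2) contributes to the decision coverage gap. */
    printf("entered b2 from block %d\n", v_in);
    return 0;
}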
Condition Coverage Gap δ c (S)
For the condition coverage all boolean sub-expressions (conditions BExpr(S)) on each label are considered. The theoretical maximal value can be achieved when every condition (l, e) ∈ BExpr(S) can be 1 (true) and 0 (false). For each boolean sub-expression b ∈ B = ⋃_{l∈labels(S)} bExpr(l) we introduce new boolean global variables v_b ∉ vars(S) with type bits(v_b) = 1 and len(v_b) = 1 into S′. Let further bExpr(l) = {e₁, e₂, ..., eₙ} be the set of all boolean sub-expressions on label l. Each statement "l : s" ∈ stats(S) with bExpr(l) ≠ ∅ will be modified to "l′ : v_{e₁} = e₁; v_{e₂} = e₂; ... v_{eₙ} = eₙ; l : s" in S′, where l′ ∉ labels(S) is unique. The statement "l : s" ∈ stats(S) will be modified to "l′ : skip; l : s" in S′, if bExpr(l) = ∅. Hence the existence of the label l′ ∈ labels(S′) is guaranteed. Thus the exact condition coverage gap δ_c(S) can be computed:
Lemma 3.5
$\delta_c(S) = 1 - \frac{\sum_{l \in labels(S)} \sum_{e \in bExpr(l)} |range^{S'}_{l}(v_e)|}{2 \cdot |BExpr(S)|}$   (14)
Proof (sketch): Choose a t′ ∈ T_S such that |exval(t′, S)| is maximal. Then it is
$((l, e), b) \in exval(t', S)$   (15)
⇔ expression e can be evaluated to b ∈ {0, 1} on label l in S   (16)
$\iff b \in range^{S'}_{l}(v_e).$   (17)
This proves (14), since
$\sup_{t \in T_S} |exval(t, S)| = \sum_{l \in labels(S)} \sum_{e \in bExpr(l)} |range^{S'}_{l}(v_e)|.$   (18)
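Analogously, the condition supplementation can be pictured with a small, runnable ISO-C toy program (again our own illustration, not the exact SPDS rewriting): every boolean sub-expression of a decision is copied into its own variable v_e just before the decision, so the set of values each v_e can take tells us which conditions can become true and false.

#include <stdio.h>

int main(void)
{
    /* hypothetical inputs; in the SPDS setting these would be test inputs */
    int x = 0, y = 1, z = 1, w = 0;

    /* supplementation variables for the sub-expressions of the decision */
    int v_e1, v_e2, v_e3;

    v_e1 = (x < y);              /* sub-expression e1 */
    v_e2 = (z > w);              /* sub-expression e2 */
    v_e3 = (x < y && z > w);     /* the whole decision */

    if (x < y && z > w)
        puts("decision taken");

    /* |range(v_e)| counts how many of the two truth values each condition
     * can reach; summing over all conditions yields the numerator of (14) */
    printf("e1=%d e2=%d e3=%d\n", v_e1, v_e2, v_e3);
    return 0;
}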
Conclusion for δ γ (P) based on δ γ (S) (step 5)
The computed gap δ_γ(S) is exact (δ_γ(S) = δ_γ(P)) if the behaviour of S is equivalent to the behaviour of P, i.e. if step 1 neither abstracts from nor simplifies the program behaviour. This is the case for our ISO-C compatible semantics [9] on C programs. If S abstracts from the behaviour of P, δ_γ(S) is only an approximation of δ_γ(P). The approximation degree depends on the degree of this abstraction.
Gap Approximation using δ⁻_γ and δ⁺_γ
The gap can be approximated by abstracting the program P to the simpler behaviour of an SPDS S as shown above. On the other hand, the exact variable ranges range_l(v) can be approximated, too. This is a more practical approach, particularly for huge software systems. Let range⁺_l(v) be an over-approximation and range⁻_l(v) an under-approximation of range_l(v). The sets range⁻_l(v) can be realized using a test suite t ∈ T_P: all variable values occurring during the tests α ∈ t can be used as a lower bound for range_l(v). On the other hand, range⁺_l(v) can be realized using a conservative data flow analysis. This usually results in additional variable values which can never actually be achieved. Both δ⁻_γ and δ⁺_γ can be defined similarly to δ_γ, using range⁻_l(v) and range⁺_l(v) instead of range_l(v). It is easy to see how to bound the gap δ_γ using range⁻_l(v) and range⁺_l(v):
Lemma 4.1 $\delta^+_\gamma \leq \delta_\gamma \leq \delta^-_\gamma$.
Obviously the gap approximation is perfect and an exact gap is found if δ⁺_γ = δ⁻_γ. In this case it is not necessary to compute exact variable ranges.
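In an implementation, this observation translates into a cheap pre-check before any expensive Post* computation. The C sketch below is a hypothetical driver of ours that illustrates the idea; delta_plus, delta_minus and exact_gap stand for the data-flow-based bound, the test-suite-based bound and the exact SPDS computation, respectively.

/* Hypothetical driver illustrating Lemma 4.1: the expensive exact analysis is
 * only run when the cheap bounds do not already coincide. */
double coverage_gap(double delta_plus,          /* lower bound on the gap, from range^+ (data flow analysis) */
                    double delta_minus,         /* upper bound on the gap, from range^- (test suite)          */
                    double (*exact_gap)(void))  /* exact gap via Post* on the supplemented SPDS               */
{
    if (delta_plus == delta_minus)   /* delta^+ <= delta <= delta^-        */
        return delta_plus;           /* the approximation is already exact */
    return exact_gap();
}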
Exemplary Illustration
As a comparison for our method, the values calculated by gcov [22] (footnote 5) are presented at the end of this section. The free tool gcov calculates the code coverage during an execution, which can be used to track the code coverage of a test suite.
To show the concepts presented so far, we use example P₁ in Fig. 1 and example P₂ in Fig. 2. The constructs in the presented ISO-C code are automatically mapped to SPDS statements as described in section 3.1. P₁ contains an arithmetic exception, caused by a division-by-zero. Hence P₁ contains a lot of dead code, and any test suite with at least one test would be complete (i.e. there is no way to cover more code). There is no test necessary (t = ∅), because the variables x and y are initialized on labels l0 and l1. Thus it is range_l(x) = range⁻_l(x) = {0} and range_l(y) = range⁻_l(y) = {1} for all l ∈ L, where L = {l0, l1, l2, lb0, lb1}. It is range_l(x) = range⁻_l(x) = range_l(y) = range⁻_l(y) = ∅ for all l ∈ labels(P₁) \ L. All the conditions in conditional branches are considered to be statements (see the BBICFG in the right part of Fig. 1), because a conditional branch can contain a statement (e.g. x=0 in "if (x=0)...").
Thus it is |stats(P₁)| = 15, |blocks(P₁)| = 12, |edges(P₁)| = 15, BExpr(P₁) = {(lc0, x == 0), (lb0, y < x)} and exval(t, P₁) = {((lb0, y < x), false)}. Additionally, P₁ is supplemented with the variables v, v_in, v_{x==0} and v_{y<x} to the program P′₁, where the range values can be computed accordingly. An inter-procedural conservative interval analysis [20,21] can detect range_l(v) = range_l(v_in) = ∅ for all l ∈ L′ = {lb0′, lc2} and range_l(v) = {0, 1} for all l ∈ labels(P′₁) \ L′. This is used to compute δ⁺_f, δ⁺_b and δ⁺_s. The edges (b2, b3), (b7, b9), (b9, b2), (b3, b10), (b3, b5) ∈ edges(P₁) are never executed, which is discovered by the interval analysis. This results in δ⁺_d(P₁) = 5/15. Thus, the coverage metrics and gaps of Table 1 can be calculated for P₁ as described in the previous sections. The values were obtained using our current implementation of the program described in [8]. Computing the values presented in Table 1 takes less than two seconds on a modern Core i7 CPU equipped with 8 GiB RAM. Table 1 also contains the approximated values as presented in section 4. Although the coverage metrics are far below 100 %, the test suite t is complete: additional tests cannot improve these coverages, as confirmed by the gaps.

Table 1: Code Coverages and Gaps for P₁ in Fig. 1 and P₂ in Fig. 2 (columns: γ_f(t), δ⁺_f, δ_f, δ⁻_f; γ_s(t), δ⁺_s, δ_s, δ⁻_s; γ_d(t), δ⁺_d, δ_d, δ⁻_d; γ_b(t), δ⁺_b, δ_b, δ⁻_b; γ_c(t), δ⁺_c, δ_c, δ⁻_c)

Table 2: range⁻_l in P₂ using test suite t for P₂ in Fig. 2
label(P₂)                          range⁻_*(x)   range⁻_*(y)   range⁻_*(z)   range⁻_*(w)
m1, m2, m3, m5′, lc0, lc1, lc2     ∅             ∅             ∅             ∅
m0, m4                             {0, 1, 10}    {1, 5}        {0, 1, 10}    {1, 5}
m5, m6                             {0}           {1, 5}        {0, 1, 10}    {1, 5}

The test suite t discovers range⁻_l(v) = {0, 1} for each label l reachable in t. Most compilers, e.g. GCC and CL (footnote 6) from Microsoft, are not able to do the flow-sensitive, context-sensitive inter-procedural analysis needed for a more precise lower bound in this example. The abstract interpretations done in a compiler or analysis tool do not yield such a precise lower bound, as most other tools are essentially model checkers. Hence, the lower bound on the range for x and y would include all possible values at label lb1 in P₁. Thus the lower bound on the function gap would be 0.
Additionally, using the tool gcov to compute the coverage of the test suite of example P₁, no coverage is achieved by any test suite, because gcov does not take arithmetic exceptions into account, resulting in 0% coverage. Of more practical relevance is the calculation of the coverage gap for non-arithmetic errors. For instance, example P₂ in Fig. 2 has a difficult condition (x < y && z > w). The variables x, y, z and w of P₂ (Fig. 2) are global variables of type char, i.e. bits(..) = 8. The whole block below (m1−m3) becomes dead if the condition on label m0 evaluates to false. Thus commit is not called and the (indirect) recursion is not started. Additionally, for all possible test cases, the condition (x == 127) on label m5 never evaluates to true. Let t = {(0, 1, 0, 1), (1, 1, 1, 1), (10, 5, 10, 5)} be a test suite with (x, y, z, w) being the values set before calling main. It is |stats(P₂)| = 11, |blocks(P₂)| = 8, |edges(P₂)| = 10, BExpr(P₂) = {(m0, x < y), (m0, z > w), (m0, x < y && z > w), (m5, x == 127)} and exval(t, P₂) = {((m0, x < y), true), ((m0, x < y), false), ((m0, z > w), true), ((m0, z > w), false), ((m0, x < y && z > w), false), ((m5, x == 127), false)}.
The test α = (0, 1, 1, 0) would be a good candidate for the test suite t, because γ_s({α}) = 91% is perfect (as proved by the gap δ_s). Table 1 contains the calculated coverage metrics and gaps for P₂. As one can see from the third line of Table 1, the exact gap in the existing code is rather small: it consists of the condition x == 127 on label m5 and the following code block. This is one of the examples in which our method can instruct the tester to expand the test suite. More code cannot be covered, because in each test the variables x and z as well as y and w are aliases, although most of the code in the example is alive. As seen in the previous example, the approximated lower and upper bounds are not perfect. An upper bound on the gap δ_f, namely δ⁻_f = 1 − 0.5, follows because only one of the two available functions was called during the execution of the test suite t. It is range_l(v) = {0, 1} for every label l ∈ labels(P₂) in P′₂ (the supplementation of P₂). P′₂ is also supplemented with v_in and the condition variables v_e for each e ∈ BExpr(P₂), so that the range values can be computed and approximated using an inter-procedural conservative interval analysis (range⁺). The results for the variables are shown in Tables 2 and 1.
Contrary to gcov, our computation of the code coverages follows the C program and does not rely on any symbolic assembler. Such abstractions might report more coverage than the actual coverage in ISO-C. The statement coverage reported by gcov corresponds to γ_s. Values close to or exactly matching γ_X can be obtained from gcov for these particular examples. Not all values will match γ_X, because gcov uses a different definition of decision and branch coverage and relies on symbolic assembler output.
Related Work
A lot of research has been done on achieving better coverage for a test suite [4]. However, an important point is often missing: in practice it is often impossible to cover 100% of the code, because of gaps.
To the best of our knowledge, no research has been done to compute provably exact gaps in code coverage. Conservative strategies underestimate the coverage gap [5]. Current research only approximates the gap. For instance, [3] presents a method to automatically add tests by computing the gap between the code covered by the test suite and the possible code coverage. The authors miss the important point that there is code which will and can never be used. In [3] the emphasis is on large-scale projects, but especially in such large projects there is code which cannot be executed and should be removed by the compiler. As [11] describes, some code parts are more important than others. Testing parts of a program which will never be executed is then a waste of resources. Gittens et al. use a domain expert to categorize the code, i.e. to decide for which parts of the source code their tool should generate tests automatically. Our gap computation presented in this paper could be used to categorize the code automatically, without depending on a domain expert. Another project of interest is [16] by Kicillof et al., which shows how to create checkable models. The focus of Kicillof et al. is on models which can be created by stakeholders or maybe even marketing experts, and thus is directed at their specific problems at Microsoft. Most other research concerning the computation of gaps in coverage targets pre-silicon design validation, e.g. [6,19].
Both papers on pre-silicon design validation are not concerned with testing gaps. They rather check if a specification can be achieved. However, our paper is concerned with languages similar to C, not any Register Transfer Language (RTL) or even specifications.
As Regehr correctly writes in [15], the specifications checked in [6,19] might have been wrong in the first place. One solution proposed by Regehr for finding errors in specifications is to have more people look over them. A different solution uses our tool to compute the parts of the realized specification that are never used, thus giving hints about erroneous specifications.
Whereas Berner et al. target the user of an automatic test system [4], our method targets the automatic test system itself. Berner et al. describe lessons learned from their experience with code coverage analysis tools and automatic test generation tools and propose a list of rules to be followed when introducing and using an automatic test tool. Our research was not concerned with usability and group dynamics in a programming environment.
To the best of our knowledge, the current research in testing, be it concolic (footnote 7) or model-based, is not concerned with the actual problems of code coverage gaps. Gap coverage analysis is not only useful in test case generation but also in the verification of functional correctness. Imagine the case of a dead function granting more user rights: it would be easy to use a buffer overflow to trigger this functionality. Similar methods have been used by the CCC for analyzing and using a trojan horse [10] (footnote 8).
Another important tool, which might be able to compete with our method, is Frama-C [5] (footnote 9). Frama-C is a conservative analysis tool which is able to find dead code, execute a static value analysis and, contrary to gcov, detect runtime errors triggered for instance by division-by-zero. One of the differences between Frama-C and the method we propose in this paper is the theory behind it. In contrast to Frama-C [5], our method uses exact computation, does not over-approximate the values and does not rely on an experienced user. Our exact value analysis produces neither the false negatives nor the false positives that occur in Frama-C. Although their value analysis sometimes detects that a function does not terminate, it cannot be used to prove that a function terminates in general.
Frama-C provides sophisticated plugins, but not all of them handle recursion properly. No sophisticated examples can be handled by Frama-C's value analysis. Some of the examples tested even cause runtime errors in Frama-C itself, thus it is not reliable (footnote 10). As our review of the research indicates, none of the current research done in testing is concerned with exact gap computation.
7 interwoven concrete and symbolic execution
8 especially the section Upload- und Execute-Mechanismus
9 http://frama-c.com
10 It should be noted that these runtime errors should vanish in future versions
Summary and Conclusions
This paper presents a framework to compute exact gaps between the feasible and the theoretically maximal possible code coverage value. For specifying programs in an ISO-C semantics we use a very powerful model, namely SPDS. The power of SPDS allows modelling an ISO-C compatible semantics for programs without abstraction. Therefore we are able to do an exact value analysis using model checking techniques, and so we obtain exact gaps. We describe how to efficiently approximate the gap in all the other cases. When using flow-sensitive, path-sensitive, inter-procedural and context-sensitive data flow analyses for approximating the exact values, one can also use a model-checking tool. The biggest problems of using a model checker are false positives or false negatives caused by abstraction. Instead, our approach does not rely on such heavy abstraction and does not cause false alarms on our ISO-C compatible semantics. Thus user input or feedback is not required to decide about false alarms. A lot of computing power is required for using such powerful models. Due to smaller programs and smaller data types our approach is still practical for embedded systems.
Having combined the best parts of model checking and static analyses, we use expensive model checking only when needed (e.g. when the gap approximation bounds are not tight enough). Thus the computation of Post* is needed only if the gap approximation using static analysis and a test suite is not exact (δ⁻ ≠ δ⁺).
Using our method, a lot of metrics can now be compared better among each other, because of exactly specified gaps. Our method allows the testing of non-functional requirements, too. For example, the worst-case execution time (WCET) using a WCET metric (footnote 11) can be computed.
Our current research considers the practical relevance of exact gap computation for verification of software especially in the area of compiler correctness. Additionally we are considering other areas of research to apply the computation of exact values and exact gaps. For example the computation of exact value ranges can be used for verification of components [2].
Figure 1: SPDS example P₁ in ISO-C syntax and the corresponding BBICFG
We use vgbl = vgbl(S), func = func(S), param = param(f), vlcl = vlcl(f) and stats = stats(f) respectively, if S or f are well-defined by the context. Let func(l) denote the function f for which l ∈ labels(f). As in ISO-C, the SPDS variables are used to build expressions Expr using constants and operators. The priority and associativity are the same as in ISO-C. An expression e ∈ Expr can be strictly evaluated to an integer number [[e]]_{g,c_f} ∈ Z ∪ {⊥} using valuation functions for global and local variables g : vgbl × Z → Z and c_f : vlcl(f) × Z → Z. The symbol ⊥ denotes an arithmetic exception (e.g. division-by-zero or index-out-of-bounds). The functions g(v, i) and c_f(v, i) return the current value of variable v at index i (the value of v[i]). The evaluation functions g and c_f can be omitted if they are well-defined by the context. A variable usage
The set of reachable heads h(S′) := {(g, (l, c)) • (g, [(l, c) ...]) ∈ Post*(S′)} is finite because of the finite variable types. Thus exact variable ranges can be extracted from h(S′). Let v ∈ vars and l ∈ labels, then range^{S′}_l(v) := {[[v]]_{g,c} • (g, (l, c)) ∈ h(S′)} is the exact variable range of v. The notation S′ can be omitted if S′ is well-defined by the context. For all values k ∈ range_l(v) there is a run of S′ such that [[v]] = k on label l, and vice versa. h(S′) and range_l(v) can be computed symbolically out of Post*(S′) using Ordered Binary Decision Diagram (OBDD) operations. The computation of h(S′) is a straightforward OBDD restriction operation on Post*(S′) and results in a characteristic function q : {0, 1}^n → {0, 1} represented as an OBDD. The input vectors of q are heads h(S′) encoded as finite bit sequences. The computation of range_l(v) uses cofactors. A cofactor of q is q[x_i = b](x₁, x₂, ..., xₙ) := q(x₁, x₂, ..., x_{i−1}, b, x_{i+1}, ..., xₙ) [8]. The positive cofactor is q[x_i = 1] and the negative cofactor is q[x_i = 0]. A characteristic function r : {0, 1}^m → {0, 1} for range_l(v) can be computed using cofactors: Lemma 3.1 Let k be the starting index of the encoding of v on label l in q and let m be the length of the encoding. Then r(y₁, y₂, ..., y_m) = 1 is valid iff q[x_k = y₁][x_{k+1} = y₂] ... [x_{k+m−1} = y_m] is not always 0 (not the empty OBDD).
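Lemma 3.1 can also be read operationally. The following runnable C sketch is our own, naive stand-in for the OBDD operations: the characteristic function q of the reachable heads is represented as an ordinary predicate over bit vectors, and the range of a variable encoding is obtained by checking, for each candidate value, whether the corresponding cofactor of q is non-empty. The concrete encoding and the function q are invented for the example.

#include <stdio.h>

#define N_BITS 4                      /* total number of encoding bits */

/* hypothetical characteristic function q of the reachable heads h(S'):
 * bits 0..1 encode the label, bits 2..3 encode the variable v */
static int q(const int x[N_BITS])
{
    /* reachable heads (label, v): (0,1), (0,2), (1,0) -- invented example */
    int label = x[0] | (x[1] << 1);
    int v     = x[2] | (x[3] << 1);
    return (label == 0 && (v == 1 || v == 2)) || (label == 1 && v == 0);
}

/* is the cofactor of q, with the label bits and the bits k..k+m-1 of the
 * encoding of v fixed, satisfiable? (no further free bits in this toy) */
static int in_range(int label, int k, int m, int value)
{
    int x[N_BITS];
    x[0] = label & 1;
    x[1] = (label >> 1) & 1;
    for (int i = 0; i < m; i++)
        x[k + i] = (value >> i) & 1;
    return q(x);
}

int main(void)
{
    printf("range_l0(v) = {");
    for (int value = 0; value < 4; value++)
        if (in_range(0, 2, 2, value))
            printf(" %d", value);
    printf(" }\n");                   /* prints { 1 2 } for the toy q */
    return 0;
}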
Figure 2: SPDS example P₂ in ISO-C syntax for dead code caused by a bad condition, and the corresponding BBICFG
v[e₁] = e₂; corresponds to writing the value [[e₂]] into the variable v at index [[e₁]]. The exception of a dynamic type mismatch occurs for "v[e₁] = e₂;", and the system terminates if [[e₂]] = ⊥, if the type bits(v) is too small to store the value [[e₂]], or if [[e₁]] ∉ {0, 1, ..., len(v)}. We denote v = e for v[0] = e and v for usages of v[0] to emulate syntactically non-array variables. The system also terminates on the statement "if (e) goto l′;" if [[e]] = ⊥. The predefined function rand(e) returns a random number between 0 and [[e]].
Once-reserved space can be reused, because a garbage collector and a function free can be implemented in SPDS. A pointer is an SPDS variable used as an index of the heap array, and an address is just another index (returned by the address operator &). Variables placed in the heap array support the address operator, in contrast to the other SPDS variables. If a local variable of a function f is put into the heap array, the recursion of f will be bounded, because of the finite maximal heap size. Coverage metrics have to adapt to these additional SPDS functions, statements and variables to be able to compute the correct gap. Call by reference: instead of passing a variable as a function parameter, a pointer can be used to indirectly access variable values in the heap. Thus call by reference can be simulated. Unfortunately this results in bounding the recursion, too.

int heap (8)[1024];
int ptr (10);

int (10) malloc ( int n (10) ) {
    if ( ptr >= 1024 - n ) goto memout ;
    ptr = ptr + n ;
    return ptr - n ;
}

Listing 1: Malloc as SPDS in ISO-C like syntax
The Boolean constants false and true are represented via 0 and ≠ 0, as in ISO-C.
intra-procedural
e.g. we map integers to nonnegative numbers, as Remopla does not support negative integers
Additionally this can be done using the native synchronous parallelism in SPDS without an extra label: "l : v_in = n_{b_l}, s".
http://gnu.org/software/gcov
Shipped with Microsoft Visual Studio
e.g. γ_WCET(t) := max_{α∈t} runtime(α), with the supplementation tick = tick + 1 on each statement, such that γ_WCET(t) = max(∪_{l∈labels(S)} range_l(tick)).
[1] Frances E. Allen (1970): Control flow analysis. SIGPLAN Not. 5, pp. 1-19, doi:10.1145/390013.808479.
[2] Andreas Both & Dirk Richter (2010): Automatic Component Protocol Generation and Verification of Components. In: 36th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), pp. 94-101, doi:10.1109/SEAA.2010.30.
[3] Mauro Baluda, Pietro Braione, Giovanni Denaro & Mauro Pezzè (2010): Structural coverage of feasible code. In: Proceedings of the 5th Workshop on Automation of Software Test, AST '10, ACM, New York, NY, USA, pp. 59-66, doi:10.1145/1808266.1808275.
[4] Stefan Berner, Roland Weber & Rudolf K. Keller (2007): Enhancing Software Testing by Judicious Use of Code Coverage Information. In: Proceedings of the 29th International Conference on Software Engineering, ICSE '07, IEEE Computer Society, Washington, DC, USA, pp. 612-620, doi:10.1109/ICSE.2007.34.
[5] Pascal Cuoq & Virgile Prevosto: Frama-C's value analysis plug-in. CEA LIST, Software Reliability Laboratory, Saclay, F-91191.
[6] A. Das, P. Basu, A. Banerjee, P. Dasgupta, P. P. Chakrabarti, C. Rama Mohan, L. Fix & R. Armoni (2004): Formal verification coverage: computing the coverage gap between temporal specifications. In: Proceedings of the 2004 IEEE/ACM International Conference on Computer-Aided Design, ICCAD '04, IEEE Computer Society, Washington, DC, USA, pp. 198-203, doi:10.1109/ICCAD.2004.1382571.
[7] Dejvuth Suwimonteerabuth, Stefan Schwoon & Javier Esparza (2005): jMoped: A Java Bytecode Checker Based on Moped. In: Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science (LNCS) 3440, Springer-Verlag Berlin Heidelberg, pp. 541-545. http://www.springerlink.com/content/32p4x035k3rll5nh/.
[8] Dirk Richter (2009): Rekursionspraezise Intervallanalysen. In: 15. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS), Maria Taferl. http://www.vmars.tuwien.ac.at/php/pserver/extern/download.php?fileid=1726.
[9] Dirk Richter, Raimund Kirner & Wolf Zimmermann (2009): On Undecidability Results of Real Programming Languages. In: 15. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS), Maria Taferl. http://www.vmars.tuwien.ac.at/php/pserver/extern/download.php?fileid=1726.
[10] Chaos Computer Club e.V.: Analyse einer Regierungs-Malware. Available at http://www.ccc.de/system/uploads/76/original/staatstrojaner-report23.pdf.
[11] Mechelle Gittens, Keri Romanufa, David Godwin & Jason Racicot (2006): All code coverage is not created equal: a case study in prioritized code coverage. In: Proceedings of the 2006 Conference of the Center for Advanced Studies on Collaborative Research, CASCON '06, ACM, New York, NY, USA, doi:10.1145/1188966.1188981.
[12] Glenford J. Myers (2011): The Art of Software Testing. 3rd edition, John Wiley and Sons, ISBN 1118031962.
[13] Ira D. Baxter (2001): Branch Coverage For Arbitrary Languages Made Easy: Transformation Systems to the Rescue. In: IW APA TV2/ICSE 2001. http://techwell.com/sites/default/files/articles/XUS1173972file1_0.pdf.
[14] Javier Esparza & Stefan Schwoon (2001): A BDD-based model checker for recursive programs. Lecture Notes in Computer Science 2102, pp. 324-336, Springer-Verlag Berlin Heidelberg.
[15] John Regehr: Who Verifies the Verifiers? http://blog.regehr.org/archives/370. Personal blog entry of Prof. John Regehr, Computer Science Department, University of Utah, USA.
[16] Nicolas Kicillof, Wolfgang Grieskamp, Nikolai Tillmann & Victor Braberman (2007): Achieving both model and code coverage with automated gray-box testing. In: Proceedings of the 3rd International Workshop on Advances in Model-Based Testing, A-MOST '07, ACM, New York, NY, USA, pp. 1-11, doi:10.1145/1291535.1291536.
[17] S. Kiefer, S. Schwoon & D. Suwimonteerabuth (2006): Introduction to Remopla. Institute of Formal Methods in Computer Science, University of Stuttgart.
[18] S. Schwoon (2002): Model-Checking Pushdown Systems. Dissertation, Technical University of Munich. http://tumb1.biblio.tu-muenchen.de/publ/diss/in/2002/schwoon.html.
[19] Arnab Sinha, Pallab Dasgupta, Bhaskar Pal, Sayantan Das, Prasenjit Basu & P. P. Chakrabarti (2009): Design intent coverage revisited. ACM Trans. Des. Autom. Electron. Syst. 14, pp. 9:1-9:32, doi:10.1145/1455229.1455238.
[20] Steven S. Muchnick (1997): Advanced Compiler Design and Implementation. Morgan Kaufmann Publishers, San Francisco, Calif.
[21] Zhendong Su & David Wagner (2005): A class of polynomially solvable range constraints for interval analysis without widenings. Theoretical Computer Science 345(1), pp. 122-138, doi:10.1016/j.tcs.2005.07.035. Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2004).
[22] William von Hagen (2008): The Definitive Guide to GCC. APress, ISBN 1590595858.
| []
|
[
"Massive scalar counterpart of gravitational waves in scalarized neutron star binaries",
"Massive scalar counterpart of gravitational waves in scalarized neutron star binaries"
]
| [
"Jing Wang \nSchool of Physics and Astronomy\nSun Yat-sen University\n510275GuangzhouPeople's Republic of China\n"
]
| [
"School of Physics and Astronomy\nSun Yat-sen University\n510275GuangzhouPeople's Republic of China"
]
| [
"Eur. Phys. J. C"
]
| In analogy with spontaneous magnetization of ferromagnets below the Curie temperature, a neutron star (NS), with a compactness above a certain critical value, may undergo spontaneous scalarization and exhibit an interior nontrivial scalar configuration. Consequently, the exterior spacetime is changed, and an external scalar field appears, which subsequently triggers a scalarization of its companion. The dynamical interplay produces a gravitational scalar counterpart of tensor gravitational waves. In this paper, we resort to scalar-tensor theory and demonstrate that the gravitational scalar counterpart from a double neutron star (DNS) and a neutron star-white dwarf (NS-WD) system become massive. We report that (1) a gravitational scalar background field, arising from convergence of external scalar fields, plays the role of gravitational scalar counterpart in scalarized DNS binary, and the appearance of a mass-dimensional constant in a Higgs-like gravitational scalar potential is responsible for a massive gravitational scalar counterpart with a mass of the order of the Planck scale; (2) a dipolar gravitational scalar radiated field, resulting from differing binding energies of NS and WD, plays the role of a gravitational scalar counterpart in scalarized orbital shrinking NS-WDs, which oscillates around a local and scalar-energy-density-dependent minimum of the gravitational scalar potential and obtains a mass of the order of about 10 −21 eV/c 2 . | 10.1140/epjc/s10052-017-5214-x | [
"https://arxiv.org/pdf/1909.01045v1.pdf"
]
| 126,300,238 | 1909.01045 | 74f57d6a189d906a36d863835c8895cdd5058c73 |
Massive scalar counterpart of gravitational waves in scalarized neutron star binaries
2017
Jing Wang
School of Physics and Astronomy
Sun Yat-sen University
510275GuangzhouPeople's Republic of China
Massive scalar counterpart of gravitational waves in scalarized neutron star binaries
Eur. Phys. J. C
Eur. Phys. J. C 77, 641 (2017). doi:10.1140/epjc/s10052-017-5214-x. Received: 29 May 2017 / Accepted: 11 September 2017. Regular Article - Theoretical Physics
In analogy with spontaneous magnetization of ferromagnets below the Curie temperature, a neutron star (NS), with a compactness above a certain critical value, may undergo spontaneous scalarization and exhibit an interior nontrivial scalar configuration. Consequently, the exterior spacetime is changed, and an external scalar field appears, which subsequently triggers a scalarization of its companion. The dynamical interplay produces a gravitational scalar counterpart of tensor gravitational waves. In this paper, we resort to scalar-tensor theory and demonstrate that the gravitational scalar counterpart from a double neutron star (DNS) and a neutron star-white dwarf (NS-WD) system become massive. We report that (1) a gravitational scalar background field, arising from convergence of external scalar fields, plays the role of gravitational scalar counterpart in scalarized DNS binary, and the appearance of a mass-dimensional constant in a Higgs-like gravitational scalar potential is responsible for a massive gravitational scalar counterpart with a mass of the order of the Planck scale; (2) a dipolar gravitational scalar radiated field, resulting from differing binding energies of NS and WD, plays the role of a gravitational scalar counterpart in scalarized orbital shrinking NS-WDs, which oscillates around a local and scalar-energy-density-dependent minimum of the gravitational scalar potential and obtains a mass of the order of about 10 −21 eV/c 2 .
Introduction
Although it is clear that Einstein's general relativity has been so far a sound theory in describing the dynamics of neutron star (NS) binary systems, several observations indicated that the orbital decay in the Hulse-Taylor system, PSR 1913+16, is mildly more rapid than that predicted by the general relativistic quadrupole formula [1,2]. The long baseline of precise timing observations for PSR J1738+0333 [3] has also indicated an excess orbital decay, which directly translates to a dipole radiation constraint on deviations from the quadrupole formula, according to the Lunar Laser Ranging experiments. It was proposed that this can be relieved by considering that a nontrivial scalar configuration comes about in the strong-field regime [4]. By making an analogy with the spontaneous magnetization of ferromagnets below the Curie temperature, a NS with a compactness Gm/(Rc²) (footnote 1) above a certain critical value will exhibit a nontrivial configuration, and a scalar field settles in the interior [5], i.e. spontaneous scalarization occurs for the NS. The neutron star-white dwarf (NS-WD) binaries usually contain a massive recycled NS [6-8], owing to the recycling process [9], which thus in greater measure tends to undergo a spontaneous scalarization. It was indicated that a NS in a binary pulsar, with a mass of 1.4 M☉, would develop strong scalar charges even in the absence of external scalar solicitation for strong couplings and with vanishing asymptotic value [5]. The spontaneously scalarized component modifies the exterior spacetime and contributes to an external scalar field ϕ_ss around it, which produces a scalar asymptotic solution. In the meantime, a scalarization of the NS suffers from a change of compactness [10], which enhances the gravitational interaction with its companion. As a result, the companion star is also scalarized, which is assigned to an induced scalarization [11], and the other external scalar field ϕ_is subsequently appears around the secondly scalarized component.
The dynamical interplay between ϕ_ss and ϕ_is is governed by the following relation [11]:
$^{(n+1)}\varphi_{ss} = {}^{(0)}\varphi_{ss} + \frac{{}^{(n)}\varphi_{is}}{r}, \qquad {}^{(n+1)}\varphi_{is} = {}^{(0)}\varphi_{is} + \frac{{}^{(n)}\varphi_{ss}}{r},$   (1)
where $^{(0)}\varphi_{ss}$ and $^{(0)}\varphi_{is}$ are the external scalar fields initially produced by the spontaneously scalarized NS "ss" and the induced scalarized companion star "is", respectively, $^{(n)}\varphi_{ss}$ and $^{(n)}\varphi_{is}$ represent the nth induced external scalar fields around the scalarized components "ss" and "is", respectively, and r is the distance from the center of the binary.
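As a short algebraic note of ours (not an additional assumption of the model): if the iteration (1) is read at a fixed separation r (in the units of Eq. (1), with r > 1), substituting one relation into the other gives the fixed point to which the mutually induced fields converge,

$\varphi_{ss}^{\infty} = {}^{(0)}\varphi_{ss} + \frac{\varphi_{is}^{\infty}}{r}, \quad \varphi_{is}^{\infty} = {}^{(0)}\varphi_{is} + \frac{\varphi_{ss}^{\infty}}{r} \;\Longrightarrow\; \varphi_{ss}^{\infty} = \frac{{}^{(0)}\varphi_{ss} + {}^{(0)}\varphi_{is}/r}{1 - 1/r^{2}}, \quad \varphi_{is}^{\infty} = \frac{{}^{(0)}\varphi_{is} + {}^{(0)}\varphi_{ss}/r}{1 - 1/r^{2}},$

which is the convergence of the external fields referred to in the next section.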
V int = −G m ss m is R ss−is − G ω ss ω is R ss−is ,(2)
where m_ss and m_is represent the masses of the spontaneously scalarized NS and the induced scalarized companion, ω_ss and ω_is denote the scalar charges of the corresponding components, with the definition $\omega_{ss,is} = -\partial \ln m_{ss,is}(\varphi_{ss,is})/\partial\varphi_{ss,is}$ [12], and R_ss-is is the orbital separation of the binary. The local Newtonian gravitational constant is accordingly modified as
$G_{eff} = G(1 + \omega_{ss}\,\omega_{is} + \cdots),$   (3)
which is assigned to the effective gravitational constant of the scalarized NS binary system. The second term in the brackets, ω_ss ω_is, accounts for the first- and second-order post-Newtonian corrections to the dynamical system, and the "···" denotes the terms of dissipative corrections to the Newtonian dynamics that account for the backreaction of gravitational-wave emission. Either the continual enhancement of the external scalar fields or the different scalar charges carried by the two components, which source an emission of dipolar gravitational scalar radiation, will contribute to a gravitational scalar counterpart in an in-spiraling NS binary. We assign the in-spiraling NS binaries with both tensor gravitational-wave radiation and dipolar gravitational scalar counterparts to be the scalarized systems. Consequently, the dynamics of a scalarized in-spiraling NS binary is encoded not only by the gravitational tensor metric g_μν, but also by a gravitational scalar field φ. It was shown that "spontaneous scalarization" leads to very significant deviations from Einstein's general relativity in conditions involving binary-pulsar systems [10], which do not necessarily vanish when the weak-field scalar coupling tends to zero. The non-perturbative strong-field deviations away from general relativity due to the appearance of scalar fields, measured by a dimensional scalar coupling factor [10], could have a significant impact on the emission of gravitational waves in NS systems [4]. The equations of motion for scalarized NS binary systems have been modified, which produces dipolar gravitational scalar counterparts of gravitational tensor waves, depending on the coupling strength between the scalar fields and the star matter [11]. In this paper, we resort to the scalar-tensor theory [12] of gravity to describe the dynamics of scalarized in-spiraling NS binary systems and investigate the gravitational scalar counterpart of gravitational waves in scalarized double neutron star (DNS) binaries and NS-WD systems, respectively. We demonstrate that the gravitational scalar counterpart becomes massive in these systems, resulting from different mechanisms. It is pointed out that the appearance of a mass-dimensional constant in the Higgs-like gravitational scalar potential of scalarized DNS systems, arising from the dynamical couplings between the gravitational scalar field φ and the external scalar fields ϕ_ss and ϕ_is, as well as the self-coupling of φ, contributes to spontaneous symmetry breaking and thus to the mass of the gravitational scalar counterpart. During this process, the gravitational scalar fluctuations, caused by the iterative interplay between ϕ_ss and ϕ_is, play the role of a Higgs-like field. In NS-WD systems, a monotonic gravitational scalar potential, resulting from the self-couplings of the gravitational scalar counterpart, combined with its dynamical couplings to the scalarized stars, makes the gravitational scalar counterpart become massive. We also estimate the masses of the gravitational scalar counterparts in scalarized DNS binaries and scalarized NS-WD systems, respectively. In Sect. 2, we investigate the role of the gravitational scalar counterpart in scalarized DNS binaries and the mechanism that makes it become massive. A different scenario in post-Newtonian corrected in-spiraling scalarized NS-WD binaries is discussed in Sect. 3. Finally, we give a summary in Sect. 4.
Massive gravitational scalar counterpart of GWs in DNS binary
The feedback effects between ϕ ss and ϕ is , described by Eq. (1), contribute to a continuous enhancement of scalar configurations inside two components, as well as the external scalar fields. As a consequence, convergence of (n) ϕ ss and (n) ϕ is occurs, which produces a gravitational scalar background field φ B . Therefore, the system is immersed in the gravitational scalar background field φ B , and the dynamics of the DNS binary deviates from Einstein's general relativity, which has influence on its orbital evolution [12][13][14].
The dynamics of a scalarized in-spiraling DNS binary is then encoded not only by the gravitational tensor metric g_μν, but also by a gravitational scalar background field φ_B, which naturally renders the scalar-tensor theory [12] of gravity an alternative to Einstein's general relativity for describing the scalarized binary system. Because the two components of a DNS binary have very similar compactness, we can neglect the effects of the differences in the couplings between the scalar field and the NS matter. Therefore, the scalar-tensor action that describes the scalarized DNS binary can be written as
$S = \int d^4x \sqrt{-g}\left[\frac{M_{pl}^2}{2}R - \frac{1}{2}g^{\mu\nu}\partial_\mu\phi_B\,\partial_\nu\phi_B - V_{DNS}(\phi_B)\right].$   (4)
Here, $M_{pl} = \sqrt{1/8\pi G}$ is the reduced Planck mass. R and g are the Ricci scalar and the determinant of the gravitational tensor metric g_μν, respectively. V_DNS(φ_B) is the gravitational scalar potential of the DNS, which consists of the dynamical coupling of φ_B to ϕ_ss,is and a self-coupling term of φ_B,
$V_{DNS}(\phi_B) = \frac{\alpha}{2}\,\varphi_{ss}\varphi_{is}\,\phi_B^2 + \frac{\lambda}{4}\,\phi_B^4.$   (5)
Here, $\alpha \equiv -M_{pl}\,\frac{d\log m_{ss,is}}{d\phi_B}$
is a dimensionless coupling constant and characterizes the coupling strength between φ_B and the matter in the scalarized stars, whose value depends on the compactness of the stars constituting the binary [4,5,12]. λ is the self-coupling constant, which is roughly of the order of unity.
The iterative interplay and convergence of $^{(n)}\varphi_{ss,is}$ perturb φ_B and cause small gravitational scalar background fluctuations σ (σ ≪ φ_B). The background fluctuating field also has effects on both the gravitational tensor metric and the gravitational background scalar field, via an exponential transformation $e^{\lambda\sigma}$, which follows the couplings [4,10,11]
$g^*_{\mu\nu} = e^{-2\lambda\sigma} g_{\mu\nu}, \qquad \sqrt{-g^*} = e^{4\lambda\sigma}\sqrt{-g},$   (6)
$\phi^*_B = e^{-\lambda\sigma}\phi_B,$   (7)
where g * μν , g * , and φ * B are the transformed gravitational tensor metric and its determinant, and the transformed gravitational background scalar field. By expanding the transformed metric g * μν about a Minkowski background in terms of Eq. (6), we express them as
$g^*_{\mu\nu} = \eta_{\mu\nu} + h^*_{\mu\nu}, \qquad h^*_{\mu\nu} = h_{\mu\nu} + 2\eta_{\mu\nu}\lambda\sigma,$   (8)
where $|h_{\mu\nu}|, |h^*_{\mu\nu}| \ll 1$. Equation (6) remains unchanged. Under the transformation of σ we find, using Eqs. (6) and (7), that the kinetic term in the action (4) is transformed into a canonical kinetic term,
$-\frac{1}{2}\sqrt{-g}\, g^{\mu\nu}\partial_\mu\phi_B\,\partial_\nu\phi_B = -\frac{1}{2}\sqrt{-g^*}\, g^{*\mu\nu} D_\mu\phi^*_B\, D_\nu\phi^*_B,$   (9)
$D_\mu \equiv \partial_\mu + \lambda\,\partial_\mu\sigma,$   (10)
which is scale invariant. The transformed action then reads
$S^* = \int d^4x \sqrt{-g^*}\left[\frac{M_{pl}^2}{2} R^* - \frac{1}{2} g^{*\mu\nu} D_\mu\phi^*_B\, D_\nu\phi^*_B - V_{DNS}(\phi^*_B)\right].$   (11)
Now we consider the solution. In the process of performing a conformal transformation, the solutions of the external scalar fields ϕ_ss and ϕ_is with mass dimensions [10] involve a dimensional constant μ at the Planck mass scale, which appears in the transformed gravitational scalar potential V_DNS(φ*_B):
$V_{DNS}(\phi^*_B) = \frac{\alpha}{2}\,\mu^2\,\phi_B^{*2} + \frac{\lambda}{4}\,\phi_B^{*4}.$   (12)
The Planck-scale constant $\mu = \sqrt{1/8\pi G_{eff}}$ here appears to be related to the scalar charges of the scalarized NSs via the effective gravitational constant G_eff according to Eq. (3) [12]. It is the appearance of the mass-dimensional constant μ that is responsible for a spontaneous breaking of symmetry, which allows us to apply a recipe similar to the Higgs mechanism in the standard model. Thus the gravitational scalar background field becomes a massive one.
Actual NSs observed in DNS binaries, with important deviations from general relativity in the strong-field regime, would develop strong scalar charges in the absence of an external scalar field for sufficiently negative values of α, i.e. α < 0 [4,5,11]. The self-coupling constant λ is of the order of unity, i.e. λ > 0. By considering that the interplay between ϕ ss and ϕ is is a long-range force, the behavior of a transformed gravitational background scalar field φ * B near spatial infinity endows it with a vacuum expectation value (VEV) v φ * B ,
$(v_{\phi^*_B})^2 = -\frac{\alpha\mu^2}{2\lambda},$   (13)
which is obtained from the condition
$\left.\frac{dV(\phi^*_B)}{d\phi^*_B}\right|_{(\phi^*_B)_{min}} = 0.$
Therefore, the gravitational scalar background field φ * B is a combination of its VEV v φ * B and the approximate value of the fluctuating field at spatial infinity. Substituting the VEV (13) into the Lagrangian of φ * B extracted from Eq. (11), we get the mass of φ * B ,
$(m_s^{DNS})^2 = -\alpha\mu^2.$   (14)
It was proven that non-perturbative strong-gravitational-field effects develop in NSs for a dimensionless coupling constant α ≲ −4, which causes order-of-unity deviations from general relativity [4]. The general properties of binary systems consisting of scalarized NSs can be described by α ≈ −4.5, because of binary-pulsar measurements [3,8,15]. For α ≲ −5, NSs in a binary pulsar, with a mass of 1.4 M☉, would develop strong scalar charges even in the absence of external scalar solicitation, and a more negative value of α corresponds to a less compact NS [5]. Most of the measured more massive NSs in detected DNS systems have masses of ∼ 1.3−1.44 M☉ [6]. Consequently, the coupling constant is in the range of α = −5 to −6 in the quadratic coupling model described in Eq. (12) [5]. The scalar charges vary mildly with the compactness of NSs [11] and will be ∼ 1 only in the last stages of the evolution of NS binaries or in close transient encounters. For NSs in the nine so far detected DNS systems, the scalar charges are around 0.2, within the solar-system bound [16] in the Fierz-Jordan-Brans-Dicke theory, considering their dependence on the "sensitivities" s ∼ 0.2 [17,18]. Accordingly, the gravitational scalar counterpart of gravitational waves in a scalarized in-spiraling DNS binary is of the order of the Planck-mass scale.
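For orientation, a rough numerical illustration of ours (assuming the scalar charges are small enough that G_eff ≈ G, so that μ is essentially the reduced Planck mass):

$m_s^{DNS} = \sqrt{-\alpha}\,\mu \simeq \sqrt{5}\times 2.4\times10^{18}\ \mathrm{GeV}/c^2 \approx 5\times10^{18}\ \mathrm{GeV}/c^2 \qquad (\alpha \simeq -5),$

i.e. indeed of the order of the Planck scale.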
Massive gravitational scalar counterpart of GWs from NS-WD system
It is well known that NS is a more compact object than WD. Consequently, the strength of the coupling between the scalar configuration inside stars and the NS/WD matter is different.
A distinct dependence of the masses on the scalar fields for NS and WD actually is the source of an emission of dipolar gravitational scalar radiation in a post-Newtonian in-spiraling scalarized binary [11], in addition to the quadrupole tensor gravitational waves. Accordingly, the dynamics of a scalarized in-spiraling NS-WD system is governed by a gravitational scalar radiated field φ_r, together with the gravitational tensor metric g_μν. The scalar charge of a scalarized NS-WD binary can be extracted from the behavior of the gravitational scalar radiated field near spatial infinity [12], i.e.
$\phi_r = \phi_r^0 + \frac{\phi_r^1}{r} + O\!\left(\frac{1}{r^2}\right),$   (15)
where the iterative interplay and convergence of the external scalar fields ϕ_ss and ϕ_is around the NS and the WD are considered, and φ_r^0 is the asymptotic value of the gravitational scalar radiated field at spatial infinity. Accordingly, the dynamics of an in-spiraling scalarized NS-WD binary system, suffering from the post-Newtonian corrections, is described by the following scalar-tensor action:
$S = \int d^4x \sqrt{-g}\left[\frac{R}{16\pi G} - \frac{1}{2} g^{\mu\nu}\partial_\mu\phi_r\,\partial_\nu\phi_r - V_{NS\text{-}WD}(\phi_r)\right] + \sum_n \int_{\gamma_n} ds\; m_n^{(ss,is)}(\phi_r).$   (16)
The gravitational scalar potential of the NS-WD binary, V_NS-WD(φ_r), results from two interactions, i.e. the self-interactions of φ_r and the interactions between φ_r and the matter fields of the NS and the WD. The gravitational scalar radiated field is associated with the non-perturbative strong-field effects [4], which contributes a potential of the runaway form [19] that satisfies
$\lim_{\phi_r \to \infty} V_{NS\text{-}WD}(\phi_r) \to 0, \quad \lim_{\phi_r \to \infty} \frac{V'_{NS\text{-}WD}(\phi_r)}{V_{NS\text{-}WD}(\phi_r)} \to 0, \quad \lim_{\phi_r \to \infty} \frac{V''_{NS\text{-}WD}(\phi_r)}{V'_{NS\text{-}WD}(\phi_r)} \to 0, \;\ldots,$
as well as
$\lim_{\phi_r \to 0} V_{NS\text{-}WD}(\phi_r) \to \infty, \quad \lim_{\phi_r \to 0} \frac{V'_{NS\text{-}WD}(\phi_r)}{V_{NS\text{-}WD}(\phi_r)} \to \infty, \quad \lim_{\phi_r \to 0} \frac{V''_{NS\text{-}WD}(\phi_r)}{V'_{NS\text{-}WD}(\phi_r)} \to \infty, \;\ldots$
(here $V'_{NS\text{-}WD}(\phi_r) \equiv dV/d\phi_r$, $V''_{NS\text{-}WD}(\phi_r) \equiv d^2V/d\phi_r^2$, etc.). Thus, the self-interactions of the gravitational scalar radiated field, whose behavior is described by Eq. (15), lead to a monotonically decreasing potential,
$V_{\phi_r} = \nu^5/\phi_r,$   (17)
where ν has the unit of mass. The NS/WD matter interacts directly with the gravitational scalar radiated field φ_r through a conformal coupling of the form $e^{-\alpha_{ss,is}\phi_r/\mu}$. The values of α_ss,is are also usually negative for WDs [20], so the exponential coupling function is an increasing function of φ_r. The combined effects of the self-interactions of φ_r described by Eq. (17) and the conformal coupling give us the form of the scalar potential V_NS-WD(φ_r) in Eq. (16),
$V_{NS\text{-}WD}(\phi_r) = \frac{\nu^5}{\phi_r} + \varepsilon_\varphi\, e^{-\alpha_{ss,is}\phi_r/\mu}.$   (18)
It can be found that V_NS-WD(φ_r) is an explicit function of the energy density ε_ϕ of the external scalar fields ϕ_ss,is, which depends on the masses of the stars (a function of the density ρ_ss,is of each star) and the coupling strength between the interior scalar configuration and the matter components of the NS/WD [5]. The summation part of Eq. (16) describes the action of the matter components making up the NS and the WD. In the sum over n we give the world-line action for any number of species of matter and particles constituting the NS and the WD, and we use γ_n to represent the integral of the matter action along the world line. The couplings of the matter components inside the stars to the scalar field arise from the dependence of the masses m_ss,is on φ_r. The NS/WD matter couples to the gravitational tensor metric g_μν via the conformal transformation $e^{-\alpha_{ss,is}\phi_r/\mu}$, according to the rescaling relation,
$g^*_{\mu\nu} = e^{-2\alpha_{ss,is}\phi_r/\mu}\, g_{\mu\nu}.$   (19)
The combined gravitational scalar potential V_NS-WD(φ_r) of Eq. (18), consisting of the monotonically decreasing potential (17) and the monotonically increasing interaction $e^{-\alpha_{ss,is}\phi_r/\mu}$, therefore displays a minimum. By setting the derivative of the gravitational scalar potential with respect to φ_r to zero, i.e.
$V'_{NS\text{-}WD}(\phi_r) - \sum_{ss,is}\frac{\alpha_{ss,is}}{\mu}\,\varepsilon_\varphi\, e^{-\alpha_{ss,is}\phi_r/\mu} = 0,$   (20)
we can get the value of φ_r at the minimum of the potential, φ_r^min. Around this minimum, the gravitational scalar radiated field acquires an effective mass, which is obtained by evaluating the second derivative of the potential at φ_r^min:
$(m_s^{NS\text{-}WD})^2 = V''_{NS\text{-}WD}(\phi_r)\big|_{\phi_r^{min}} + \sum_{ss,is}\frac{\alpha_{ss,is}^2}{\mu^2}\,\varepsilon_\varphi\, e^{-\alpha_{ss,is}\phi_r/\mu}\Big|_{\phi_r^{min}}.$   (21)
Equations (20) and (21) imply that both the local value of the gravitational scalar radiated field φ_r^min and the mass of the scalar counterpart depend on the local energy density of the external scalar fields produced by the two scalarized components. It can be found from Eq. (21) that the gravitational scalar radiated field becomes more massive in a higher-ε_ϕ environment.
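For concreteness, a short check of ours (writing a single effective coupling α < 0 for brevity): inserting the explicit potential (18), the minimum condition and the effective mass read

$-\frac{\nu^{5}}{\phi_r^{2}} - \frac{\alpha}{\mu}\,\varepsilon_\varphi\, e^{-\alpha\phi_r/\mu} = 0 \;\Longrightarrow\; \frac{\nu^{5}}{(\phi_r^{min})^{2}} = -\frac{\alpha}{\mu}\,\varepsilon_\varphi\, e^{-\alpha\phi_r^{min}/\mu},$

$(m_s^{NS\text{-}WD})^{2} = \frac{2\nu^{5}}{(\phi_r^{min})^{3}} + \frac{\alpha^{2}}{\mu^{2}}\,\varepsilon_\varphi\, e^{-\alpha\phi_r^{min}/\mu},$

which reproduces the structure of Eqs. (20) and (21): a larger ambient scalar energy density ε_ϕ pushes φ_r^min to smaller values and increases the effective mass, as stated above.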
The gravitational scalar interaction between NS and WD, mediated by a massive gravitational scalar radiated field, typically acquires an exponential Yukawa suppression, which results in a finite range of the Yukawa type of potential energy,
$U(r) = -2\alpha_{ss}\alpha_{is}\,\frac{G\, m_{ss}\, m_{is}}{r}\, e^{-m_s^{NS\text{-}WD}\, r}.$   (22)
Here the product 2α_ss α_is is referred to as the interaction strength. The inverse of the range λ of a Yukawa potential $e^{-r/\lambda}/r$ characterizes the mass of the gravitational radiated scalar field, λ⁻¹ ≡ m_s^{NS-WD}. Most of the NS-WD binaries have a very small orbital eccentricity, ∼ 10⁻⁵−10⁻⁶ [6], i.e. approximately circular orbits. Accordingly, the scalarized NS and WD orbit each other and form a ring-configuration orbit on the binary plane. The distance a from the center of the binary plane to the outer boundary of the ring configuration corresponds to the semi-separation of the NS-WD binary, which is of the order of ∼ 10⁹ m [6], and the central thickness a′ of the ring approximately equals the diameter of the WD, i.e. a′ ∼ 10⁶ m. By comparing the radius of the NS and the WD with the separation between them, we get a′/a ∼ 10⁻³ ≪ 1. Accordingly, the orbit of the NS-WD binary can be assigned to a thin-ring orbit. The gravitational scalar interactions between the NS and the WD are therefore screened in the thin-ring configuration, with an interaction range of the same order as the orbital width, λ ∼ 10⁶ m. Consequently, the corresponding mass of the gravitational radiated scalar field in the NS-WD binary is estimated as m_s^{NS-WD} ≡ λ⁻¹ ∼ a′⁻¹ ∼ 10⁻²¹ eV/c².
Summary and discussions
In this work, we resort to the scalar-tensor theory of gravity to describe the dynamics of scalarized orbital-shrinking NS binary systems, and we investigate the gravitational scalar counterpart of tensor gravitational waves. It was found that a massive NS will develop a nontrivial scalar configuration in the strong-field regime [4], which is consistent with the observations of relativistic binary-pulsar systems, e.g. the DNS system PSR B1913+16 [1,2] and the NS-WD binary PSR J1738+0333 [3]. The spontaneously scalarized NS, with an interior scalar configuration, produces a scalar field in its exterior, on the one hand. On the other hand, the external scalar field will change the interactions between the two components and the dynamics of the binary, which induces a scalarization of its companion, as well as a second external scalar field around the companion star. The gravitational interaction between the two components is then enhanced, due to the scalar corrections to the Newtonian one (Eq. (2)), which leads to iteratively induced scalarizations. Therefore, the external scalar fields are strengthened continually. Either the interior mass-dependent scalar configuration or the dynamical interplay of the external scalar fields causes a gravitational scalar counterpart of the quadrupole gravitational radiation in in-spiraling NS binaries. Hence, the NS binaries are encoded not only by the tensor metric, but also by a gravitational scalar field, which modifies the dynamics of the binaries and makes the scalar-tensor theory of gravity a natural alternative to Einstein's general relativity for describing the in-spiraling systems.
Note that the spontaneous scalarization takes place in the interior of a NS located in a binary system when its compactness Gm/(Rc²) is above a certain critical threshold, even in the absence of scalar sources. The subsequently induced scalarizations also occur in each companion star of the spontaneously scalarized NSs. That is to say, both spontaneous scalarization and induced scalarization occur in the interior of a single star. So the components in DNS binaries and NS-WD systems undergo scalarizations via the same mechanisms. However, the binding energies of NS and WD are different, which contributes to differences in the dependence of the masses of NS/WD on the scalar configurations (i.e. the scalar charges). An obvious difference in the dependence of the masses on the scalar field of the two components in one binary system actually is the source of the emission of dipolar gravitational scalar radiation [11]. As a result, the causes of the gravitational scalar field φ are distinct in DNS and NS-WD systems. In in-spiraling DNS binaries, the two NS components possess very similar binding energies [6], and the scalar charges ω_ss and ω_is are very close to each other. Accordingly, the dipolar gravitational radiation is negligible. With the iterative interplay, the strength of the external scalar field around each component is enhanced, and convergence finally occurs. As a consequence, a gravitational scalar background field appears, which plays the role of the gravitational scalar counterpart of the quadrupole gravitational tensor waves. In in-spiraling NS-WD systems, owing to the different binding energies of NS and WD, the dependence of the masses of NS/WD on the scalar configurations is different. Therefore, the two components in the NS-WD binary carry different scalar charges, which is responsible for the dipolar gravitational scalar radiation. Therefore, the gravitational scalar radiated field plays the role of the gravitational scalar counterpart of the quadrupole gravitational tensor waves in a post-Newtonian corrected in-spiraling scalarized NS-WD system.
Consequently, the scalarized in-spiraling DNS and NS-WD systems, carrying gravitational scalar counterparts, are immersed in gravitational scalar potentials that arise from different mechanisms and therefore correspond to distinct physical processes. In the in-spiraling scalarized DNS binaries, because of the iterative interplay of the two external scalar fields, the gravitational scalar background field is subject to fluctuations. The scalar fluctuations couple to both the tensor metric and the gravitational scalar background field, which turns the couplings of the scalar fields into the Higgs-like gravitational scalar potential of Eq. (12), with the appearance of a mass-dimensional constant. It is the appearance of this Planck-scale mass-dimensional constant that is responsible for the spontaneous breaking of symmetry. Thus the gravitational scalar background field becomes massive, while the gravitational scalar fluctuation field is the massless field and plays the role of a Higgs-like field. Therefore, the mass of the gravitational scalar counterpart in in-spiraling scalarized DNS systems, expressed in Eq. (14), is of the order of the Planck mass scale and depends on the coupling strength between the gravitational background scalar field and NS matter. In in-spiraling scalarized NS-WD binaries, the gravitational scalar potential then consists of a monotonically decreasing self-interaction of the gravitational scalar radiated field and a scalar-energy-density-dependent, exponentially increasing coupling to the NS/WD matter. The non-monotonic potential displays a minimum, which gives rise to a massive gravitational scalar counterpart. The reason why the gravitational scalar counterpart in the NS-WD system becomes massive is that the gravitational scalar radiated field oscillates around a local minimum of the gravitational scalar potential, with high scalar-energy density. By considering the Yukawa-suppression effects in an environment of high scalar-energy density, we estimate the mass of the dipolar gravitational scalar counterpart of quadrupole tensor gravitational waves in NS-WD binaries, expressed by Eq. (21), to be of the order of $\sim 10^{-21}\ \mathrm{eV}/c^2$, which depends on the orbital scale of the binary.
The gravitational waves radiated from in-spiraling DNS and NS-WD binaries, which possess wide separations with orbital periods of order days, lie in the typical low-frequency band of around $10^{-4}$ Hz. The amplitudes are of the order of $10^{-24}$. It is therefore very unlikely that such signals can currently be detected by LIGO. However, the first space-based gravitational-wave observatory, LISA, is expected to detect space-borne low-frequency gravitational waves, with a sensitivity reaching down to $10^{-24}$. Therefore, we expect that the gravitational waves from in-spiraling scalarized DNS and NS-WD binaries, together with their scalar counterparts, can potentially be detected and constrained by LISA/eLISA in the near future. For now, we can only attempt to constrain our results with binary-pulsar observations, which is work in progress.
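As a rough order-of-magnitude check (our own back-of-the-envelope estimate, not taken from the cited analyses), the dominant quadrupole harmonic of a circular binary is emitted at twice the orbital frequency, so for an orbital period of a fraction of a day,
$$ f_{\mathrm{GW}} \simeq \frac{2}{P_b}, \qquad P_b \approx 0.3\ \mathrm{d} \approx 2.6\times 10^{4}\ \mathrm{s} \;\Rightarrow\; f_{\mathrm{GW}} \approx 8\times 10^{-5}\ \mathrm{Hz} \sim 10^{-4}\ \mathrm{Hz}, $$
which indeed lies in the low-frequency band accessible to space-based detectors rather than to ground-based ones.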
m and R are the mass and radius of NS, respectively. G is the Newtonian gravitational constant.
J.H. Taylor, J.M. Weisberg, A new test of general relativity - gravitational radiation and the binary pulsar PSR 1913+16. Astrophys. J. 253, 908 (1982)
J.M. Weisberg, D.J. Nice, J.H. Taylor, Timing measurements of the relativistic binary pulsar PSR B1913+16. Astrophys. J. 722, 1030 (2010)
P.C.C. Freire, N. Wex, G. Esposito-Farèse, J.P.W. Verbiest, M. Bailes, B.A. Jacoby, M. Kramer, I.H. Stairs, J. Antoniadis, G.H. Janssen, The relativistic pulsar-white dwarf binary PSR J1738+0333 - II. The most stringent test of scalar-tensor gravity. Mon. Not. R. Astron. Soc. 423, 3328 (2012)
T. Damour, G. Esposito-Farese, Nonperturbative strong field effects in tensor-scalar theories of gravitation. Phys. Rev. Lett. 70, 2220 (1993)
T. Damour, G. Esposito-Farese, Tensor-scalar gravity and binary pulsar experiments. Phys. Rev. D 54, 1474 (1996)
C.M. Zhang, J. Wang, Y.H. Zhao, H.X. Yin, L.M. Song, D.P. Menezes, D.T. Wickramasinghe, L. Ferrario, P. Chardonnet, Study of measured pulsar masses and their possible conclusions. Astron. Astrophys. 527, A83 (2011)
P.B. Demorest, T. Pennucci, S.M. Ransom, M.S.E. Roberts, J.W.T. Hessels, A two-solar-mass neutron star measured using Shapiro delay. Nature 467, 1081 (2010)
J. Antoniadis, P.C.C. Freire, N. Wex, T.M. Tauris, R.S. Lynch, M.H. van Kerkwijk, M. Kramer, C. Bassa, V.S. Dhillon, T. Driebe, J.W.T. Hessels, V.M. Kaspi, V.I. Kondratiev, N. Langer, T.R. Marsh, M.A. McLaughlin, T.T. Pennucci, S.M. Ransom, I.H. Stairs, J. van Leeuwen, J.P.W. Verbiest, D.G. Whelan, A massive pulsar in a compact relativistic binary. Science 340, 448 (2013)
J. Wang, C.M. Zhang, Y.H. Zhao, Y. Kojima, H.X. Yin, L.M. Song, Spin period evolution of a recycled pulsar in an accreting binary. Astron. Astrophys. 526, A88 (2011)
M. Salgado, D. Sudarsky, U. Nucamendi, On spontaneous scalarization. Phys. Rev. D 58, 124003 (1998)
C. Palenzuela, E. Barausse, M. Ponce, L. Lehner, Dynamical scalarization of neutron stars in scalar-tensor gravity theories. Phys. Rev. D 89(4), 044024 (2014)
T. Damour, G. Esposito-Farese, Tensor multiscalar theories of gravitation. Class. Quant. Grav. 9, 2093 (1992)
R.A. Hulse, J.H. Taylor, Discovery of a pulsar in a binary system. Astrophys. J. Lett. 195, L51 (1975)
C.M. Will, H.W. Zaglauer, Gravitational radiation, close binary systems, and the Brans-Dicke theory of gravity. Astrophys. J. 346, 366 (1989)
T. Damour, G. Esposito-Farese, Gravitational wave versus binary-pulsar tests of strong field gravity. Phys. Rev. D 58, 042001 (1998)
D.M. Eardley, Observable effects of a scalar gravitational field in a binary pulsar. Astrophys. J. Lett. 196, L59 (1975)
T. Damour, G. Esposito-Farèse, Testing gravity to second post-Newtonian order: a field theory approach. Phys. Rev. D 53, 5541 (1996)
S. Mirshekari, C.M. Will, Compact binary systems in scalar-tensor gravity: equations of motion to 2.5 post-Newtonian order. Phys. Rev. D 87, 084070 (2013)
J. Khoury, A. Weltman, Chameleon cosmology. Phys. Rev. D 69, 044026 (2004)
T. Damour, G.W. Gibbons, C. Gundlach, Dark matter, time varying G, and a dilaton field. Phys. Rev. Lett. 64, 123 (1990)
| []
|
[
"Combinatorial Topic Models using Small-Variance Asymptotics",
"Combinatorial Topic Models using Small-Variance Asymptotics"
]
| [
"Ke Jiang \nDept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n\n",
"Suvrit Sra [email protected] \nDept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n\n",
"Brian Kulis [email protected] \nDept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n\n"
]
| [
"Dept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n",
"Dept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n",
"Dept. of Computer Science and Engineering\nInstitute of Technology\nDept. of Electrical & Computer Engineering and Dept. of Computer Science\nLab. for Information and Decision Systems Massachusetts\nOhio State University\nBoston University\n"
]
| []
| Topic models have emerged as fundamental tools in unsupervised machine learning. Most modern topic modeling algorithms take a probabilistic view and derive inference algorithms based on Latent Dirichlet Allocation (LDA) or its variants. In contrast, we study topic modeling as a combinatorial optimization problem, and propose a new objective function derived from LDA by passing to the small-variance limit. We minimize the derived objective by using ideas from combinatorial optimization, which results in a new, fast, and high-quality topic modeling algorithm. In particular, we show that our results are competitive with popular LDA-based topic modeling approaches, and also discuss the (dis)similarities between our approach and its probabilistic counterparts. | null | [
"https://arxiv.org/pdf/1604.02027v2.pdf"
]
| 17,660,790 | 1604.02027 | ff5bf174bf46eccf10a2090c3acfa3641bfcb673 |
Combinatorial Topic Models using Small-Variance Asymptotics
Ke Jiang
Dept. of Computer Science and Engineering
Institute of Technology
Dept. of Electrical & Computer Engineering and Dept. of Computer Science
Lab. for Information and Decision Systems Massachusetts
Ohio State University
Boston University
Suvrit Sra [email protected]
Dept. of Computer Science and Engineering
Institute of Technology
Dept. of Electrical & Computer Engineering and Dept. of Computer Science
Lab. for Information and Decision Systems Massachusetts
Ohio State University
Boston University
Brian Kulis [email protected]
Dept. of Computer Science and Engineering
Institute of Technology
Dept. of Electrical & Computer Engineering and Dept. of Computer Science
Lab. for Information and Decision Systems Massachusetts
Ohio State University
Boston University
Combinatorial Topic Models using Small-Variance Asymptotics
Topic models have emerged as fundamental tools in unsupervised machine learning. Most modern topic modeling algorithms take a probabilistic view and derive inference algorithms based on Latent Dirichlet Allocation (LDA) or its variants. In contrast, we study topic modeling as a combinatorial optimization problem, and propose a new objective function derived from LDA by passing to the small-variance limit. We minimize the derived objective by using ideas from combinatorial optimization, which results in a new, fast, and high-quality topic modeling algorithm. In particular, we show that our results are competitive with popular LDA-based topic modeling approaches, and also discuss the (dis)similarities between our approach and its probabilistic counterparts.
Introduction
Topic modeling has long been fundamental to unsupervised learning on large document collections. Though the roots of topic modeling date back to latent semantic indexing [12] and probabilistic latent semantic indexing [16], the arrival of Latent Dirichlet Allocation (LDA) [9] was a turning point that transformed the community's thinking about topic modeling. LDA led to several followups that address some limitations of the original model [8,31], and also helped pave the way for subsequent advances in Bayesian learning methods, including variational inference methods [29], nonparametric Bayesian models [7,28], among others.
The LDA family of topic models are almost exclusively cast as probabilistic models. Consequently, the vast majority of techniques developed for topic modeling-collapsed Gibbs sampling [15], variational methods [9,29], and "factorization" approaches with theoretical guarantees [1,3,6]-are centered around performing inference for underlying probabilistic models. By limiting ourselves to a purely probabilistic viewpoint, we may be missing important opportunities grounded in combinatorial thinking. This realization leads us to the central question of this paper: Can we obtain a combinatorial topic model that competes with LDA?
We answer this question in the affirmative. In particular, we propose a combinatorial optimization formulation for topic modeling, derived using small-variance asymptotics (SVA) on the LDA model. SVA produces limiting versions of various probabilistic learning models, which can then be solved as combinatorial optimization problems. An analogy worth keeping in mind here is how k-means solves the combinatorial problem that arises upon letting variances go to zero in Gaussian mixtures. SVA techniques have proved quite fruitful recently, e.g., for cluster evolution [11], hidden Markov models [26], feature learning [10], supervised learning [32], hierarchical clustering [22], and others [17,33]. A common theme in these examples is that computational advantages and good empirical performance of k-means carry over to richer SVA based models. Indeed, in a compelling example, [11] demonstrate how a hard cluster evolution algorithm obtained via SVA is orders of magnitude faster than competing sampling-based methods, while still being significantly more accurate than competing probabilistic inference algorithms on benchmark data.
But merely using SVA to obtain a combinatorial topic model does not suffice. We need effective algorithms to optimize the resulting model. Unfortunately, a direct application of greedy combinatorial procedures on the LDA-based SVA model fails to compete with the usual probabilistic LDA methods. This setback necessitates a new idea. Surprisingly, as we will see, a local refinement procedure combined with an improved word assignment technique transforms the SVA approach into a competitive topic modeling algorithm.
Contributions. In summary the main contributions of our paper are the following:
• We perform SVA on the standard LDA model and obtain through it a combinatorial topic model.
• We develop an optimization procedure for optimizing the derived combinatorial model by utilizing local refinement and ideas from the facility location problem.
• We show how our procedure can be implemented to take O(N K) time per iteration to assign each word token to a topic, where N is the total number of word tokens and K the number of topics.
• We demonstrate that our approach competes favorably with existing state-of-the-art topic modeling algorithms; in particular, our approach is orders of magnitude faster than sampling-based approaches, with comparable or better accuracy.
Before proceeding to outline the technical details, we make an important comment regarding evaluation of topic models. The connection between our approach and standard LDA may be viewed analogously to the connection between k-means and a Gaussian mixture model. As such, evaluation is nontrivial; most topic models are evaluated using predictive log-likelihood or related measures. In light of the "hard-vs-soft" analogy, a predictive log-likelihood score can be a misleading way to evaluate performance of the k-means algorithm, so clustering comparisons typically focus on ground-truth accuracy (when possible). Due to the lack of available ground truth data, to assess our combinatorial model we must resort to synthetic data sampled from the LDA model to enable meaningful quantitative comparisons; but in line with common practice we also present results on real-world data, for which we use both hard and soft predictive log-likelihoods.
Related Work
LDA Algorithms. Many techniques have been developed for efficient inference for LDA. The most popular are perhaps MCMC-based methods, notably the collapsed Gibbs sampler (CGS) [15], and variational inference methods [9,29]. Among MCMC and variational techniques, CGS typically yields excellent results and is guaranteed to sample from the desired posterior with sufficiently many samples. Its running time can be slow and many samples may be required before convergence.
Since topic models are often used on large (document) collections, significant effort has been made in scaling up LDA algorithms. One recent example is [23], which presents a massively distributed implementation. Such methods are outside the focus of this paper, which concentrates on our new combinatorial model that can quantitatively compete with the probabilistic LDA model. Ultimately, our model should be amenable to fast distributed solvers, and obtaining such solvers for our model is an important part of future work.
A complementary line of algorithms starts with [3,2], who consider certain separability assumptions on the input data to circumvent NP-Hardness of the basic LDA model. These works have shown performance competitive to Gibbs sampling in some scenarios while also featuring theoretical guarantees. Other recent viewpoints on LDA are offered by [1,24,6].
Small-Variance Asymptotics (SVA). As noted above, SVA has recently emerged as a powerful tool for obtaining scalable algorithms and objective functions by "hardening" probabilistic models. Similar connections are known for instance in dimensionality reduction [25], multi-view learning, classification [30], and structured prediction [27]. Starting with Dirichlet process mixtures [21], one thread of research has considered applying SVA to richer Bayesian nonparametric models. Applications include clustering [21], feature learning [10], evolutionary clustering [11], infinite hidden Markov models [26], Markov jump processes [17], infinite SVMs [32], and hierarchical clustering methods [22]. A related thread of research considers how to apply SVA methods when the data likelihood is not Gaussian, which is precisely the scenario under which LDA falls. In [19], it is shown how SVA may be applied as long as the likelihood is a member of the exponential family of distributions. Their work considers topic modeling as a potential application, but does not develop any algorithmic tools, and without these SVA fails to succeed on topic models; the present paper fixes this by using a stronger word assignment algorithm and introducing local refinement.
Combinatorial Optimization. In developing effective algorithms for topic modeling, we will borrow some ideas from the large literature on combinatorial optimization algorithms. In particular, in the k-means community, significant effort has been made on how to improve upon the basic k-means algorithm, which is known to be prone to local optima; these techniques include local search methods [14] and good initialization strategies [4]. We also borrow ideas from approximation algorithms, most notably algorithms based on the facility location problem [18].
SVA for Latent Dirichlet Allocation
We now detail our combinatorial approach to topic modeling. We start with the derivation of the underlying objective function that is the basis of our work. This objective is derived from the LDA model by applying SVA, and contains two terms. The first is similar to the k-means clustering objective in that it seeks to assign words to topics that are, in a particular sense, "close." The second term, arising from the Dirichlet prior on the per-document topic distributions, places a penalty on the number of topics per document.
Recall the standard LDA model. We choose topic weights for each document as θ j ∼ Dir(α), where j ∈ {1, ..., M }. Then we choose word weights for each topic as ψ i ∼ Dir(β), where i ∈ {1, ..., K}. Then, for each word i in document j, we choose a topic z jt ∼ Cat(θ j ) and a word w jt ∼ Cat(ψ z jt ). Here α and β are scalars (i.e., we are using a symmetric Dirichlet distribution). Let W denote the vector of all words in all documents, Z the topic indicators of all words in all documents, θ the concatenation of all the θ j variables, and ψ the concatenation of all the ψ i variables. Also let N j be the total number of word tokens in document j. The θ j vectors are each of length K, the number of topics. The ψ i vectors are each of length D, the size of the vocabulary. We can write down the full joint likelihood p(W, Z, θ, ψ|α, β) of the model in the factored form
$$\prod_{i=1}^{K} p(\psi_i \mid \beta) \prod_{j=1}^{M} \Big[ p(\theta_j \mid \alpha) \prod_{t=1}^{N_j} p(z_{jt} \mid \theta_j)\, p(w_{jt} \mid \psi_{z_{jt}}) \Big],$$
where each of the probabilities is as specified by the LDA model. Following standard LDA manipulations, we can eliminate variables to simplify inference by integrating out θ to obtain
$$p(Z, W, \psi \mid \alpha, \beta) = \int_{\theta} p(W, Z, \theta, \psi \mid \alpha, \beta)\, d\theta. \tag{1}$$
After integration and some simplification, (1) becomes
$$\prod_{i=1}^{K} p(\psi_i \mid \beta) \prod_{j=1}^{M} \prod_{t=1}^{N_j} p(w_{jt} \mid \psi_{z_{jt}}) \times \prod_{j=1}^{M} \frac{\Gamma(\alpha K)}{\Gamma\big(\sum_{i=1}^{K} n^i_{j\cdot} + \alpha K\big)} \prod_{i=1}^{K} \frac{\Gamma(n^i_{j\cdot} + \alpha)}{\Gamma(\alpha)}. \tag{2}$$
Here n i j· is the number of word tokens in document j assigned to topic i. Now, following [10], we can obtain the SVA objective by taking the (negative) logarithm of this likelihood and letting the variance go to zero. Given space considerations, we will summarize this derivation; full details are available in Appendix A.
Consider the first bracketed term of (2). Taking logs yields a sum over terms of the form $\log p(\psi_i \mid \beta)$ and terms of the form $\log p(w_{jt} \mid \psi_{z_{jt}})$. Noting that the latter of these is a multinomial distribution, and thus a member of the exponential family, we can appeal to the results in [5,19] to introduce a new parameter for scaling the variance. In particular, we can write $p(w_{jt} \mid \psi_{z_{jt}})$ in its Bregman divergence form $\exp(-\mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}))$, where KL refers to the discrete KL-divergence and $\tilde{w}_{jt}$ is an indicator vector for the word at token $w_{jt}$. It is straightforward to verify that $\mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}) = -\log \psi_{z_{jt}, w_{jt}}$. Next, introduce a new parameter $\eta$ that scales the variance appropriately, and write the resulting distribution as proportional to $\exp(-\eta \cdot \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}))$. As $\eta \to \infty$, the expected value of the distribution remains fixed while the variance goes to zero, exactly what we require.
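To make the role of the scaling parameter concrete, the short sketch below (illustrative only; the topic vector psi is made up) raises a categorical distribution's probabilities to the power η and renormalizes, which is exactly what the form $\exp(-\eta \cdot \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}))$ amounts to for an indicator $\tilde{w}_{jt}$; the mode stays the same while the mass concentrates as η grows:

```python
import numpy as np

def scaled_categorical(psi, eta):
    """Distribution proportional to exp(-eta * KL(e_w, psi)) = psi_w ** eta."""
    p = psi ** eta
    return p / p.sum()

psi = np.array([0.5, 0.3, 0.2])   # a hypothetical topic's word distribution
for eta in [1, 5, 50]:
    print(eta, scaled_categorical(psi, eta))
# As eta grows the mass concentrates on the most probable word, i.e. the
# "variance" of the word draw shrinks while the mode is unchanged.
```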
After this, consider the second bracketed term of (2). We scale α appropriately as well; this ensures that the hierarchical form of the model is retained asymptotically. In particular, we write α = exp(−λ · η). After some manipulation of this distribution, we can conclude that the negative log of the Dirichlet multinomial term becomes asymptotically ηλ(K j+ − 1), where K j+ is the number of topics i in document j where n i j· > 0, i.e., the number of topics currently used by document j. (The maximum value for K j+ is K, the total number of topics.) To formalize, let f (x) ∼ g(x) denote that f (x)/g(x) → 1 as x → ∞. Then we have the following (see Appendix A for a proof):
Lemma 1. Consider the likelihood
$$p(Z \mid \alpha) = \prod_{j=1}^{M} \frac{\Gamma(\alpha K)}{\Gamma\big(\sum_{i=1}^{K} n^i_{j\cdot} + \alpha K\big)} \prod_{i=1}^{K} \frac{\Gamma(n^i_{j\cdot} + \alpha)}{\Gamma(\alpha)}.$$
If α = exp(−λ · η), then asymptotically as η → ∞, the negative log-likelihood satisfies
$$-\log p(Z \mid \alpha) \sim \eta \lambda \sum_{j=1}^{M} (K_{j+} - 1).$$
Now we put the terms of the negative log-likelihood together. The − log p(ψ i |β) terms vanish asymptotically since we are not scaling β (see the note below on scaling β). Thus, the remaining terms in the SVA objective are the ones arising from the word likelihoods and the Dirichlet-multinomial. Using the Bregman divergence representation with the additional η parameter, we conclude that the negative log-likelihood asymptotically yields the following:
$$-\log p(Z, W, \psi \mid \alpha, \beta) \sim \eta \Big( \sum_{j=1}^{M} \sum_{t=1}^{N_j} \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}) + \lambda \sum_{j=1}^{M} (K_{j+} - 1) \Big),$$
which leads to our final objective function
$$\min_{Z, \psi} \; \sum_{j=1}^{M} \sum_{t=1}^{N_j} \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}) + \lambda \sum_{j=1}^{M} K_{j+}. \tag{3}$$
We remind the reader that KL(w jt , ψ z jt ) = − log ψ z jt ,w jt . Thus, we obtain a k-means-like term that says that all words in all documents should be "close" to their assigned topic in terms of KL-divergence, but that we should also not have too many topics represented in each document. Note. We did not scale β to obtain a simple objective with only one parameter (other than the total number of topics), but let us say a few words about scaling β. A natural approach is to further integrate out ψ of the joint likelihood, as is done in the collapsed Gibbs sampler. One would obtain additional Dirichlet-multinomial distributions, and properly scaling as discussed above would yield a simple objective that places penalties on the number of topics per document as well as the number of words in each topic. Optimization would then be performed with respect to the topic assignment matrix. Future work will consider effectiveness of such an objective function for topic modeling.
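For concreteness, a minimal sketch of evaluating objective (3) is given below; the data layout (per-document arrays of vocabulary indices) and all variable names are ours, not from the paper:

```python
import numpy as np

def combinatorial_objective(docs, Z, psi, lam):
    """Objective (3): sum of per-token KL terms plus lambda times the
    number of distinct topics used in each document.

    docs : list of 1-D int arrays, docs[j][t] is the vocabulary index of token t
    Z    : list of 1-D int arrays, Z[j][t] is the topic assigned to that token
    psi  : (K, D) array of topic-word distributions (assumed strictly positive)
    lam  : penalty paid for each topic a document uses
    """
    obj = 0.0
    for w_j, z_j in zip(docs, Z):
        # KL(w_tilde, psi_z) reduces to -log psi[z, w] for an indicator vector
        obj += -np.log(psi[z_j, w_j]).sum()
        obj += lam * len(np.unique(z_j))
    return obj
```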
Algorithms
With our combinatorial objective in hand, we are ready to develop algorithms that optimize it. In particular, we discuss a locally-convergent algorithm similar to k-means and the hard topic modeling algorithm [19]. Then, we introduce two more powerful techniques: (i) a word-level assignment method that arises from connections between our proposed objective function and the facility location problem; and (ii) an incremental topic refinement method that is inspired by local-search methods developed for k-means. Despite the apparent complexity of our algorithms, we show that the per-iteration time matches that of the collapsed Gibbs sampler (while empirically converging in just a few iterations, as opposed to the thousands typically required for Gibbs sampling).
We first describe a basic iterative algorithm for optimizing the combinatorial hard LDA objective derived in the previous section (see Appendix A for pseudo-code). The basic algorithm follows the k-means style-we perform alternate optimization by first minimizing with respect to the topic indicators for each word (the Z values) and then minimizing with respect to the topics (the ψ vectors).
Consider first minimization with respect to ψ, with Z fixed. In this case, the penalty term of the objective function for the number of topics per document is not relevant to the minimization. Therefore the minimization can be performed in closed form by computing means based on the assignments, due to known properties of the KL-divergence; see Proposition 1 of [5]. In our case, the topic vectors will be computed as follows: entry ψ iu corresponding to topic i and word u will simply be equal to the number of occurrences of word u assigned to topic i normalized by the total number of word tokens assigned to topic i.
Next consider minimization with respect to Z with fixed ψ. We follow a strategy similar to DP-means [21]. In particular, we compute the KL-divergence between each word token $w_{jt}$ and every topic i via $-\log(\psi_{i, w_{jt}})$. Then, for any topic i that is not currently occupied by any word token in document j, i.e., $z_{jt'} \neq i$ for all tokens $t'$ in document j, we penalize the distance by λ. Next we obtain new assignments by reassigning each word token to the topic corresponding to its smallest divergence (including any penalties). We continue this alternating strategy until convergence. The running time of the batch algorithm can be shown to be O(NK) per iteration, where N is the total number of word tokens and K is the number of topics. One can also show that this algorithm is guaranteed to converge to a local optimum, similar to k-means and DP-means.
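One iteration of this batch scheme can be sketched as follows (our own illustrative implementation; the small smoothing constant eps, added to avoid taking log 0, is an assumption not discussed in the text):

```python
import numpy as np

def batch_iteration(docs, Z, K, D, lam, eps=1e-12):
    # --- topic update: psi_iu = count of word u in topic i / tokens in topic i
    counts = np.zeros((K, D))
    for w_j, z_j in zip(docs, Z):
        np.add.at(counts, (z_j, w_j), 1.0)
    psi = (counts + eps) / (counts + eps).sum(axis=1, keepdims=True)

    # --- assignment update
    new_Z = []
    for w_j, z_j in zip(docs, Z):
        dist = -np.log(psi[:, w_j]).T          # (N_j, K) divergences
        used = np.isin(np.arange(K), z_j)      # topics currently used in doc j
        dist[:, ~used] += lam                  # penalty for opening a new topic
        new_Z.append(dist.argmin(axis=1))
    return new_Z, psi
```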
Improved Word Assignments
The basic algorithm has the advantage that it achieves local convergence. However, it is quite sensitive to initialization, analogous to standard k-means. In this section, we discuss and analyze an alternative assignment technique for Z, which may be used as an initialization to the locally-convergent basic algorithm or to replace it completely.
Algorithm 1 details the alternate assignment strategy for tokens. The inspiration for this greedy algorithm arises from the fact that we can view the assignment problem for Z, given ψ, as an instance of the uncapacitated facility location (UFL) problem [18]. Recall that the UFL problem aims to open a set of facilities from a set F of potential locations. Given a set of clients D, a distance function d : D × F → R + , and a cost function f : F → R + for the set F , the UFL problem aims to find a subset S of F that minimizes
$$\sum_{i \in S} f_i + \sum_{j \in D} \min_{i \in S} d_{ij}.$$
To map UFL to the assignment problem in combinatorial topic modeling, consider the problem of assigning word tokens to topics for some fixed document j. The topics correspond to the facilities and the clients correspond to word tokens. Let f i = λ for each facility, and let the distances between clients and facilities be given by the corresponding KL-divergences as detailed earlier. Then the UFL objective corresponds exactly to the assignment problem for topic modeling. Algorithm 1 is a greedy algorithm for UFL that has been shown to achieve constant factor approximation guarantees when distances between clients and facilities forms a metric [18] (this guarantee does not apply in our case, as KL-divergence is not a metric).
The algorithm must select, among all topics and all sets of unmarked tokens T, the minimizer of
$$\frac{f_i + \sum_{t \in T} \mathrm{KL}(\tilde{w}_{jt}, \psi_i)}{|T|}. \tag{4}$$
This algorithm appears to be computationally expensive, requiring multiple rounds of marking where each round requires us to find a minimizer over exponentially-sized sets. Surprisingly, under mild assumptions we can use the structure of our problem to derive an efficient implementation of this algorithm that runs in total time O(N K). The details of this efficient implementation are presented in Appendix B.
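For illustration, the following sketch implements the greedy selection for a single document directly from the description above; it is the straightforward version whose per-round cost is not optimized, rather than the O(NK) implementation of Appendix B, and it assumes ψ has strictly positive entries. All names are ours:

```python
import numpy as np

def greedy_assign_document(w_j, psi, lam):
    """Greedy UFL-style assignment of the tokens of one document.

    w_j : 1-D int array of vocabulary indices; psi : (K, D) topics; lam : topic penalty.
    """
    K = psi.shape[0]
    div = -np.log(psi[:, w_j])          # (K, N_j) KL divergences to each topic
    f = np.full(K, lam)                 # opening cost per topic
    unmarked = np.ones(len(w_j), bool)
    z = np.empty(len(w_j), int)
    while unmarked.any():
        best = (np.inf, None, None)
        for i in range(K):
            # the best candidate T of each size consists of the cheapest unmarked tokens
            order = np.argsort(div[i, unmarked])
            idx = np.where(unmarked)[0][order]
            costs = (f[i] + np.cumsum(div[i, idx])) / np.arange(1, len(idx) + 1)
            t = int(costs.argmin())
            if costs[t] < best[0]:
                best = (costs[t], i, idx[:t + 1])
        _, i, T = best
        z[T] = i
        f[i] = 0.0                      # topic i is now "open" for this document
        unmarked[T] = False
    return z
```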
Incremental Topic Refinement
Algorithm 2 Incremental Topic Refinement for Z
Input: Words W, number of topics K, topic penalty λ, assignments Z, topics ψ
Randomly permute the documents.
for every document j do
  for each mini-topic S, where z_js = i for all s ∈ S for some topic i, do
    for every other topic i' ≠ i do
      Compute ∆(S, i, i'), the change in the objective function when re-assigning z_js = i' for all s ∈ S.
    end for
    Let i* = argmin_{i'} ∆(S, i, i'). Reassign the tokens in S to i* if this yields a smaller objective.
    Update topics ψ and assignments Z.
  end for
end for
Output: Assignments Z and topics ψ.

Unlike traditional clustering problems, topic modeling is hierarchical: we have both word-level assignments and "mini-topics" (formed by word tokens in the same document which are assigned to the same topic). Explicitly refining the mini-topics should help in achieving better word co-assignment within the same document. Inspired by local search techniques in the clustering literature [14], we take a similar approach here. However, traditional approaches [13] do not directly apply in our setting; we therefore adapt local search techniques from clustering to the topic modeling problem. More specifically, we consider an incremental topic refinement scheme that works as follows. For a given document, we consider swapping all word tokens assigned to the same topic within that document to another topic. We compute the change in objective function that would occur if we both updated the topic
assignments for those tokens and then updated the resulting topic vectors. Specifically, for document j and its mini-topic S formed by its word tokens assigned to topic i, the objective function change can be computed by
$$\Delta(S, i, i') = -(n^i_{\cdot\cdot} - n^i_{j\cdot})\,\phi(\psi^-_i) - (n^{i'}_{\cdot\cdot} + n^i_{j\cdot})\,\phi(\psi^+_{i'}) + n^i_{\cdot\cdot}\,\phi(\psi_i) + n^{i'}_{\cdot\cdot}\,\phi(\psi_{i'}) - \lambda\, \mathbb{I}[i' \in T_j],$$
where $n^i_{j\cdot}$ is the number of tokens in document j assigned to topic i, $n^i_{\cdot\cdot}$ is the total number of tokens assigned to topic i, $\psi^-_i$ and $\psi^+_{i'}$ are the updated topics, $T_j$ is the set of all the topics used in document j, and $\phi(\psi_i) = \sum_w \psi_{iw} \log \psi_{iw}$.
We accept the move if $\min_{i' \neq i} \Delta(S, i, i') < 0$ and update the topics ψ and assignments Z accordingly. Then we continue to the next mini-topic, hence the term "incremental". Note that here we accept all moves that improve the objective function, instead of just the single best move as in traditional approaches [13]. Since ψ and Z are updated after every objective-decreasing move, we randomly permute the processing order of the documents in each iteration. This usually helps in obtaining better results in practice. See Algorithm 2 for details.
At first glance, it appears that this incremental topic refinement strategy may be computationally expensive. However, computing the global change in objective function ∆(S, i, i ) can be performed in O(|S|) time, if the topics are maintained by count matrices. Only the counts involving the words in the mini-topic and the total counts are affected. Since we compute the change across all topics, and across all mini-topics S, the total running time of the incremental topic refinement can be seen to be O(N K), as in the basic batch algorithm and the facility location assignment algorithm.
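A hedged sketch of the ∆ computation from count matrices is shown below; for clarity it recomputes the φ sums over the full vocabulary rather than maintaining them incrementally in O(|S|) as described above, and the variable names (topic_word for the K × D count matrix, topic_tot for per-topic totals) are ours:

```python
import numpy as np

def phi(counts):
    """phi(psi) = sum_w psi_w log psi_w computed from a topic's word counts."""
    n = counts.sum()
    if n == 0:
        return 0.0
    p = counts[counts > 0] / n
    return float((p * np.log(p)).sum())

def delta_objective(S_counts, i, ip, topic_word, topic_tot, lam, topics_in_doc):
    """Change in objective (3) when a mini-topic with word counts S_counts moves
    from topic i to topic ip; topics_in_doc is the set of topics used by the document."""
    n_S = S_counts.sum()
    old = topic_tot[i] * phi(topic_word[i]) + topic_tot[ip] * phi(topic_word[ip])
    new = ((topic_tot[i] - n_S) * phi(topic_word[i] - S_counts)
           + (topic_tot[ip] + n_S) * phi(topic_word[ip] + S_counts))
    return -new + old - lam * (ip in topics_in_doc)
```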
Experiments
In this section, we compare the algorithms proposed above with their probabilistic counterparts.
Synthetic Documents
Our first set of experiments is on simulated data. We compare three versions of our algorithms-Basic Batch (Basic), Improved Word Assignment (Word), and Improved Word with Topic Refinement (Word+Refine)with the collapsed Gibbs sampler (CGS) 1 [15], the standard variational inference algorithm (VB) 2 [9], and the recent Anchor method 3 [2].
Methodology. Due to a lack of ground truth data for topic modeling, following [2], we benchmark on synthetic data. We train all algorithms on the following data sets. For the collapsed Gibbs sampler, we collect 10 samples with 30 iterations of thinning after 3000 burn-in iterations. The variational inference runs for 100 iterations. The Word algorithm replaces basic word assignment with the improved word assignment step within the batch algorithm, and Word+Refine further alternates between improved word and incremental topic refinement steps. The Word and Word+Refine are run for 20 and 10 iterations respectively. For Basic, Word and Word+Refine, we run experiments with λ ∈ {6, 7, 8, 9, 10, 11, 12}, and the best results are presented if not stated otherwise. In contrast, the true α, β parameters are provided as input to the LDA algorithms, whenever applicable. We note that we have heavily handicapped our methods by this setup, since the LDA algorithms are designed specifically for data from the LDA model. Assignment accuracy. Both the Gibbs sampler and our algorithms provide word-level topic assignments. Thus we can compare the training accuracy of these assignments, which is shown in Table 1. The result of the Gibbs sampler is given by the highest among all the samples selected. The accuracy is shown in terms of the normalized mutual information (NMI) score and the adjusted Rand index (ARand), which are both in the range of [0,1] and are standard evaluation metrics for clustering problems. From the plots, we can see that the performance of Word+Refine matches or slightly outperforms the Gibbs sampler for a wide range of λ values.
Topic reconstruction error. Now we look at the reconstruction error between the true topic-word distributions and the learned distributions. In particular, given a learned topic matrixψ and the true matrix ψ, we use the Hungarian algorithm [20] to align topics, and then evaluate the 1 distance between each pair of topics. Figure 1 presents the mean reconstruction errors per topic of different learning algorithms for varying number of documents. As a baseline, we also include the results from the k-means algorithm with KL-divergence [5] where each document is assigned to a single topic. We see that, on this data, the Anchor and Word+Refine methods perform the best; see Appendix C for further results and discussion. Table 2: The predictive word log-likelihood on new documents for Enron (K = 100 topics) and NYTimes (K = 100 topics) datasets with fixed α value. "hard" is short for hard predictive word log-likelihood which is computed using the word-topic assignments inferred by the Word algorithm, "original" is short for original predictive word log-likelihood which is computed using the document-topic distributions inferred by the sampler, and "KL" is short for symmetric KL-divergence.
the Word+Refine algorithm are the word assignments via facility location and the local refinement step (the other steps of the algorithm are lower-order). The relative running times improve as the data set sizes get larger and, on large data sets, an iteration of Refine is roughly equivalent to one Gibbs iteration while an iteration of Word is roughly equivalent to two Gibbs iterations. Since one typically runs thousands of Gibbs iterations (while ours runs in 10 iterations even on very large data sets), we can observe several orders of magnitude improvement in speed by our algorithm. Further, running times could be significantly enhanced by noting that the facility location algorithm trivially parallelizes. In addition to these results, we found our per-iteration running times to be consistently faster than VB. See Appendix C for further results on synthetic data, including on using our algorithm as initialization to the collapsed Gibbs sampler.
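For reference, the alignment-based reconstruction error used in the synthetic experiments can be computed along the following lines (a sketch using SciPy's linear assignment solver; names are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def topic_reconstruction_error(psi_true, psi_learned):
    """Mean per-topic l1 error after aligning learned topics to true topics
    with the Hungarian algorithm."""
    # cost[i, k] = l1 distance between true topic i and learned topic k
    cost = np.abs(psi_true[:, None, :] - psi_learned[None, :, :]).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()
```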
Real Documents
Table 3: Example topic pairs learned from the NYTimes dataset.
CGS: art, artist, painting, museum, century, show, collection, history, french, exhibition
W+R: painting, exhibition, portrait, drawing, object, photograph, gallery, flag, artist
CGS: plane, flight, airport, passenger, pilot, aircraft, crew, planes, air, jet
W+R: flight, plane, passenger, airport, pilot, airline, aircraft, jet, planes, airlines
CGS: money, million, fund, donation, pay, dollar, contribution, donor, raising, financial
W+R: fund, raising, contribution, donation, raised, donor, soft, raise, finance, foundation
CGS: car, driver, truck, vehicles, vehicle, zzz ford, seat, wheel, driving, drive
W+R: car, driver, vehicles, vehicle, truck, wheel, fuel, engine, drive, zzz ford

We consider two real-world data sets with different properties: a random subset of the Enron emails (8K documents, vocabulary size 5000), and a subset of the New York Times articles 4 (15K documents, vocabulary size 7000). 1K documents are reserved for predictive performance assessment for both datasets. We use the following metrics: a "hard" predictive word log-likelihood and the standard probabilistic predictive word
log-likelihood on new documents. To get the topic assignments for new documents, we can either perform one iteration of the Word algorithm, which can be used to compute the "hard" predictive log-likelihood, or use MCMC to sample the assignments with the learned topic matrix. Our hard log-likelihood can be viewed as the natural analogue of the standard predictive log-likelihood in our setting. We also compute the symmetric KL-divergence between learned topics. To make fair comparisons, we tune the λ value such that the resulting number of topics per document is comparable to that of the sampler. We remind the reader of issues raised in the introduction, namely that our combinatorial approach is no longer probabilistic, and therefore would not necessarily be expected to perform well on a standard likelihood-based score. Table 5 shows the results on the Enron and NYTimes datasets. We can see that our approach excels in the "hard" predictive word log-likelihood while lagging in the standard mixture-view predictive word log-likelihood, which is in line with the objectives and reminiscent of the differences between k-means and GMMs. Table 3 further shows some sample topics generated by CGS and our method. See Appendix C for further results on predictive log-likelihood, including comparisons to other approaches than CGS.
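One hedged reading of the "hard" score is sketched below: each held-out token is assigned with one pass of the greedy word-assignment step (the greedy_assign_document sketch given earlier), and the per-token log-likelihood is averaged under those hard assignments. This is our reconstruction of the metric, not code from the paper:

```python
import numpy as np

def hard_predictive_loglik(test_docs, psi, lam):
    """Average per-token log-likelihood under hard topic assignments obtained
    by one pass of the greedy word-assignment step on each held-out document."""
    total, n_tokens = 0.0, 0
    for w_j in test_docs:
        z_j = greedy_assign_document(w_j, psi, lam)   # from the earlier sketch
        total += np.log(psi[z_j, w_j]).sum()
        n_tokens += len(w_j)
    return total / n_tokens
```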
Conclusions
Our goal has been to lay the groundwork for a combinatorial optimization view of topic modeling as an alternative to the standard probabilistic framework. Small-variance asymptotics provides a natural way to obtain an underlying objective function, using the k-means connection to Gaussian mixtures as an analogy. Potential future work includes distributed implementations for further scalability, adapting k-means-based semi-supervised clustering techniques to this setting, and extensions of k-means++ [4] to derive explicit performance bounds for this problem.
where each of the probabilities is given as specified in the above model. Now, following standard LDA manipulations, we can eliminate variables to simplify inference by integrating out θ to obtain
$$p(Z, W, \psi \mid \alpha, \beta) = \int_{\theta} p(W, Z, \theta, \psi \mid \alpha, \beta)\, d\theta.$$
After simplification, we obtain
$$p(Z, W, \psi \mid \alpha, \beta) = \prod_{i=1}^{K} p(\psi_i \mid \beta) \prod_{j=1}^{M} \prod_{t=1}^{N_j} p(w_{jt} \mid \psi_{z_{jt}}) \times \prod_{j=1}^{M} \frac{\Gamma(\alpha K)}{\Gamma\big(\sum_{i=1}^{K} n^i_{j\cdot} + \alpha K\big)} \prod_{i=1}^{K} \frac{\Gamma(n^i_{j\cdot} + \alpha)}{\Gamma(\alpha)}.$$
Here n i j· is the number of word tokens in document j assigned to topic i. Now, following [10], we can obtain the SVA objective by taking the log of this likelihood and observing what happens when the variance goes to zero. In order to do this, we must be able to scale the likelihood categorical distribution, which is not readily apparent. Here we use two facts about the categorical distribution. First, as discussed in [5], we can equivalently express the distribution p(w jt |ψ z jt ) in its Bregman divergence form, which will prove amenable to SVA analysis. In particular, example 10 from [5] details this derivation. In our case we have a categorical distribution, and thus we can write the probability of token w jt as:
$$p(w_{jt} \mid \psi_{z_{jt}}) = \exp\big(-d_\phi(1, \psi_{z_{jt}, w_{jt}})\big). \tag{5}$$
$d_\phi$ is the unique Bregman divergence associated with the categorical distribution which, as detailed in example 10 from [5], is the discrete KL divergence, and $\psi_{z_{jt}, w_{jt}}$ is the entry of the topic vector associated with the topic indexed by $z_{jt}$ at the entry corresponding to the word at token $w_{jt}$. This KL divergence will correspond to a single term of the form $x \log(x/y)$, where $x = 1$ since we are considering a single token of a word in a document. Thus, for a particular token, the KL divergence simply equals $-\log \psi_{z_{jt}, w_{jt}}$. Note that when plugging $-\log \psi_{z_{jt}, w_{jt}}$ into (5), we obtain exactly the original probability for word token $w_{jt}$ that we had in the original multinomial distribution. We will write the KL-divergence $d_\phi(1, \psi_{z_{jt}, w_{jt}})$ as $\mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}})$, where $\tilde{w}_{jt}$ is an indicator vector for the word at token $w_{jt}$.
Although it may appear that we have gained nothing by this notational manipulation, there is a key advantage of expressing the categorical probability in terms of Bregman divergences. In particular, the second step is to parameterize the Bregman divergence by an additional variance parameter. As discussed in Lemma 3.1 of [19], we can introduce another parameter, which we will call η, that scales the variance in an exponential family while fixing the mean. This new distribution may be represented, using the Bregman divergence view, as proportional to exp(−η · KL(w jt , ψ z jt )). As η → ∞, the mean remains fixed while the variance goes to zero, which is precisely what we require to perform small-variance analysis.
We will choose to scale α appropriately as well; this will ensure that the hierarchical form of the model is retained asymptotically. In particular, we will write α = exp(−λ · η). Now we consider the full negative log-likelihood:
− log p(Z, W, ψ|α, β).
Let us first derive the asymptotic behavior arising from the Dirichlet-multinomial distribution part of the likelihood, for a given document j:
$$\frac{\Gamma(\alpha K)}{\Gamma\big(\sum_{i=1}^{K} n^i_{j\cdot} + \alpha K\big)} \prod_{i=1}^{K} \frac{\Gamma(n^i_{j\cdot} + \alpha)}{\Gamma(\alpha)}.$$
In particular, we will show the following lemma.
Lemma 2. Consider the likelihood
$$p(Z \mid \alpha) = \prod_{j=1}^{M} \frac{\Gamma(\alpha K)}{\Gamma\big(\sum_{i=1}^{K} n^i_{j\cdot} + \alpha K\big)} \prod_{i=1}^{K} \frac{\Gamma(n^i_{j\cdot} + \alpha)}{\Gamma(\alpha)}.$$
If α = exp(−λ · η), then asymptotically as η → ∞ we have
$$-\log p(Z \mid \alpha) \sim \eta \lambda \sum_{j=1}^{M} (K_{j+} - 1).$$
Proof. Note that $N_j = \sum_{i=1}^{K} n^i_{j\cdot}$. Using standard properties of the Γ function, we have that the negative log of the above distribution is equal to
$$\sum_{n=0}^{N_j - 1} \log(\alpha K + n) \;-\; \sum_{i=1}^{K} \sum_{n=0}^{n^i_{j\cdot} - 1} \log(\alpha + n).$$
All of the logarithmic summands converge to a finite constant whenever they have an additional term besides α or αK inside. The only terms that asymptotically diverge are those of the form log(αK) or log(α), that is, when n = 0. The first term always occurs. Terms of the type log(α) occur only when, for the corresponding i, we have n i j· > 0. Recalling that α = exp(−λ · η), we can conclude that the negative log of the Dirichlet multinomial term becomes asymptotically ηλ(K j+ − 1), where K j+ is the number of topics i in document j where n i j· > 0, i.e., the number of topics currently utilized by document j. (The maximum value for K j+ is K, the total number of topics.)
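A quick numerical sanity check of this limit is easy to run (illustrative only; it uses SciPy's log-gamma function and a made-up count vector):

```python
import numpy as np
from scipy.special import gammaln

def neg_log_dirichlet_multinomial(counts, alpha):
    K, N = len(counts), counts.sum()
    return -(gammaln(alpha * K) - gammaln(N + alpha * K)
             + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

counts = np.array([7, 0, 3, 0, 5])          # n^i_{j.} for one document, K_{j+} = 3
lam = 2.0
for eta in [5.0, 20.0, 80.0]:
    alpha = np.exp(-lam * eta)
    ratio = neg_log_dirichlet_multinomial(counts, alpha) / (eta * lam * (3 - 1))
    print(eta, ratio)                        # the ratio approaches 1 as eta grows
```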
The rest of the negative log-likelihood is straightforward. The $-\log p(\psi_i \mid \beta)$ terms vanish asymptotically since we are not scaling β (see the note below on scaling β). Thus, the remaining terms in the SVA objective are the ones arising from the word likelihoods which, after applying a negative logarithm, become $-\sum_{j=1}^{M} \sum_{t=1}^{N_j} \log p(w_{jt} \mid \psi_{z_{jt}})$.
Using the Bregman divergence representation, we can conclude that the negative log-likelihood asymptotically yields the objective − log p(Z, W, ψ|α, β) ∼
$$\eta \Big( \sum_{j=1}^{M} \sum_{t=1}^{N_j} \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}) + \lambda \sum_{j=1}^{M} (K_{j+} - 1) \Big),$$
where f (x) ∼ g(x) denotes that f (x)/g(x) → 1 as x → ∞. This leads to the objective function
$$\min_{Z, \psi} \; \sum_{j=1}^{M} \sum_{t=1}^{N_j} \mathrm{KL}(\tilde{w}_{jt}, \psi_{z_{jt}}) + \lambda \sum_{j=1}^{M} K_{j+}. \tag{6}$$
We remind the reader that KL(w jt , ψ z jt ) = − log ψ z jt ,w jt . Thus we obtain a k-means-like term that says that all words in all documents should be "close" to their assigned topic in terms of KL-divergence, but that we should also not have too many topics represented in each document. Note that we did not scale β, to obtain a simple objective with only one parameter (other than the total number of topics), but let us say a few words about scaling β. A natural approach is to further integrate out ψ of the joint likelihood, as is done with the collapsed Gibbs sampler. One would obtain additional Dirichlet-multinomial distributions, and properly scaling as discussed above would yield a simple objective that places penalties on the number of topics per document as well as the number of words in each topic. Optimization would be performed only with respect to the topic assignment matrix. Future work would consider the effectiveness of such an objective function for topic modeling.
Algorithm 3 Basic Batch Algorithm
Input: Words W, number of topics K, topic penalty λ
Initialize Z and topic vectors ψ_1, ..., ψ_K. Compute the initial objective function (6) using Z and ψ.
repeat
  // Update assignments:
  for every word token t in every document j do
    Compute the distance d(j, t, i) to each topic i: −log(ψ_{i, w_jt}).
    If z_{jt'} ≠ i for all tokens t' in document j, add λ to d(j, t, i).
    Obtain the assignment via z_jt = argmin_i d(j, t, i).
  end for
  // Update topic vectors:
  for every element ψ_iu do
    ψ_iu = # occurrences of word u in topic i / total # of word tokens in topic i.
  end for
  Recompute the objective function (6) using the updated Z and ψ.
until no change in the objective function.
Output: Assignments Z.
A.1 Further Details on the Basic Algorithm
Pseudo-code for the basic algorithm is given as Algorithm 3. We briefly elaborate on a few points raised in the main text.
First, the running time of the batch algorithm can be shown to be O(N K) per iteration, where N is the total number of word tokens and K is the number of topics. This is because each word token must be compared to every topic, but the resulting comparison can be done in constant time. Updating topics is performed by maintaining a count of the number of occurrences of each word in each topic, which also runs in O(N K) time. Note that the collapsed Gibbs sampler runs in O(N K) time per iteration, and thus has a comparable running time per iteration.
Second, one can also show that this algorithm is guaranteed to converge to a local optimum, similar to k-means and DP-means. The argument follows along similar lines to k-means and DP-means, namely that each updating step cannot increase the objective function. In particular, the update on the topic vectors must improve the objective function since the means are known to be the best representatives for topics based on the results of [5]. The assignment step must decrease the objective since we only re-assign if the distance goes down. Further, we only re-assign to a topic that is not currently used by the document if the distance is more than λ greater than the distance to the current topic, thus accounting for the additional λ that must be paid in the objective function.
Next we make three observations about the sorting procedure. First, the KL-divergence between a word and a topic depends purely on counts of words within topics; recall that it is of the form − log ψ iu , where ψ iu equals the number of occurrences of word u in topic i divided by the total number of word tokens assigned to i. Thus, for a given topic, the sorted words are obtained exactly by sorting word counts within a topic in decreasing order.
Second, because the word counts are all integers, we can use a linear-time sorting algorithm such as counting sort or radix sort to efficiently sort the items. In the case of counting sort, for instance, if we have n integers whose maximum value is k, the total running time is O(n + k); the storage is also O(n + k). In our case, we perform many sorts. Each sort considers, for a fixed document j, sorting word counts to some topic i. Suppose there are $n^i_{j\cdot}$ tokens with non-zero counts to the topic, with maximum word count $m^i_j$. Then the running time of this sort is $O(n^i_{j\cdot} + m^i_j)$. Across the document, we do this for every topic, making the running time scale as $O(\sum_i (n^i_{j\cdot} + m^i_j)) = O(n_{\cdot j\cdot} K)$, where $n_{\cdot j\cdot}$ is the number of word tokens in document j. Across all documents this sorting then takes O(NK) time.
Third, we note that we need only sort once per run of the algorithm. Once we have sorted lists for words to topics, if we mark some set T , we can efficiently remove these words from the sorted lists and keep the updated lists in sorted order. Removing an individual word from a single sorted list can be done in constant time by maintaining appropriate pointers. Since each word token is removed exactly once during the algorithm, and must be removed from each topic, the total time to update the sorted lists during the algorithm is O(N K).
At this point, we still do not have a procedure that runs in O(N K) time. In particular, we must find the minimum of
$$\frac{f_i + \sum_{t \in T} \mathrm{KL}(\tilde{w}_{jt}, \psi_i)}{|T|}$$
at each round of marking. Naively this is performed by traversing the sorted lists and accumulating the value of the above score via summation. In the worst case, each round would take a total of O(N K) time across all documents, so if there are R rounds on average across all the documents, the total running time would be O(N KR). However, we can observe that we need not traverse entire sorted lists in general. Consider a fixed document, where we try to find the best set T by traversing all possible sizes of T . We can show that, as we increase the size of T , the value of the score function monotonically decreases until hitting the minimum value, and then monotonically increases afterward. We can formalize the monotonicity of the scoring function as follows:
Proposition 1. Let $s_{ti}$ be the value of the scoring function (4) for the best candidate set T of size t for topic i. If $s_{t-1,i} \le s_{ti}$, then $s_{ti} \le s_{t+1,i}$.
Proof. Recall that the KL-divergence is equal to the negative logarithm of the number of occurrences of the corresponding word token divided by the total number of occurrences of tokens in the topic. Write this as $\log n^i_{\cdot\cdot} - \log c_{i\ell}$, where $n^i_{\cdot\cdot}$ is the number of occurrences of tokens in topic i and $c_{i\ell}$ is the count of the $\ell$-th highest-count word in topic i. Now, by assumption $s_{t-1,i} \le s_{ti}$. Plugging the score functions into this inequality and cancelling the $\log n^i_{\cdot\cdot}$ terms, we have
$$-\frac{1}{t-1}\sum_{\ell=1}^{t-1} \log c_{i\ell} + \frac{f_i}{t-1} \;\le\; -\frac{1}{t}\sum_{\ell=1}^{t} \log c_{i\ell} + \frac{f_i}{t}.$$
Multiplying by t(t − 1) and simplifying yields the inequality
$$f_i + t \log c_{it} \le \sum_{\ell=1}^{t} \log c_{i\ell}.$$
Now, assuming this holds for $s_{t-1,i}$ and $s_{t,i}$, we must show that this inequality also holds for $s_{t,i}$ and $s_{t+1,i}$, i.e., that
$$f_i + (t+1) \log c_{i,t+1} \le \sum_{\ell=1}^{t+1} \log c_{i\ell}.$$
Simple algebraic manipulation and the fact that the counts are sorted, i.e., log c i,t+1 ≤ log c it , shows the inequality to hold.
In words, the above proof demonstrates that, once the scoring function stops decreasing, it will not decrease any further, i.e., the minimum score has been found. Thus, once the score function starts to increase as T gets larger, we can stop and the best score (i.e., the best set T ) for that topic i has been found. We do this for all topics i until we find the overall best set T for marking. Under the mild assumption that the size of the chosen minimizer T is similar (specifically, within a constant factor) to the average size of the best candidate sets T across the other topics (an assumption which holds in practice), then it follows that the total time to find all the sets T takes O(N K) time.
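The practical consequence of Proposition 1 is that, for each topic, one can scan the sorted divergences and stop at the first size at which the score increases; a small sketch (names ours):

```python
import numpy as np

def best_set_score(sorted_divs, f_i):
    """Scan candidate set sizes t = 1, 2, ... over divergences sorted ascending
    and stop as soon as the score (4) stops decreasing (Proposition 1)."""
    best_score, best_t = np.inf, 0
    running = f_i
    for t, d in enumerate(sorted_divs, start=1):
        running += d                    # f_i plus the sum of the t smallest divergences
        score = running / t
        if score >= best_score:
            break                       # by Proposition 1 the score will not decrease again
        best_score, best_t = score, t
    return best_score, best_t
```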
Putting everything together, all the steps of this algorithm combine to cost O(NK) time.
Figure 3: The evolution of the predictive word log-likelihood of the Enron dataset with different initializations: "Random" means random initialization, and "W+R Init" means initializing with the assignment learned using the Word+Refine algorithm.
Table 5: The predictive word log-likelihood on new documents for Enron (K = 100 topics) and NYTimes (K = 100 topics) datasets with fixed α value. "hard" is short for hard predictive word log-likelihood, which is computed using the word-topic assignments inferred by the Word algorithm, "original" is short for original predictive word log-likelihood, which is computed using the document-topic distributions inferred by the sampler, and "KL" is short for symmetric KL-divergence.
approach does well with respect to the hard log-likelihood score but less well on the original log-likelihood score. We omit the results of the Anchor method since it cannot adjust its result on different combinations of α and β values 5 . In Figure 3, we show the evolution of predictive heldout log-likelihood of the Gibbs sampler initialized with the Word+Refine optimized assignment for 3 iterations for the Enron dataset. With these semi-optimized initializations, we also observed significant speed-up compared to random initializations.
Figure 1: Left: Running time comparison per iteration (in secs) of CGS to the facility location improved word algorithm (Word) and local refinement (Refine), on data sets of different sizes. Word/CGS and Refine/CGS refer to the ratio of Word and Refine to CGS. For larger datasets, Word takes roughly 2 Gibbs iterations and Refine takes roughly 1 Gibbs iteration. Right: Comparison of topic reconstruction errors of different algorithms with different sizes of SynthB.
(A) documents sampled from an LDA model with α = 0.04, β = 0.05, with 20 topics and having vocabulary size 2000. Each document has length 150. (B) documents sampled from an LDA model with α = 0.02, β = 0.01, 50 topics and vocabulary size 3000. Each document has length 200.
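Documents of this kind follow the standard LDA generative process (recalled in Appendix A). The sketch below is our own minimal illustration of how such synthetic documents could be generated; the function name and the use of NumPy are our choices, and the default parameters mirror setup (A) only as an example.

```python
import numpy as np

def sample_lda_corpus(n_docs, n_topics=20, vocab=2000, doc_len=150,
                      alpha=0.04, beta=0.05, seed=0):
    """Draw documents from a symmetric-Dirichlet LDA model (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Topic-word distributions psi_i ~ Dir(beta), one per topic.
    psi = rng.dirichlet([beta] * vocab, size=n_topics)
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet([alpha] * n_topics)         # document-topic mixture
        z = rng.choice(n_topics, size=doc_len, p=theta)   # topic of each token
        words = [rng.choice(vocab, p=psi[k]) for k in z]  # word of each token
        docs.append(words)
    return docs
```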
Running Time. See Figure 1 for comparisons of our approach to CGS. The two most expensive
Figure 2: The evolution of topic reconstruction ℓ1 errors of the Gibbs sampler with different initializations: "Random" means random initialization, and "lambda=6" means initializing with the assignment learned using the Word+Refine algorithm with λ = 6 (best viewed in color).
Algorithm 1 Improved Word Assignments for Z
Input: Words: W, Number of topics: K, Topic penalty: λ, Topics: ψ
for every document j do
    Let f_i = λ for all topics i.
    Initialize all word tokens to be unmarked.
    while there are unmarked tokens do
        Pick the topic i and set of unmarked tokens T that minimizes
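A compact sketch of this greedy per-document loop is given below. It is our own illustration, not the authors' code: it assumes a helper like the best_set_for_topic sketch shown earlier, precomputed KL-divergences kl[i][n] of word n to topic i, and it treats a document as a list of token ids. Whether a topic's penalty is paid only the first time it is chosen is our assumption and is marked in the code.

```python
def assign_document(tokens, kl, n_topics, lam):
    """Greedy word-to-topic assignment for one document (illustrative sketch)."""
    f = [lam] * n_topics
    unmarked = set(range(len(tokens)))
    z = [None] * len(tokens)                  # topic assignment per token position
    while unmarked:
        best = None                           # (score, topic, chosen positions)
        for i in range(n_topics):
            cand = sorted(unmarked, key=lambda pos: kl[i][tokens[pos]])
            kls = [kl[i][tokens[pos]] for pos in cand]
            t, score = best_set_for_topic(kls, f[i])
            if best is None or score < best[0]:
                best = (score, i, cand[:t])
        _, i_star, chosen = best
        for pos in chosen:
            z[pos] = i_star
        unmarked -= set(chosen)
        f[i_star] = 0      # assumption: a topic's penalty is charged only once
    return z
```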
Table 1: The NMI scores and Adjusted Rand Index (best results in bold) for word assignments of our algorithms for both synthetic datasets with 5000 documents (top: SynthA, bottom: SynthB).

Table 5 data:

Enron    β = 0.1                       β = 0.01                      β = 0.001
         hard     original   KL        hard     original   KL        hard     original   KL
CGS      -5.932   -8.583     3.899     -5.484   -10.781    7.084     -5.091   -13.296    10.000
W+R      -5.434   -9.843     4.541     -5.147   -11.673    7.225     -4.918   -13.737    9.769

NYT      β = 0.1                       β = 0.01                      β = 0.001
         hard     original   KL        hard     original   KL        hard     original   KL
CGS      -6.594   -9.361     4.374     -6.205   -11.381    7.379     -5.891   -13.716    10.135
W+R      -6.105   -10.612    5.059     -5.941   -12.225    7.315     -5.633   -14.524    9.939
Table 4: Optimized combinatorial topic modeling objective function values for different algorithms with λ = 10.
1. http://psiexp.ss.uci.edu/research/programs data/toolbox.htm
2. http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html
3. http://www.cs.nyu.edu/∼halpern/code.html
4. http://archive.ics.uci.edu/ml/machine-learning-databases/bag-of-words/
5. We also observed that there are 0 entries in the learned topic matrix, which makes it difficult to compute the predictive log-likelihood.
Appendix A  Full Derivation of the SVA Objective

Recall the standard Latent Dirichlet Allocation (LDA) model:
• Choose θ_j ∼ Dir(α), where j ∈ {1, ..., M}.
• Choose ψ_i ∼ Dir(β), where i ∈ {1, ..., K}.
• For each word t in document j:
  - Choose a topic z_jt ∼ Cat(θ_j).
  - Choose a word w_jt ∼ Cat(ψ_{z_jt}).
Here α and β are scalar-valued (i.e., we are using a symmetric Dirichlet distribution). Denote W as the vector denoting all words in all documents, Z as the topic indicators of all words in all documents, θ as the concatenation of all the θ_j variables, and ψ as the concatenation of all the ψ_i variables. Also let N_j be the total number of word tokens in document j. The θ_j vectors are each of length K, the number of topics. The ψ_i vectors are each of length D, the number of words in the dictionary. We can write down the full joint likelihood of the model as p(W, Z, θ, ψ | α, β).

Appendix B  An Efficient Facility Location Algorithm for Improved Word Assignments

In this section, we describe an efficient O(N K) algorithm based on facility location for obtaining the word assignments. Recall the algorithm, given earlier in Algorithm 1. Our first observation is that, for a fixed size of T and a given i, the best choice of T is obtained by selecting the |T| closest tokens to ψ_i in terms of the KL-divergence. Thus, as a first pass, we can obtain the correct points to mark by appropriately sorting KL-divergences of all tokens to all topics, and then searching over all sizes of T and topics i.

Appendix C  Additional Experimental Results

Objective optimization. Table 4 shows the optimized objective function values for all three proposed algorithms. We can see that the Word algorithm significantly reduces the objective value when compared with the Basic algorithm, and the Word+Refine algorithm reduces it further. As pointed out in [34] in the context of other SVA models, the Basic algorithm is very sensitive to initializations. However, this is not the case for the Word and Word+Refine algorithms, and they are quite robust to initializations. From the objective values, the improvement from Word to Word+Refine seems to be marginal. But we will show in the following that the incorporation of the topic refinement is crucial for learning good topic models.

Evolution of the Gibbs Sampler. The Gibbs sampler can easily become trapped in a local optimum and needs many iterations on large data sets, which can be seen from Figure 2. Since our algorithm outputs Z, we can use this assignment as initialization to the sampler. In Figure 2, we also show the evolution of the topic reconstruction ℓ1 error initialized with the Word+Refine optimized assignment for 3 iterations with varying values of λ. With these semi-optimized initializations, we observe more than 5-fold speed-up compared to random initializations.

Topic Reconstruction Error. In the main text, we observed that the Anchor method is the most competitive with Word+Refine on larger synthetic data sets, but that Word+Refine still outperforms Anchor for these larger data sets. We found this to be true as we scale up further; for instance, for 20,000 documents from the SynthB data, Anchor achieves a topic reconstruction score of 0.103 while Word+Refine achieves 0.095.

Log likelihood comparisons on real data. Table 5 contains further predictive log-likelihood results on the Enron and NYTimes data sets. Here we also show results on VB, which also indicate (as expected) that our approach does well with respect to the hard log-likelihood score but less well on the original log-likelihood score.
A spectral algorithm for latent Dirichlet allocation. A Anandkumar, Y Liu, D J Hsu, D P Foster, S M Kakade, NIPS. A. Anandkumar, Y. Liu, D. J. Hsu, D. P. Foster, and S. M. Kakade. A spectral algorithm for latent Dirichlet allocation. In NIPS, pages 917-925, 2012.
A practical algorithm for topic modeling with provable guarantees. S Arora, R Ge, Y Halpern, D Mimno, A Moitra, D Sontag, Y Wu, M Zhu, ICML. S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
Learning topic models-going beyond SVD. S Arora, R Ge, A Moitra, Foundations of Computer Science (FOCS). IEEES. Arora, R. Ge, and A. Moitra. Learning topic models-going beyond SVD. In Foundations of Computer Science (FOCS), pages 1-10. IEEE, 2012.
k-means++: The advantages of careful seeding. D Arthur, S Vassilvitskii, ACM-SIAM Symposium on Discrete Algorithms (SODA). D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2007.
Clustering with Bregman divergences. A Banerjee, S Merugu, I S Dhillon, J Ghosh, Journal of Machine Learning Research. 6A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705-1749, 2005.
A provable SVD-based algorithm for learning topics in dominant admixture corpus. T Bansal, C Bhattacharyya, R Kannan, NIPS. T. Bansal, C. Bhattacharyya, and R. Kannan. A provable SVD-based algorithm for learning topics in dominant admixture corpus. In NIPS, pages 1997-2005, 2014.
Hierarchical topic models and the nested Chinese restaurant process. D M Blei, M I Jordan, T L Griffiths, J B Tenenbaum, NIPS. D. M. Blei, M. I. Jordan, T. L. Griffiths, and J. B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In NIPS, 2004.
Correlated topic models. D M Blei, J D Lafferty, NIPS. D. M. Blei and J. D. Lafferty. Correlated topic models. In NIPS, 2006.
Latent Dirichlet allocation. D M Blei, A Y Ng, M I Jordan, Journal of Machine Learning Research. 34-5D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4-5):993-1022, 2003.
MAD-Bayes: MAP-based asymptotic derivations from Bayes. T Broderick, B Kulis, M I Jordan, ICML. T. Broderick, B. Kulis, and M. I. Jordan. MAD-Bayes: MAP-based asymptotic derivations from Bayes. In ICML, 2013.
Dynamic clustering via asymptotics of the dependent Dirichlet process. T Campbell, M Liu, B Kulis, J How, L Carin, NIPS. T. Campbell, M. Liu, B. Kulis, J. How, and L. Carin. Dynamic clustering via asymptotics of the dependent Dirichlet process. In NIPS, 2013.
Indexing by latent semantic analysis. S Deerwester, S Dumais, T Landauer, G Furnas, R Harshman, Journal of the American Society of Information Science. 416S. Deerwester, S. Dumais, T. Landauer, G. Furnas, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407, 1990.
Information theoretic clustering of sparse co-occurrence data. I S Dhillon, Y Guan, IEEE International Conferece on Data Mining (ICDM). I. S. Dhillon and Y. Guan. Information theoretic clustering of sparse co-occurrence data. In IEEE International Conferece on Data Mining (ICDM), 2003.
Iterative clustering of high dimensioanl text data augmented by local search. I S Dhillon, Y Guan, J Kogan, IEEE International Conference on Data Mining (ICDM). I. S. Dhillon, Y. Guan, and J. Kogan. Iterative clustering of high dimensioanl text data augmented by local search. In IEEE International Conference on Data Mining (ICDM), 2002.
Finding scientific topics. T L Griffiths, M Steyvers, Proceedings of the National Academy of Sciences. 101T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235, 2004.
Probabilistic latent semantic indexing. T Hofmann, Proc. SIGIR. SIGIRT. Hofmann. Probabilistic latent semantic indexing. In Proc. SIGIR, 1999.
JUMP-means: Small-variance asymptotics for Markov jump processes. J H Huggins, K Narasimhan, A Saeedi, V K Mansinghka, ICML. J. H. Huggins, K. Narasimhan, A. Saeedi, and V. K. Mansinghka. JUMP-means: Small-variance asymptotics for Markov jump processes. In ICML, 2015.
Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. K Jain, M Mahdian, E Markakis, A Saberi, V V Vazirani, Journal of the ACM. 506K. Jain, M. Mahdian, E. Markakis, A. Saberi, and V. V. Vazirani. Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. Journal of the ACM, 50(6):795-824, 2003.
Small-variance asymptotics for exponential family Dirichlet process mixture models. K Jiang, B Kulis, M I Jordan, NIPS. K. Jiang, B. Kulis, and M. I. Jordan. Small-variance asymptotics for exponential family Dirichlet process mixture models. In NIPS, 2012.
The Hungarian method for the assignment problem. H W Kuhn, Naval Research Logistics Quarterly. 21-2H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97, 1955.
Revisiting k-means: New algorithms via Bayesian nonparametrics. B Kulis, M I Jordan, ICML. B. Kulis and M. I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In ICML, 2012.
Bayesian hierarchical clustering with exponential family: Small-variance asymptotics and reducibility. J Lee, S Choi, Artificial Intelligence and Statistics (AISTATS). J. Lee and S. Choi. Bayesian hierarchical clustering with exponential family: Small-variance asymptotics and reducibility. In Artificial Intelligence and Statistics (AISTATS), 2015.
Reducing the sampling complexity of topic models. A Q Li, A Ahmed, S Ravi, A J Smola, ACM SIGKDD. ACMA. Q. Li, A. Ahmed, S. Ravi, and A. J. Smola. Reducing the sampling complexity of topic models. In ACM SIGKDD, pages 891-900. ACM, 2014.
Rethinking LDA: moment matching for discrete ica. A Podosinnikova, F Bach, S Lacoste-Julien, NIPS. A. Podosinnikova, F. Bach, and S. Lacoste-Julien. Rethinking LDA: moment matching for discrete ica. In NIPS, pages 514-522, 2015.
EM algorithms for PCA and SPCA. S Roweis, NIPS. S. Roweis. EM algorithms for PCA and SPCA. In NIPS, 1997.
Small-variance asymptotics for hidden Markov models. A Roychowdhury, K Jian, B Kulis, NIPS. A. Roychowdhury, K. Jian, and B. Kulis. Small-variance asymptotics for hidden Markov models. In NIPS, 2013.
A discriminative latent variable model for online clustering. R Samdani, K-W Chang, D Roth, ICML. R. Samdani, K-W. Chang, and D. Roth. A discriminative latent variable model for online clustering. In ICML, 2014.
Hierarchical Dirichlet processes. Y W Teh, M I Jordan, M J Beal, D M Blei, Journal of the American Statistical Association (JASA). 101476Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association (JASA), 101(476):1566-1581, 2006.
A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. Y W Teh, D Newman, M Welling, NIPS. Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, 2006.
Restricted Bayes optimal classifiers. S Tong, D Koller, AAAI. S. Tong and D. Koller. Restricted Bayes optimal classifiers. In AAAI, 2000.
Spatial latent Dirichlet allocation. X Wang, E Grimson, NIPS. X. Wang and E. Grimson. Spatial latent Dirichlet allocation. In NIPS, 2007.
Small-variance asymptotics for Dirichlet process mixtures of SVMs. Y Wang, J Zhu, Proc. Twenty-Eighth AAAI Conference on Artificial Intelligence. Twenty-Eighth AAAI Conference on Artificial IntelligenceY. Wang and J. Zhu. Small-variance asymptotics for Dirichlet process mixtures of SVMs. In Proc. Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
DP-space: Bayesian nonparametric subspace clustering with small-variance asymptotics. Y Wang, J Zhu, ICML. Y. Wang and J. Zhu. DP-space: Bayesian nonparametric subspace clustering with small-variance asymptotics. In ICML, 2015.
A convex exemplar-based approach to MAD-Bayes Dirichlet process mixture models. I E H Yen, X Lin, K Zhang, P Ravikumar, I S Dhillon, ICML. I. E. H. Yen, X. Lin, K. Zhang, P. Ravikumar, and I. S. Dhillon. A convex exemplar-based approach to MAD-Bayes Dirichlet process mixture models. In ICML, 2015.
| []
|
[
"Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD",
"Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD",
"Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD",
"Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD"
]
| [
"M Wobisch [email protected] ",
"K Rabbertz [email protected] ",
"\nDepartment of Physics\nLouisiana Tech University\n600 Dan Reneau DrRustonLAUSA\n",
"\nInstitut für Experimentelle Kernphysik\nKIT\n6980, D-76128KarlsruhePostfachGermany\n",
"M Wobisch [email protected] ",
"K Rabbertz [email protected] ",
"\nDepartment of Physics\nLouisiana Tech University\n600 Dan Reneau DrRustonLAUSA\n",
"\nInstitut für Experimentelle Kernphysik\nKIT\n6980, D-76128KarlsruhePostfachGermany\n"
]
| [
"Department of Physics\nLouisiana Tech University\n600 Dan Reneau DrRustonLAUSA",
"Institut für Experimentelle Kernphysik\nKIT\n6980, D-76128KarlsruhePostfachGermany",
"Department of Physics\nLouisiana Tech University\n600 Dan Reneau DrRustonLAUSA",
"Institut für Experimentelle Kernphysik\nKIT\n6980, D-76128KarlsruhePostfachGermany"
]
| []
| We point out an inconsistency in perturbative QCD predictions previously used for dijet azimuthal decorrelations for azimuthal angles of ∆φ dijet < 2π/3 between the two jets. We show how the inconsistency arises and how the calculations can be modified to provide more accurate results that exhibit a smaller scale dependence and give a better description of the data than the inconsistent results. We also explain how the quality of the predictions strongly depends on a perceivedly minor detail in the definition of the dijet phase space and give recommendations for future measurements. | 10.1007/jhep12(2015)024 | [
"https://arxiv.org/pdf/1505.05030v1.pdf"
]
| 119,213,903 | 1505.05030 | f44e106cb36908c1aab3099bbd3f4befd95f9b76 |
Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD
19 May 2015
M Wobisch [email protected]
K Rabbertz [email protected]
Department of Physics
Louisiana Tech University
600 Dan Reneau DrRustonLAUSA
Institut für Experimentelle Kernphysik
KIT
6980, D-76128KarlsruhePostfachGermany
Dijet azimuthal decorrelations for ∆φ dijet < 2π/3 in perturbative QCD
19 May 2015. Prepared for submission to JHEP. Jets, Hadronic Colliders.
We point out an inconsistency in perturbative QCD predictions previously used for dijet azimuthal decorrelations for azimuthal angles of ∆φ dijet < 2π/3 between the two jets. We show how the inconsistency arises and how the calculations can be modified to provide more accurate results that exhibit a smaller scale dependence and give a better description of the data than the inconsistent results. We also explain how the quality of the predictions strongly depends on a perceivedly minor detail in the definition of the dijet phase space and give recommendations for future measurements.
Introduction
Measurements of dijet azimuthal decorrelations in hadron-hadron collisions provide a unique testing ground for the predictions of perturbative quantum chromodynamics (pQCD). The dijet azimuthal decorrelation studies the production rates of dijet events as a function of the azimuthal angular separation between the two jets in an event that define the dijet system, ∆φ dijet = |φ jet1 − φ jet2 |. The measured quantity, labeled P in this article and originally proposed by the DØ collaboration [1], is the dijet differential cross section, dσ dijet /d∆φ dijet , normalized by the inclusive dijet cross section, σ dijet , integrated over ∆φ dijet :
P = (1/σ dijet ) · dσ dijet /d∆φ dijet .
(1.1)
The range of kinematically accessible values in ∆φ dijet is indicated in figure 1 for processes with final states of different jet multiplicities. In 2 → 2 processes ∆φ dijet has always the largest possible value of ∆φ dijet = π (figure 1 a). If ∆φ dijet is significantly below π, then the quantity P is probing hard 2 → 3 and 2 → 4 processes, i.e. three-jet and four-jet production. Following the DØ measurement, the quantity P was also measured by the CMS and ATLAS collaborations [2,3]. In all measurements the data are fairly well described by the theory predictions at next-to-leading order (NLO) pQCD for 3π/4 ∆φ dijet < π. For smaller ∆φ dijet , in particular for ∆φ dijet < 2π/3, the theory predictions exhibit a large renormalization scale dependence and lie significantly below the data.
In this article, we focus on the comparison of fixed-order pQCD predictions and data in the kinematic region of ∆φ dijet < 2π/3. In section 2 we introduce and compare the phase space definitions in the different analyses and discuss their effects on the kinematic constraints in 2 → 3 processes. In section 3 we show that the pQCD calculations by two of the experimental collaborations [1,2] for the region of ∆φ dijet < 2π/3 are inconsistent, and demonstrate how a correct treatment provides pQCD predictions with a reduced scale dependence. The results of these calculations also give a better description of the experimental data, as shown in section 4. In section 5 we discuss how a particular choice in the selection of the dijet phase space in the third experimental analysis [3] renders fixed-order pQCD predictions less accurate and how this can be improved in future measurements by a small modification in the dijet phase space definition.
Phase space and kinematic constraints
For a given process (e.g. pp or pp collisions) and center-of-mass energy, the measured quantity P , defined in equation (1.1), depends on additional choices, including the jet algorithm with its parameters, and the requirements on the jet rapidities y and the transverse jet momenta p T with respect to the beam direction. The initial jet selection may be carried out in a limited y region, with |y| < y initial (where y initial can be adapted to the detector acceptance). The dijet system is then defined by the two jets with the highest p T inside this region; here, these are labeled "jet1" and "jet2". The final phase space for the rapidities y 1,2 of jet1 and jet2 is then further constrained by |y 1,2 | < y final . Furthermore, the p T of jet2 is required to be above a given threshold, p Tmin , and the analysis results are presented in different regions of the p T of jet1, p Tmax . An overview of the choices for these parameters in the analyses by the DØ, CMS, and ATLAS experiments is given in table 1.
The main difference between the three scenarios regarding the scope of this article is the choice of y initial . In the DØ scenario, the y region for the initial jet selection is unlimited (y initial = ∞), while the ATLAS and CMS scenarios are limited to y initial = 2.8 and 5.0, respectively [3,4].

Figure 1. Sketches of the azimuthal angular separation ∆φ dijet between the two jets leading in p T in an event for 2 → 2, 2 → 3, and 2 → 4 processes. Also indicated is the kinematically accessible range in ∆φ dijet for the three configurations: ∆φ dijet = π, 2π/3 ≤ ∆φ dijet ≤ π, and 0 ≤ ∆φ dijet ≤ π, respectively.

As a consequence of the choices for y initial and p Tmin , the three scenarios then have different kinematic constraints for 2 → 3 processes, as explained below:
• Kinematic constraints for an unlimited y region, y initial = ∞ For y initial = ∞, the selected jets, jet1 and jet2, are always the two jets leading in p T of the entire event. This selection criterion results in the kinematic constraint that the smallest possible ∆φ dijet value in a 2 → 3 process (i.e. in a three-jet final state) is ∆φ dijet = 2π/3 (cf. figure 1 b), while angles of ∆φ dijet < 2π/3 are only accessible in final states with four or more jets (cf. figure 1 c). 1 Therefore, for y initial = ∞, the dijet cross section for ∆φ dijet < 2π/3 is a four-jet quantity, meaning that the lowest order pQCD contributions are from the four-jet tree-level matrix elements.
• Kinematic constraints for a limited y region, y initial < ∞ If the y region for the initial jet selection is limited, it is possible that the two jets, selected for the dijet system, are not the two jets leading in p T of the whole event. Table 2 gives an example for the ATLAS scenario, in which the leading jet in the event has |y| > y initial . In this case, the dijet system is made of the second and third leading jets, which are the two highest p T jets inside the limited y region. Since there is no kinematic constraint for the azimuthal angular separation between the second and third leading jet, the region ∆φ dijet < 2π/3 is also populated by three-jet final states. If such configurations are not prohibited by other phase space constraints, the dijet cross section for ∆φ dijet < 2π/3 is a three-jet quantity.
It depends on the requirements on y initial , (p Tmax / √ s), and (p Tmin / √ s), whether a leading jet is kinematically allowed outside the region |y| < y initial and, as a consequence, three-jet configurations can populate the region of ∆φ dijet < 2π/3. This can be tested Table 2. The topology of an exclusive three-jet event, with the jet variables p T , y, and φ (left) and the event quantities ∆φ 2,3 , three-jet invariant mass M 3-jet , and the momentum fractions x 1 and x 2 for a center-of-mass energy of √ s = 7 TeV. In this event, the highest p T jet is produced at large rapidity. If the dijet selection is restricted to jets with |y| < y initial = 2.8 (as in the ATLAS scenario, see text), the selected dijet system does not include the highest p T jet. This enables the azimuthal angular separation of the jets in the dijet system (here, ∆φ dijet is determined by the azimuthal angle between the second and the third jet, ∆φ 2,3 ) to fall below the limit of ∆φ dijet = 2π/3. using a cross section calculation based on tree-level 2 → 3 matrix elements as e.g. in NLO-Jet++ [5,6]. We have used NLOJet++ to compute the dijet differential cross section dσ dijet /d∆φ dijet for all three scenarios. The results for the ATLAS scenario are shown in figure 2 and it is observed that up to and including the p Tmax region of 400-500 GeV, the dijet differential cross section dσ dijet /d∆φ dijet receives non-zero contributions at ∆φ dijet < 2π/3 from three-jet final states. Therefore, in the ATLAS scenario, dσ dijet /d∆φ dijet is a three-jet quantity for all ∆φ dijet in the p Tmax regions with p Tmax < 500 GeV. Only in the higher p Tmax regions it becomes a four-jet quantity. In those regions, however, ATLAS has not published any measurement for ∆φ dijet < 2π/3.
                            jet 1        jet 2        jet 3
 p T (GeV)                  405          401          101
 y                          2.805        −0.75        −0.75
 φ (radians)                0.000 · π    0.920 · π    1.448 · π

 ∆φ 2,3 (radians)           0.528 · π
 M 3-jet (TeV)              2.745
 x 1 (for √s = 7 TeV)       0.990
 x 2 (for √s = 7 TeV)       0.155
Like ATLAS, the CMS scenario also has a limited y region for the initial jet selection, with lower requirements for p Tmax and p Tmin , but with a larger value of y initial = 5.0. We have computed dσ dijet /d∆φ dijet for the CMS scenario as well and find that in all p Tmax regions the 2 → 3 tree-level predictions for dσ dijet /d∆φ dijet are zero for ∆φ dijet < 2π/3. In other words, in both the CMS and the DØ scenarios dσ dijet /d∆φ dijet is a four-jet quantity for ∆φ dijet < 2π/3. We summarize our findings as follows:
• The denominator of P , σ dijet , is the inclusive dijet cross section, which is a two-jet quantity in all scenarios.
• For ∆φ dijet ≥ 2π/3, the numerator of P , dσ dijet /d∆φ dijet , is a three-jet quantity in all scenarios.
• For ∆φ dijet < 2π/3, the numerator of P is a four-jet quantity, if the initial y region is unlimited (y initial = ∞) as in the DØ scenario, or if the y initial and p T requirements prohibit the two jets with the highest p T 's in an event from having |y| > y initial , as in the CMS scenario.
• If the y initial and p T requirements allow one of the two jets leading in p T to have |y| > y initial , then the numerator of P is a three-jet quantity for all ∆φ dijet . This is the case in the ATLAS scenario for the p Tmax regions up to 400-500 GeV in p Tmax .
Perturbative QCD calculations for cross section ratios
The pQCD prediction for a ratio R of two cross sections σ A and σ B in a given relative order of α s (e.g. LO or NLO) can be computed from the ratio of the pQCD predictions for σ A and σ B . For this purpose, both must be computed at the same relative order, which is not necessarily the same absolute order in α s . A LO result is then given by R_LO = σ_A^LO / σ_B^LO and a NLO result by R_NLO = σ_A^NLO / σ_B^NLO . If numerator and denominator are calculated in different relative orders, cancellation effects between theoretical uncertainties can be compromised, leading to an artificially increased renormalization scale dependence of the results, as discussed with respect to jet shapes in sections 3.1 and 4 of reference [7].
For two-jet quantities, the LO and NLO pQCD predictions are given by calculations to order O(α 2 s ) and O(α 3 s ), respectively. For each additional jet required for the final state, the respective powers of α s increase by one, so that for example the LO (NLO) predictions for three-jet quantities are given by pQCD calculations to order O(α 3 s ) (O(α 4 s )). Combined with the findings from section 2, we obtain the rules for the calculation of the LO and NLO results for the quantity P in the three scenarios and in the different regions of ∆φ dijet . These rules are listed in table 3 and compared to the computational procedures applied in the experimental publications [1][2][3]. The theory results published by DØ and CMS for ∆φ dijet < 2π/3 and labeled "NLO" in references [1,2] are inconsistent, because they mix relative orders for the numerator (LO) and denominator (NLO). A consistent calculation, with numerator and denominator evaluated in the same relative order, yields the correct LO result for P below ∆φ dijet = 2π/3. Alternatively, the correct NLO results at ∆φ dijet < 2π/3 can be obtained by replacing the four-jet LO (O(α 4 s )) results by results based on the four-jet matrix elements at NLO pQCD (O(α 5 s )), which have become available in the last years [8,9].

Table 3. Correspondence between absolute orders in α s in the calculations of numerator and denominator and the relative order in the quantity P . The right column comments on the calculations used in the experimental publications. For the DØ and CMS scenarios at ∆φ dijet < 2π/3, the LO result for P requires the denominator at O(α 2 s ) and the numerator at O(α 4 s ), while the NLO result requires O(α 3 s ) and O(α 5 s ), respectively; the DØ and CMS publications instead combined a LO numerator (O(α 4 s )) with a NLO denominator (O(α 3 s )), i.e. they are inconsistent, using mixed relative orders.
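The power counting behind these rules can be summarized in a couple of lines. The snippet below is purely illustrative (our own helper, not part of any published code); it maps the number of jets required by an observable and the relative order (LO or NLO) to the absolute power of α s.

```python
def alpha_s_power(n_jets, relative_order="LO"):
    """Absolute power of alpha_s for an n-jet quantity at LO or NLO."""
    # A two-jet quantity starts at alpha_s^2 at LO; each extra jet adds one
    # power, and NLO adds one more power on top of the LO result.
    return n_jets + (0 if relative_order == "LO" else 1)

# Example: the numerator of P for dphi_dijet < 2pi/3 in the DO and CMS scenarios
# is a four-jet quantity, so LO is O(alpha_s^4) and NLO is O(alpha_s^5).
assert alpha_s_power(4, "LO") == 4 and alpha_s_power(4, "NLO") == 5
```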
Results
Following the prescriptions in table 3 we have computed the LO and NLO pQCD predictions for P in the DØ, CMS, and ATLAS scenarios in the different ∆φ dijet regions. For comparison, we also derive the inconsistent "mixed-order" results for P as published by DØ and CMS.
All calculations are made in the MS-scheme [10] and for five massless quark flavors, using NLOJet++ [5,6] interfaced to fastNLO [11,12]. The results are obtained for renormalization and factorization scales of µ R = µ F = p Tmax , with the MSTW2008NLO [13] parameterization of the parton distribution functions of the proton, and with α s evolved from a value of α s (M Z ) = 0.120 according to the two-loop solution of the renormalization group equation. The uncertainty due to the scale dependence is computed from the variations of the ratio P for correlated variations of the scales in the numerator and denominator of µ R = µ F = p Tmax /2 and µ R = µ F = 2 p Tmax . The ATLAS collaboration has published non-perturbative corrections [14,15], which are applied to the pQCD results to get the final theory prediction. These corrections are typically below 1% and never larger than 3%. The DØ and CMS collaborations have not provided non-perturbative corrections. In these cases, the pQCD results are directly compared to the data. 2 The theoretical calculations in this study differ slightly from the calculations used in the CMS and DØ publications due to different choices of the parton distribution functions and α s (M Z ). Furthermore, the DØ collaboration chose different renormalization and factorization scales of µ R = µ F = p Tmax /2, and the CMS collaboration applied non-perturbative corrections. For the purpose of the following discussion, these differences are negligible.
The experimental results from the DØ, CMS, and ATLAS measurements are displayed in figure 3 over the entire ∆φ dijet range. The data are compared to theory at NLO or LO, depending on the ∆φ dijet range and the scenario. Over the whole range of p Tmax and ∆φ dijet , the theoretical predictions are in agreement with the data, except for the ATLAS data at small ∆φ dijet . The region of small ∆φ dijet , including the transition at ∆φ dijet = 2π/3 and the effects of the inconsistent mixed-order predictions, are further investigated in the following. Figure 4 shows the ratios of data over the different theory predictions for ∆φ dijet 3π/4. The ratios are computed for the NLO results, the LO results, and the inconsistent results from mixed relative orders. Also shown are the uncertainty bands due to the scale dependence of the different theoretical calculations. For ∆φ dijet > 2π/3, in all scenarios the NLO pQCD predictions are compared to the data. For ∆φ dijet > 3π/4, these give a good description of the data within scale uncertainties, which are below 5-10%. In the range 2π/3 < ∆φ dijet < 3π/4, the O(α 4 s ) (i.e. three-jet NLO) calculation for the numerator is running out of phase space for three-jet final states as ∆φ dijet → 2π/3. This causes the O(α 4 s ) calculation to effectively become a four-jet LO calculation. In this ∆φ dijet range the NLO prediction still describes the data, but with an increasing scale dependence of up to 30% as ∆φ dijet → 2π/3.
For the CMS and DØ scenarios at ∆φ dijet < 2π/3, we first focus on the inconsistent mixed-order calculations as published by the experiments. Figure 4 shows that over most of the range (and in particular towards lower ∆φ dijet ) these predictions are significantly below the data even outside their large scale dependence, and they do not describe the ∆φ dijet dependence of the data. Compared to the inconsistent mixed-order calculations, the correct LO predictions have a significantly reduced scale dependence, and they give a much better description of the data. While they still do not reproduce the ∆φ dijet dependence, almost all individual data points agree with the LO prediction within the reduced scale uncertainty.
Although, for the ATLAS scenario the pQCD predictions for ∆φ dijet < 2π/3 are technically still of NLO, their scale dependence is as large as that of the mixed-order predictions for the CMS scenario, and the description of the data by both are equally poor.
Recommendations for future measurements
In section 3 we pointed out that for the numerator of P in the ATLAS scenario the threejet NLO cross section calculations formally are of NLO also for ∆φ dijet < 2π/3. The results presented in section 4, however, demonstrate that these NLO predictions exhibit a larger scale dependence and that they give a worse description of the data than the LO predictions for the DØ and CMS results. The difference between the ATLAS and the DØ and CMS scenarios was traced back to the choice of y initial in the dijet selection as explained in section 2. In contrast to the DØ and CMS scenarios, the kinematic constraints in the ATLAS scenario do allow 2 → 3 processes to give small, but non-zero contributions to the dijet cross section for ∆φ dijet < 2π/3. Therefore, in this ∆φ dijet range, while formally being a NLO pQCD prediction, the O(α 4 s ) calculation for the numerator effectively is only a LO prediction, since the O(α 3 s ) terms contribute less than one percent. This "formally NLO but effectively LO" calculation for the numerator exhibits the typical large scale dependence of a LO calculation while the NLO predictions for the denominator have a reduced scale dependence, as typical for NLO calculations. As a consequence, the NLO prediction for the ratio P has a scale dependence, which is similar to that of the mixed-order calculations and larger than that of the LO predictions for the DØ and CMS scenarios.
Therefore, we strongly recommend that future measurements of dijet azimuthal decorrelations use values of y initial that, together with the p Tmin and p Tmax requirements, do not leave any phase space for 2 → 3 processes below ∆φ dijet = 2π/3. Technically, this can be investigated by using a phase space generator or a three-jet pQCD LO calculation for the numerator of P .
Summary and conclusion
Measurements of dijet azimuthal decorrelations at hadron colliders continue to be a testing ground for pQCD predictions at higher orders, beyond what is probed in inclusive jet and inclusive dijet production. In particular in the phase space region of ∆φ dijet < 2π/3, dijet azimuthal correlations are sensitive to the dynamics of final states with four or more jets. In all previous publications of azimuthal decorrelations, based on the quantity P = (1/σ dijet ) · (dσ dijet /d∆φ dijet ), this region was poorly described by theoretical predictions. In this article we have identified two reasons for this shortcoming.
In the publications by DØ [1] and CMS [2], the poor theoretical description of the data is related to the inconsistent mixing of different relative orders in α s in the predictions for the ratio P . We have performed a consistent LO calculation by computing both, numerator and denominator, at LO. This correct LO pQCD prediction not only exhibits a smaller scale dependence, but also gives a better description of the experimental data for ∆φ dijet < 2π/3.
The improvement due to the consistent LO calculation can, however, only be achieved for definitions of the dijet phase space that ensure the two jets of the dijet system to be also the two leading p T jets in the events. We strongly recommend for future measurements of dijet azimuthal decorrelations at small ∆φ dijet to perform the initial dijet selection accordingly.
If this is taken into account, the future usage of four-jet NLO calculations will provide NLO pQCD predictions for the whole ∆φ dijet range, extending precision phenomenology for dijet azimuthal decorrelations to the region ∆φ dijet < 2π/3. Since in this ∆φ dijet region the quantity P is proportional to α 2 s , future measurements with higher statistical precision can also be used for novel α s determinations. This recommendation also applies to measurements of dijet azimuthal decorrelations based on the quantity R ∆φ [17,18] when this is measured for ∆φ max ≤ 2π/3.
Figure 2. The pQCD predictions of order O(α 3 s ) for the dijet differential cross section dσ/d∆φ dijet , as a function of ∆φ dijet in different regions of p Tmax for all analysis bins of the ATLAS measurement. The figure demonstrates that the O(α 3 s ) contributions to bins with ∆φ dijet < 2π/3 and p Tmax < 500 GeV are small but non-zero.
Figure 3. Measurements of dijet azimuthal decorrelations at hadron colliders from the DØ, CMS, and ATLAS experiments (from left to right) are displayed as a function of the azimuthal opening angle ∆φ dijet of the dijet system for different requirements of the leading jet p T (different markers). The measurements are compared to theoretical predictions based on NLO (solid lines) or LO pQCD (dashed lines), depending on whether the measured quantity is a three-jet or four-jet variable, respectively. The scale dependence of the pQCD calculation is indicated by the shaded areas.
Figure 4. Ratios of data from different experiments (columns) to fixed-order predictions as a function of ∆φ dijet , from low p Tmax (bottom) to high p Tmax (top). The ratios are shown in different regions of ∆φ dijet for the pQCD predictions at NLO (open circles) and LO (full circles), and also for the case of mixing different orders in numerator and denominator (triangles) for ∆φ dijet < 2π/3. For better visibility the full circles have been slightly shifted towards smaller values of ∆φ dijet . The scale dependence of the different pQCD calculations is indicated by the corresponding lines.
Table 1. Summary of the parameters defining the dijet phase space in the DØ, CMS, and ATLAS measurements of dijet azimuthal decorrelations [1][2][3]. Variables are defined in the text.

                    Experiment (reaction and center-of-mass energy)
 Parameter          DØ (pp̄, 1.96 TeV)     CMS (pp, 7 TeV)     ATLAS (pp, 7 TeV)
 jet algorithm      Run II cone            anti-k t            anti-k t
 jet radius         R cone = 0.7           R = 0.5             R = 0.6
 y initial          ∞                      5.0                 2.8
 y final            0.5                    1.1                 0.8
 p Tmin             40 GeV                 30 GeV              100 GeV
 p Tmax ranges      75-100 GeV             80-110 GeV          110-160 GeV
                    100-130 GeV            110-140 GeV         160-210 GeV
                    130-180 GeV            140-200 GeV         210-260 GeV
                    >180 GeV               200-300 GeV         260-310 GeV
                                           >300 GeV            310-400 GeV
                                                               400-500 GeV
                                                               500-600 GeV
                                                               600-800 GeV
                                                               >800 GeV
An event with exactly three jets can have ∆φ dijet = 2π/3 only in a "Mercedes Star" configuration, where the jets have pT1 = pT2 = pT3 and ∆φ1,2 = ∆φ1,3 = ∆φ2,3 = 2π/3. If the two jets leading in pT in a three-jet event (with pT1 ≥ pT2 ≥ pT3) had ∆φ dijet < 2π/3, the vector sum of their transverse momenta could only be balanced, if the third jet had pT3 > pT2, which would, however, contradict the assumption that pT2 ≥ pT3.
In reference[16] non-perturbative corrections for the DØ results are shown to be typically below 2% and never larger than 4%. In the CMS publication[2] the non-perturbative corrections are quoted to vary between −13% at ∆φ dijet = π/2 and +4% at ∆φ dijet = π.
Acknowledgments

We thank our colleagues in the ATLAS, CMS, and D0 collaborations for many fruitful discussions. The work of M.W. is supported by grants DE-FG02-10ER46723 and DE-SC0009859 from the U.S. Department of Energy. M.W. also wishes to thank the Louisiana Board of Regents Support Fund for the support through the Eva J. Cunningham Endowed Professorship.
Measurement of dijet azimuthal decorrelations at central rapidities in pp collisions at √ s = 1.96 TeV. V Abazov, D0 Collaborationhep-ex/0409040Phys. Rev. Lett. 94221801D0 Collaboration, V. Abazov et al., Measurement of dijet azimuthal decorrelations at central rapidities in pp collisions at √ s = 1.96 TeV, Phys. Rev. Lett. 94 (2005) 221801, [hep-ex/0409040].
Dijet azimuthal decorrelations in pp collisions at √ s = 7 TeV. V Khachatryan, CMS CollaborationarXiv:1101.5029Phys. Rev. Lett. 106122003CMS Collaboration, V. Khachatryan et al., Dijet azimuthal decorrelations in pp collisions at √ s = 7 TeV, Phys. Rev. Lett. 106 (2011) 122003, [arXiv:1101.5029].
Measurement of dijet azimuthal decorrelations in pp collisions at √ s = 7 TeV. G Aad, ATLAS CollaborationarXiv:1102.2696Phys. Rev. Lett. 106ATLAS Collaboration, G. Aad et al., Measurement of dijet azimuthal decorrelations in pp collisions at √ s = 7 TeV, Phys. Rev. Lett. 106 (2011) 172002, [arXiv:1102.2696].
. K Rabbertz, private communicationK. Rabbertz. private communication.
Next-to-leading order calculation of three jet observables in hadron hadron collision. Z Nagy, hep-ph/0307268Phys. Rev. D. 68Z. Nagy, Next-to-leading order calculation of three jet observables in hadron hadron collision, Phys. Rev. D 68 (2003) 094002, [hep-ph/0307268].
Three jet cross-sections in hadron hadron collisions at next-to-leading order. Z Nagy, hep-ph/0110315Phys. Rev. Lett. 88122003Z. Nagy, Three jet cross-sections in hadron hadron collisions at next-to-leading order, Phys. Rev. Lett. 88 (2002) 122003, [hep-ph/0110315].
Jet shapes in hadron collisions: Higher orders, resummation and hadronization. M H Seymour, hep-ph/9707338Nucl. Phys. B. 513M. H. Seymour, Jet shapes in hadron collisions: Higher orders, resummation and hadronization, Nucl. Phys. B 513 (1998) 269, [hep-ph/9707338].
Four-jet production at the large hadron collider at next-to-leading order in QCD. Z Bern, G Diana, L Dixon, F Cordero, S Höche, arXiv:1112.3940Phys. Rev. Lett. 10942001Z. Bern, G. Diana, L. Dixon, F. Febres Cordero, S. Höche, et al., Four-jet production at the large hadron collider at next-to-leading order in QCD, Phys. Rev. Lett. 109 (2012) 042001, [arXiv:1112.3940].
NLO QCD corrections to multi-jet production at the LHC with a centre-of-mass energy of √ s = 8 TeV. S Badger, B Biedermann, P Uwer, V Yundin, arXiv:1209.0098Phys. Lett. B. 718S. Badger, B. Biedermann, P. Uwer, and V. Yundin, NLO QCD corrections to multi-jet production at the LHC with a centre-of-mass energy of √ s = 8 TeV, Phys. Lett. B 718 (2013) 965, [arXiv:1209.0098].
Deep inelastic scattering beyond the leading order in asymptotically free gauge theories. W A Bardeen, A Buras, D Duke, T Muta, Phys. Rev. D. 183998W. A. Bardeen, A. Buras, D. Duke, and T. Muta, Deep inelastic scattering beyond the leading order in asymptotically free gauge theories, Phys. Rev. D 18 (1978) 3998.
fastNLO: Fast pQCD calculations for PDF fits. T Kluge, K Rabbertz, M Wobisch, hep-ph/060928514th International Workshop on Deep-Inelastic Scattering (DIS 2006). Tsukuba, Japan483T. Kluge, K. Rabbertz, and M. Wobisch, fastNLO: Fast pQCD calculations for PDF fits, in 14th International Workshop on Deep-Inelastic Scattering (DIS 2006), (Tsukuba, Japan, 20-24 Apr 2006), p. 483, 2006. hep-ph/0609285.
New features in version 2 of the fastNLO project. D Britzger, K Rabbertz, F Stober, M Wobisch, arXiv:1208.364120th International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS 2012). Bonn, Germany217D. Britzger, K. Rabbertz, F. Stober, and M. Wobisch, New features in version 2 of the fastNLO project, in 20th International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS 2012), (Bonn, Germany, March 26-30), p. 217, 2012. arXiv:1208.3641.
Parton distributions for the LHC. A D Martin, W J Stirling, R S Thorne, G Watt, arXiv:0901.0002Eur. Phys. J. C. 63A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Parton distributions for the LHC, Eur. Phys. J. C 63 (2009) 189, [arXiv:0901.0002].
The durham-RAL high-energy physics databases: HEPDATA. M Whalley, Comput. Phys. Commun. 57M. Whalley, The durham-RAL high-energy physics databases: HEPDATA, Comput. Phys. Commun. 57 (1989) 536-537.
A Buckley, M Whalley, arXiv:1006.0517Hepdata reloaded: Reinventing the HEP data archive. 201067A. Buckley and M. Whalley, Hepdata reloaded: Reinventing the HEP data archive, PoS ACAT2010 (2010) 067, [arXiv:1006.0517].
M Wobisch, D0 Collaborationhep-ex/0411025Recent run II QCD results from D0, AIP Conf. Proc. 753D0 Collaboration, M. Wobisch, Recent run II QCD results from D0, AIP Conf. Proc. 753 (2005) 92, [hep-ex/0411025].
A new quantity for studies of dijet azimuthal decorrelations. M Wobisch, K Chakravarthula, R Dhullipudi, L Sawyer, M Tamsett, arXiv:1211.6773JHEP. 17201M. Wobisch, K. Chakravarthula, R. Dhullipudi, L. Sawyer, and M. Tamsett, A new quantity for studies of dijet azimuthal decorrelations, JHEP 01 (2013) 172, [arXiv:1211.6773].
Measurement of the combined rapidity and p t dependence of dijet azimuthal decorrelations in pp collisions at √ s = 1.96 TeV. V M Abazov, D0 CollaborationarXiv:1212.1842Phys. Lett. B. 721D0 Collaboration, V. M. Abazov et al., Measurement of the combined rapidity and p t dependence of dijet azimuthal decorrelations in pp collisions at √ s = 1.96 TeV, Phys. Lett. B 721 (2013) 212, [arXiv:1212.1842].
| []
|
[
"GRAND LEBESGUE NORM ESTIMATION FOR BINARY RANDOM VARIABLES, with applications",
"GRAND LEBESGUE NORM ESTIMATION FOR BINARY RANDOM VARIABLES, with applications"
]
| [
"Eugene Ostrovsky [email protected]:[email protected] \nBar-Ilan University\n59200Ramat GanISRAEL\n",
"Leonid Sirota \nBar-Ilan University\n59200Ramat GanISRAEL\n"
]
| [
"Bar-Ilan University\n59200Ramat GanISRAEL",
"Bar-Ilan University\n59200Ramat GanISRAEL"
]
| []
| We calculate the so-called Rademacher's Grand Lebesgue Space norm for a centered (shifted) indicator (Bernoulli's, binary) random variable.This norm is optimal for the centered and bounded random variables (r.v.) Using this result we derive a very simple bilateral sharp exponential tail estimates for sums of these variables, not necessary to be identical distributed, under non-standard norming, and give some examples to show the exactness of our estimates.Key words and phrases:Random variables (r.v.), centering, indicator and Bernoulli's r.v., natural norm, Rademacher's random variables and norms, Grand Lebesgue Spaces (GLS) and norms, Legendre or Young-Fenchel transform, subgaussian norm, moment generating function, martingales, bilateral sharp exponential tail inequalities.The next facts about the B(φ) spaces are proved in [13], [17], p. 19 -40. | null | [
"https://arxiv.org/pdf/1507.07576v1.pdf"
]
| 119,689,484 | 1507.07576 | 3dd7c282aa3b6a168062f5076f5f0f18f453c4bc |
GRAND LEBESGUE NORM ESTIMATION FOR BINARY RANDOM VARIABLES, with applications
27 Jul 2015
Eugene Ostrovsky [email protected]:[email protected]
Bar-Ilan University
59200Ramat GanISRAEL
Leonid Sirota
Bar-Ilan University
59200Ramat GanISRAEL
GRAND LEBESGUE NORM ESTIMATION FOR BINARY RANDOM VARIABLES, with applications
27 Jul 2015. arXiv:1507.07576v1 [math.PR]. Key words and phrases: Random variables (r.v.), centering, indicator and Bernoulli's r.v., natural norm, Rademacher's random variables and norms, Grand Lebesgue Spaces (GLS) and norms, Legendre or Young-Fenchel transform, subgaussian norm, moment generating function, martingales, bilateral sharp exponential tail inequalities.
We calculate the so-called Rademacher Grand Lebesgue Space norm for a centered (shifted) indicator (Bernoulli, binary) random variable. This norm is optimal for centered and bounded random variables (r.v.). Using this result we derive very simple bilateral sharp exponential tail estimates for sums of these variables, not necessarily identically distributed, under non-standard norming, and give some examples to show the exactness of our estimates. Key words and phrases: Random variables (r.v.), centering, indicator and Bernoulli r.v., natural norm, Rademacher random variables and norms, Grand Lebesgue Spaces (GLS) and norms, Legendre or Young-Fenchel transform, subgaussian norm, moment generating function, martingales, bilateral sharp exponential tail inequalities. The next facts about the B(φ) spaces are proved in [13], [17], p. 19-40.
Introduction. Notations. Statement of problem
In order to formulate our result, we need to introduce some notations and conditions. Let {Ω, B, P} be certain non-trivial probability space. Let also φ = φ(λ), λ ∈ (−λ 0 , λ 0 ), λ 0 = const ∈ (0, ∞] be some even strong convex which takes positive values for positive arguments twice continuous differentiable function, such that φ(0) = 0, φ // (0) > 0, ∃ lim λ→λ 0 φ(λ)/λ > 0, (1.1) including the case when the last limit is equal to infinity. We denote the set of all these function as Φ; Φ = {φ(·)}. We say by definition that the centered (mean zero) random variable (r.v) ξ = ξ(ω) belongs to the Banach space B(φ), if there exists some non-negative constant τ ≥ 0 such that
∀λ ∈ (−λ 0 , λ 0 ) ⇒ E exp(λξ) ≤ exp[φ(λ τ )].
(1.2).
These spaces appears at first in the article [13]. The complete description and investigation of such a spaces may be found in a monograph [17], chapter 1, section 1, p. 22-24. In particular, it was proved that these spaces are really complete Banach spaces.
The function λ → E exp(λξ) is said to be a moment generating function for the r.v. ξ, if there exists at least for one non zero value λ.
The value λ 0 in the considered in this report examples will be equal to infinity: λ 0 = ∞. Denote φ R (λ) = ln cosh(λ/2); then η ∈ Bφ R and ||η|Bφ R = 1. The function φ R (λ) = ln cosh(λ/2) one can named as a natural function for the Rademacher's random variable; see exact definition further.
Evidently,
φ R (λ) ∼ λ 2 /8, λ → 0; φ R (λ) ∼ |λ|/2, |λ| → ∞.
The minimal value τ satisfying (1.2) is called a B(φ) norm of the variable ξ, write
||ξ||B(φ) = inf{τ, τ > 0 : ∀λ ⇒ E exp(λξ) ≤ exp(φ(λ τ ))}. (1.3)
The correspondent Grand Lebesgue Space norm ||ξ||Gφ R will be named Rademacher's norm.
These spaces are very convenient for the investigation of the r.v. having a exponential decreasing tail of distribution, for instance, for investigation of the limit theorem, the exponential bounds of distribution for sums of random variables, other non-asymptotical properties of the random vectors and processes, problem of continuous of random fields, study of Central Limit Theorem in the Banach space etc.
The space B(φ) with respect to the norm || · ||B(φ) and ordinary algebraic operations is a Banach space which is isomorphic to the subspace consisted on all the centered variables of Orliczs space (Ω, F, P), N(·) with N − function
N(u) = exp(φ * (u)) − 1, φ * (u) = sup λ (λu − φ(λ)).
(1.4)
The transform φ → φ * is called Young-Fenchel, or Legendre transform. The proof of considered assertion used the properties of saddle-point method and theorem of Fenchel-Moraux:
φ * * = φ. 1. ξ ∈ B(φ) ⇔ Eξ = 0, and ∃C = const > 0, U(ξ, x) ≤ exp(−φ * (Cx)), x ≥ 0, (1.5)
where U(ξ, x) denotes in this article the tail of distribution of the r.v. ξ :
U(ξ, x) = max (P(ξ > x), P(ξ < −x)) , x ≥ 0,
and this estimation is in general case asymptotically exact.
Here and further C, C j , C(i) will denote the non-essentially positive finite "constructive" constants.
More exactly, if λ 0 = ∞, then the following implication holds:
lim λ→∞ φ −1 (log E exp(λξ))/λ = K ∈ (0, ∞) (1.6a) if and only if lim x→∞ (φ * ) −1 (| log U(ξ, x)|)/x = 1/K, (1.6b)
see [1].
Here and further f −1 (·) denotes the inverse function to the function f on the left-side half-line (C, ∞).
The function φ(·) may be constructive introduced by the formula
φ(λ) = φ 0 (λ) def = log sup t∈T E exp(λξ(t)),(1.7)
if obviously the family of the centered r.v. {ξ(t), t ∈ T } satisfies the uniform Kramers condition:
∃µ ∈ (0, ∞), sup t∈T U(ξ(t), x) ≤ exp(−µ x), x ≥ 0. (1.8)
In this case, i.e. in the case the choice the function φ(·) by the formula (1.7), we will call the function φ(λ) = φ 0 (λ) a natural function for the family of the centered r.v. {ξ(t), t ∈ T }.
We say that the centered: Eξ = 0 numerical random variable (r.v.) ξ = ξ(ω), ω ∈ Ω is subgaussian, or equally, belongs to the space Sub(Ω), if there exists some non-negative constant τ ≥ 0 such that
∀λ ∈ R ⇒ E exp(λξ) ≤ exp[λ 2 τ 2 ].
(1.9)
The minimal value τ satisfying (1.1) is called a subgaussian norm of the variable ξ, write ||ξ|| Sub = inf{τ, τ > 0 : ∀λ ∈ R ⇒ E exp(λξ) ≤ exp(λ 2 τ 2 )}.
Evidently,
||ξ|| Sub = sup λ =0 ln E exp(λξ)/|λ| . (1.10) So, the space Sub = Sub(Ω) is the particular case of the general B(φ) spaces with φ(λ) = φ 2 (λ) = λ 2 , λ ∈ R.
This important notion was introduced before the appearing of the general theory of B(φ) spaces by J.P.Kahane [10]; V.V.Buldygin and Yu.V.Kozatchenko [4] proved that the set Sub(Ω) relative the norm || · || is complete Banach space which is isomorphic to subspace consisting only from the centered variables of Orlicz's space over (Ω, B, P ) with N − Orlicz-Young function N(u) = exp(u 2 ) − 1 [13].
If ||ξ|| Sub = τ ∈ (0, ∞), then max[P(ξ > x), P(ξ < −x)] ≤ exp(−x 2 /(4τ 2 )), x ≥ 0;
and the last inequality is in general case non-improvable. It is sufficient for this to consider the case when the r.v. ξ has the centered Gaussian non-degenerate distribution.
Conversely, if Eξ = 0 and if for some positive finite constant K
max[P(ξ > x), P(ξ < −x)] ≤ exp(−x 2 /K 2 ), x ≥ 0,
then ξ ∈ Sub(Ω) and ||ξ|| Sub < 4K. The subgaussian norm in the subspace of the centered r.v. is equivalent to the following Grand Lebesgue Space (GLS) norm:
|||ξ||| := sup s≥1 |ξ| s √ s , |ξ| s def = [E|ξ| s ] 1/s .
For the non -centered r.v. ξ the subgaussian norm may be defined as follows:
||ξ|| Sub := {||ξ − Eξ|| Sub} 2 + (Eξ) 2 1/2 .
More detail investigation of these spaces see in the monograph [17], chapter 1. We denote as usually by I(A) = I(A; ω), ω ∈ Ω, A ∈ B the indicator function of event A. Further, let p be arbitrary number from the set [0, 1] : 0 < p < 1 and let A(p) be any event such that P(A(p)) = p. Denote also η p = I(A(p)) − p; the centering of the r.v. I(A(p)); then Eη p = 0 and
P(η p = 1 − p) = p; P(η p = −p) = 1 − p. (1.11)
The case p = 1 − p = 1/2 correspondent to the considered before case of Rademacher's random variable.
Our goal in this short report is to investigate the value of the Rademacher's norm for the random variable η p .
We derive in the third section a very simple non-asymptotical bilateral tail estimates for sums of these variables, not necessary to be identical distributed, under non-standard norming.
Let us describe briefly some previous works. Define the following non-negative continuous on the closed segment p ∈ [0, 1] function
Q(p) = [ (1 − 2p) / (4 ln((1 − p)/p)) ]^{1/2}, (1.12) so that Q(0 + 0) = Q(1 − 0) = 0 and Q²(1/2) = 1/8 (L'Hôpital's rule). Note also p → 0+ ⇒ Q(p) ∼ 0.5/√|ln p|, p → 1 − 0 ⇒ Q(p) ∼ 0.5/√|ln(1 − p)|. (1.13)
The last circumstance play a very important role in the non-parametrical statistics, see [8], [12].
It is known [11], [3], [26], [5], [18], [19] that
||η p || Sub = Q(p).
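This identity can be checked numerically. The sketch below is our own illustration (not part of the cited works): it evaluates the subgaussian norm of η_p by a grid search over λ, using the representation of ||·||_Sub as sup over λ ≠ 0 of [ln E exp(λη_p)]^{1/2}/|λ|, and compares it with Q(p); the grid range and resolution are arbitrary choices.

```python
import numpy as np

def subgaussian_norm(p, lam=None):
    """Grid-search sketch of sup_{lam != 0} sqrt(ln E exp(lam*eta_p)) / |lam|,
    where eta_p = 1{A} - p with P(A) = p (illustrative, not a proof)."""
    if lam is None:
        lam = np.linspace(-60.0, 60.0, 200001)
        lam = lam[lam != 0.0]
    mgf = p * np.exp(lam * (1 - p)) + (1 - p) * np.exp(-lam * p)
    return np.max(np.sqrt(np.log(mgf)) / np.abs(lam))

def Q(p):
    return np.sqrt((1 - 2 * p) / (4 * np.log((1 - p) / p)))

for p in (0.1, 0.3, 0.45):
    print(p, subgaussian_norm(p), Q(p))   # the two columns should roughly agree
```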
Applications of these estimates in the non-parametrical statistics may be found in the articles [8], [12]. Other application is described in [6].
Another approach and applications see in the works [2], [3], [23], [24], [26], [27], [28] etc.
2
Auxiliary result.
Recall first of all that y = y(z) := cosh −1 z = ln(z ± √ z 2 − 1), z ≥ 1.
We agree to take only the following branch of these function
y(z) = cosh −1 z = ln(z + √ z 2 − 1), z ≥ 1.
Note that
z → 1 + 0 ⇒ y(z) ∼ 2(z − 1), (2.0a) z → ∞ ⇒ y(z) ∼ ln z. (2.0b)
The natural function for the family of the (centered) r.v. {η r }, 0 < r < 1 has a form
β r (λ) def = Ee ληr = re λ(1−r) + (1 − r)e −rλ , λ ∈ (−∞, ∞), 0 < r < 1,(2.1)
so that β 1/2 (λ) = 0.5 e λ/2 + e −λ/2 = cosh(λ/2).
Evidently,
λ → ∞ ⇒ β r (λ) ∼ re λ(1−r) , r = const ∈ (1/2, 1), λ → 0 ⇒ β r (λ) ∼ 1 + 0.5λ 2 r(1 − r), r = const ∈ (0, 1),
Introduce an important function, which may be named as Rademacher's norm of the binary random variable,
g R (r) = g(r) def = sup λ =0 cosh −1 [β r (λ)] |λ|/2 = sup λ =0 cosh −1 (re λ(1−r) + (1 − r)e −rλ ) |λ|/2 , r ∈ (0, 1). (2.2)
Proposition 2.1. It follows immediately from the direct definition of the B(φ R ) norm that ||η r ||Bφ R = g(r), 0 < r < 1.
(2.3)
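Proposition 2.1 can be illustrated numerically. The sketch below is our own grid-search evaluation of g(r) directly from (2.2); it also prints the quantity 2·sqrt(r(1 − r)), our reading of the lower bound in (2.4). The grid range and the chosen values of r are arbitrary.

```python
import numpy as np

def g_numeric(r, lam=None):
    """Grid-search sketch of g(r) = sup_{lam != 0} acosh(beta_r(lam)) / (|lam|/2)."""
    if lam is None:
        lam = np.linspace(-200.0, 200.0, 400001)
        lam = lam[lam != 0.0]
    beta = r * np.exp(lam * (1 - r)) + (1 - r) * np.exp(-lam * r)
    return np.max(np.arccosh(beta) / (np.abs(lam) / 2))

print(g_numeric(0.5))        # should be close to 1, in line with property 4 below
for r in (0.2, 0.35, 0.65):
    print(r, g_numeric(r), 2 * np.sqrt(r * (1 - r)))   # g(r) versus the bound from (2.4)
```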
Let us itemize now some important for us properties of introduced function g = g(r) = g R (r), 0 ≤ r ≤ 1. All this properties may be easily obtained from the known asymptotical behavior of both the functions y(z) and β r (λ).
1. This function is bounded and continuous on the closed interval [0, 1]. More detail: the inequality 0 < g(r) ≤ 2 is obvious. Moreover
g(0+) = g(1 − 0) = 2.
Note that the last equality stand in contradiction to the analogous fact (1.13) for the subgaussian norm for at the same binary random variable.
As a consequence: the function g = g(r) can be defined as a continuous positive function on the closed interval [0, 1] such that g(0) = g(1) = 2.
So, max r∈[0,1] g(r) = 2 = g(0) = g(1).
2.
On the other hand, we obtain after some calculations
g(r) ≥ lim λ→0 cosh −1 [β r (λ)] |λ|/2 = 2 r(1 − r), r ∈ (0, 1) (2.4)
3. Evidently, g(1 − r) = g(r), (symmetry), so that it is enough to investigate this function only on the interval 1/2 ≤ r ≤ 1.
4.
It is easy to calculate g(1/2) = 1.
Note in addition
g(r) ≥ lim |λ|→∞ cosh −1 [β r (λ)]
|λ|/2 = 2 max(r, 1 − r), r ∈ (0, 1), but the last function is less than 2 r(1 − r).
The following rough estimate will be used in the next section (Proposition 2.2):
E e ληr ≤ cosh(λ/2). (2.5a)
Proof.
1. For reasons of symmetry it is sufficient to consider only the cases r ∈ [1/2, 1] and, analogously, λ ≥ 0.
Thus we have to prove the following equivalent elementary inequality
β r (λ) = re λ(1−r) + (1 − r)e −λr ≤ cosh(λ/2), r ∈ (1/2, 1), (2.6)
wherein λ ≥ 0; the cases r = 1/2, r = 1 and r = 0 are trivial.
3. Put for simplicity r = 1/2 + δ, δ ∈ (0, 1/2); then we deduce after some calculations
β = e −λδ [cosh(λ/2) + 2δ sinh(λ/2)].
Our inequalities (2.5a) and (2.6) take the form
e −λδ [cosh(λ/2) + 2δ sinh(λ/2)] ≤ cosh(λ/2), or equally, 2δ sinh(λ/2) ≤ cosh(λ/2) (e λδ − 1). (2.7)
The inequality (2.7) follows in turn, taking into account the positivity of the product λδ (so that e λδ − 1 ≥ λδ), from the elementary inequality
sinh µ ≤ µ cosh µ, µ = λ/2 > 0. (2.8)
5. The last inequality (2.8) may be proved in an elementary way by comparing the corresponding Taylor coefficients.
6. The equality in the assertion of Proposition 2.2 is attained, for example, for the value r = 1/2, as well as in the limits λ → ∞ and r → 1 − 0, or equally r → 0+.
This completes the proof of proposition 2.2.
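As a sanity check (our illustration, not part of the original text), the inequality (2.6) and the elementary inequality (2.8) can be verified numerically on a grid in the regime considered in the proof, r ∈ [1/2, 1] and λ ≥ 0.

```python
import numpy as np

# grid check of (2.6): beta_r(lambda) <= cosh(lambda/2) for r in [1/2, 1], lambda >= 0,
# together with the elementary inequality sinh(mu) <= mu*cosh(mu) used in the last step
r = np.linspace(0.5, 1.0, 201).reshape(-1, 1)
lam = np.linspace(0.0, 60.0, 601).reshape(1, -1)
beta = r * np.exp(lam * (1.0 - r)) + (1.0 - r) * np.exp(-r * lam)
assert np.all(beta <= np.cosh(lam / 2.0) * (1.0 + 1e-12))

mu = np.linspace(0.0, 60.0, 601)
assert np.all(np.sinh(mu) <= mu * np.cosh(mu) + 1e-12)
print("inequalities (2.6) and (2.8) hold on the grid")
```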
3 Main result: tail estimates for sums of independent indicators under non-standard norming.
Let p(i), i = 1, 2, . . . , n, be positive numbers such that 0 < p(i) < 1, and let A(i) be independent events for which P(A(i)) = p(i). Introduce a sequence of two-valued (binary, generalized Rademacher) independent random variables ζ := {ζ(i)}, ζ(i) = I(A(i)) − p(i), and define its normed sum
S(n) := w(n) −1 Σ i=1..n ζ(i), (3.1)
where the norming function w = w(n) is any deterministic numerical sequence, strictly increasing to infinity, such that w(1) = 1.
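To fix ideas, here is a small Monte Carlo sketch (ours) of the normed sum (3.1); the particular norming w(n) = n^{3/4}, lying strictly between √n and n, and the random choice of the p(i) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_S(n, p, w, size=20_000):
    # Monte Carlo sample of S(n) = w(n)^{-1} * sum_i (I(A(i)) - p(i)), cf. (3.1)
    ind = (rng.random((size, n)) < p).astype(float)   # independent indicators I(A(i))
    return (ind - p).sum(axis=1) / w(n)

n = 400
p = rng.uniform(0.1, 0.9, size=n)        # illustrative choice of the p(i)
w = lambda m: m ** 0.75                  # norming with sqrt(n) << w(n) << n, w(1) = 1
S = sample_S(n, p, w)
for u in (0.1, 0.2, 0.3):
    print(u, np.mean(np.abs(S) > u))     # empirical two-sided tail of S(n)
```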
We intend in this section to derive bilateral uniform exponential bounds for the tail of the distribution
T w (u) def = sup n sup {ζ} max {P(S(n) > u), P(S(n) < −u)} , u > 1, (3.2)
where sup {ζ} is calculated over all such sequences of centered (generalized Rademacher) independent random variables ζ := {ζ(i)}. The normalization w 1/2 (n) = √n can be considered classical, see e.g. [13]: if, say,
max {P(ζ(i) > u), P(ζ(i) < −u)} ≤ exp(−u k ), u ≥ 0, k = const > 0, then T w 1/2 (u) ≤ exp(−C(k) u min(k,2) ), 0 < C(k) = const < ∞,
and the last estimate is essentially non-improvable. It is known, for instance (see [17], chapter 1, section 1.6), that
|| Σ i=1..n ζ(i) || Sub ≤ ( Σ i=1..n (||ζ(i)|| Sub ) 2 ) 1/2 .
Therefore, it is reasonable to suppose lim n→∞ w(n)/√n = ∞; the other case is trivial for us. On the other hand, if w(n) grows at least linearly, then |S(n)| is uniformly bounded (since |ζ(i)| < 1) and the tail problem degenerates. Thus, we must exclude both these cases.
The exact formulation of the assumptions used will be specified below.
Let us briefly touch on earlier work on the problem considered here. A particular case of our statement of the problem, even for sequences of martingale differences, may be found in the articles [7], [14], [15], [16], [20], [21].
A very interesting application to the investigation of the free energy of directed polymers in a random environment is described in the article by Q. Liu and F. Watbled [16].
Let us now itemize the conditions imposed on the norming function w = w(n).
A1. There exists a strictly increasing, twice continuously differentiable function, defined on the set λ ≥ 1, which we will also denote by w = w(λ), such that w(λ)| λ=n = w(n), n = 1, 2, 3, . . . .
A2. λ → ∞ ⇒ w(λ)/λ ↓ 0. (3.3)
A3. λ → ∞ ⇒ w(λ)/√λ ↑ ∞. (3.4)
A4. The inverse function λ → w −1 (λ), λ ≥ 1, is convex.
A5. The function λ → w(λ), λ ≥ 1, satisfies the ∆ 2 condition: sup λ>1 w(2λ)/w(λ) < ∞. (3.5)
Define also a new function
v(u) = v w (u) := (w −1 ) * (u), u ≥ 1.
Recall that the transformation f → f * is called the Young–Fenchel, or Legendre, transform; see (1.4).
Theorem 3.1. Let all the conditions A1–A5 be satisfied. Then, as u → ∞,
| ln T w (u) | ≍ v w (u). (3.6)
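Before turning to the proof, let us illustrate (3.6) with a concrete norming function (our example, not contained in the original text): for w(λ) = λ 1/m with 1 < m < 2 the conditions A1–A5 hold with w(1) = 1, one has w −1 (λ) = λ m , and the Young–Fenchel transform is v w (u) = sup λ>0 (λu − λ m ) = (m − 1) m −m/(m−1) u m/(m−1) , hence | ln T w (u)| ≍ u m/(m−1) . The snippet below checks this closed form numerically.

```python
import numpy as np

m = 1.5                         # any 1 < m < 2; w(lam) = lam**(1/m), w^{-1}(lam) = lam**m

def v_numeric(u, lam_max=1.0e3, npts=1_000_001):
    lam = np.linspace(0.0, lam_max, npts)
    return np.max(lam * u - lam ** m)     # Young-Fenchel transform of w^{-1} at the point u

def v_closed(u):
    return (m - 1.0) * m ** (-m / (m - 1.0)) * u ** (m / (m - 1.0))

for u in (2.0, 5.0, 10.0):
    print(u, v_numeric(u), v_closed(u))   # numerical and closed-form values agree
```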
Proof.
1. Let us calculate (and evaluate) first of all the moment generating function for the sequence of r.v. S(n). We have, using the independence of the r.v. ζ(i),
E e λS(n) = Π i=1..n E e λζ(i)/w(n) = Π i=1..n β p(i) (λ/w(n)).
2. One can use Proposition 2.2, more exactly the estimate (2.5a):
E e λS(n) ≤ cosh n (λ/w(n)) = exp( n ln cosh(λ/w(n)) ), (3.7)
wherein the last inequality (3.7) is sharp: it is attained, for instance, when all the r.v. ζ(i) are Rademacher. In other words, we can and will suppose that all the independent variables ζ(i) have the ordinary symmetrized Rademacher distribution.
3. Therefore
sup n E e λS(n) ≤ sup n exp( n ln cosh(λ/w(n)) ), (3.8)
and, using the known properties of the function w(·), it is easy to derive
ln sup n E e λS(n) ≍ w −1 (λ), |λ| > 1; (3.9)
the case |λ| ≤ 1 is simple.
In particular, if we choose in the right-hand side of (3.8) w(n) = λ, then
ln sup n E e λS(n) ≥ C 1 (w) · w −1 (λ), |λ| > 1. (3.9a)
In detail, let λ > 1. There exists a unique value n 0 = n 0 (w, λ) such that
w −1 (λ) ≤ n 0 < w −1 (λ) + 1.
Then sup n E e λS(n) ≥ E e λS(n 0 ) , and consequently n 0 ≥ w −1 (λ);
λ/w(n 0 ) ≥ λ/w(w −1 (λ) + 1) ≥ λ/(λ + w(1)) ≥ 1/(1 + w(1));
here we exploited the convexity of the function w −1 (·), condition A4. Thus, one can choose in (3.10)
C 1 (w) = 1/(1 + w(1)).
We turn now to the derivation of the upper bound for the value Z := sup n E e λS(n) . Define the absolute constant C = e + 1/e − 2 ≈ 1.0862 . . . .
For our purpose we estimate:
ln cosh λ ≤ λ, λ ≥ 1;
|λ| < 1 ⇒ cosh λ = 1 + λ 2 /2! + λ 4 /4! + λ 6 /6! + . . . ≤ 1 + (λ 2 /2) · [1 + 2 · (1/4! + 1/6! + . . .)] = 1 + (λ 2 /2) · [1 + 2 · (cosh 1 − 3/2)] = 1 + C · λ 2 /2;
hence |λ| < 1 ⇒ ln cosh λ ≤ C · λ 2 /2.
Combining the obtained estimates, we find for positive values of λ:
ln Z = ln sup n E e λS(n) ≤ λ · (n/w(n)) · I(n ≤ w −1 (λ)) + C · λ 2 · (n/w 2 (n)) · I(n ≥ w −1 (λ)),
where I(A) denotes the indicator function for the predicate A.
Obviously,
ln Z ≤ sup µ≥1 { λ · (µ/w(µ)) · I(1 ≤ µ < w −1 (λ)) + C · λ 2 · (µ/w 2 (µ)) · I(µ ≥ w −1 (λ)) }.
It follows immediately from the assumptions A1–A5 that the function of the variable µ, µ ≥ 1, on the right-hand side of the last inequality achieves its maximal value at the point µ = w −1 (λ), and herewith
Z ≤ exp( C w −1 (λ) ), |λ| > 1.
In total, we have obtained the following uniform bilateral estimates for the moment generating function of the random sequence S(n):
exp( C 1 (w) w −1 (λ) ) ≤ sup n E e λS(n) ≤ exp( C w −1 (λ) ), |λ| > 1. (3.10)
4. The proposition of Theorem 3.1 now follows from (3.10) and from the main result of the article [1]; see also [17], chapter 1, section 1.4. Note that the inequality (3.13) implies, by means of Chernov's inequality, only the unilateral estimate
sup n max( P(S(n) > u), P(S(n) < −u) ) ≤ e −θ * (u) , u ≥ 0, (3.14)
still without all the conditions A1–A5.

4 Concluding remarks.
A. It is known (see [25]) that, after a suitable rescaling, if X is a mean zero r.v., EX = 0, bounded a.e., |X| ≤ 1/2, then E e λX ≤ cosh(λ/2), λ ∈ R.
In other words, ||X|| B(φ R ) ≤ 1; herewith the equality in the last estimate is achieved only in the case when X has the (symmetric) Rademacher binary distribution. Therefore, all the results of Theorem 3.1 remain true for an arbitrary sequence of such independent centered variables, not necessarily identically distributed.
This proposition can be considered as a complement to the classical theorem of W. Hoeffding [9]; see also [2].
B. The case of sums of weakly dependent binary r.v., including sums of martingale differences, is investigated in the recent article [22].
Example 1.1. Let η be a (renormed) Rademacher random variable: P(η = 1/2) = P(η = −1/2) = 1/2.
An inversion of Tchebyshev's inequality. D R Bagdasarov, E I Ostrovskii, Theory of Probability Applications. 4Bagdasarov D.R., Ostrovskii E.I. An inversion of Tchebyshev's inequal- ity. Theory of Probability Applications, (1995), V. 40, Issue 4, 873-878.
On Hoeffdings inequalities. V Bentkus, The Annals of Probability. 322Bentkus V. On Hoeffdings inequalities. The Annals of Probability 32(2), 1650- 1673, (2004).
On the concentration of the missing mass. D Berend, A Kontorovich, Electron. Commun. Probab. 18317Berend D. and Kontorovich A. On the concentration of the missing mass. Electron. Commun. Probab., 18(3):17, 2013.
V. About subgaussian random variables. V V Buldygin, Kozatchenko Yu, Ukrainian Math. Journal. 326Buldygin V.V., Kozatchenko Yu.V. About subgaussian random variables. Ukrainian Math. Journal, 1980, 32, N o 6, 723-730.
The sub-Gaussian norm of a binary random variable. V V Buldygin, K K Moskvichova, Theor. Probability and Math. Statist. Vip. 8686Buldygin V.V., Moskvichova K.K. The sub-Gaussian norm of a binary random variable. Theor. Probability and Math. Statist. Vip. 86, No. 86, 2013, Pages 33-49.
Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. S X Chen, J S Liu, Statist. Sinica. 74S. X. Chen and J. S. Liu. Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statist. Sinica, 7(4): 875-892, 1997.
Large deviations for martingales with exponential condition. X Fan, I Grama, Q Liu, arXiv:1111.1407v1math.PRX. Fan, I.Grama and Q.Liu. Large deviations for martingales with expo- nential condition. arXiv:1111.1407v1 [math.PR] 6 Nov 2011
Non-asymptotical estimate of deviation of multidimensional function of distribution. E I Gaivoronsky, E I Ostrovsky, Theory Probab. Applications. 36Gaivoronsky E.I., Ostrovsky E.I. Non-asymptotical estimate of deviation of multidimensional function of distribution. Theory Probab. Applications, 1991, 36, Issue 3, 111-115.
Probability inequalities for sums of bounded random variables. W Hoeffding, American Statistical Association Journal. 58Hoeffding W. Probability inequalities for sums of bounded random variables. American Statistical Association Journal, 58, 13-30, 1963.
Properties locales des fonctions a series de Fourier aleatoires. J P Kahane, Studia Math. 19Kahane J.P. Properties locales des fonctions a series de Fourier aleatoires. Studia Math. (1960), 19, N o 1, 1-25.
Large deviation methods for approximate probabilistic inference. M Kearns, L Saul, Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence. the Fourteenth conference on Uncertainty in artificial intelligenceMorgan Kaufmann Publishers IncKearns M. and Saul L. Large deviation methods for approximate probabilis- tic inference. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, pages 311-319. Morgan Kaufmann Publishers Inc., 1998.
On large Deviations of the Empiric D.F. of vector chance variables and a Law of Iterated Logarithm. J Kiefer, Pacific J.Math. 112Kiefer J. On large Deviations of the Empiric D.F. of vector chance variables and a Law of Iterated Logarithm. Pacific J.Math., 1961, 11, N o 2, 649-660.
Banach spaces of random variables of subgaussian type. Kozatchenko V Yu, E I Ostrovsky, Theory Probab. And Math. Stat. in RussianKozatchenko Yu.V., Ostrovsky E.I. Banach spaces of random variables of subgaussian type. Theory Probab. And Math. Stat., Kiev, (1985), p. 42-56 (in Russian).
Large deviations for martingales. E Lesign, D Volny, Stochastic Processes and their Applications. 96Lesign E., Volny D. Large deviations for martingales. Stochastic Processes and their Applications, 96, 143-159, (2001).
A martingale inequality and large deviations. Y Li, Statist. Probab. Lett. 62Li Y. ( 2003.) A martingale inequality and large deviations. Statist. Probab. Lett., 62, 317-321.
Exponential inequalities for martingales and asymptotic properties of the free energy of directed polymers in random environment. Q Liu, F Watbled, arXiv:0812.1719v1[math.PR]9Liu Q., Watbled F. Exponential inequalities for martingales and asymptotic properties of the free energy of directed polymers in random environment. arXiv: 0812.1719v1 [math.PR] 9 Dec 2008.
Exponential Estimations for Random Fields. E I Ostrovsky, Moscow-Obninsk, OINPEin RussianOstrovsky E.I. Exponential Estimations for Random Fields. Moscow- Obninsk, OINPE, (1999), (in Russian).
Exact value for subgaussian norm of centered indicator random variable. E Ostrovsky, L Sirota, arXiv:1405.6749v1math.PROstrovsky E., Sirota L. Exact value for subgaussian norm of centered in- dicator random variable. arXiv:1405.6749v1 [math.PR] 26 May 2014
Subgaussian and stricktly subgaussian random variable. E Ostrovsky, L Sirota, arXiv:1406.3933v1[math.PR]16Ostrovsky E., Sirota L. Subgaussian and stricktly subgaussian random vari- able. arXiv:1406.3933v1 [math.PR] 16 Jun 2014
Moment and tail inequalities for polynomial martingales. The case of heavy tails. E Ostrovsky, L Sirota, arXiv:1112.2768v1math.PR] 13 DezOstrovsky E. and Sirota L. _ Moment and tail inequalities for polynomial martingales. The case of heavy tails. arXiv:1112.2768v1 [math.PR] 13 Dez 2011
Non-improved uniform tail estimates for normed sums of independent random variables with heavy tails. E Ostrovsky, L Sirota, arXiv:1110.4879v1[math.PR]21with applicationsOstrovsky E. and Sirota L. Non-improved uniform tail estimates for normed sums of independent random variables with heavy tails, with applica- tions. arXiv: 1110.4879v1 [math.PR] 21 Oct 2011.
Hoeffdings inequality for sums of weakly dependent random variables. Pelekis Cristos, Ramon Jan, arXiv:1507.06871v1math.PRPelekis Cristos, Ramon Jan. Hoeffdings inequality for sums of weakly de- pendent random variables. arXiv:1507.06871v1 [math.PR] 24 Jul 2015
Exact inequalities for sums of asymmetric random variables. Pinelis Iosif, arXiv:math/0602556v2[math.PR]24with applicationsPinelis Iosif. Exact inequalities for sums of asymmetric random variables, with applications. arXiv:math/0602556v2 [math.PR] 24 May 2006
M Raginsky, I Sason, Concentration of Measure Inequalities in Information Theory, Communications, and Coding. Foundations and Trends in Communications and Information Theory. 101246Raginsky M. and Sason I. Concentration of Measure Inequalities in In- formation Theory, Communications, and Coding. Foundations and Trends in Communications and Information Theory, vol. 10, no. 1-2, pp. 1246, 2013.
Subgaussian random variables: An expository note. Internet publication. Rivasplata Omar, PDF. Rivasplata Omar. Subgaussian random variables: An expository note. Inter- net publication, PDF, November 12, 2012.
E Schlemm, arXiv:1405.4496v1The Kearns-Saul inequality for Bernoully and Poisson-binomial distributions. math.PRSchlemm E. The Kearns-Saul inequality for Bernoully and Poisson-binomial distributions. arXiv:1405.4496v1 [math.PR] 18 May 2014
A full proof of universal inequalities for the distribution function of the binonial law. A A Serov, A M Zubkov, arXiv:1207.3838v1math.PRSerov A.A., Zubkov A.M. A full proof of universal inequalities for the dis- tribution function of the binonial law. arXiv:1207.3838v1 [math.PR] 16 Jul 2012
Bounds for the number of Boolean functions admitting affine approximations of a given accuracy. A M Zubkov, A A Serov, Discrete Math. Appl. 20Zubkov A.M., Serov A.A. Bounds for the number of Boolean functions ad- mitting affine approximations of a given accuracy. Discrete Math. Appl., 2010, 20, N o 5 -6, p. 467-486.
| []
|
[
"ZZ+jet production via gluon fusion at the LHC",
"ZZ+jet production via gluon fusion at the LHC"
]
| [
"Francisco Campanario [email protected] \nInstitute for Theoretical Physics\nKarlsruhe Institute of Technology\n76128KarlsruheGermany\n",
"Qiang Li \nPaul Scherrer Institut\nCH-5232Villigen PSISwitzerland\n\nState Key Laboratory of Nuclear Physics and Technology\nPeking University\n100871BeijingChina\n",
"Michael Rauch [email protected] \nInstitute for Theoretical Physics\nKarlsruhe Institute of Technology\n76128KarlsruheGermany\n",
"Michael Spira [email protected] \nPaul Scherrer Institut\nCH-5232Villigen PSISwitzerland\n"
]
| [
"Institute for Theoretical Physics\nKarlsruhe Institute of Technology\n76128KarlsruheGermany",
"Paul Scherrer Institut\nCH-5232Villigen PSISwitzerland",
"State Key Laboratory of Nuclear Physics and Technology\nPeking University\n100871BeijingChina",
"Institute for Theoretical Physics\nKarlsruhe Institute of Technology\n76128KarlsruheGermany",
"Paul Scherrer Institut\nCH-5232Villigen PSISwitzerland"
]
| []
| Pair production of Z bosons in association with a hard jet is an important background for Higgs particle or new physics searches at the LHC. The loop-induced gluon-fusion process gg → ZZg contributes formally only at the next-to-next-to-leading order. Nevertheless, it can get enhanced by the large gluon flux at the LHC, and thus should be taken into account in relevant experimental searches. We provide the details and results of this calculation, which involves the manipulation of rank-5 pentagon integrals. Our results show that the gluon-fusion process can contribute more than 10% to the next-to-leading order QCD result and increases the overall scale uncertainty. Moreover, interference effects between Higgs and non-Higgs contributions can become large in phase-space regions where the Higgs is far off-shell. | 10.1007/jhep06(2013)069 | [
"https://arxiv.org/pdf/1211.5429v1.pdf"
]
| 96,477,885 | 1211.5429 | 317c0ff344763d32b2387fd9a268453ee7f539df |
ZZ+jet production via gluon fusion at the LHC
23 Nov 2012
Francisco Campanario [email protected]
Institute for Theoretical Physics
Karlsruhe Institute of Technology
76128KarlsruheGermany
Qiang Li
Paul Scherrer Institut
CH-5232Villigen PSISwitzerland
State Key Laboratory of Nuclear Physics and Technology
Peking University
100871BeijingChina
Michael Rauch [email protected]
Institute for Theoretical Physics
Karlsruhe Institute of Technology
76128KarlsruheGermany
Michael Spira [email protected]
Paul Scherrer Institut
CH-5232Villigen PSISwitzerland
ZZ+jet production via gluon fusion at the LHC
23 Nov 2012Prepared for submission to JHEPZ BosonsStandard ModelHadronic Colliders
Pair production of Z bosons in association with a hard jet is an important background for Higgs particle or new physics searches at the LHC. The loop-induced gluon-fusion process gg → ZZg contributes formally only at the next-to-next-to-leading order. Nevertheless, it can get enhanced by the large gluon flux at the LHC, and thus should be taken into account in relevant experimental searches. We provide the details and results of this calculation, which involves the manipulation of rank-5 pentagon integrals. Our results show that the gluon-fusion process can contribute more than 10% to the next-to-leading order QCD result and increases the overall scale uncertainty. Moreover, interference effects between Higgs and non-Higgs contributions can become large in phase-space regions where the Higgs is far off-shell.
Introduction
The Large Hadron Collider (LHC) is presently running with a center-of-mass energy of 8 TeV and an instantaneous luminosity already surpassing that of Fermilab's Tevatron, with the possibility of being upgraded to the design values, i.e., 14 TeV and 10^34 cm^-2 s^-1, in 2014 [1]. The unprecedentedly high collision energy and luminosity are necessary for discovering Higgs particles and new physics beyond the Standard Model (SM). However, the higher the collision energy, the more complex the event topologies that get involved. In particular, hadron collision events with multiple hard particles and large jet multiplicities deserve a careful treatment.
In this paper, we investigate ZZ + j production at the LHC, leading to the following event topologies pp → 4 leptons + jet + X , 2 leptons + E T + jet + X .
Figure 1. Generic Feynman diagrams generated by Jaxodraw [8] for the partonic process gg → ZZg, corresponding to the 4 topological classes. Taking into account all possible permutations, one gets 12 diagrams for (a), 9 for (b), 3 for (c) and 3 for (d). In addition, one needs to sum over the fermion types and flow directions within the fermion loop.
A similar study for the W boson case shows contributions of about 10 percent to the next-to-leading order QCD result [5]. Recently, a study of the GF process also appeared, but neglecting the Higgs contributions [6]. Moreover, the GF process (1.3) corresponds to part of the real emission contributions of the NLO QCD corrections to the loop-induced Z pair production via GF, gg → ZZ, and thus may be crucial in reducing the theoretical uncertainty for a precise measurement of Z boson pair production, and also for the inclusive Higgs search via gg → H → ZZ [7].
Based on the above mentioned motivations, we are reporting in this paper on the calculation and results of the 2 → 3 GF process (1.3) at the 7, 8 and 14 TeV LHC 1 . We, therefore, omit GF production channels with quarks in the initial and final state since they interfere with the LO contributions already at NLO 2 . Furthermore, since we are mostly interested in the effects of the integrated cross section of ZZ + j production in GF, which is not sensitive to the Z decays, in accordance with Ref. [3], the leptonic decay of the Z bosons and off-shell effects, e.g. γ * → ℓ + ℓ − , are not considered. The paper is organized as follows. In section 2 we describe the calculation. In section 3 we present numerical results and their discussion. Finally we conclude in section 4.
Calculation
We have implemented two independent Monte Carlo programs which rely on different approaches. Cross checks have been performed at the amplitude level for fixed phase-space points and also for integrated cross sections, getting agreement at the double-precision level and within statistical errors, respectively.
The relevant one-loop Feynman diagrams and amplitudes for the partonic process gg → ZZg are shown in Fig. 1. The diagrams are grouped in 4 topological classes. The two diagrams in the upper row correspond to continuum production of the two Z bosons, either via a pentagon or via a box diagram. The two diagrams in the bottom row both involve a, possibly virtual, Higgs boson, which then decays into a Z pair. This is mediated by either box or triangle graphs. The Higgs mass dependence of the latter ones and interference effects between Higgs and continuum diagrams will be discussed in Sec. 3.
1 As for the 1.96 TeV Tevatron, we have checked that the GF production rates are tiny ( 1% of the qq NLO QCD ones) due to the small gluon flux and are thus not discussed here. 2 The interference effects of gq → ZZq production via GF with the LO contribution have been computed in [3] and are below 1%.
For program 1, the Feynman amplitudes are generated with FeynArts 3.5 [9] and then manipulated with FormCalc 5.3 [10] 3 . The Fortran libraries 4 generated with FormCalc are linked with our Monte Carlo integration code for final use. The tensor integrals are evaluated with the help of the LoopTools-2.5 package [10,14], which employs the reduction method introduced in Ref. [15] for pentagon tensors up to rank 4, and Passarino-Veltman reduction for the lower point ones up to boxes [16]. In our case rank-5 pentagon tensor integrals are needed in addition, as can be inferred from the fact that the 5 external particles are all vector bosons. We thus have modified LoopTools-2.5 to implement the reduction method for pentagon tensor integrals up to rank 5 as proposed in Ref. [17]. Finally, the resulting regular scalar integrals are evaluated with the FF package [18]. The UV and IR divergent scalar integrals have already been encoded into this version of LoopTools within dimensional regularization, which we have explicitly cross-checked against QCDloop [19].
Although the reduction procedure in Ref. [17] can avoid inverse Gram determinants of external momenta in the reduction step from 5-point to 4-point integrals, they cannot be avoided in the reduction of the lower-point tensor integrals within the Passarino-Veltman algorithm. The problem is more severe in our case, which involves squared loop amplitudes.
To improve the numerical stability problem due to vanishing Gram determinants further modifications have been made. First, we have implemented in LoopTools the so called 'Alternative Passarino-Veltman reduction' for triangle and box tensor integrals, as introduced in Ref. [17], which changes the calculating order of tensor coefficients and results in better numerical convergence behavior than the conventional Passarino-Veltman reduction. Second, we have imposed a jet-measure-like cut to simply cut away a small dangerous region in phase space to avoid numerical problems. The total contribution of this region will turn out to be small.
min( K i,j T , P i T , P j T ) > K cut T , (2.1)
K i,j T ≡ min( P i T , P j T ) · √( ∆y 2 ij + ∆φ 2 ij ) / 0.6 , (2.2)
where i, j = 1, 2, 3 (i ≠ j) run over the final state particles. Here y is the rapidity and φ is the azimuthal angle around the beam direction. In Sec. 3, we will discuss the dependence on the choice of K cut T .
Program 2 is based on the structure of the Monte-Carlo program Vbfnlo [20]. We use the effective current approach [21], which allows us to compute only four master Feynman diagrams, corresponding to the diagrams appearing in Fig. 1. This calculation is performed with the in-house framework described in Ref. [22], which uses Mathematica [23] and FeynCalc [24]. We use generic vertices split into left- and right-handed components such that every physically allowed permutation can be constructed by contracting with the corresponding effective polarization vectors, which have been multiplied with the electroweak couplings beforehand. Each of the diagrams is split into a vector and an axial-vector part by isolating the γ 5 contributions. This allows us to apply Furry's theorem independently for the vector and axial-vector components, reducing the total number of diagrams to be computed by a factor of two. Additionally, for each of the master diagrams, we build Ward identities by replacing the generic vertices by their corresponding momenta. This allows us e.g. to reduce analytically a pentagon of rank five into a difference of two boxes and a remainder pentagon of rank four
P µ 1 ...µ 5 p i,µ i = B µ 1 ...μ i ...µ 5 1 − B µ 1 ...μ i ...µ 5 2 + P µ 1 ...μ i ...µ 5 rem , i = 1, . . . , 5 ,(2.3)
where μ i means that the corresponding vertex has been replaced by its momentum p i . The remainder, in this case the pentagon P µ 1 ...μ i ...µ 5 rem , vanishes for massless propagators and purely vectorial couplings (gluon, photon) for the given contraction. These simplified analytical expressions are used to control the numerical accuracy of the code. We compare numerically the values given by the analytically simplified expressions with the master diagrams, where the polarization vector has been replaced by the corresponding four-momentum. We construct all possible Ward identities for each diagram and physical permutation, e.g. all five different ones for the pentagon P µ 1 ...µ 5 introduced before. The deviation is then defined as the absolute value of one minus the ratio between the numerically contracted and the analytically calculated diagram. Where more than one Ward identity is possible, we take the largest value. A point is identified as unstable when this value exceeds a given global value ε. In this case, the complete phase-space point is rejected and the amplitude set to zero. The dependence of the cross section on the required accuracy ε will be shown in the numerical analysis in the next section.
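Schematically, the stability test just described amounts to the following few lines (an illustration only; in the real code the inputs are the contracted loop diagrams and their analytically simplified counterparts, and the helper names used here are ours):

```python
# schematic Ward-identity stability test (illustration only):
# `numeric[i]` stands for the master diagram contracted with the momentum of vertex i,
# `analytic[i]` for the analytically simplified combination (two boxes + remainder).
def ward_deviation(numeric, analytic):
    # largest deviation |1 - numeric/analytic| over all available Ward identities
    return max(abs(1.0 - n / a) for n, a in zip(numeric, analytic))

def amplitude_or_zero(amp_value, numeric, analytic, eps=1e-3):
    # reject the whole phase-space point if the worst Ward identity fails
    return amp_value if ward_deviation(numeric, analytic) < eps else 0.0

# toy usage with complex placeholder values
num = [1.0001 + 0.2j, -0.5 + 0.99j]
ana = [1.0000 + 0.2j, -0.5 + 1.00j]
print(ward_deviation(num, ana))
print(amplitude_or_zero(3.14, num, ana, eps=1e-2))
```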
To reduce the CPU impact of the calculation of these identities, we factorize out the part that depends on the effective currents and the couplings, such that the loop dependent part, including these identities, is only computed for one helicity combination and re-used for the other ones. This reduces the time needed for the additional combinations by about a factor four. Nevertheless, for the final results we apply random helicity summation to sample more phase-space points. For the numerical evaluation of the tensor integrals, we apply the Passarino-Veltman approach of Ref. [16] up to boxes, and for a numerically stable implementation of five-point-coefficients we use the Denner-Dittmaier scheme laid out in Ref. [17] with the set-up and notation of Ref. [22]. Color factors have been computed by hand and cross-checked with the program Color [25].
Additionally, we have implemented a two-layer rescue system for phase-space points where the Ward identities of Eq. (2.3) are not satisfied. In the first step, we calculate the diagram again using dedicated subroutines for small Gram determinants. These involve the evaluation of three- and four-point functions up to rank 11 and 9, respectively, following the notation of Ref. [22]. If at this point the Ward identities are still not satisfied, we perform the second step of the rescue system. Here the scalar integrals and tensor reduction routines are evaluated in quadruple precision. This requires reconstructing the external momenta in quadruple precision, so that global energy-momentum conservation is still fulfilled at the higher numerical accuracy while keeping external particles on their mass-shell. This is a crucial step for obtaining an improved behavior of the quadruple precision routines. These routines are only a factor 2-3 slower than the double precision ones, in contrast to the factor 10-20 one would obtain using quadruple precision for the complete diagram, thus reducing significantly the overall slowing factor of the rescue system. With this system we find that the percentage of phase-space points that does not pass the Ward identities for a requested accuracy of ε = 10 −3 is completely negligible, see Table 1. The additional CPU time required is below 10% for this accuracy. A detailed discussion of the numerical impact is postponed to the following section.
Furthermore, in this approach the cut of Eq. (2.1) is not needed to obtain stable results, since singular points are correctly identified by the Ward identities. Nevertheless, we have implemented the cut for comparison with program 1. Final results will be given with program 2 without imposing the K T cut, with the two rescue systems activated and demanding a global accuracy of the Ward identities of ε = 10 −3 .
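For concreteness, a minimal sketch (ours) of the jet-measure-like cut of Eqs. (2.1)–(2.2), acting on the transverse momenta, rapidities and azimuthal angles of the three final-state particles; the wrapping of the azimuthal difference into [0, π] is a standard convention we assume here, and the resolution parameter 0.6 is the one of Eq. (2.2).

```python
import numpy as np

def passes_kt_cut(pt, y, phi, kt_cut):
    """Jet-measure-like cut of Eqs. (2.1)-(2.2) on the three final-state particles."""
    measures = list(pt)                                 # the single-particle measures P_T^i
    for i in range(3):
        for j in range(i + 1, 3):
            dphi = abs(phi[i] - phi[j])
            dphi = min(dphi, 2.0 * np.pi - dphi)        # wrap the azimuthal difference
            dr = np.hypot(y[i] - y[j], dphi)
            measures.append(min(pt[i], pt[j]) * dr / 0.6)   # the pairwise measures K_T^{i,j}
    return min(measures) > kt_cut

# toy phase-space point (GeV); with K_T^cut = 3 GeV this configuration is kept
print(passes_kt_cut(pt=[120.0, 80.0, 55.0],
                    y=[0.3, -0.8, 1.1],
                    phi=[0.1, 2.9, -2.0],
                    kt_cut=3.0))
```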
In both programs, we have checked the cancellation of UV and IR divergences in our calculations.
Numerical Results
In this section, we present the integrated cross sections and differential distributions for ZZ + j production at the LHC with a center-of-mass energy of 7, 8 and 14 TeV. We impose the following set of cuts
|η j | < 4.5 , P j T > 50 GeV , (3.1)
to identify massless partons with jets. We set the top quark mass to m t = 171.3 GeV, the bottom quark mass to m b = 4.6 GeV, and the other light quark masses to zero. In accordance with Ref. [3], we set explicitly
M Z = 91.188 GeV , α(M Z ) = 0.00755391226 , sin 2 θ W = 0.222247 . (3.2)
We use a constant Higgs width, and the default Higgs mass is chosen to be M H = 126 GeV. Results for M H = 120, 140, 200 and 400 GeV will also be shown. The corresponding Higgs decay widths Γ H are obtained by HDECAY [26] for M H = 120, 126, 140, 200 and 400 GeV, respectively.
In program 2, we use Ward identities to identify problematic configurations and use a two-layer rescue system. In Fig. 2, we present the cross section for different values of the demanded accuracy ε. We show results without applying any rescue system, as well as results where only the small Gram determinant expansion is included and where both this and quadruple precision for the scalar and tensor integrals have been switched on.
One can see that for an accuracy of the Ward identity test above 10 −2 the double precision results, both with and without the dedicated tensor integrals for small Gram determinants, agree with the quadruple precision ones to better than 1%. This agreement is better than one would naively expect if all rejected points contributed with the same average value to the integrated cross section as the accepted ones. In this case, the cross section corrected for the missing phase-space points can be estimated as σ corr = σ MC /(1 − R failed ), where R failed and σ MC are the relative rate of failed points and the cross section obtained with the Monte-Carlo program, respectively, given in Table 1. With this prescription one obtains corrected cross sections which increasingly exceed the quadruple precision results as we go to smaller values of the gauge test parameter ε. This reflects the fact that the rejected points do not belong to any particular enhanced region of the phase space. On the other hand, it is clearly visible that the rescue system for small Gram determinants does not solve the problem of the instabilities. This is due to the fact that for a given phase-space point for which the Ward identity is not satisfied with the demanded accuracy, there is always some physical permutation that involves not only small Gram determinants but also small Cayley determinants X 0k and X ij , so that the expansion breaks down. Note also that our Ward identity check is very restrictive, since we do not check whether the resulting diagram gives a numerically relevant contribution to the whole event, i.e., a global gauge check demanding the same accuracy would probably result in smaller failure rates, since most of the configurations for which the Ward identity is not satisfied contribute little to the whole event. This can be inferred from the nice convergence to the right result even when allowing deviations of the Ward identities of two orders of magnitude, ε = 10 2 .
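To make the comparison quantitative, the naive correction σ corr = σ MC /(1 − R failed ) can be applied to the double-precision entries of Table 1 and compared with the quadruple-precision reference σ 0 ≈ 320.8 fb quoted in Fig. 2 (our little script below); the corrected values overshoot the reference more and more as ε is tightened, in line with the statement above.

```python
# (failure rate, sigma_MC [fb]) before any rescue system, taken from Table 1
before_step1 = {1e-6: (0.168, 285.0), 1e-4: (0.057, 311.1), 1e-2: (0.017, 318.1)}
sigma_ref = 320.8   # quadruple-precision reference, cf. Fig. 2

for eps, (r_failed, sigma_mc) in sorted(before_step1.items()):
    sigma_corr = sigma_mc / (1.0 - r_failed)
    print(f"eps={eps:.0e}: sigma_corr = {sigma_corr:.1f} fb (reference {sigma_ref} fb)")
```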
Nevertheless, we have not investigated this further and, instead, to solve the problem, we change to quadruple precision, which is now also supported in the latest version of gfortran and has a small impact in terms of CPU time in our program. After this step, the instabilities fall well below the per mille level for an accuracy of the Ward identity gauge test of 10 −6 , despite the fact that quadruple precision is only used for the scalar and tensor integrals. The small fraction of failed points left after this step points to the fact that the loss of precision is really due to the presence of small Gram determinants. Note also that when allowing large deviations of the Ward identities of three orders of magnitude, ε = 10 3 , some of the runs already start to feel the instabilities, which translates into large weights yielding larger cross sections and statistical errors. Without applying the Ward identities, ε → ∞, most of the runs with different random number seeds simply give arbitrarily huge values with correspondingly huge error bars, if not initialized with an optimized grid. In the following, we stick to the set-up with the rescue system switched on for an accuracy of the Ward identity of ε = 10 −3 . The CPU cost is at the 10% level even with the poor success rate of the first step, which we keep for academic reasons. As mentioned in Sec. 2, for program 1 we employ K cut T for simplicity to avoid the problem of numerical instability from vanishing Gram determinants. We show in Fig. 3 the K cut T dependence of the integrated cross section for ZZ + j production via GF with the cuts of Eq. (3.1) at the 14 TeV LHC for a Higgs mass of M H = 140 GeV. For both programs the required run time is about 4 CPU days on a state-of-the-art computer, with the statistical error of the Monte Carlo integration better than 2 per mille. In the region of K cut T below about 3 GeV, all the points agree well with each other within statistical errors, which shows that the K cut T dependence is small. However, for K cut T smaller than 5 GeV, the statistical errors of program 1 are hard to improve further, as expected, and the percentage of runs that gives nonsensical results using different initial seeds increases for decreasing values of the K cut T cut. With increasing K cut T above about 4 GeV, deviations from the small K cut T limit start to show up and become apparent as the phase space is reduced more and more. In the following, we use program 2 without applying any K cut T for the numerical results.
In Tables 2, 3 and 4, we present ZZ + j production rates via GF with on-shell Z bosons at the 7 TeV, 8 TeV and 14 TeV LHC, for M H = 120, 126, 140, 200 and 400 GeV, respectively.
Table 2. Dependence of the ZZ + j production rate via GF (in fb) on M H at the 7 TeV LHC and the interference effects between Higgs and non-Higgs contributions.
Table 3. Same as Table 2, but for the 8 TeV LHC.
Table 4. Same as Table 2, but for the 14 TeV LHC.
The results for Higgs masses different from 126 GeV are included to demonstrate the size of interference effects for these masses. While we know from the experimental searches [29,30] that a Higgs boson with SM coupling strength is not possible there, one with reduced couplings, e.g. the CP-even partner in a Two-Higgs-doublet model, is still viable.
σ(cont + H) gives the integrated cross sections including both Higgs signal (σ(H)) and continuum (σ(cont)) contributions. For M H < ∼ 2M Z , the Higgs contributions are small compared to continuum production, as the intermediate Higgs is far off-shell. Note that we only consider production of on-shell Z bosons, which is the dominant region for the continuum contribution, where we will focus on in the current work. We see that for Higgs masses below the threshold, which includes the actually observed mass, there is a strong destructive interference between the continuum and the Higgs diagrams. This leads to integrated cross sections which are even below the continuum-only result. Compared to the naive sum, the full result is reduced by roughly 20%. In contrast for Higgs masses above the threshold, interference effects are small and reach at most 4%. Fig. 4 shows the dependence on the renormalization and factorization scales (µ = µ R = µ F ) of the gluon-fusion ZZ + j production rates at the 14 TeV LHC. The scale dependence is rather large. Varying the scale by a factor 2 downwards (upwards), the cross sections change by 54.2% (-32.5%), 59.4% (-34.4%) and 54.2% (-32.4%) for continuum diagrams, Higgs diagrams and the full result, respectively. Comparing these numbers with the NLO results for the continuum calculated in Ref. [3], the continuum gluon-fusion result gives an additional 9.7% contribution to the cross section at the central scale µ = M Z (13.2% compared to the LO cross section). This changes to 13.9% (18.0%) and 7.0% (10.0%) for decreasing and increasing the scale by a factor two, respectively. The total scale uncertainty of the cross section increases from +8% (+13%) and -6% (-11%) for the NLO QCD (LO) result to +12.1% (+17.8%) and -8.4% (-13.5%) for NLO+GF (LO+GF), again varying the scale by a factor two down-and upwards, respectively.
In Fig. 5 we show, in the left-hand panel, the differential cross section for the distribution of the transverse momentum of the jet p T,j . Both diagram types lead to increasingly lower cross sections as we go to higher transverse momenta, but the fall-off of the Higgs part is less steep than the continuum part. Compared to the tree-level process [3], the jet distribution here in gluon-fusion is softer due to the dilution effects of the fermion loop. This behavior is consistent with previous findings in Ref. [31] on Higgs+jet production. On the right-hand side of Fig. 5, we present the transverse-momentum distribution of the second Z boson with the smaller p T value. A similar behavior as for the jet-p T is observed, and the cross section for the Higgs-only diagrams stays almost constant over a large part of the shown range. For transverse momenta larger than about 200 GeV, it reaches the same order of magnitude as the continuum contribution and even exceeds the combined result. The origin of this behavior can be understood by looking at the invariant mass of the two Z bosons shown in Fig. 6. For M H = 126 GeV in the left-hand panel, the continuum diagrams show a peak directly above the threshold and a fall-off for larger invariant masses. These are dominated by the loop diagrams with massless quarks of the first two generations running in the loop. For the Higgs diagrams in contrast the top-quark loop dominates. Here the peak of the cross section is just above crossing the 2m t threshold owing to the P-wave suppression at threshold due to the CP-even nature of the Higgs boson. The Z bosons coming from virtual Higgs decays have larger momenta on average, leading to a harder p T spectrum. On the right-hand side we show in comparison the same plot, but now setting the Higgs mass to 400 GeV. The Higgs resonance in the ZZ invariant mass spectrum is clearly visible. While for mass values smaller than that constructive interference between the two diagram types appears, it becomes destructive for larger values. This is in particular visible at the large-mass end of the plot, where both contributions have similar size and the sum of the two is about a factor three smaller. Such a behavior has already been observed and explained in gg → ZZ production [32]. For large momenta the longitudinal polarizations of the Z bosons dominate, which for the continuum couple predominantly to the top-quark loop, as does the Higgs. For colliding two on-shell top quarks, unitarity restoration then immediately requires that for the longitudinal polarizations continuum t-and u-channel diagrams and the schannel Higgs diagram cancel at large invariant masses. This behavior is unchanged when closing the loop and integrating over the loop momentum.
Finally, we display the azimuthal angle separation between the two Z bosons in Fig. 7. Again, we see a significant difference between the continuum and the Higgs diagrams. In both cases, the two Z bosons are preferably emitted back-to-back, but the effect is more pronounced for the Higgs contribution. This is particularly visible in the right-hand panel, where the differential cross sections are individually normalized to the integrated one.
Summary
We have presented a calculation of the loop-induced gluon-fusion process of ZZ + j production at the LHC. Special attention has been paid to the numerical problem of vanishing Gram determinants to obtain stable results. We have studied distributions of the final-state particles. Here, the contribution of the Higgs diagrams develops larger transverse momenta of the final-state particles. This is due to the crossing of the top-quark pair production threshold, which increases the production for ZZ invariant masses above this value. Also, for invariant masses larger than the Higgs mass, destructive interference between Higgs and continuum diagrams appears, leading to a huge reduction of the differential cross section. The effect on the integrated cross section, however, is about 20% for a Higgs mass of 126 GeV and, when compared with the tree-level cross section of qq-induced ZZj production, small.
Additionally, when compared with the known NLO QCD result, the gluon-fusion part can contribute more than 10% of the NLO QCD cross section, especially at small scales µ and jet transverse momenta p T,j . Moreover, the GF results increase the scale uncertainties of the integrated ZZj cross section. For e.g. the benchmark point in Fig. 4, the scale dependence is increased by about a factor 1.5.
Figure 2. Dependence of the ZZ + j cross section on the value of the requested Ward identity accuracy ε and the different steps of the rescue system. The cross sections are normalized to the average cross section σ 0 = 320.8(2) fb of the ε = 10 −6 to 10 −2 quadruple precision runs. Results are generated for the LHC at a center-of-mass energy of 14 TeV and a Higgs mass of 126 GeV. Left: All diagrams taken into account. Right: Only diagrams up to boxes considered.
Figure 3. K cut T dependence of ZZ + j production rates via GF at the 14 TeV LHC. The statistical error bars for both programs are also shown. For cut values smaller than about 3 GeV the effect of the cut is below the integration error.
Figure 4. Scale dependence of the integrated cross sections for ZZ + j production at the 14 TeV LHC.
Figure 5. Differential cross sections for the p T -distribution of the jet (left) and the Z boson with the smaller p T (right) for the 14 TeV LHC using M H = 126 GeV. The individual curves show the contribution of only continuum diagrams (dashed green lines), only Higgs diagrams (dotted blue) and both types including interferences (solid red).
Figure 6. Differential cross section for the invariant mass of the two Z bosons for the 14 TeV LHC using M H = 126 GeV (left) and M H = 400 GeV (right). The individual curves show the contribution of only continuum diagrams (dashed green lines), only Higgs diagrams (dotted blue) and both types including interferences (solid red).
Figure 7. Differential cross section for the azimuthal angle separation between the two Z bosons for the 14 TeV LHC using M H = 126 GeV. The individual curves show the contribution of only continuum diagrams (dashed green lines), only Higgs diagrams (dotted blue) and both types including interferences (solid red) for absolute (left) and relative cross sections (right).
Table 1. Percentage of unstable points depending on the accuracy of the Ward identity test, cf. Eq. (2.3), and cross section results for the given set up. Values are given without applying any rescue system (left columns), after applying an expansion for small Gram determinants (step 1, middle columns), and after calculating the loop integrals for still failing points in quadruple precision (step 2, right columns). Approximately 6 million phase-space points have been calculated for each entry.

Accuracy ε | before step 1: failure rate, σ_MC [fb] | after step 1: failure rate, σ_MC [fb] | after step 2: failure rate, σ_MC [fb]
10^-6 | 16.8 %, 285.0(4) | 11.1 %, 295.3(4) | 0.036 %, 320.4(4)
10^-5 |  9.9 %, 301.8(4) |  5.7 %, 307.8(4) | 7.3·10^-3 %, 321.3(4)
10^-4 |  5.7 %, 311.1(4) |  2.9 %, 315.0(4) | 1.9·10^-3 %, 320.6(4)
10^-3 |  3.1 %, 316.2(4) |  1.5 %, 317.4(4) | 3.9·10^-4 %, 320.6(4)
10^-2 |  1.7 %, 318.1(4) | 0.75 %, 319.6(4) | 1.0·10^-4 %, 321.0(4)
10^-1 | 0.94 %, 319.3(4) | 0.39 %, 320.5(4) | 1.7·10^-5 %, 321.5(4)
10^0  | 0.54 %, 319.6(4) | 0.20 %, 320.5(4) | 0, 320.9(5)
10^1  | 0.30 %, 320.5(4) | 0.10 %, 321.3(4) | 0, 321.0(6)
10^2  | 0.19 %, 321.2(5) | 0.048 %, 320.8(4) | 0, 320.6(5)
10^3  | 0.12 %, 320.8(4) | 0.026 %, 321.9(5) | 0, 322.9(9)
3 Note the naive γ5 scheme [11] is employed in FormCalc. The discussion of its validity in practical one-loop calculations in anomaly-free theories can be found e.g. in Refs. [12,13]. 4 The size of the resulting Fortran library for the helicity amplitude evaluation is about 300 Mb.
. T Binoth, arXiv:0801.1616PoS. 20078hep-phT. Binoth et al., PoS RADCOR2007 (2007) 008 [arXiv:0801.1616 [hep-ph]];
. G Sanguinetti, S Karg, arXiv:0806.1394hep-phG. Sanguinetti and S. Karg, [arXiv:0806.1394 [hep-ph]];
. T Binoth, arXiv:0807.0605hep-phT. Binoth et al., [arXiv:0807.0605 [hep-ph]].
. T Binoth, T Gleisberg, S Karg, N Kauer, G Sanguinetti, arXiv:0911.3181Phys. Lett. B. 683154hep-phT. Binoth, T. Gleisberg, S. Karg, N. Kauer and G. Sanguinetti, Phys. Lett. B 683, 154 (2010) [arXiv:0911.3181 [hep-ph]].
. D A Dicus, C Kao, W W Repko, Phys. Rev. D. 361570D. A. Dicus, C. Kao and W. W. Repko, Phys. Rev. D 36 (1987) 1570;
. E W N Glover, J J Van Der, Bij, Phys. Lett. B. 219488E. W. N. Glover and J. J. van der Bij, Phys. Lett. B 219 (1989) 488;
. E W N Glover, J J Van Der, Bij, Nucl. Phys. B. 321561E. W. N. Glover and J. J. van der Bij, Nucl. Phys. B 321 (1989) 561;
. C Kao, D A Dicus, Phys. Rev. D. 431555C. Kao and D. A. Dicus, Phys. Rev. D 43 (1991) 1555;
. T Matsuura, J J Van Der, Bij, Z. Phys. C. 51259T. Matsuura and J. J. van der Bij, Z. Phys. C 51 (1991) 259;
. C Zecher, T Matsuura, J J Van Der, Bij, arXiv:hep-ph/9404295Z. Phys. C. 64219C. Zecher, T. Matsuura and J. J. van der Bij, Z. Phys. C 64 (1994) 219 [arXiv:hep-ph/9404295];
. K L Adamson, D De Florian, A Signer, arXiv:hep-ph/0202132Phys. Rev. D. 6594041K. L. Adamson, D. de Florian and A. Signer, Phys. Rev. D 65 (2002) 094041 [arXiv:hep-ph/0202132];
. K L Adamson, D De Florian, A Signer, arXiv:hep-ph/0211295Phys. Rev. D. 6734016K. L. Adamson, D. de Florian and A. Signer, Phys. Rev. D 67 (2003) 034016 [arXiv:hep-ph/0211295];
. T Binoth, M Ciccolini, N Kauer, M Kramer, arXiv:hep-ph/0503094JHEP. 050365T. Binoth, M. Ciccolini, N. Kauer and M. Kramer, JHEP 0503 (2005) 065 [arXiv:hep-ph/0503094];
. T Binoth, M Ciccolini, N Kauer, M Kramer, arXiv:hep-ph/0611170JHEP. 061246T. Binoth, M. Ciccolini, N. Kauer and M. Kramer, JHEP 0612 (2006) 046 [arXiv:hep-ph/0611170];
. T Binoth, N Kauer, P Mertsch, arXiv:0807.0024hep-phT. Binoth, N. Kauer and P. Mertsch, arXiv:0807.0024 [hep-ph];
. N Kauer, G Passarino, arXiv:1206.4803JHEP. 1208116hep-phN. Kauer and G. Passarino, JHEP 1208 (2012) 116 [arXiv:1206.4803 [hep-ph]].
. T Melia, K Melnikov, R Rontsch, M Schulze, G Zanderighi, arXiv:1205.6987JHEP. 1208115hep-phT. Melia, K. Melnikov, R. Rontsch, M. Schulze and G. Zanderighi, JHEP 1208, 115 (2012) [arXiv:1205.6987 [hep-ph]].
. P Agrawal, A Shivaji, arXiv:1207.2927arXiv:1208.2593Phys. Rev. D. 8673013hep-ph. hep-phP. Agrawal and A. Shivaji, Phys. Rev. D 86, 073013 (2012) [arXiv:1207.2927 [hep-ph]], and arXiv:1208.2593 [hep-ph].
. R N Cahn, M S Chanowitz, Phys. Rev. Lett. 561327R. N. Cahn and M. S. Chanowitz, Phys. Rev. Lett. 56, 1327 (1986).
. J A M Vermaseren, Comput. Phys. Commun. 8345J. A. M. Vermaseren, Comput. Phys. Commun. 83 (1994) 45;
. D Binosi, L Theussl, arXiv:hep-ph/0309015Comput. Phys. Commun. 16176D. Binosi and L. Theussl, Comput. Phys. Commun. 161 (2004) 76 [arXiv:hep-ph/0309015];
. D Binosi, J Collins, C Kaufhold, L Theussl, arXiv:0811.4113Comput. Phys. Commun. 1801709hep-phD. Binosi, J. Collins, C. Kaufhold and L. Theussl, Comput. Phys. Commun. 180 (2009) 1709 [arXiv:0811.4113 [hep-ph]].
. J Küblbeck, M Böhm, A Denner, Comput. Phys. Commun. 60J. Küblbeck, M. Böhm, and A. Denner, Comput. Phys. Commun. 60 (1990) 165-180;
. T Hahn, arXiv:hep-ph/0012260Comput. Phys. Commun. 140T. Hahn, Comput. Phys. Commun. 140, 418 (2001) [arXiv:hep-ph/0012260].
. T Hahn, M Perez-Victoria, arXiv:hep-ph/9807565Comput. Phys. Commun. 118T. Hahn and M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999) [arXiv:hep-ph/9807565].
. M Chanowitz, M Furman, I Hinchliffe, Nucl. Phys. B. 159225M. Chanowitz, M. Furman, and I. Hinchliffe, Nucl. Phys. B 159, 225 (1979).
. F Jegerlehner, arXiv:hep-th/0005255Eur. Phys. J. C. 18673F. Jegerlehner, Eur. Phys. J. C 18, 673 (2001) [arXiv:hep-th/0005255].
. S Dittmaier, S Kallweit, P Uwer, arXiv:0908.4124Nucl. Phys. B. 82618hep-phS. Dittmaier, S. Kallweit and P. Uwer, Nucl. Phys. B 826, 18 (2010) [arXiv:0908.4124 [hep-ph]].
. T Hahn, M Rauch, hep-ph/0601248Nucl. Phys. Proc. Suppl. 157T. Hahn and M. Rauch, Nucl. Phys. Proc. Suppl. 157, 236 (2006) [hep-ph/0601248].
. A Denner, S Dittmaier, arXiv:hep-ph/0212259Nucl. Phys. B. 658175A. Denner and S. Dittmaier, Nucl. Phys. B 658, 175 (2003) [arXiv:hep-ph/0212259].
. G Passarino, M J G Veltman, Nucl. Phys. B. 160151G. Passarino and M. J. G. Veltman, Nucl. Phys. B 160, 151 (1979).
. A Denner, S Dittmaier, arXiv:hep-ph/0509141Nucl. Phys. B. 73462A. Denner and S. Dittmaier, Nucl. Phys. B 734, 62 (2006) [arXiv:hep-ph/0509141].
. G J Van Oldenborgh, J A M J Vermaseren ; G, Van Oldenborgh, Comput. Phys. Commun. 4666Z. Phys. CG. J. van Oldenborgh and J. A. M. Vermaseren, Z. Phys. C 46, 425 (1990), G. J. van Oldenborgh, Comput. Phys. Commun. 66 (1991).
. R K Ellis, G Zanderighi, arXiv:0712.1851JHEP. 08022hep-phR. K. Ellis and G. Zanderighi, JHEP 0802, 002 (2008) [arXiv:0712.1851 [hep-ph]].
. K Arnold, Comput. Phys. Commun. 1801661K. Arnold et al., Comput. Phys. Commun. 180 (2009) 1661;
. K Arnold, arXiv:1207.4975hep-phK. Arnold et al., arXiv:1207.4975 [hep-ph].
. K Hagiwara, D Zeppenfeld, Nucl. Phys. B. 2741K. Hagiwara and D. Zeppenfeld, Nucl. Phys. B 274 (1986) 1.
. F Campanario, arXiv:1105.0920JHEP. 111070hep-phF. Campanario, JHEP 1110 (2011) 070 [arXiv:1105.0920 [hep-ph]].
. R Mertig, M Bohm, A Denner, Comput. Phys. Commun. 64345R. Mertig, M. Bohm and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
. J Hakkinen, H Kharraziha, hep-ph/9603229Comput. Phys. Commun. 100311J. Hakkinen and H. Kharraziha, Comput. Phys. Commun. 100 (1997) 311 [hep-ph/9603229].
. A Djouadi, J Kalinowski, M Spira, arXiv:hep-ph/9704448Comput. Phys. Commun. 10856A. Djouadi, J. Kalinowski and M. Spira, Comput. Phys. Commun. 108 (1998) 56 [arXiv:hep-ph/9704448];
A Djouadi, J Kalinowski, M Mühlleitner, M Spira, J M Butterworth, arXiv:1003.1643Proceedings Les Houches 2009 workshop on TeV colliders. Les Houches 2009 workshop on TeV collidershep-phA. Djouadi, J. Kalinowski, M. Mühlleitner and M. Spira, in J. M. Butterworth et al., Proceedings Les Houches 2009 workshop on TeV colliders, arXiv:1003.1643 [hep-ph].
. J Pumplin, D R Stump, J Huston, H L Lai, P M Nadolsky, W K Tung, arXiv:hep-ph/0201195JHEP. 020712J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. M. Nadolsky and W. K. Tung, JHEP 0207 (2002) 012 [arXiv:hep-ph/0201195].
. M R Whalley, D Bourilkov, R C Group, arXiv:hep-ph/0508110M. R. Whalley, D. Bourilkov and R. C. Group, arXiv:hep-ph/0508110.
. G Aad, ATLAS CollaborationarXiv:1207.7214Phys. Lett. B. 7161hep-exG. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716 (2012) 1 [arXiv:1207.7214 [hep-ex]].
. S Chatrchyan, CMS CollaborationarXiv:1207.7235Phys. Lett. B. 71630hep-exS. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716 (2012) 30 [arXiv:1207.7235 [hep-ex]].
. R K Ellis, I Hinchliffe, M Soldate, J J Van Der, Bij, Nucl. Phys. B. 297221R. K. Ellis, I. Hinchliffe, M. Soldate and J. J. van der Bij, Nucl. Phys. B 297 (1988) 221;
. U Langenegger, M Spira, A Starodumov, P Trueb, hep-ph/0604156JHEP. 060635U. Langenegger, M. Spira, A. Starodumov and P. Trueb, JHEP 0606 (2006) 035 [hep-ph/0604156];
. Q Li, M Spira, J Gao, C S Li, arXiv:1011.4484Phys. Rev. D. 8394018hep-phQ. Li, M. Spira, J. Gao and C. S. Li, Phys. Rev. D 83, 094018 (2011) [arXiv:1011.4484 [hep-ph]].
. E W N Glover, J J Van Der, Bij, Phys. Lett. B. 219561Nucl. Phys. BE. W. N. Glover and J. J. van der Bij, Phys. Lett. B 219, 488 (1989) and Nucl. Phys. B 321, 561 (1989).
| []
|
[
"TAFE-Net: Task-Aware Feature Embeddings for Efficient Learning and Inference",
"TAFE-Net: Task-Aware Feature Embeddings for Efficient Learning and Inference"
]
| [
"Xin Wang \nEECS Department\nBerkeleyUC\n",
"Fisher Yu \nEECS Department\nBerkeleyUC\n",
"Ruth Wang \nEECS Department\nBerkeleyUC\n",
"Trevor Darrell \nEECS Department\nBerkeleyUC\n",
"Joseph E Gonzalez \nEECS Department\nBerkeleyUC\n"
]
| [
"EECS Department\nBerkeleyUC",
"EECS Department\nBerkeleyUC",
"EECS Department\nBerkeleyUC",
"EECS Department\nBerkeleyUC",
"EECS Department\nBerkeleyUC"
]
| []
| Learning good feature embeddings for images often requires substantial training data. As a consequence, in settings where training data is limited (e.g., few-shot and zeroshot learning), we are typically forced to use a general feature embedding across prediction tasks. Ideally, we would like to construct feature embeddings that are tuned for the given task and even input image. In this work, we propose Task-Aware Feature Embedding Networks (TAFE-Nets) to learn how to adapt the image representation to a new task in a meta learning fashion. Our network is composed of a meta learner and a prediction network, where the meta learner generates parameters for the feature layers in the prediction network based on a task input so that the feature embedding can be accurately adjusted for that task. We show that our TAFE-Net is highly effective in generalizing to new tasks or concepts and offers efficient prediction with low computational cost. We demonstrate the general applicability of TAFE-Net in several tasks including zeroshot/few-shot learning and dynamic efficient prediction. Our networks exceed or match the state-of-the-art on most tasks. In particular, our approach improves the prediction accuracy of unseen attribute-object pairs by 4 to 15 points on the challenging visual attributes composition task. | null | [
"https://arxiv.org/pdf/1806.01531v2.pdf"
]
| 54,212,019 | 1806.01531 | c838c06717abd055f81768f54dd772b4f50259c1 |
TAFE-Net: Task-Aware Feature Embeddings for Efficient Learning and Inference
Xin Wang
EECS Department
BerkeleyUC
Fisher Yu
EECS Department
BerkeleyUC
Ruth Wang
EECS Department
BerkeleyUC
Trevor Darrell
EECS Department
BerkeleyUC
Joseph E Gonzalez
EECS Department
BerkeleyUC
TAFE-Net: Task-Aware Feature Embeddings for Efficient Learning and Inference
Learning good feature embeddings for images often requires substantial training data. As a consequence, in settings where training data is limited (e.g., few-shot and zeroshot learning), we are typically forced to use a general feature embedding across prediction tasks. Ideally, we would like to construct feature embeddings that are tuned for the given task and even input image. In this work, we propose Task-Aware Feature Embedding Networks (TAFE-Nets) to learn how to adapt the image representation to a new task in a meta learning fashion. Our network is composed of a meta learner and a prediction network, where the meta learner generates parameters for the feature layers in the prediction network based on a task input so that the feature embedding can be accurately adjusted for that task. We show that our TAFE-Net is highly effective in generalizing to new tasks or concepts and offers efficient prediction with low computational cost. We demonstrate the general applicability of TAFE-Net in several tasks including zeroshot/few-shot learning and dynamic efficient prediction. Our networks exceed or match the state-of-the-art on most tasks. In particular, our approach improves the prediction accuracy of unseen attribute-object pairs by 4 to 15 points on the challenging visual attributes composition task.
Introduction
Feature embeddings are central to computer vision. By mapping images into semantically rich vector spaces, feature embeddings extract key information that can be used for a wide range of prediction tasks. However, learning good feature embeddings typically requires substantial amounts of training data and computation. As a consequence, a common practice [7,12,49] is to re-use existing feature embeddings from convolutional networks (e.g., ResNet50 [18]) trained on large-scale labeled training datasets (e.g., ImageNet [35]); to achieve maximum accuracy, these general feature embeddings are often fine-tuned [12,7,49] or transformed [20] using additional task-specific training data.
Figure 1: A cartoon illustration of Task-aware Feature Embedding. In this case there are two binary prediction tasks (hasCat and hasDog). Task-aware feature embeddings mean that the same image can have different embeddings for each task. As a consequence, we can adopt a single task-independent classification boundary for all tasks.
Figure 2: TAFE-Net architecture design. TAFE-Net has a task-aware meta learner that generates the parameters of the feature layers within a prediction network for classification. The generated weights are factorized into task-specific weights in low dimension and shared weights across all tasks.
In many settings, the training data are insufficient to learn or even adapt general feature embeddings to a given task. For example, in few-shot and zero-shot prediction tasks, the scarcity of training data forces the use of generic feature embeddings. As a consequence, in these situations much of the research instead focuses on the design of joint task and data embeddings [4,11,51] that can be generalized to unseen tasks or tasks with fewer examples. Some have proposed treating the task embedding as linear separators and learning to generate them for new tasks [41,29]. Others have proposed hallucinating additional data points [47,17,44]. However, in all cases, a common image embedding is shared across tasks. As a consequence, the common image embedding may be out of the domain or sub-optimal for any individual prediction task. This problem is exacerbated in settings where the number and diversity of training tasks is relatively small. In this work we explore a meta-learning approach to constructing task-aware feature embeddings (TAFE). We introduce task-aware feature embedding networks (TAFE-Nets 1 ) composed of a task-aware meta learner that generates the
parameters for feature embedding layers within a standard prediction network. As a consequence, we are able to learn a simple task-independent linear boundary that can separate the positive and negative examples through the use of the task-aware feature embeddings. To address the challenge of meta-learning with limited numbers of training tasks, we couple the task embedding to the task-aware feature embeddings with the addition of a novel embedding loss. The resulting coupling improves generalization across tasks by jointly clustering both images and tasks. Directly training the meta learner to generate the weights for the feature embedding network can be challenging [3] since the prediction task requires predicting a large number of weights from a low-dimensional task embedding with only a few training tasks. We therefore introduce a novel method to factorize the generated weights into a small set of predicted weights and a larger set of shared weights.
The proposed architecture exceeds the state of the art in zero-shot learning on four standard benchmarks without the need for additional data generation. On the newly proposed visual-attribute composition task, which is a more challenging zero-shot learning task, we are able to achieve a 4 to 15 point improvement over state-of-the-art discriminative models based on joint embedding. Our methods also achieve competitive results on the challenging few-shot learning benchmark based on ImageNet. Furthermore, the proposed model can be used for efficient inference on a per-input basis with little modification and exceeds the performance of prior models that focus on model sparsification.
Related Work
Weight generation. Several efforts [3,16,6] have studied the idea of adopting one meta network to generate the weights of another network. Our task-aware meta learner serves a similar role for weight generation but in a more structured and constrained manner. We study different mechanisms to decompose the weights of the prediction network so that it can generate weights for multiple layers at once, whereas Bertinetto et al. [3] focus on generating weights for a single layer and Denil et al. [6] can generate up to 95% of the parameters of a single layer due to the quadratic size of the output space. As a byproduct of our design, we also study adding sparse constraints to the generated weights for efficient inference [28,43,45], and the resulting model outperforms the state-of-the-art dynamic channel pruning technique [28].
Joint embedding learning. Our architecture leverages common task and image embeddings to improve the task efficiency when learning the task embedding. This approach is commonly used in work on zero-and few-shot learning [25,48,41,51,4,11]. A metric-learning based objective [25,48,41] is often used to jointly regularize the task embeddings and image embeddings for training the classification network while our embedding objective is used to address the data scarcity issue of the weight generation network.
Meta learning. Our work resides in the meta learning regime [9,10,36,15] of learning the structure among previously seen tasks or concepts such that this learned prior can be combined with small amounts of new data for better generalization [9]. Our task-aware meta learner generalizes to new tasks with limited training data thanks to our novel embedding loss.
Feature modulation. In the domain of visual question answering, previous works [34,5] explore the use of question embedding network to modulate the features of the primary convolutional network. Our factorized weight generation scheme for convolutional layers can also be viewed as channel-wise feature modulation.
Task-Aware Feature Embedding
The choice of the featurization of data depends on the prediction tasks [7,12]. For example, identifying whether images are underwater scenes or contain a particular digit relies on different representations (e.g., color or texture).
In many cases there is rich semantic meta-data describing the concept associated with a prediction task. For example, we might have text describing the objects of interest or the semantic meaning of the categories. Even the input image can be used to help characterize the prediction task. Encoding the prediction task as part of the learning problem can aid in generalization. Xian et al. [46] and Frome et al. [11] demonstrate that modeling the semantic relationship between tasks can help the model generalize to new tasks.
In this work, we explore a mechanism that can better incorporate the task specifications so that the model can learn and predict more efficiently. More specifically, we are interested in the design of models that can leverage meta-data describing the task to augment the data featurization.
To address this problem, we propose Task-aware Feature Embedding networks (TAFE-Nets) that generate feature embeddings conditioned on the task specification. We adopt a meta learning approach to construct TAFE-Nets by generating task-specific parameters for the prediction network and introduce two key innovations in weight factorization and embedding loss design to train TAFE-Nets effectively.
TAFE-Net Model
We start with the TAFE-Net design. There are two subnetworks in TAFE-Net as shown in Figure 2: a task-aware meta learner G and a prediction network F. The task-aware meta learner takes a task specification t (e.g., a word2vec [31] encoding or an example image) and generates the weights of layers in the prediction network. The prediction network F:
y = F(x; \theta = \{G(t), \theta_f\}) ,    (1)
takes images or image features x as inputs and predicts the class label y ∈ {1...N}. The prediction network F is parameterized by θ, which is composed of parameters generated by G for the task-aware feature embeddings and shared parameters θ_f that are shared across tasks (e.g., generic feature extraction). The task-aware feature embedding (TAFE) is the layer output of F before the final classification layer. The task-aware meta learner G, parameterized by η, is composed of an embedding network T(t) that generates a latent task embedding e_t and a set of weight generators g_i, i = 1...K, that generate parameters for the K feature layers in F conditioned on e_t.
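The following is a minimal sketch (not the authors' released implementation) of this two-subnetwork structure; the layer sizes and the simple output-side feature modulation used here are illustrative, and the exact factorization of the generated weights is given in Eqs. (3)-(4) below.

```python
# Minimal sketch of TAFE-Net's structure: a meta learner G emitting per-layer
# task-specific parameters, and a prediction network F consuming them.
import torch
import torch.nn as nn

class TaskAwareMetaLearner(nn.Module):
    def __init__(self, task_dim, embed_dim, layer_widths):
        super().__init__()
        # T(t): task embedding network producing the latent task embedding e_t
        self.task_embed = nn.Sequential(
            nn.Linear(task_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU())
        # g_i: one small generator per feature layer, emitting low-dimensional
        # task-specific weights (the shared weights live in the prediction network)
        self.generators = nn.ModuleList(
            [nn.Linear(embed_dim, w) for w in layer_widths])

    def forward(self, t):
        e_t = self.task_embed(t)
        return [g(e_t) for g in self.generators]

class PredictionNetwork(nn.Module):
    def __init__(self, in_dim, layer_widths):
        super().__init__()
        dims = [in_dim] + list(layer_widths)
        # shared (task-independent) weights of the feature layers
        self.shared = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(layer_widths))])
        self.classifier = nn.Linear(layer_widths[-1], 1)  # task-independent boundary

    def forward(self, x, task_params):
        h = x
        for layer, w_ts in zip(self.shared, task_params):
            # shared transform modulated by the generated task-specific weights
            h = torch.relu(layer(h) * w_ts)
        return self.classifier(h)  # h is the task-aware feature embedding (TAFE)

meta = TaskAwareMetaLearner(task_dim=300, embed_dim=256, layer_widths=[512, 512])
pred = PredictionNetwork(in_dim=2048, layer_widths=[512, 512])
scores = pred(torch.randn(8, 2048), meta(torch.randn(300)))  # 8 images, one task
```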
Weight Generation via Factorization
We now present details of the weight generation scheme for the feature layers in F. The feature layers that produce the task-aware feature embeddings (TAFE) can either be convolutional layers or fully-connected (FC) layers. As noted by Bertinetto et al. [3], the number of weights that must be estimated by the meta-learner is often much larger than the size of the task specification and can therefore be difficult to learn from a small number of example tasks. To ensure the meta learner generalizes effectively, we propose a weight factorization scheme along the output dimension of each FC layer and the output channel dimension of each convolutional layer. This is distinct from the low-rank decomposition used in prior meta-learning works [3]. The channel-wise factorization builds on the intuition that channels of a convolutional layer may have different or even orthogonal functionality.
Weight factorization for convolutions. Given an input tensor x_i ∈ R^{w×h×c_in} for the i-th feature layer in F, whose weight is W_i ∈ R^{k×k×c_in×c_out} (k is the filter support size, and c_in and c_out are the numbers of input and output channels) and whose bias is b_i ∈ R^{c_out}, the output x_{i+1} ∈ R^{w'×h'×c_out} of the convolutional layer is given by
x_{i+1} = W_i \ast x_i + b_i ,    (2)
where * denotes convolution. Without loss of generality, we remove the bias term of the convolutional layer as it is often followed by batch normalization [22]. W_i = g_i(t) is the output of the i-th weight generator in G. We decompose the weight W_i into
W_i = W_i^{sr} \ast_{c_{out}} W_i^{ts} ,    (3)
where W_i^{sr} ∈ R^{k×k×c_in×c_out} is a shared parameter aggregating all tasks {t_1, ..., t_N} and W_i^{ts} ∈ R^{1×1×c_out} is a task-specific parameter depending on the current task input. *_{c_out} denotes the grouped convolution along the output channel dimension, i.e., each channel of x *_{c_out} y is simply the convolution of the corresponding channels in x and y.
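As a rough illustration (not the authors' code), the grouped convolution in Eq. (3) with a 1×1 task-specific kernel per output channel is equivalent to running the shared convolution once and rescaling each output channel by the corresponding generated value; the tensor shapes below are placeholders.

```python
# Sketch of Eq. (3) for one convolutional feature layer.
import torch
import torch.nn.functional as F

def tafe_conv2d(x, w_shared, w_task):
    # x:        (B, c_in, H, W) input features
    # w_shared: (c_out, c_in, k, k) shared weights W_i^{sr}, learned once for all tasks
    # w_task:   (c_out,) task-specific weights W_i^{ts} produced by the weight generator
    y = F.conv2d(x, w_shared, padding=w_shared.shape[-1] // 2)
    return y * w_task.view(1, -1, 1, 1)   # channel-wise modulation of the outputs

x = torch.randn(2, 16, 8, 8)
w_shared = torch.randn(32, 16, 3, 3)
w_task = torch.relu(torch.randn(32))      # ReLU keeps the generated weights non-negative
out = tafe_conv2d(x, w_shared, w_task)    # shape (2, 32, 8, 8)
```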
Weight factorization for FCs. Similar to the factorization of the convolution weights, the FC layer weights W_i ∈ R^{m×n} can be decomposed into
W_i = W_i^{sr} \cdot \mathrm{diag}(W_i^{ts}) ,    (4)
where W_i^{sr} ∈ R^{m×n} are the shared parameters for all tasks and W_i^{ts} ∈ R^n is the task-specific parameter. With such a factorization, the weight generators only need to generate the low-dimensional task-specific parameters for each task, while one set of high-dimensional parameters is learned and shared across all tasks.
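A small sketch of Eq. (4), again with illustrative shapes: because the task-specific part is a diagonal matrix, applying the factorized weight amounts to rescaling the layer input with the generated vector before the shared transform.

```python
# Sketch of Eq. (4) for a fully-connected feature layer.
import torch

def tafe_fc(x, W_shared, w_task):
    # x:        (B, n) input features
    # W_shared: (m, n) shared weights W_i^{sr}
    # w_task:   (n,)   task-specific weights W_i^{ts}
    return (x * w_task) @ W_shared.t()   # equals x @ (W_shared @ diag(w_task)).T

x = torch.randn(4, 2048)
W_shared = torch.randn(2048, 2048)
w_task = torch.relu(torch.randn(2048))
tafe = tafe_fc(x, W_shared, w_task)      # task-aware feature embedding for this layer
```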
Embedding Loss for Meta Learner
The number of task specifications used for training the task-aware meta learner is usually much smaller than the number of images available for training the prediction network. This data scarcity issue may lead to a corrupted meta learner. We therefore propose to add a secondary embedding loss L_emb for the meta learner alongside the standard classification loss L_cls used for the prediction network. The overall objective is then defined as
\min_{\theta,\eta} L = \min_{\theta,\eta} \left( L_{cls} + \beta \cdot L_{emb} \right) ,    (5)
where β is a hyper-parameter that balances the two terms. We set β to 0.1 in our experiments if not specified otherwise.
The idea is to project the latent task embedding T(t) into a joint embedding space with the task-aware feature embedding (TAFE). We adopt a metric learning approach: for positive inputs of a given task, the corresponding TAFE should be close to the task embedding, while for negative inputs the corresponding TAFE should be far from it, as illustrated in Figure 1.
We use the hinged cosine similarity as the distance measure, i.e., D(p, q) = max(cosine_sim(p, q), 0), and a regression loss defined as
L_{emb} = \frac{1}{TS} \sum_{i=1}^{T} \sum_{j=1}^{S} \left\| D(\mathrm{TAFE}(x_j; t_i), T(t_i)) - q_{i,j} \right\|_2^2 ,    (6)
where x_j is the j-th sample in the dataset, t_i is the i-th task, and q_{i,j} is 1 if F(x_j; t_i) predicts positive and 0 otherwise. T and S are the total numbers of task descriptions and input samples, respectively.
We find in experiments this additional supervision helps training the meta learner especially under the case where the number of training tasks is extremely limited.
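A minimal sketch of this loss for a single task, assuming the task-aware feature embeddings and the latent task embedding have already been computed (tensor names and sizes are illustrative); averaging over several tasks is done in the same way.

```python
# Illustrative computation of the embedding loss in Eq. (6) for one task.
import torch
import torch.nn.functional as F

def embedding_loss(tafe, e_task, q):
    # tafe:   (S, d) task-aware feature embeddings TAFE(x_j; t_i)
    # e_task: (d,)   latent task embedding T(t_i)
    # q:      (S,)   1.0 for samples predicted positive for the task, else 0.0
    d = F.cosine_similarity(tafe, e_task.unsqueeze(0), dim=1).clamp(min=0.0)  # hinged
    return ((d - q) ** 2).mean()

loss = embedding_loss(torch.randn(8, 128), torch.randn(128),
                      torch.randint(0, 2, (8,)).float())
```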
Shallow Image Embedding as a Task Desc.
An interesting usage of task aware feature embeddings is to adapt the feature embedding to the input image itself. Moreover, the image that we are trying to classify provides some hints as to what is likely a good embedding for classifying that image. For example, having a hint as to the likely prediction or a high-level category is a form of task specifications that can inform the choice of image embedding.
In this work we also explore the use of task aware feature embeddings as a mechanism to construct more compact feature embeddings for traditional classification tasks. We find that with simple sparse constraints on the weight generator, our model can be used to construct a unique thin embedding network for each input image with only minimal loss in prediction accuracy.
In this setting, the task embedding network T takes the image as the input and outputs a probabilistic assignment to the pre-defined tasks. That is to say, the embedding loss of the meta learner is defined using the cross-entropy loss with sparse constraints over the output of the weight generators (K is the number of feature layers in the prediction network):
L_{emb} = \frac{1}{S} \sum_{j=1}^{S} \left[ \mathrm{CE}(T(x_j), y_j) + \frac{1}{K} \sum_{i=1}^{K} \left\| g_i(T(x_j)) \right\|_1 \right] .    (7)
This soft assignment can be regarded as a task description for the image at hand. The computation for feature embedding can be reduced with the information from the meta network, since we can use the task description to decide which channels in each layer are related to the tasks of interest and which are not. To remove the irrelevant channels, we pass each of the generated parameters through a ReLU non-linearity. If a parameter is 0, we don't have to compute the activation map for that channel because all the parameters can be generated before any computation in feature network. This is similar to model cascading [42] if the meta and feature networks are both treated as making predictions. But in our case, the shallow network also generates weights for the more computation intensive network.
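A toy sketch of the resulting per-input gating (illustrative, not the released code): the generated parameters pass through a ReLU, and only output channels whose parameter is non-zero need to be computed for that input.

```python
# Per-input channel gating for efficient inference.
import torch

def active_channels(g_out):
    # g_out: (c_out,) raw generator output for one convolutional layer and one input
    gates = torch.relu(g_out)            # non-negative; the L1 term encourages sparsity
    keep = gates.nonzero(as_tuple=True)[0]
    return gates, keep                   # compute only the kept output channels

gates, keep = active_channels(torch.randn(64))
print(f"{keep.numel()} of 64 channels need to be computed for this input")
```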
Model Configurations
We now describe the network configurations. In our work, we consider three types of task specifications: a semantic representation in vector format (e.g., word2vec [31] of the class labels), image features extracted from pre-trained models (with the spatial dimension collapsed), and raw images. For the first two cases, detailed in Section 4, the task embedding network T is a three-layer FC network with a hidden unit size of 2048, except for the aPY dataset [8] where we choose T as a 2-layer FC network with the same hidden size to avoid overfitting. The weight generator g_i is a single FC layer whose output dimension is the same as the output dimension of the corresponding feature layer in F. To enable efficient inference, we just add a ReLU to the output of g_i. In the case where raw images are used as inputs, T is configured as a 5-layer convolutional network with 3 × 3 kernels following the feature down-sampling schedule of ResNets [18].
For the prediction network, the TAFE is generated through a 3-layer FC network with the hidden size of 2048 if image features are extracted from pre-trained models and used as the input of the prediction network (on the aPY dataset, we use a 1024-1024-1024-2048 FC network to avoid overfitting). If raw images are used as the input, we will modify the backbone prediction network (e.g. VGG-16 [37] and ResNet-50 [18]) to generate the TAFE detailed in Section 4.4.
Experiments
We first conduct experiments using five zero-shot learning benchmark datasets: SUN, CUB, AWA1, AWA2 and aPY, following the generalized zero-shot learning setting proposed by Xian et al. [46]. We also provide results on the more challenging visual attributes composition task proposed by Misra et al. [32] on both the MITStates [23] and Stanford-VRD [29] datasets. Our model surpasses the state-of-the-art methods by a significant margin. We also evaluate the model on the few-shot learning and dynamic model sparsification tasks on the CIFAR and ImageNet benchmarks, and our model matches or exceeds the prior works.
Generalized Zero-shot Learning
The GZSL setting proposed by Xian et al. [46] is more realistic than conventional zero-shot learning: it involves classifying test examples from both seen and unseen classes, with no prior distinction between them. We compare our model with two lines of prior work in our experiments: (1) discriminative baselines [50,48], which focus on mapping the images into a rich semantic embedding space, and (2) generative models [47,40], which tackle the data scarcity problem by generating synthetic images for the unseen classes using a GAN [13,52] based approach. Our work falls into the first category of research, but we demonstrate in Table 2 that our approach is still competitive with the generative models that use additional training data.
Datasets and evaluation metrics.
Following prior works [51,11,1,2], we conduct our experiments on 5 benchmark datasets (Table 1). We follow the evaluation metrics proposed by Xian et al. [46] and report the per-class top-1 accuracy of the unseen classes (acc_u) and seen classes (acc_s), together with the harmonic mean H = 2 × (acc_u × acc_s)/(acc_u + acc_s).
Training details. We set the batch size to 32 and use Adam [24] as the optimizer with an initial learning rate of 10⁻⁴ for the prediction network and weight generators, and 10⁻⁵ for the task embedding network. We reduce the learning rate by 10× at epochs 30 and 45, and train the network for 60 epochs. For AWA1, we train the network for 10 epochs and reduce the learning rate by 10× at epoch 5.
Figure 3: Task-aware Image Feature Embedding projected into two dimensions using t-SNE [39] for two tasks (Zebra and Donkey). Note that changing the task produces different embeddings for the same data.
Quantitative results. We first report the performance of our TAFE-Net w/o the proposed embedding loss in Table 2 compared to the non-generative models. Our best performing models surpass the prior models by a significant margin with an improvement of roughly 16 points on AWA1 and 17 points on aPY. For the more challenging fine-grained SUN and
CUB datasets, we are able to improve the results by 7 and 2 points. Compared to the more recent approaches [50,48], our models have higher accuracy in the unseen classes which leads to the boost in the harmonic mean.
As an alternative approach to the GZSL problem, several works [47,40] propose to generate synthetic images of the unseen classes conditioned on the class attributes with variants of GAN models [14]. We show the comparison of our model with these generative models in Table 2. Our model matches or outperforms the baseline generative models on both the CUB and AWA1 datasets without using additional training data. This indicates that better embedding learning may be more beneficial than synthetic data generation. However, task-aware embeddings and data generation are complementary and could be combined to further improve accuracy.
Embedding loss ablation. We provide numbers for our model w/o the embedding loss in Table 2. In general, models with the embedding loss have stronger performance than those without the embedding loss except for the SUN dataset whose number of categories is about 3 − 22× larger than the other datasets. This observation matches our assumption that the additional supervision on the joint embedding better addresses the data scarcity issue (i.e. fewer class descriptions than the visual inputs) of training the controller model.
Embedding visualization. In Figure 3, we visualize the task-aware feature embeddings of images from the aPY dataset. As we can see, image embeddings are projected into different clusters conditioned on the task specification.
Unseen Visual-attribute Composition
Besides the standard zero-shot learning benchmarks, we evaluate our model in the visual-attribute composition task proposed by Misra et al. [32]. The goal is to compose a set of visual concept primitives like attributes and objects (e.g. large elephant, old building, etc.) to obtain new visual concepts for a given image. We see this task as a more challenging "zero-shot" learning task which requires the model not only to predict unseen visual concept compositions but model the contextuality of the concepts.
Datasets and evaluation metrics. We conduct the experiments on two datasets: MITStates [23] and the modified StanfordVRD [29]. The setup is the same as Misra et al. [32]. Each image in the MITStates dataset is assigned a pair of (attribute, object) as its label. The model is trained on 34K images with 1,292 label pairs and tested on 19K images with 700 unseen pairs. The second dataset is constructed based on the bounding boxes annotations of the StanfordVRD dataset. Each sample has an SPO (subject, predicate, object) tuple as the ground truth label. The dataset has 7,701 SPO tuples and 1,029 of them are seen only in the test split. We evaluate our models only on examples with unseen labels. We extract the image features with pre-trained models on ImageNet. We use ResNet-101 [18] as our main feature extractor and also test features extracted with ResNet-18 [18], VGG-16/19 [37] for ablation. For the task specifications, we concatenate the word embeddings of the attributes and objects with word2vec [31] trained with GoogleNews. We also consider one-hot encoding for the task id in the ablation.
For evaluation metrics, we report the mean Average Precision (mAP) of images with unseen labels in the test set together with the top-k accuracy where k = 1, 2, 3. We follow the same training schedule as that used in GZSL.
Quantitative results. We compare our model with several baselines provided by Misra et al. [32] and summarize the results in Table 3 for both the MITStates and StanfordVRD datasets. Our model surpasses the state-of-the-art models with an improvement of more than 6 points in mAP and 4 to 15 points in top-k accuracy. Nagarajan and Grauman [33] recently proposed an embedding learning framework for visual-attribute composition. They report a top-1 accuracy of 12.0% on the MITStates dataset with ResNet-18 features. For a fair comparison, we use the same ResNet-18 features and obtain a top-1 accuracy of 15.1%.
Ablation on the feature extractor and task specification. We consider different feature extractors (ResNet-101, VGG-16 and VGG-19) and task encodings (word2vec and one-hot encoding) for ablation and summarize the results in Table 4. The average precision difference between the different feature extractors is minimal (within 0.1%) and the largest gap in Top-3 accuracy is within 2%. This indicates that TAFE-Net is robust in transforming the generic features into task-aware feature embeddings. For the task encoding, the one-hot encoding is comparable to the word2vec encoding and even stronger when using VGG-19 features. This shows that the task transformer network T is expressive enough to extract rich semantic information simply from the task ids.
Visualization. In Figure 4, we show the top retrievals of unseen attribute-object pairs from the MITStates dataset. Our model can learn to compose new concepts from the existing attributes and objects while respecting their context.
Low-shot Image Classification
Our model naturally fits the few-shot learning setting where one or few images of a certain category are used as the task descriptions. Unlike prior work on meta-learning which experiments with few classes and low resolution images [41,38,10], we evaluate our model on the challenging benchmark proposed by Hariharan and Girshick [17]. The benchmark is based on the ImageNet images and contains hundreds of classes that are divided into base classes and novel classes. At inference time, the model is provided with one or a few examples from the novel classes and hundreds of examples from the base classes. The goal is to obtain high accuracy on the novel classes without sacrificing the performance on the base classes.
Baselines. In our experiments, the baselines we consider are the state-of-the-art meta learning models Matching Network (MN) [41] and Prototypical Network (PN) [38]. We also compare with the logistic regression (LogReg) baseline provided by Hariharan and Girshick [17]. Another line of research [44,17] for few-shot learning is to combine the meta-learner with a "hallucinator" to generate additional training data. We regard these works as complementary approaches to our meta-learning model.
Experiment details. We follow the prior works [17,44] and run five trials for each setting of n (the number of examples per novel class, n = 1 and 2 in our experiments) on the five different data splits, and report the average top-5 accuracy of both the novel and all classes. We use the features trained with ResNet-10 using the SGM loss provided by Hariharan and Girshick [17] as inputs. For training, we sample 100 classes in each iteration and use SGD with momentum of 0.9 as the optimizer. The initial learning rate is set to 0.1 except for the task embedding network (set to 0.01), and the learning rate is reduced by 10× every 8k iterations. The model is trained for 30k iterations in total. Other hyper-parameters are set the same as in Hariharan and Girshick [17] if not mentioned.
Quantitative results. As shown in Table 5, our model is on par with the state-of-the-art meta learning models on the novel classes while outperforming them on all categories. Attaching a "hallucinator" to the meta learning model improves performance in general. Our model can easily be combined with a hallucinator and we leave the detailed study as future work due to time constraints.
Efficient Inference in Data-Rich Classification
As noted in Section 3.4, the task embedding network and the feature embedding network can share the same input image. In this case, the shallow embedding of the input image can be treated as a "task" and, conditioned on it, the weight generator network generates weights that are customized for the given input. By adding sparse constraints to the generated weights, our model can be used to selectively sparsify the channels of the convolutional layers of the prediction network on a per-input basis. This is in line with prior works [28,30,19] on channel pruning, and we demonstrate that with a slight change in the weight generator (adding sparsity constraints), our model is competitive with or beats existing work on model sparsification.
Baselines. The closest baseline is the runtime neural pruning (RNP) proposed by Lin et al. [28], which dynamically prunes the channels of the subsequent convolutional layers based on the previous layer outputs. We compare with RNP on the CIFAR-100 [26] dataset using VGG-16 as the backbone following the setting as Lin et al. [28]. We also consider state-of-the-art static channel pruning works [19,30,21,27], with ResNet-50 on the ImageNet-2012 [35] dataset.
Sparse weight generation for residual blocks. For the bottleneck residual block used in ResNet-50 [18] which has 3 convolutional layers with the kernel filter size of 1, 3 and 1, we only generate sparse weights for the middle convolutional layer which is the computation bottleneck and share the weights of the first and last layer across inputs.
Training details. For the CIFAR dataset, we start training with the learning rate of 0.1 for ResNet and 0.01 for VGG16, which is reduced by 10× at the 150 th and 250 th epochs with total 350 epochs for the baselines and 270 epochs for TAFE-Net joint optimization stage and another 80 epochs for finetuning with fixed task embedding networks. For ImageNet, we train the network with the initial learning rate of 0.1 for 100 epochs and reduce it by 10× every 30 epochs.
Quantitative results. We summarize our comparison with the baselines in Figure 5 and Table 6. Our approach outperforms the previous baselines to achieve higher accuracy with lower runtime computation measured in floating point operations per second (FLOPs).
Conclusion
In this work, we explored a meta learning based approach to generate task aware feature embeddings for settings with little or no training data. We proposed TAFE-Net, a network that generates task aware feature embeddings (TAFE) conditioned on the given task descriptions. TAFE-Net is composed of a task-aware meta learner that generates weights for the feature embedding layers in a standard prediction network. To address the challenges in training the meta learner, we introduced two key innovations: (1) adding an additional embedding loss to improve the generalization of the meta learner; (2) a novel weight factorization scheme to generate parameters of the prediction network more effectively. We demonstrated the general applicability of the proposed network design on a range of applications including zero/few shot learning and dynamic efficient prediction, and exceeded or matched the state-of-the-art on most applications.
Figure 4: Top retrievals on the unseen pairs of the MITStates dataset. Our model can learn to compose new concepts from the existing attributes and objects while respecting their context. The second row shows some of the failure cases.
Figure 5: Comparison of TAFE-Net and RNP with VGG-16 on CIFAR-100. Our model outperforms RNP under different accuracy and computation trade-offs. β controls the amount of the L1 regularization in Equation 7.
Table 1: Datasets used in GZSL
Dataset | SUN | CUB | AWA1 | AWA2 | aPY
No. of Images | 14,340 | 11,788 | 30,475 | 37,322 | 15,339
Attributes Dim. | 102 | 312 | 85 | 85 | 64
Y | 717 | 200 | 50 | 50 | 32
Y seen | 645 | 150 | 40 | 40 | 20
Y unseen | 72 | 50 | 10 | 10 | 12
Granularity | fine | fine | coarse | coarse | coarse
Table 2: Evaluation of TAFE-Net on five standard benchmarks under the generalized zero-shot learning setting. Models with † (f-CLSWGAN and SE) generate additional data for training while the remaining models do not. Our model is better than all models without additional data and is also competitive compared to models with additional synthetic data. For each dataset we report u, s and H.
Method | SUN (u / s / H) | CUB (u / s / H) | AWA1 (u / s / H) | AWA2 (u / s / H) | aPY (u / s / H)
LATEM [51] | 14.7 / 28.8 / 19.5 | 15.2 / 57.3 / 24.0 | 7.3 / 71.7 / 13.3 | 11.5 / 77.3 / 20.0 | 0.1 / 73.0 / 0.2
ALE [1] | 21.8 / 33.1 / 26.3 | 23.7 / 62.8 / 34.4 | 16.8 / 76.1 / 27.5 | 14.0 / 81.8 / 23.9 | 4.6 / 73.7 / 8.7
DeViSE [11] | 16.9 / 27.4 / 20.9 | 23.8 / 53.0 / 32.8 | 13.4 / 68.7 / 22.4 | 17.1 / 74.7 / 27.8 | 4.9 / 76.9 / 9.2
SJE [2] | 14.7 / 80.5 / 19.8 | 23.5 / 59.2 / 33.6 | 11.3 / 74.6 / 19.6 | 8.0 / 73.9 / 14.4 | 3.7 / 55.7 / 6.9
SYNC [4] | 7.9 / 43.3 / 13.4 | 11.5 / 70.9 / 19.8 | 8.9 / 87.3 / 16.2 | 10.0 / 90.5 / 18.0 | 7.4 / 66.3 / 13.3
DEM [50] | 20.5 / 34.3 / 25.6 | 19.6 / 57.9 / 29.2 | 32.8 / 84.7 / 47.3 | 30.5 / 86.4 / 45.1 | 11.1 / 75.1 / 19.4
RelationNet [48] | - / - / - | 38.1 / 61.1 / 47.0 | 31.4 / 91.3 / 46.7 | 30.0 / 93.4 / 45.3 | - / - / -
f-CLSWGAN † [47] | - / - / - | 57.7 / 43.7 / 49.7 | 61.4 / 57.9 / 59.6 | - / - / - | - / - / -
SE † [40] | - / - / - | 53.3 / 41.5 / 46.7 | 67.8 / 56.3 / 61.5 | - / - / - | - / - / -
TAFE-Net * | 27.7 / 41.1 / 33.1 | 36.5 / 60.0 / 45.4 | 46.1 / 81.4 / 58.8 | 31.9 / 91.2 / 47.2 | 19.4 / 71.3 / 30.5
TAFE-Net | 27.9 / 40.2 / 33.0 | 41.0 / 61.4 / 49.2 | 50.5 / 84.4 / 63.2 | 36.7 / 90.6 / 52.2 | 24.3 / 75.4 / 36.8
Table 3: Evaluation on 700 unseen (attribute, object) pairs on 19K images of the MITStates dataset and 1,029 unseen SPO tuples on 1,000 images of the StanfordVRD dataset. TAFE-Net improves over the baselines by a large margin.
Method | MITStates AP | MITStates Top-1 | Top-2 | Top-3 | StanfordVRD AP | StanfordVRD Top-1 | Top-2 | Top-3
Visual Product [32] | 8.8 | 9.8 | 16.1 | 20.6 | 4.9 | 3.2 | 5.6 | 7.6
Label Embed (LE) [32] | 7.9 | 11.2 | 17.6 | 22.4 | 4.3 | 4.1 | 7.2 | 10.6
LEOR [32] | 4.1 | 4.5 | 6.2 | 11.8 | 0.9 | 1.1 | 1.3 | 1.3
LE + R [32] | 6.7 | 9.3 | 16.3 | 20.8 | 3.9 | 3.9 | 7.1 | 10.4
Red Wine [32] | 10.4 | 13.1 | 21.2 | 27.6 | 5.7 | 6.3 | 9.2 | 12.7
TAFE-Net | 16.2 | 16.9 | 27.9 | 35.1 | 12.2 | 12.3 | 19.7 | 27.5
Table 4: Ablation study with different task encoding and base network features. The variance of performance of TAFE-Net under different settings is minimal.
Task Encoding | Features | AP | Top-1 | Top-2 | Top-3
Word2vec | ResNet-101 | 16.2 | 17.2 | 27.8 | 35.7
Onehot | ResNet-101 | 16.1 | 16.1 | 26.8 | 33.8
Word2vec | VGG16 | 16.3 | 16.4 | 26.4 | 33.0
Onehot | VGG16 | 16.3 | 16.4 | 25.9 | 32.5
Word2vec | VGG19 | 15.6 | 16.2 | 26.0 | 32.4
Onehot | VGG19 | 16.3 | 16.4 | 26.0 | 33.1
Table 5: Few-shot classification on ImageNet. Our model is competitive compared to the state-of-the-art meta learning models without a hallucinator.
Method | Novel Top-5 Acc (n=1) | Novel Top-5 Acc (n=2) | All Top-5 Acc (n=1) | All Top-5 Acc (n=2)
LogReg [17] | 38.4 | 51.1 | 40.8 | 49.9
PN [38] | 39.3 | 54.4 | 49.5 | 61.0
MN [41] | 43.6 | 54.0 | 54.4 | 61.0
TAFE-Net | 43.0 | 53.9 | 55.7 | 61.9
LogReg w/ Analogies [17] | 40.7 | 50.8 | 52.2 | 59.4
PN w/ G [44] | 45.0 | 55.9 | 56.9 | 63.2
Table 6: Pruned ResNet-50 on ImageNet. Top-1/5 error rates and computation FLOPs are reported. Our model achieves lower error rates with lower computational cost.
Model | Top-1 | Top-5 | FLOPs (×10^9) | Reduct. (%)
SSS [21] | 26.8 | - | 3.0 | 20.3
Li et al. [27] | 27.0 | 8.9 | 3.0 | 19.0
He et al. [19] | - | 9.2 | 1.9 | 50.0
ThiNet [30] | 29.0 | 10.0 | 1.7 | 55.8
TAFE-Net | 26.2 | 8.4 | 1.6 | 56.8
Labelembedding for image classification. Z Akata, F Perronnin, Z Harchaoui, C Schmid, IEEE transactions on pattern analysis and machine intelligence. 386Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label- embedding for image classification. IEEE transactions on pattern analysis and machine intelligence, 38(7):1425-1438, 2016. 5, 6
Evaluation of output embeddings for fine-grained image classification. Z Akata, S Reed, D Walter, H Lee, B Schiele, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition56Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele. Evalua- tion of output embeddings for fine-grained image classifica- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2927-2936, 2015. 5, 6
Learning feed-forward one-shot learners. L Bertinetto, J F Henriques, J Valmadre, P Torr, A Vedaldi, Advances in Neural Information Processing Systems. 23L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In Advances in Neural Information Processing Systems, pages 523-531, 2016. 2, 3
Synthesized classifiers for zero-shot learning. S Changpinyo, W.-L Chao, B Gong, F Sha, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 6S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Synthesized classifiers for zero-shot learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 1, 2, 6
Modulating early visual processing by language. H De Vries, F Strub, J Mary, H Larochelle, O Pietquin, A C Courville, Advances in Neural Information Processing Systems. H. De Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems, pages 6594-6604, 2017. 2
Predicting parameters in deep learning. M Denil, B Shakibi, L Dinh, N De Freitas, Advances in neural information processing systems. M. Denil, B. Shakibi, L. Dinh, N. De Freitas, et al. Pre- dicting parameters in deep learning. In Advances in neural information processing systems, pages 2148-2156, 2013. 2
Decaf: A deep convolutional activation feature for generic visual recognition. J Donahue, Y Jia, O Vinyals, J Hoffman, N Zhang, E Tzeng, T Darrell, International conference on machine learning. 13J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activa- tion feature for generic visual recognition. In International conference on machine learning, pages 647-655, 2014. 1, 3
Describing objects by their attributes. A Farhadi, I Endres, D Hoiem, D Forsyth, Computer Vision and Pattern Recognition. IEEECVPR 2009. IEEE Conference onA. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1778-1785. IEEE, 2009. 4
Learning to Learn with Gradients. C Finn, UC BerkeleyPhD thesisC. Finn. Learning to Learn with Gradients. PhD thesis, UC Berkeley, 2018. 2
Model-agnostic metalearning for fast adaptation of deep networks. C Finn, P Abbeel, S Levine, 27C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta- learning for fast adaptation of deep networks. ICML, 2017. 2, 7
DeViSe: A deep visual-semantic embedding model. A Frome, G S Corrado, J Shlens, S Bengio, J Dean, T Mikolov, Advances in neural information processing systems. 56A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. DeViSe: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121-2129, 2013. 1, 2, 3, 5, 6
Rich feature hierarchies for accurate object detection and semantic segmentation. R Girshick, J Donahue, T Darrell, J Malik, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognition13R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580-587, 2014. 1, 3
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in neural information processing systems. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde- Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information process- ing systems, pages 2672-2680, 2014. 5
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in Neural Information Processing Systems. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. WeinbergerCurran Associates, Inc27I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde- Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672-2680. Curran Associates, Inc., 2014. 5
A Graves, G Wayne, I Danihelka, arXiv:1410.5401Neural turing machines. arXiv preprintA. Graves, G. Wayne, and I. Danihelka. Neural turing ma- chines. arXiv preprint arXiv:1410.5401, 2014. 2
. D Ha, A Dai, Q V Le, Hypernetworks, arXiv:1609.09106arXiv preprintD. Ha, A. Dai, and Q. V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016. 2
Low-shot visual recognition by shrinking and hallucinating features. B Hariharan, R Girshick, 2017 IEEE International Conference on Computer Vision (ICCV). 7B. Hariharan and R. Girshick. Low-shot visual recognition by shrinking and hallucinating features. In 2017 IEEE In- ternational Conference on Computer Vision (ICCV), pages 3037-3046. IEEE, 2017. 1, 7, 8
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognition6K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1, 4, 6, 8
Channel pruning for accelerating very deep neural networks. Y He, X Zhang, J Sun, International Conference on Computer Vision (ICCV). 2Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In International Conference on Computer Vision (ICCV), volume 2, page 6, 2017. 8
Cycada: Cycle consistent adversarial domain adaptation. J Hoffman, E Tzeng, T Park, J.-Y Zhu, P Isola, K Saenko, A A Efros, T Darrell, International Conference on Machine Learning (ICML). J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle consistent ad- versarial domain adaptation. In International Conference on Machine Learning (ICML), 2018. 1
Data-driven sparse structure selection for deep neural networks. Z Huang, N Wang, arXiv:1707.01213arXiv preprintZ. Huang and N. Wang. Data-driven sparse structure selection for deep neural networks. arXiv preprint arXiv:1707.01213, 2017. 8
Batch normalization: Accelerating deep network training by reducing internal covariate shift. S Ioffe, C Szegedy, arXiv:1502.03167arXiv preprintS. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 3
Discovering states and transformations in image collections. P Isola, J J Lim, E H Adelson, CVPR. 56P. Isola, J. J. Lim, and E. H. Adelson. Discovering states and transformations in image collections. In CVPR, 2015. 5, 6
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.6980arXiv preprintD. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
Siamese neural networks for one-shot image recognition. G Koch, G. Koch. Siamese neural networks for one-shot image recog- nition. 2015. 2
Learning multiple layers of features from tiny images. A Krizhevsky, G Hinton, A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. 8
Pruning filters for efficient convnets. H Li, A Kadav, I Durdanovic, H Samet, H P Graf, International Conference on Learning Representations. 8H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient convnets. International Confer- ence on Learning Representations, 2017. 8
Runtime neural pruning. J Lin, Y Rao, J Lu, J Zhou, Advances in Neural Information Processing Systems. 2J. Lin, Y. Rao, J. Lu, and J. Zhou. Runtime neural pruning. In Advances in Neural Information Processing Systems, pages 2178-2188, 2017. 2, 8
Visual relationship detection with language priors. C Lu, R Krishna, M Bernstein, L Fei-Fei, European Conference on Computer Vision. 56C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationship detection with language priors. In European Conference on Computer Vision, 2016. 1, 5, 6
ThiNet: A filter level pruning method for deep neural network compression. J.-H Luo, J Wu, W Lin, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionJ.-H. Luo, J. Wu, and W. Lin. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5058-5066, 2017. 8
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, ICLR. 436T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. ICLR, 2013. 3, 4, 6
From red wine to red tomato: Composition with context. I Misra, A Gupta, M Hebert, CVPR. 27I. Misra, A. Gupta, and M. Hebert. From red wine to red tomato: Composition with context. In CVPR, volume 2, page 6, 2017. 5, 6, 7
T Nagarajan, K Grauman, Attributes as operators. ECCV. T. Nagarajan and K. Grauman. Attributes as operators. ECCV, 2018. 6
Film: Visual reasoning with a general conditioning layer. E Perez, F Strub, H Vries, V Dumoulin, A C Courville, AAAI. E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. C. Courville. Film: Visual reasoning with a general conditioning layer. In AAAI, 2018. 2
Imagenet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, International Journal of Computer Vision. 1153O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Ima- genet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. 1, 8
Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta. J Schmidhuber, Technische Universität MünchenPhD thesisJ. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München, 1987. 2
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, arXiv:1409.155646arXiv preprintK. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4, 6
Prototypical networks for few-shot learning. J Snell, K Swersky, R Zemel, Advances in Neural Information Processing Systems. 7J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087, 2017. 7, 8
Visualizing data using t-SNE. L Van Der Maaten, G Hinton, Journal of Machine Learning Research. 95L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008. 5
Generalized zero-shot learning via synthesized examples. V K Verma, G Arora, A Mishra, P Rai, 56V. K. Verma, G. Arora, A. Mishra, and P. Rai. Generalized zero-shot learning via synthesized examples. 5, 6
Matching networks for one shot learning. O Vinyals, C Blundell, T Lillicrap, D Wierstra, Advances in Neural Information Processing Systems. O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Match- ing networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016. 1, 2, 7, 8
Idk cascades: Fast deep learning by learning not to overthink. X Wang, Y Luo, D Crankshaw, A Tumanov, F Yu, J E Gonzalez, UAI. 4X. Wang, Y. Luo, D. Crankshaw, A. Tumanov, F. Yu, and J. E. Gonzalez. Idk cascades: Fast deep learning by learning not to overthink. UAI, 2018. 4
X Wang, F Yu, Z.-Y Dou, J E Gonzalez, arXiv:1711.09485SkipNet: Learning dynamic routing in convolutional networks. arXiv preprintX. Wang, F. Yu, Z.-Y. Dou, and J. E. Gonzalez. SkipNet: Learning dynamic routing in convolutional networks. arXiv preprint arXiv:1711.09485, 2017. 2
Lowshot learning from imaginary data. CVPR. Y.-X Wang, R Girshick, M Hebert, B Hariharan, Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan. Low- shot learning from imaginary data. CVPR, 2018. 1, 7, 8
Blockdrop: Dynamic inference paths in residual networks. Z Wu, T Nagarajan, A Kumar, S Rennie, L S Davis, K Grauman, R Feris, Z. Wu, T. Nagarajan, A. Kumar, S. Rennie, L. S. Davis, K. Grauman, and R. Feris. Blockdrop: Dynamic inference paths in residual networks. 2018. 2
Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. Y Xian, C H Lampert, B Schiele, Z Akata, IEEE transactions on pattern analysis and machine intelligence. 35Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 2018. 3, 4, 5
Feature generating networks for zero-shot learning. Y Xian, T Lorenz, B Schiele, Z Akata, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Salt Lake City, UT, USA6Y. Xian, T. Lorenz, B. Schiele, and Z. Akata. Feature generat- ing networks for zero-shot learning. In Proc. of the IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018. 1, 5, 6
Learning to compare: Relation network for fewshot learning. F S Y Yang, L Zhang, T Xiang, P H Torr, T M Hospedales, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Salt Lake City, UT, USA6F. S. Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few- shot learning. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018. 2, 5, 6
Visualizing and understanding convolutional networks. M D Zeiler, R Fergus, European conference on computer vision. SpringerM. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818-833. Springer, 2014. 1
Learning a deep embedding model for zero-shot learning. L Zhang, T Xiang, S Gong, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition56L. Zhang, T. Xiang, and S. Gong. Learning a deep embedding model for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 5, 6
Zero-shot learning via joint latent similarity embedding. Z Zhang, V Saligrama, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition6Z. Zhang and V. Saligrama. Zero-shot learning via joint latent similarity embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6034- 6042, 2016. 1, 2, 5, 6
Unpaired imageto-image translation using cycle-consistent adversarial networks. J.-Y Zhu, T Park, P Isola, A A Efros, arXiv preprintJ.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image- to-image translation using cycle-consistent adversarial net- works. arXiv preprint, 2017. 5
| []
|
[
"The influence of line tension on the formation of liquid bridges",
"The influence of line tension on the formation of liquid bridges"
]
| [
"F Dutka \nInstytut Fizyki Teoretycznej\nUniwersytet Warszawski\nHoża 6900-681WarszawaPoland\n",
"M Napiórkowski \nInstytut Fizyki Teoretycznej\nUniwersytet Warszawski\nHoża 6900-681WarszawaPoland\n"
]
| [
"Instytut Fizyki Teoretycznej\nUniwersytet Warszawski\nHoża 6900-681WarszawaPoland",
"Instytut Fizyki Teoretycznej\nUniwersytet Warszawski\nHoża 6900-681WarszawaPoland"
]
| []
| The formation of liquid bridges between a planar and conical substrates is analyzed macroscopically taking into account the line tension. Depending on the value of the line tension coefficient τ and geometric parameters of the system one observes two different scenarios of liquid bridge formation upon changing the fluid state along the bulk liquid-vapor coexistence. For τ > τ * (τ * < 0) there is a first-order transition to a state with infinitely thick liquid bridge. For τ < τ * the scenario consists of two steps: first there is a first-order transition to a state with liquid bridge of finite thickness which upon further increase of temperature is followed by continuous growth of the thickness of the bridge to infinity. In addition to constructing the relevant phase diagram we examine the dependence of the width of the bridge on thermodynamic and geometric parameters of the system. PACS numbers: 68.03.-g, 68.08.-p, 68.37.Ps FIG. 1. A liquid-like bridge (l) surrounded by vapor (v) connects the planar (1) and conical (2) walls. | 10.1063/1.3469770 | [
"https://arxiv.org/pdf/1005.2311v1.pdf"
]
| 28,200,320 | 1005.2311 | 6c0aa9bdc88e2048aca7e78e52e51872c942c2e1 |
The influence of line tension on the formation of liquid bridges
13 May 2010
F Dutka
Instytut Fizyki Teoretycznej
Uniwersytet Warszawski
Hoża 6900-681WarszawaPoland
M Napiórkowski
Instytut Fizyki Teoretycznej
Uniwersytet Warszawski
Hoża 6900-681WarszawaPoland
The influence of line tension on the formation of liquid bridges
13 May 2010 (Dated: 14 May 2010). PACS numbers: 68.03.-g, 68.08.-p, 68.37.Ps
The formation of liquid bridges between a planar and conical substrates is analyzed macroscopically taking into account the line tension. Depending on the value of the line tension coefficient τ and geometric parameters of the system one observes two different scenarios of liquid bridge formation upon changing the fluid state along the bulk liquid-vapor coexistence. For τ > τ * (τ * < 0) there is a first-order transition to a state with infinitely thick liquid bridge. For τ < τ * the scenario consists of two steps: first there is a first-order transition to a state with liquid bridge of finite thickness which upon further increase of temperature is followed by continuous growth of the thickness of the bridge to infinity. In addition to constructing the relevant phase diagram we examine the dependence of the width of the bridge on thermodynamic and geometric parameters of the system. PACS numbers: 68.03.-g, 68.08.-p, 68.37.Ps FIG. 1. A liquid-like bridge (l) surrounded by vapor (v) connects the planar (1) and conical (2) walls.
I. INTRODUCTION
In this note we investigate the phase diagram of a fluid enclosed between two infinite walls: one planar and one conical. Such a system resembling the atomic force microscope geometry has been analyzed in different contexts [1][2][3] . Particular emphasis has been put on the structure of the phase diagram which displays a phase characterized by the presence of a liquid bridge formed between the walls 4 .
The mean curvature of the meniscus of the bridge is given by the Young-Laplace equation 5 and its width is a function of the undersaturation 3,6. The presence of the bridge induces a force acting between the opposite walls which can be measured using an atomic force microscope 2,7. It turns out, however, that the line tension can have a qualitative influence on the phase behavior of such a system. This issue has not received much attention in the literature and we discuss it in this note. In particular, we focus on bridge formation and filling transitions along the bulk liquid-vapor coexistence, where the presence of line tension leads to effects similar to what - in a different context - is termed a frustrated complete wetting 8.
In the following section we recall the form of the free energy functional of the shape of liquid bridge. This functional and the corresponding equation for equilibrium liquid-vapor interfacial shape supplemented by the boundary conditions form the basis of our approach. Their analysis along the bulk liquid-vapor coexistence for different values of the line tension coefficient leads to different transition scenarios presented in Section III. In the last section we summarize our results and point at the possibility of indirect measurement of the width of the bridge. The behavior of this width reflects the transition scenario taking place in the system.
II. SHAPE OF A LIQUID BRIDGE
The system under consideration consists of a fluid confined between two walls, Fig. 1. The thermodynamic state of the fluid is located on the bulk liquid-vapor coexistence line. The surface of the lower substrate (1) is an infinite plane z = 0, and the upper substrate (2) is formed by an infinite cone whose tip is at distance h from the plane z = 0. The system is axially symmetric around the z-axis. In cylindrical coordinates (r = √(x² + y²), z ≥ 0) the surface of the cone is described by r = a(z) = (z − h) cot ϕ, where z ≥ h and π − 2ϕ is the opening angle of the cone (0 ≤ ϕ ≤ π/2). In our macroscopic approach the grand canonical functional Ω([f], T, h, ϕ) is a functional of the liquid-vapor interfacial shape r = f(z). It is parametrized by the temperature T and by the geometric parameters h and ϕ. Instead of temperature we shall use the angle θ which fulfills the Young equation 5. The actual contact angles present in the problem fulfill the modified Young equations 9, but it is convenient to use θ to reparametrize the temperature dependence. We assume that both substrates are made of the same material and are thus characterized by the same angles θ_1 = θ_2 = θ, and the same line tension coefficients τ_{1lv} = τ_{2lv} = τ. Accordingly, cos θ = (σ_sv − σ_sl)/σ_lv, where σ_αβ denote the relevant surface tension coefficients. Although for a given system both the line and surface tension coefficients are functions of its thermodynamic state, in our macroscopic analysis they will be varied independently. In other words, the parameters θ and τ will be considered as independent variables. In particular, the possibility of both signs of τ will be taken into account.
At bulk liquid-vapor coexistence the functional of the liquid bridge shape f (z) relative to the state without the bridge ∆Ω[f ] is given by 4 :
\frac{\Delta\Omega[f]}{2\pi\sigma_{lv}} = \int \mathrm{d}z \left\{ \Theta(f-a)\, f \sqrt{1+f'^2}\, \Theta(f)\, \Theta(z) - \cos\theta \left[ \frac{f^2}{2}\,\delta(z) + \frac{a}{\sin\varphi}\,\Theta(z-h) \right] + \tilde{\tau} \left[ f\,\delta(z) + \cot\varphi\, \Theta(z-h) \right] \right\} ,    (1)
where the ratio τ̃ = τ/σ_lv has the dimension of length. The symbols Θ(z) and δ(z) denote the Heaviside and Dirac delta functions, respectively. The equilibrium interfacial shape f̄(z) minimizes ΔΩ[f]. This leads to the equation
1 f (z) 1 +f ′ (z) 2 − d dzf ′ (z) 1 +f ′ (z) 2 = 0(2)
and two boundary conditions
0 = cos θ +f ′ (z) 1 +f ′ (z) 2 −τ f (z) z=0 , 0 = cos θ − sin ϕ + cos ϕf ′ (z) 1 +f ′ (z) 2 −τ f (z) cos ϕ z=z2 .(3)
The above boundary conditions are equivalent to the modified Young equations for each of the substrates. The coordinate z 2 is such thatf (z 2 ) = a(z 2 ). The lhs of (2) is equal to the mean curvature of the interface and thus the surface of the bridge is a catenoid. The solution of (2)f
(z) = w cosh z − z 1 w ,(4)
is parametrized by z 1 and w, where w =f (z 1 ) is the minimal value off which will be considered to be the width of the bridge. With the help of dimensionless quantities α = z 1 /w and β = z 2 /w the boundary conditions (3) can be rewritten as
cos θ =τ w cosh α + tanh α ,(5)cos θ =τ cos ϕ w cosh(β − α) + sin ϕ cosh(β − α) + tanh(β − α) cos ϕ .(6)
Now the width of the bridge w can be considered to be the following function of α and β parametrized by h and ϕ:
w(α, β) = h β − tan ϕ cosh(β − α) .(7)
The relative free energy of the system is given by
∆Ω = πσ lv h w(α, β) τ h cosh α+cosh(β −α) +1 . (8)
Analysis of the above equation enables the construction of the phase diagram; this will be discussed in the next section.
III. PHASE DIAGRAM
The basis for determining the phase diagram is the knowledge of the equilibrium shapes of the bridge f = f (z) and the corresponding relative grand canonical free energies ∆Ω = ∆Ω[f ]. Depending on the sign of ∆Ω three cases are possible: (a) ∆Ω < 0 -the phase with the bridge present is favorable, (b) ∆Ω > 0 -phase without bridge is favorable, (c) ∆Ω = 0 -the previous phases coexist.
The set of equations (5) and (6) is not solvable analytically. The numerically obtained plots of functions β = β(α) (parametrized by θ,τ , h and ϕ) illustrate the transition scenario taking place at bulk liquid-vapor coexistence, Fig. 2, for fixed value of the opening angle of the cone, ϕ = π/6, and negative value of the line tension coefficientτ = −h. For the angles θ > θ * (τ /h, ϕ) the curves corresponding to the solutions of equations (5) and (6) do not intersect, Fig. 2a. This situation corresponds to the absence of the liquid bridge. For θ = θ * (τ /h, ϕ), Fig. 2b, the bridge with a finite width is present, and its relative free energy ∆Ω is negative. For θ * 0 (ϕ) < θ < θ * (τ /h, ϕ) there are two solutions of (5) and (6) with negative relative free energies corresponding to bridges of different width w(α, β). The solution with a larger width is represented by point A on Fig. 2c and has smaller energy than the one corresponding to point B. Upon decreasing the angle θ towards the angle θ = θ * 0 (ϕ) the width of the bridge tends to infinity, Fig. 2d. For θ θ * 0 (ϕ) the bridge has infinite width which corresponds to the whole space between the walls filled with liquid.
For particular value of the angle θ = θ * 0 (ϕ) the width of the bridge becomes infinite, and equations (5) (5) and (6), curves (1) and (2), respectively. Different values of angle θ are considered: (a) θ > θ * (τ /h, ϕ),
(b) θ = θ * (τ /h, ϕ), (c) θ * 0 (ϕ) < θ < θ * (τ /h, ϕ), (d) θ = θ * 0(
ϕ) at fixedτ = −h and ϕ = π/6. The curve denoted as (3) corresponds to w(α, β) −1 = 0, and divides the (α, β) plane into part corresponding to w(α, β) > 0 (denoted as I), and (unphysical) part corresponding to w(α, β) < 0 (denoted as II). Points of intersection A and B correspond to equilibrium solutions; point A is the solution which corresponds to the bridge with a larger width and smaller relative free energy.
After inserting the above expressions to equation w(α * , β * ) −1 = 0 one gets the equation for the angle θ * 0 (ϕ) arccosh 1 sin(θ * 0 + ϕ)
+ arccosh 1 sin θ * 0 = tan ϕ sin(θ * 0 + ϕ)
.
(10) Its numerical solution is shown on Fig. 3. For ϕ ≪ 1 the function θ * 0 can be approximated by θ * 0 ≃ π/2 − ϕ, and for ϕ → π/2 it tends to zero tangentially.
We note that the bridge of finite width can exist for θ > θ * 0 (ϕ) provided the free energy of the system (8) evaluated at θ = θ * 0 (ϕ) is not greater than zero. This requirement can be rewritten as the condition on the line tension coefficient
τ −h sin θ * 0 (ϕ) sin(θ * 0 (ϕ) + ϕ) sin θ * 0 (ϕ) + sin(θ * 0 (ϕ) + ϕ) ≡τ * (ϕ) .(11)
Thus forτ τ * (ϕ) the liquid bridge is present for θ θ * 0 (ϕ), otherwise there is no bridge in the system. For ϕ ≪ 1 the line tension coefficient can be approximated by the functionτ * /h ≃ −0.5, and for ϕ → π/2 tends to zero tangentially, Fig. 3. In order to find the divergence of the width of the bridge for θ → θ * 0 (ϕ) at fixedτ <τ * (ϕ) we expand equa- tions (5) and (6) around θ = θ * 0 (ϕ), α = α * and β = β * . In the leading order the divergence is given by
w ≃ A(ϕ) hτ * (ϕ) −τ |τ * (ϕ)| θ − θ * 0 (ϕ) −1 ,(12)
where the amplitude
A(ϕ) = sin 2 (θ * 0 (ϕ) + ϕ) sin θ * 0 (ϕ) cos ϕ sin 2 (θ * 0 + ϕ) + sin 2 θ * 0 (ϕ)(13)
is positive for 0 < ϕ < π/2. To find the angle θ = θ * (τ /h, ϕ) at which there is a first order transition from the phase without bridge to phase with the bridge one has to solve eqs. (5), (6), (7) supplemented by the requirement of tangetiality of the curves β(α) determined by these equations, Fig.2b. The numerically obtained phase diagram, Fig. 4, displays the coexistence line between the phase with no bridge in the system (N B) and with the bridge (B). The coexistence lines forτ τ * and forτ τ * meet tangentially. For θ * 0 < θ θ * the width of the bridge is finite (w < ∞) and for θ θ * 0 the whole system is filled with the liquid phase (w = ∞).
IV. DISCUSSION
We have shown that a fluid confined between planar and a conical walls may undergo -upon changing its thermodynamic state along the bulk liquid-vapor coexistence line -two different transition scenarios leading from the phase without liquid bridge to the phase with liquid bridge of infinite thickness. The type of scenario depends on the value of dimensionless line tension coefficient τ /σ lv h ≡τ /h. Forτ <τ * (ϕ) first the bridge of finite width is formed discontinuously at temperature corresponding to angle θ = θ * (τ /h, ϕ). Upon further decrease of θ the width of the bridge increases continuously and at θ = θ * 0 (ϕ) it becomes infinite; the whole space between substrates is filled with the liquid phase. This scenario is qualitatively similar to the one observed experimentally by Takata et al. 10 in a different context. On the other hand, when decreasing the angle θ atτ >τ * (ϕ) one observes a discontinuous transition at θ = θ * 0 (ϕ). This transition takes the system from the phase without the liquid bridge to the phase with bridge of infinite width.
To find the width of the bridge one can perform the solvation force measurements using the atomic force microscope 1-3 with conical tip. The solvation force associated with the liquid bridge F = −∂∆Ω(T, h, ϕ)/∂h can be rewritten in the following z-independent form F = −2πσ lgf (z) 2 2 1 r 1 (z)
+ 1 r 2 (z) ,(14)
where the radii of curvature
r 1 (z) = d dzf ′ (z) 1 +f ′ (z) 2 −1 , r 2 (z) =f (z) 1 +f ′ (z) 2 ,(15)
can be evaluated at arbitrary z ∈ [0; z 2 ]. The zindependent expression in (14) can be presented in a particularly transparent form 11
F = −2πσ lg w ,(16)
where the negative sign indicates that the substrates attract each other once the liquid bridge is formed. Thus the solvation force measurements provide direct information on the width of the liquid bridge.
FIG. 2 .
2and(6) do not depend on the line tension coefficientτ . Their solutions denoted by α * and β * have the following form Plots of functions β = β(α) obtained by numerically solving equations
FIG. 3 .
3The dependence of the angle θ * 0 and the dimensionless line tension coefficientτ * /h on the opening angle ϕ (solid lines). For ϕ ≪ 1 the function θ * 0 (ϕ) can be approximated by θ * 0 ≃ π/2 − ϕ and the line tension coefficient byτ * /h ≃ −0.5 (dashed lines). For ϕ → π/2 both functions tend to zero tangentially.
FIG. 4 .
4The phase diagram in variables (τ /h, θ) displaying the coexistence line between the phase with no liquid bridge present in the system (N B) and with the bridge present in the system (B) (continuous line), (w < ∞) denotes the region in which the width of the bridge is finite, and (w = ∞)the region where the whole system is filled with liquid. The opening angle is equal ϕ = π/6, for which θ * 0 ≈ 1.034 and τ * /h ≈ −0.462.
. H Butt, B Capella, M Kappl, Surf. Sci. Rep. 591H. Butt, B. Capella, and M. Kappl, Surf. Sci. Rep., 59, 1 (2005).
H J Butt, K Graf, M Kappl, Physics and chemistry of interfaces. WeinheimWiley-VCHH. J. Butt, K. Graf, and M. Kappl, Physics and chemistry of interfaces (Wiley-VCH, Weinheim, 2003).
. J Jang, G C Schatz, M A Ratner, J. Chem. Phys. 1163875J. Jang, G. C. Schatz, and M. A. Ratner, J. Chem. Phys., 116, 3875 (2002).
. F Dutka, M Napiórkowski, J. Phys.: Condens. Matter. 19466104F. Dutka and M. Napiórkowski, J. Phys.: Condens. Matter, 19, 466104 (2007).
J S Rowlinson, B Widom, Molecular Theory of Capillarity. Oxford University, LondonJ. S. Rowlinson and B. Widom, Molecular Theory of Capillarity (Oxford University, London, 1982).
. J Jang, G C Schatz, M A Ratner, Phys. Rev. Lett. 92885504J. Jang, G. C. Schatz, and M. A. Ratner, Phys. Rev. Lett., 92, 885504 (2004).
. H J Butt, M Kappl, Advances in Colloid and Interface Science. 14648H. J. Butt and M. Kappl, Advances in Colloid and Interface Science, 146, 48 (2009).
. D Bonn, J Eggers, J O Indekeu, J Meunier, E Rolley, Rev. Mod. Phys. 81739and references thereinD. Bonn, J. Eggers, J. O. Indekeu, J. Meunier, and E. Rolley, Rev. Mod. Phys., 81, 739 (2009) and references therein.
. P S Swain, R Lipowsky, Langmuir. 146772P. S. Swain and R. Lipowsky, Langmuir, 14, 6772 (1998).
. Y Takata, H Matsubara, T Matsuda, Y Kikuchi, T Takiue, B Law, M Aratono, Colloid & Polymer Science. 286647Y. Takata, H. Matsubara, T. Matsuda, Y. Kikuchi, T. Takiue, B. Law, and M. Aratono, Colloid & Polymer Science, 286, 647 (2008).
. M Farshchi-Tabrizi, M Kappl, Y Cheng, J Gutmann, H J Butt, Langmuir. 222171M. Farshchi-Tabrizi, M. Kappl, Y. Cheng, J. Gutmann, and H. J. Butt, Langmuir, 22, 2171 (2006).
| []
|
[
"Multiparameter Integrable QFT's with N bosons",
"Multiparameter Integrable QFT's with N bosons"
]
| [
"H Saleur \nDepartment of Physics\nUniversity of Southern California Los Angeles\n90089-0484CA\n",
"P Simonetti \nDepartment of Physics\nUniversity of Southern California Los Angeles\n90089-0484CA\n"
]
| [
"Department of Physics\nUniversity of Southern California Los Angeles\n90089-0484CA",
"Department of Physics\nUniversity of Southern California Los Angeles\n90089-0484CA"
]
| []
| We introduce a new family of integrable theories with N bosons and N freely adjustable mass parameters. These theories restrict in particular limits to the "generalized supersymmetric" sine-Gordon models, as well as to the flavor anisotropic chiral Gross Neveu models (studied recently by N. Andrei and collaborators). The scattering theory involves scalar particles that are no bound states, and bears an intriguing resemblance wih the results of a sharp cut-off analysis of the Thirring model carried out by Korepin in (1980). Various physical applications are discussed. In particular, we demonstrate that our theories are the appropriate continuum limit of integrable quantum spin chains with mixtures of spins.4/981 Packard Fellow | 10.1016/s0550-3213(98)00622-1 | [
"https://arxiv.org/pdf/hep-th/9804080v1.pdf"
]
| 14,332,021 | hep-th/9804080 | b08af68c4297a78ff2a0073b05269b40df9e5961 |
Multiparameter Integrable QFT's with N bosons
Apr 1998
H Saleur
Department of Physics
University of Southern California Los Angeles
90089-0484CA
P Simonetti
Department of Physics
University of Southern California Los Angeles
90089-0484CA
Multiparameter Integrable QFT's with N bosons
Apr 1998arXiv:hep-th/9804080v1 10 1 Packard Fellow
We introduce a new family of integrable theories with N bosons and N freely adjustable mass parameters. These theories restrict in particular limits to the "generalized supersymmetric" sine-Gordon models, as well as to the flavor anisotropic chiral Gross Neveu models (studied recently by N. Andrei and collaborators). The scattering theory involves scalar particles that are no bound states, and bears an intriguing resemblance wih the results of a sharp cut-off analysis of the Thirring model carried out by Korepin in (1980). Various physical applications are discussed. In particular, we demonstrate that our theories are the appropriate continuum limit of integrable quantum spin chains with mixtures of spins.4/981 Packard Fellow
Introduction
A variety of low dimensional experimental condensed matter systems have been studied recently, that involve field theories with several bosons. Examples include tunneling in quantum wires, where two bosons are necessary to describe the charge and spin degrees of freedom of the electrons [1], tunneling between multiple edges in fractional quantum Hall devices [2], nanotubes and two-leg ladders [3], etc. Properties of interest in these systems are usually non perturbative, and only a few techniques are available to obtain quantitatively reliable results, mostly conformal invariance and integrability. The search for integrable quantum field theories with several bosons is thus of some importance.
The problem is, that besides the sine-Gordon model, most known integrable bosonic theories are of little practical use: they are usually of Toda type, and involve real exponential of fields, that usually do not appear in a condensed matter context. Some exceptions to this unsatisfactory situation are known: for instance, the double sine-Gordon model turns out to be exactly solvable for some values of the couplings [4], [5], [6], with potential applications to quantum wires. Also, the "generalized supersymmetric" extensions of the sine-Gordon model [7] can be rebosonized using standard bosonization formulas for the parafermions [8]. These theories are useful in the discussion of the multichannel Kondo model [9], [10], [11]; the N = 1 supersymmetric sine-Gordon model also appears in the context of quantum wires [12].
In this paper, we point out that there is a simple, integrable family of theories extending the generalized supersymmetric sine-Gordon models, that involve N bosons and have N adjustable mass parameters. This family can be considered as an extension of the flavor anisotropic Gross Neveu models that have been studied in the last few years by N.Andrei and collaborators( mostly in the context of the channel anisotropic Kondo model [13], [14], [15]), to the case where an anisotropy is introduced both in the color and flavor sectors.
The case where the color anisotropy is at the special "Toulouse" value is of special interest for applications to quantum wires or dissipative brownian motion [16].
The models are presented in section 2, where integrability is proven and various limiting cases discussed. The scattering theory is discussed in section 3. The "classical" limit is analyzed in section 4, providing a general check of our approach. In section 5, the relation with quantum spin chains involving several species of spins is discussed. Some applications to impurity problems are discussed in section 6. Some final remarks are collected in section 7. The appendix contains numerous details on the numerical treatment both of the perturbation theory and of the TBA.
Generalities
The integrable theories
We consider a system of N chiral bosons with propagators
< φ j (z)φ j (w) > = −2 N − 1 N ln(z − w) < φ j (z)φ k (w) > = 2 N ln(z − w), j = k,(2.1)
and introduce the following fields:
Ψ (j) = 1 √ N N k=1 ω jk e iφ k ,(2.2)
where ω = e 2iπ/N . These fields provide different realizations [8] of the fundamental parafermion [17] of Z N type. The bosonic fields φ j are not independent (one can set indeed φ N = −φ 1 − . . . − φ N−1 ); they can be expressed in terms of N − 1 independent fields Φ j obeying Introduce one additional bosonic field, which has trivial contractions with the preceding ones, and obeys
< Φ j (z)Φ k (w) >= −2δ jk ln(z − w),(2.< Φ(z)Φ(w) >= − 1 4π ln(z − w). (2.5)
Consider then the action (we assume all the a j are real positive numbers) In the case where all the coefficients a j but one vanish, this is the action of the "generalized supersymmetric" sine-Gordon model, which is known to be integrable [7]. We claim that this only a particular case of a more general integrable model, given by (2.6).
A = 1 2 dxdy N−1 j=1 ∂ µ Φ j +Φ j 2 + ∂ µ Φ +Φ 2 + N j=1 a j Ψ (j) (z) N j=1 a jΨ (j) (z) e iβ[Φ(z)+Φ(z)] + conjugate.
To establish this result, we first observe that the fields Ψ (j) obey the short distance
expansions Ψ (j) † (z)Ψ (j) (w) ≈ 1 (z − w) 2(N −1) N 1 + 2 N + 2 N (z − w) 2 T (j) (w) + . . . ,(2.7)
where
T (j) (z) = 1 N + 2 − 1 2 N k=1 (∂ z φ k ) 2 + k =l ω j(k−l) e i(φ k −φ l ) , and Ψ (j) † (z)Ψ (k) (w) ≈ 1 (z − w) 2 N −1 N (z − w)J (jk) (w) + . . . , (2.8) where J (jk) = − i N N k=1 ω −(k−j)l ∂ z φ l
We can then prove integrability, following [18] , by establishing the existence of non local conserved currents. Introduce
J − (z) = N j=1 b j Ψ (j) † (z) exp −i 4π β 2 N Φ(z) . (2.9)
then the short distance expansion of this current with the first term in the action reads, for the chiral part,
N j=1 b j Ψ (j) † (z) N k=1 a k Ψ (j) (w) exp −i 4π β 2 N Φ(z) exp (iβΦ(w)) ≈ 1 (z − w) 2 N k=1 b k a k + (z − w) k =l b k a l J (kl) (w) + . . . exp − 8iπ βN Φ(z) + iβΦ(w) .
The residue of the simple pole with thus be a total derivative iff the factor of (z − w) in the first bracket vanishes. This is equivalent to the condition k =l b k a l ω (k−l)m − 1 = 0, m = 1, . . . , N − 1 which always has solutions, since it is a system of N − 1 equations with N unknown 3 .
The short distance expansion of this current with the chiral part of the second term in the action has a leading term that goes as (z − w) −2/N (z − w) 2/N , and thus no simple pole. Following the standard argument, the expansion of J − having a simple pole whose residue is a total derivative, the non local charge J − is conserved to first order in the perturbation. For generic value of β, one can then argue that this is true to any order in perturbation theory, and, presumably, non perturbatively as well.
Another conserved current is easily found by complex conjugation:
J + (z) = N j=1 b * j Ψ (j) (z) exp i 4π β 2 N Φ(z) . (2.10)
The conservation of J ± then ensures integrability [18].
The case N = 2
Let us discuss in more details the simplest example where N = 2. In that case, the parafermions are self-conjugated (up to a sign). We set
ψ (1) = −i √ 2 sin φ 1 = iχ ψ (2) = √ 2 cos φ 1 = ψ,(2.11)
where ψ and χ are (real) Majorana fermions. The perturbative part of the action reads then (aψ + ibχ)(aψ + ibχ)e iβΦ + (aψ − ibχ)(aψ − ibχ)e −iβΦ , (2.12) that is, regrouping terms 2 a 2 ψψ − b 2 χχ cos βΦ + 2ab ψχ + χψ sin βΦ.
(2.13)
The non local conserved currents read then
J + (z) =(aψ − ibχ) exp i 4π β Φ J − (z) =(aψ + ibχ) exp −i 4π β Φ .
(2.14) 3 The solution is easily expressed using the matrix N × (N − 1) matrix M whose elements are M jk = a j+k−1 by b j equal to the j th cofactor.
If a = 0 or b = 0, the action reduces to the one of the supersymmetric sine-Gordon model (with an additional, decoupled, Majorana fermion). If a = b, the combinations appearing in the action become Dirac fermions, ψ + iχ = √ 2e iφ 1 . We can thus reexponentiate them, to write the perturbing term as a cos(βΦ + φ 1 ), so the model is equivalent to a sine-Gordon model, at a coupling constant β ′ with (β ′ ) 2 8π = β 2 8π + 1 2 . The currents J ± have a fractional spin s = 1 γ , where
γ = 2β 2 4π − β 2 . (2.15)
In the case a = 0 or b = 0, the currents are generators of the algebra sl(2) q [18], with deformation parameter q = −e −iπ/γ . In the general case however, they do not form a closed algebra.
To understand the situation a little better, it is useful to go to the SU (2) symmetric point β 2 = 4π. There are two underlying level one algebras, with generators
J + 1 = ψ + iχ √ 2 e i √ 4πΦ J − 1 = ψ − iχ √ 2 e −i √ 4πΦ J 3 1 =i −ψχ + √ 4π∂Φ (2.16)
and
J + 2 = ψ − iχ √ 2 e i √ 4πΦ J − 2 = ψ + iχ √ 2 e −i √ 4πΦ J 3 2 =i ψχ + √ 4π∂Φ ,(2.17)
and all short distance expansions between operators of different algebras are non singular.
The sum J 1 + J 2 provides a level two representation. For general a, b, the currents can be written as combinations of the J 1 and J 2 . Setting a = µ + λ and b = µ − λ, we have
J + =λJ + 1 + µJ + 2 J − =λJ − 1 + µJ − 2 .
This is suggestive of a system where two flavors of fermionic currents are combined in a flavor anisotropic fashion. Indeed, introduce new bosons defined by
−φ 1 + √ 4πΦ =ϕ 1 φ 1 + √ 4πΦ =ϕ 2 ,(2.18)
the perturbing term is then proportional to λ 2 cos(ϕ 1 +φ 1 ) + µ 2 cos(ϕ 2 +φ 2 ) + λµ cos(ϕ 1 +φ 2 ) + λµ cos(φ 1 + ϕ 2 ) This is the abelian bosonized form of a Gross Neveu type interaction with flavor anisotropy
(λJ x 1 + µJ x 2 ) × λJ x 1 + µJ x 2 + (x → y)
The zz term is missing in this interaction -it is well known that this term is generated under renormalization [5]. Away from the SU (2) point, one can similarly consider our model as a color and flavor anisotropic chiral Gross Neveu model (upon bosonization, this model gives rise to 4 independent fields, but only 2 of them appear in the interaction due to chirality, the other 2 ones contributing free parts to the action).
Arbitrary N
The previous discussison easily extends to other values of N . The currents J ± have a fractional spin s = 1 γ , where
γ = N β 2 /8π 1 N − β 2 8π . (2.19)
In the case a = 0 or b = 0, the currents are generators of the algebra sl(2) q with deformation parameter q = −e −iπ/γ . In the general case, they do not form a closed algebra. In the limit β 2 = 4π, they can be expressed as combinations of N generators belonging to N different realizations of a level 1 SU (2) algebra. The rebosonized action is a Gross Neveu models with two colors and N flavors, and flavor anisotropy, with an interaction term of
the form N j=1 λ j J x j × N j=1 λ jJ x j + (x → y)
where the anisotropy coefficients λ j are related with the terms in the original action by
λ j = N k=1 ω jk a k . (2.20)
That the Gross Neveu model with flavor anisotropy is integrable has been pointed out several years ago in fact, in [13], [14]. Integrability is established there by direct diagonalization of the bare hamiltonian together with "dynamical fusion". An intriguing feature is that the proof presented in [14], strictly speaking, works only for the case N = 2 (and some subcases of special flavor anisotropy for larger N ). The reason is, that in the approach of [14], the bare particles must have a bare flavor scattering matrix that is a solution of the Yang Baxter equation. In the flavor isotropic case, this S matrix is the standard SU (N ) R-matrix; but, as far as we know, there is no way to deform this R matrix in a non trivial way by introducing N − 1 independent anisotropic parameters -all available single and multiparameters quantum group approaches still explore a very small subset of all the possible flavor anisotropies. On the other hand, from the point of view we have adopted (that deals directly with the renormalized action), all flavor anisotropies play equivalent roles, and integrability appears generally true. The argument also extends straightforwardly to the case of color anisotropy, not considered in [13].
Of course, a particular choice of anisotropy is when one of the coefficients λ j vanishes exactly, in which case the modele reduces to one with N − 1 flavors. In the case of general β, the same conditon, say λ N = 0 leads, using formula (2.2), to a problem where the field φ N has disappeared from the action. We are then left with a set of N − 1 fields satisfying
< φ j (z)φ j (w) > = −2 N − 1 N ln(z − w) = −2 N − 2 N − 1 ln(z − w) − 2 N (N − 1) ln(z − w) < φ j (z)φ k (w) > = 2 N ln(z − w) = 2 N − 1 ln(z − w) − 2 N (N − 1) ln(z − w), j = k (2.21)
We can then write φ j = φ ′ j + Φ ′ where there are N − 1 fields φ ′ j satisfying relations similar to (2.21) but with the replacement N → N − 1. The problem is then equivalent to the case N → N − 1, but with a shift of the leftover exponential,
β 2 8π → β 2 8π + 1 N(N−1) .
Conjectured scattering theory
The thermodynamic Bethe ansatz
Though there are arguments based on symmetry to infer what the scattering theory should look like, the approach we use is to first conjecture a set of thermodynamic Bethe ansatz equations (TBA) to compute the free energy of the 1+1 quantum field theory associated with the action (2.6) at temperature T . We parametrize
β 2 8π = γ N (N + γ) ,(3.1)
and consider the case γ an integer. Our conjecture is as follows. Introduce the TBA
diagram γ + N − 1 γ + N 1 2 N N + 1 ------------- γ + N − 2
with incidence matrix N jk such that N jk = 1 if the nodes j and k are connected, and 0 otherwise (in particular N jj = 0). With this diagram, we associate the set of pseudo energies (one for each node) solution of the system (R = 1/T , T the temperature)
ǫ j = N k=1 δ jk m k R cosh θ − k N jk dθ ′ 2π 1 cosh (θ − θ ′ ) ln 1 + e −ǫ k (θ ′ ) . (3.2)
The free energy reads then
F = − T 2π N k=1 dθ 2π m k cosh θ ln 1 + e −ǫ k (θ) . (3.3)
In the foregoing equations, the m k are a set of masses which depend on the couplings a k in the bare action. By dimensional analysis,
[a k ] = [length] β 2 4π − 2 N . Therefore, we have m k = G k (a 1 , . . . , a N ) = a N +γ 2 1 F k (a 2 /a 1 , . . . , a N /a 1 ). (3.4)
The G k are homogeneous functions of the couplings a k . Some properties of these functions are known before hand of course. They are symmetric functions of their arguments. If all the a k but one vanish, we know that the problem becomes equivalent to the N th supersymmetric sine-Gordon model, and therefore, from known results [18], [19], all the masses but the N th one must vanish. This means that G j = 0, j = 1, . . . , N − 1, when all the a k but one vanish. Also, we know that the N th mass vanishes when one of the fields decouples, ie when one of the coefficients (2.20) vanishes. More generally, the masses m k . . . m N vanish when N − k + 1 of these coefficients vanish. We will get back to the determination of the functions G k below.
The evidence for the TBA comes first from the compatibility with all limiting cases.
Moreover, the analysis of the Y system [19] associated with it shows that the dimension of the UV perturbing operator is always the same as in the generalized supersymmetric case (it does not depend on the number of massive nodes), ie h = γ+N−1 γ+N = β 2 8π + N−1 N as desired. Finally, the central charge, in the generic case when all the m k are non zero is simply equal to the number of massive nodes, ie c = N . This is easily checked. Using standard formulas, the central charge is expressed in terms of the solutions of the system (3.2) as T → 0 and T → ∞. In the first case, the N first ǫ's are all infinite, the others follow from
x j = e −ǫ j = (j + 1) 2 − 1, j = N + 1, . . . , N + γ − 2 x N+γ−1 = x N+γ = γ − 1.
In the second case, one has to solve the same system with more nodes, ie
y j = e −ǫ j = (j + 1) 2 − 1, j = 1, . . . , N + γ − 2 y N+γ−1 = y N+γ = N + γ − 1.
The central charge is then (here L designates the Euler dilogarithm) [19] c = 6
π 2 L y 1 + y − L x 1 + x . (3.5)
For a D diagram, as T → ∞, the sum 6
π 2 L y 1+y
is equal to the number of nodes minus one, so the central charge is simply, from L(1) = π 2 6 , equal to the number of massive nodes, ie c = N indeed.
Scattering theory
The scattering theory associated with this TBA is very simple. One introduces a set of N − 1 scalar massive particles with masses m 1 , . . . , m N−1 . One also introduces a pair soliton/antisoliton with masses m N . The latter scatter with the usual sine-Gordon S matrix that corresponds to the quantum group parameter introduced above, q = −e −iπ/γ -it is the same as the S matrix of an ordinary sine-Gordon model at coupling β 2 eq 8π = γ γ+1 . The scalar particle of label k scatters trivially with all particles, except the ones of label k ± 1, with which it scatters with the CDD factor S = i tanh θ 2 − iπ 4 . When k = N − 1, the particle scatters with the soliton and antisolitons with the same CDD factor. It is important to stress that the sine-Gordon S-matrix considered here has no poles in the physical strip: the scalar particles are not bound states of the soliton and antisoliton.
Remarkably, a scattering theory built with similar ingredients appears in a paper by Korepin [20]. There, the author discusses the Thirring model in the repulsive regime, using a sharp cut-off regularization. For a coupling corresponding to a sine-Gordon parameter
β 2 8π ∈ l l+1 , l+1
l+2 , he finds, in addition to the soliton and antisoliton, a spectrum made of (l − 1) neutral particles, with same S-matrices as ours, but where all the masses are uniquely determined as a function of l (in particular, the soliton mass becomes infinite when β 2 8π = l+1 l+2 ). The soliton and antisoliton scatter through a sine-gordon S matrix with a renormalized β 2 8π l
= 1−l(1−β 2 /8π) 1−(l−1)(1−β 2 /8π) parameter.
The relation with our problem, if any, is not clear. It is usually admitted that the in the repulsive regime of SG, the quantum theory must be defined with care, and depends on the cut-off. For smoother cut-offs, as well as XXZ type regularizations, the results of [20] are not supposed to hold, and the standard description of [21] with only the soliton and antisoliton to be correct.
Observe that the sine-Gordon part of the S matrix commutes with the quantum algebraŝl(2) q . While the non local conserved charges Q ± have commutation relations that do not close, they can be expresed as combinations of N basic charges generatingŝl(2) q . A simple representation of the algebra generated by the Q ± is then obtained by identifying all these charges -it is realized on the multiparticle soliton antisoliton states as in the usual sine-Gordon model [18]. In the general case, there are no other conserved charges.
If N = 2 for instance, another, local, conserved charge appears when eg b = 0, since the local currents G = ψ∂Φ is conserved [18]. But away from this value (and a = 0 of course), this is not true. We must thus complete the S matrix by a sector with no apparent symmetries, that does not spoil the Yang Baxter equation: besides "vertex" and "RSOS" type solutions, the only available choice is a set of particles with diagonal scattering. Requiring the central charge to be N , and the TBA to restrict to the known ones when some of the couplings vanish (and additional symmetries appear) seems to leave no choice but our result.
Still another check comes from discussing the limit β → 0, to which we turn now.
The "classical" limit.
We call here classical the limit where the sine-Gordon part of the scattering theory becomes identical with the classical one, that is β → 0, γ → 0.
The TBA at the reflectionless points
So far we wrote a TBA in the simplest case γ an integer, for which β 2 8π ≥ 1 N(N+1) , corresponding to the SG component of the S-matrix being in the repulsive regime. To approach β = 0, we consider instead the cases γ = 1 n , that is β 2 8π = 1 N(nN+1) : the SG S-matrix is now in the attractive regime, at the so called reflectionless points. This means that the spectrum has to be completed by the bound states of solitons and antisolitons, the n − 1 breathers of masses 2m sin j π 2n , where m is the soliton mass. We denote 2m 1 sin π 2n , . . . , 2m N−1 sin π 2n the masses of the scalar particles (recall these are not bound states; the factor 2 sin π 2n in their masses is introduced for convenience only). By building the complete scattering theory using bootstrap and fusion, one finds the following TBA equations.
One first has equations for the right hand side of the diagram, that look like the standard ones for the attractive regime of sine-Gordon
ǫ j = k N jk K * ln (1 + e ǫ k ) , N + 1 ≤ j ≤ N + n. (4.1)
There is then a central part involving the nodes N − 1 and N , with
ǫ N = K * ln (1 + e ǫ N +1 ) − ln 1 + e −ǫ N −1 ,(4.2)
and
ǫ N−1 = 2m N−1 cos π 2n + m tan π 2n cosh θ T − K * ln (1 + e ǫ N ) − K ′′ * 1 + e −ǫ N −1 − K ′ * 1 + e −ǫ N −2 .ǫ j = 2m j sin π 2n cosh θ T − k N jk K ′ * 1 + e −ǫ k , j ≤ N − 2. (4.4)
These equations can be conveniently encoded in the diagram * *
N + n − 1 N + n 1 2 N − 1 N -/ -----/ -------- N + n − 2
The asymptotic conditions for the attractive part are For T → 0, we have x j = 0, since now all nodes are massive. For T → ∞, we have
ǫ j ≈ 2m sin (j − N + 1)π 2n cosh θ T , N ≤ j ≤ n + N − 2 ǫ n+N−1 ≈ ǫ n+N ≈ m cosh θ T .e −ǫ j =(j + 1) 2 − 1, j ≤ N − 1 e ǫ j = j − N + 1 + 1 N 2 − 1, N ≤ j ≤ N + n − 2 e ǫ N +n−1 =e ǫ N +n = n + 1 N − 1. (4.6)
The central charge is thus
c = 6 π 2 2L 1 n + 1 N + n−1 j=1 L 1 j + 1 N 2 + N−1 j=1 L 1 − 1 (j + 1) 2 = N. (4.7)
4.2. The classical limit
The "classical limit" is obtained by letting β → 0. In our TBA, this means n → ∞.
To get non trivial results then, we scale the soliton mass m with n, so the mass of the first breather remains finite. Similarly, we assume that the parameters m j are also scaled with n, so that the masses of the scalar particles remain finite. Our computation follows the general strategy of [22], [23]. In that limit, it is convenient to introduce the new notation κ j = ǫ j+N−1 , j ≥ 1. When n → ∞, the kernels K and K ′′ become delta functions, (we set K ′ = s) and one finds the general solution
e κ j + 1 = aA j − a −1 A −j A − A −1 2
The constant A follows from the knowledge of mass terms, A = e C/2T , C = mπ n cosh θ, while the constant a depends on ǫ N−1 :
1 + e −ǫ N −1 = A − A −1 a − a −1
In that limit, the equation satisfied by ǫ N−1 is
ǫ N−1 = −s * ln 1 + e −ǫ N −2 − 1 2 ln (1 + e κ 1 ) − 1 2 ln 1 + e −ǫ N −1 + Λ + 1 2 C T where Λ = m N −1 m . Let us then define ǫ ′ N−1 = s * ln (1 + e ǫ N −2 ) + Λ C T .
By simple algebra, one finds that the following holds
1 + e −ǫ N −1 = 1 + e −ǫ N 1 + e −ǫ ′ N −1 together with ǫ N = (Λ + 1) C T − s * ln 1 + e −ǫ N −2
We can thus trade completely the right hand side of the diagram for an additional node, and get the equations (recall s = K ′ ,s = We can now compute the free energy. Its general expression is
F T = − ∞ j=1 j dθ 2π C(θ) ln 1 + e −κ j − N−1 j=1 m j m dθ 2π C(θ) ln 1 + e −ǫ j . (4.9)
By using the basic TBA equation
κ j ≈ jC T + 2 ∞ k=1 k ln 1 + e −κ k − ln 1 + e −ǫ N −1 ,
one finds after a few simple manipulations
F T = − N j=1 dθ 2π M j cosh θ ln 1 + e −ǫ j + dθ 2π m cosh(θ) ln 1 − e −M cosh θ/T . (4.10)
Here, the new mass M N = M N−1 + mπ n as before, and M = mπ n . The results of the classical limit are therefore equations (4.8) and (4.10). This means, the classical limit is made up of N particles scattering in a non trivial way, plus a decoupled free boson.
In the particular case of the generalized supersymmetric sine-Gordon model, all the masses M j but the N th one vanish: the system (4.8) reproduces the well known TBA for Z N field theories perturbed by the parafermion field [24], as is expected from letting β → 0 in the action.
In the particular case N = 2, there is no node N − 2, and the TBA system is trivial.
The system decouples into two free fermions of masses m 1 π n and (m 1 + m) π n , and a free boson of mass mπ n . This is again expected from the action, and a rather non trivial check from the point of view of the TBA. The mass for the free boson certainly arises from counter terms analogous to ones arising in the N = 1 supersymmmetric action [25] (for more discussion about this, see next section). It also follows that, in the classical limit, the correspondence between masses in the TBA and bare couplings goes, assuming a 2 ≤ b 2 , as
m 1 ∝a 2 m ∝(b 2 − a 2 ),(4.11)
where m is the mass of the first breather, and m 1 the mass of the scalar particle. In the more general case, the decoupled free boson presumably gets its mass from a counter term still analogous to what happens in the supersymmetric case [25]. The rest of the TBA corresponds to a non trivial theory, with N species of Z N parafermions interacting.
It is interesting to discuss in more details the case N = 3. There, the TBA is based on the diagram 1 2 3 ---with central charge c = 2. Observe that this value can be obtained by 2 = 1 2 + 7 10 + 4 5 , corresponding to the sum of the central charges for the Ising, tricritical and tetracritical Ising model (or, alternatively, the Ising and tricritical Ising models, and the 3 state Potts model). Remarkably, the weight h = 2 3 of our three parafermions Ψ (j) can be recovered by using fields of these minimal conformal field theories. The three fields Φ (where lower labels are labels of the Kac table, upper labels are the central chagres) do have conformal weights h = 2 3 . One might be tempted to infer that the foregoing TBA also describes the perturbation of this product of three theories by a combination of these three fields. This cannot be true however: it is easy to check that the operator algebra of these three fields cannot be reproduced by using our parafermion fields only, due for instance to the appearance of powers 1/15 and 19/15. More generally, a TBA like While it is tempting to speculate that ther TBA does describe the product of M minimal models perturbed by a combination of these fields, this result does not seem to be true.
Numerical check
To check the validity of the TBA besides the qualitative features we just discussed, one needs to compare the result for the free energy (3.3) to perturbative computations.
We restrict here to the simplest case N = 2. Because the action (2.13) has two free parameters, reflected in the existence of the two masses m 1 and m 2 , a full consistence check requires at least going to the 6 th order (odd orders vanish) in perturbation -a really complicated task, as discussed in more details in the appendix. The second order does not give any check but fixes a global scale. The fourth order does contain some information: consistency determines uniquely the relation between the masses and a, b. In fact, it is not obvious a priori that this relation will be physical: finding it involves solving some quadratic equations whose solutions might well be complex, establishing, in fact, that the TBA is not the right one. We have however always found solutions that are physical, indicating at least that the TBA is consistent to that order. Moreover, the general shape of the functions G we obtain can be argued to be the right one based on limiting cases, giving us some confidence in the TBA indeed.
I = 12 Γ(1/2 + β 2 8π ) Γ(1/2 − β 2 8π ) 2 Γ(− β 2 4π ) Γ(1 + β 2 4π ) 2 a 4 a 2 2 (5.2)
as a function of the mass ratio m/m 1 . By solving the second order equation in (k − /k + ) 2 one can extract the dependence of the coupling constants on the mass ratio. In Fig.3 we show that the quantity R = (b 2 − a 2 )/a 2 is still, for our value β 2 /8π = 1/10, almost a linear function of m/m 1 , like in the classical limit β 2 /8π → 0 where R = m m 1 : moreover, the slope is very close to its classical value of 1/2.
Relations with multispin integrable lattice models
The conjectured TBA appears very naturally in an a priori different context: the study of inhomogeneous integrable lattice models of XXZ type with a mixture of different representations. To explain this in a concise manner, we refer the reader to [26], and use similar notations (though the matter is quite standard). We consider thus an integrable model based on sl q 0 (2) R-matrices, whose "vertical space" is an array with spins s 1 = s 2 = 1, s 3 = s 4 = 2, . . . , s 2N−1 = s 2N = N and s 2N+i = s i otherwise, ie made of blocks representing the N first values of SU (2) spin. The associated spectral parameters
alternate u 1 = −u 2 = i Λ+λ 1 2 , . . . , u 2N−1 = −u 2N = i Λ+λ N 2
, and u 2N+i = u i otherwise.
The anisotropy is determined by the quantum group parameter q 0 = exp iπ γ+N . For hamiltonian we chose
H = −1 t d du ln t 1 i Λ + λ 1 2 + u t 1 −i Λ + λ 1 2 − u −1 . . . t N i Λ + λ N 2 + u t N −i Λ + λ N 2 − u −1 u=0 . (6.1)
Here, t s denotes the transfer matrix based on the foregoing vertical space and a "horizontal space" is a representation of spin s. The whole geometry can be illustrated on the following picture u 1 . . . u j . . .
s 1 . . . s j . . . u ---------------s
In the case of an array with a single type of spin j, the physical equations are well known to be encoded in a TBA identical to what we studied above for β 2 8π = γ N(γ+N) , but with a mass term on node j only. In this more general case, it is straightforward to show that one gets now a mass term for each of the N first nodes, with masses
m j = M exp − (γ + N ) 2 λ j , (6.2) where M = 4 ∆ exp − (γ+N) 2 Λ .
By taking the continuum limit, this local integrable lattice model will give rise to an integrable quantum field theory that has exactly the TBA conjectured in section 3. This indicates that this TBA is more than an abstract set of equations, but must be related to a genuine quantum field theory: we conjecture this field theory is nothing but (2.6).
It is interesting to observe that the same TBA would also be obtained by chosing a uniform spectral parameter (eg all λ j = 0) but by putting different amounts of the various spins, with densities proportional to the masses m j ; see [27] for more details on this approach. In the latter reference in particular, S matrices are directly derived from the lattice regularization, and agree with the results of section 3. In the context of lattice models, central charges have also been computed with TBA similar to ours [28], [29] 7. Impurity problems
The same argument of perturbative integrability carries through in the case of impurity problems [30]. Consider thus the problem with free bosons Φ i , Φ in the bulk, and an interaction term at the boundary
H bdr = N j=1 a j Ψ (j) (0) S − e iβΦ(0) + conjugate. (7.1)
Here, S ± are raising and lowering operators in a spin j 5 representation of the quantum group sl(2) q 0 (here the deformation parameter is not the same than the one appearing in the S-matrices earlier, but rather the one of the lattice model above, q 0 = exp iπ γ+N ). Note that only the right moving part of the fields appears in the action, but that the bosons φ, Φ all have Neumann boundary conditions, ie their right and left moving components are identical at the boundary. Alternatively, one could thus express the boundary perturbation with the total fields, and exponentials of half the argument. A particular case is where the boundary spin is in a "cyclic" representation, and can be gauged away [31], [32]. One finds then
H bdr = N j=1 λ j exp i (φ j + βΦ) + conjugate,(7.2)
with λ j defined in (2.20).
In the simplest case of the repulsive regime, and for γ an integer, the boundary free energy for spn j with j ≤ γ + N − 2 reads, as in the usual anisotropic Kondo problem ,
F bdr = −T dθ 2π 1 cosh(θ − θ B ) ln 1 + e −ǫ j . (7.3)
Here, the ǫ j are obtained by solving the TBA system (3.2) in the massless limit; this means, one sends the masses to zero and the rapidities to ∞, such that only right moving particles with dispersion relation e = p remain. In the TBA, the source terms are obtained by the simple substitution cosh θ → 1 2 e θ . The "masses" are simply parameters with the physical dimension of an inverse length. The rapidity θ B is such that m 1 e θ B ∝ a N +γ 2
1
. For the case of the cyclic spin, the free energy reads as (7.3) but with j = γ + N .
It is especially interesting to consider the free energy in the classical limit 6 . There, the same formula (7.3) holds for spin j ≤ N − 2 and the TBA now given by (4.8). For the spin j = N − 1, we have, due to some of the foregoing changes of variables,
F bdr = −T dθ 2π 1 cosh(θ − θ B ) ln 1 + e −ǫ N −1 + ln 1 + e −ǫ N . (7.4)
Finally, for the case of cyclic boundary spin.
F bdr = −T dθ 2π 1 cosh(θ − θ B ) ln 1 + e −ǫ N ,(7.5)
(in the last formulas, we have subtracted the trivial boundary free energy of the free boson Φ).
In the case N = 2, the problem has been studied in [34]. Take an impurity of spin 1/2, and set S + = d + , S − = d. Introducing the Dirac fermion Υ = ψ + iχ, the boundary action reads
a − b 2 Υ + d + + Υd + a + b 2 Υd + + Υ + d
According to (7.4), the free energy in the spin 1/2 case (ie j = 1 in our notations), reads, using notations of section 2,
F bdr = −T dθ 2π 1 cosh(θ − θ B ) + 1 cosh(θ − θ ′ B ) ln 1 + e −e θ /T ,
where we have made a shift of the variable of integration, and thus e θ B /e θ ′ B ∝ a 2 b 2 . In terms of the λ, µ variables describing the couplings to the different channels, this reads
e θ B /e θ ′ B ∝ λ−µ λ+µ 2
, in agreement with results of [34].
For general N , and when all the masses m k but m N vanish ( the standard generalized supersymmetric case) we obtain from (7.5) the ratio of degeneracy factors in the UV and IR g U V g IR = √ N , corresponding to a flow from free to fixed boundary conditions in the Z N model. In general, we have here the solution of a multiboson problem with arbitrary couplings; when N = 3 for instance, this means that the problem with boundary
perturbation λ cos φ 1 + µ (cos φ 2 + cos φ 3 )
is integrable. Rexpressing the bosons φ in terms of the independent bosons Φ, we obtain a two boson problem that is nothing but a quantum wire problem: see [16] for more details.
Conclusions
We feel there is more to understand in the theories we have addressed. The relation with the bare Bethe ansatz solutions of [13], [14] in the color isotropic case is poorly understood, as discussed in the text. The scattering theory we have proposed is rather Finally, we have restricted to positive coefficients a j in the problem, but it is clear that the perturbations will not always be massive if we allow some of these coefficients to be negative: where the theories flow to in that case is also an open question.
Appendix
As a check of the correctness of our solution we now compute with conformal perturbation theory the free energy for the case N = 2. The result has to be compared with the free energy obtained by numerically solving the TBA equations. Let's consider the system in the strip geometry (R, L) defined by the action A = A cf t − strip Φ int , where the interaction is given by (2.6). The dimensionless running central charge
C(R, a, b) = lim L→∞ 6R πL ln Z[R, L] = 6R 2 π F (9.1)
becomes in perturbation theory
C(R, a, b) = c UV + 12 lim L→∞ R 2πL ∞ k=2 1 k! strip d 2 w 1 · · · d 2 w k < Φ int (w 1 ) · · · Φ int (w k ) > c .
(9.
2)
The first correction k = 2 is ultraviolet divergent for any real value of β 2 , the anomalous dimension of the perturbing operator being ∆ = 1/2 + β 2 8π . Whatever regularization we choose, for example a radially ordered one [35], we obtain a divergent part, to be subtracted by a constant counterterm in the lagrangian, and a universal part. The counterterm contains a possible finite contribution giving rise to a non-universal bulk term which has to be fixed by a normalization condition. This condition is C(R, a, b) → 0 when R → ∞.
In practice the bulk term will be determined by comparison with the TBA result, while in integrable theories with only one mass scale it can be computed analytically. Taking into account the first non trivial correction, the running central charge reads
C(R, a, b) =c UV + c bulk (R, a, b) + 6(2π) β 2 2π Γ(1/2 + β 2 8π ) Γ(1/2 − β 2 8π ) 2 Γ(− β 2 4π ) Γ(1 + β 2 4π ) (a 2 + b 2 )R 1− β 2 4π 2 . (9.3)
The theory depends on the pair of massive coupling constants (a, b), or alternatively on the two masses (m1, m). Since we are going to compare our perturbative expansion with the TBA result it is useful to introduce the dimensionless coupling constants (k + , k − ) and to make explicit the dependence on the mass ratio:
a 2 + b 2 = k + (m/m 1 ) m 1− β 2 4π and a 2 − b 2 = k − (m/m 1 ) m 1− β 2
4π . Defining the dimensionless quantity r = mR, the second order running central charge becomes C(r, m/m 1 ) =2 + c bulk (m/m 1 ) r 2 + 6 (2π) β 2 2π
Γ(1/2 + β 2 8π ) Γ(1/2 − β 2 8π ) 2 Γ(− β 2 4π ) Γ(1 + β 2 4π ) [k + (m/m 1 )] 2 r 2− β 2 2π . (9.4) where C(4)1 = 12 lim L→∞ R 2πL strip d 2 w 1 · · · d 2 w 4 [G 1 − D]| strip (9.9)
and C (4)
2 = 12 lim L→∞ R 2πL strip d 2 w 1 · · · d 2 w 4 G 2 | strip . (9.10)
In the computation of the first integral we take the limit L → ∞ first, and we cancel the overall volume of the strip RL, and then we map the infinite strip to the whole z-plane.
We thus get 9.11) and by using the residual symmetry on the integration variables we are essentially reduced to two integrals
C (4) 1 = (2π) β 2 π 2 R 4− β 2 π d 2 z 2 2π d 2 z 3 2π d 2 z 4 2π (|z 2 ||z 3 ||z 4 |) β 2 4π −1 [G 1 − D] z 1 =1(C (4) 1 = 3(2π) β 2 π R 4− β 2 π (a 2 − b 2 ) 4 A 1 + (a 2 + b 2 ) 4 A 2 (9.12) where A 1 = d 2 z 2 2π d 2 z 3 2π d 2 z 4 2π (|z 2 ||z 3 ||z 4 |) β 2 4π −1 |z 12 | 2 |z 34 | 2 |z 12 | |z 34 | |z 13 | |z 14 | |z 23 | |z 24 | β 2 2π z 1 =1 (9.13)
and
A 2 = 2 d 2 z 2 2π d 2 z 3 2π d 2 z 4 2π (|z 2 ||z 3 ||z 4 |) β 2 4π −1 |z 12 | 2 |z 34 | 2 × |z 13 | |z 24 | |z 12 | |z 14 | |z 23 | |z 34 | β 2 2π − 1 |z 12 ||z 34 | β 2 2π z 1 =1 .
(9.14)
These two integrals are of Dotsenko-Fateev type [36]. By deforming the integration contours they can be transformed into products of two factors. Each factor is the sum, with proper trigonometric coefficients, of one-dimensional integrals of this kind
1 0 3 i=1 dv i v α i i (1 − v i ) β i (1 − v 1 v 2 ) γ 1 (1 − v 2 v 3 ) γ 2 (1 − v 1 v 2 v 3 ) γ 3 . (9.15)
They can be formally integrated by binomially expanding the last three factors of the integrand and using the fundamental integral 1 0 dvv a (1−v) b = Γ(1+a)Γ(1+b)/Γ(2+a+b). Therefore the two integrals A1 and A2 can be reduced to the computation of (products of) converging series of three indices. We don't give the explicit expressions because of their algebraic heaviness. The method is really a straightforward generalization of the paper [37]. The result can now be evaluated numerically by extrapolating the finite sums.
The integral contributing to C (4) 2 is evaluated with the method developed in [38]. We first map the finite strip to the annulus ρ < |z| < 1, where ρ = exp(−2πL/R), and then we compute the leading contribution of the integral in the limit ρ → 0. Using the symmetry under permutation of the four integration variables we obtain Having ordered the integration variables it is now possible to binomially expand each factor z γ jl = r γ j exp(iθ j γ)(1 − exp(i(θ l − θ j )r l /r j ) γ for j > l. Then we obtain a series on 12 indices constrained by three independent conditions from the angular integrations, each term of the series being a product of binomial coefficients and four radially ordered integrals of powers of the r i . The last integral, the one in dr 4 , gives the necessary overall volume divergence (2.1) ln(1/ρ). The result is
C (4) 2 = 3(2π) β 2 π R 4− β 2 π (a 4 − b 4 ) 2 S 1 + (a 2 + b 2 ) 4 S 2 (9.17)
where we give as an example a term contributing to S 2 n 1 ...,m 1 ...
′
−2q n,m (n 1 + n 2 + n 3 + 1 2 + β 2 8π )(n 3 + n 5 + n 6 + 1 2 + β 2 8π )(n 2 + n 3 + n 4 + n 5 + 1 + The coefficient q n,m is q n.m =b n 1 1 +
β 2 4π b m 1 1 + β 2 4π b n 2 − β 2 4π b m 2 1 − β 2 4π b n 3 1 − β 2 4π b m 3 − β 2 4π b n 4 1 − β 2 4π b m 4 − β 2 4π b n 5 − β 2 4π b m 5 1 − β 2 4π b n 6 1 + β 2 4π b m 6 1 + β 2 4π (9.20)
with b n (x) = Γ(n + 1 − x)/ (n!Γ(1 − x)).
Summarizing, the fourth order correction for the running central charge is given by
C (4) = 3(2π) β 2 π (A 2 + S 2 ) k 4 + + S 1 k 2 − k 2 + + A 1 k 4 − r 4− β 2 π (9.21)
where the numbers A 1 ,A 2 ,S 1 ,S 2 depend on β 2 8π and can be computed numerically by extrapolating the values of the respective finite sums. Unfortunately the above sums, especially
φ j = e j • Φ, j = 1, . . . , N,(2.4)where the • denotes scalar product and the e j are weights of the fundamental representation of SU (N ) 2 .
2
That is, e 1 = Λ 1 , e 2 = Λ 2 − Λ 1 , ..., e N −1 = Λ N −1 − Λ N −2 , e N = −Λ N −1 , Λ i the fundamental weights of SU (N ) and Φ the (N − 1) dimensional vector of coordinates Φ 1 , . . . , Φ N −1 .
the left hand side of the diagram looks in turn like the usual repulsive one
πω/2n cosh πω/2 .
N
jk s * ln 1 + e −ǫ k , j = 1, . . . , N, and M N = M N−1 + m π n .
.
central charge that can be written as c = c 1 +c 2 +. . . c M , where c M = 1− 6 (M +2)(M +3) . The conformal weight of the perturbing operator is h = M −1 M , which can be reproduced by M fields of the form Φ . . Φ c M 33 , m = 1, . . . , M .
by fitting. As a check, the bulk terms of the (extrapolated) limiting cases m/m 1 = 0 and m 1 /m = 0, i.e. the sine-Gordon and the supersymmetric sine-Gordon points, agree with the exact values B = 3/π and B = 0 within an accuracy of 0.1%. In Fig.1 we give the result for the adimensional ratio of coefficients
Fig. 1 :Fig. 2 :Fig. 3 :
123TBA result for the ratio I This curve has to be compared with the same universal ratio determined in the appendix with perturbation theory as a function of x = (k − /k + ) 2 , shown in Fig.2. The limiting cases are the sine-Gordon model at x = 0 and the supersymmetric sine-Gordon model at x = 1. Perturbative "Quasi-classical" behaviour of the ratio of coupling contants The value of the universal ratio I at the minimum point found with perturbation theory is I pert min = −2.374 in good agreement with the TBA value I T BA min = −2.366. Another important check is the value of the universal ratio at the supersymmetric sine-Gordon point: the perturbative result is I pert susy = 29.45 while the TBA value is I T BA susy = 29.49, confirming therefore the standard analysis 4 of this point
mysterious, and we have not answered the question of what the scalar particles have to do with the flavor symmetry breaking. The problem of analytically determining the relation between the action parameters a j and the masses m j remains in general open.
i
G 2 .
symbol ′ means the following conditions on the indices n 1 , . . . n 6 , m 1 , . . . m 6 n 1 + n 2 + n 3 = m 1 + m 2 + m 3 ; n 1 − n 4 − n 5 = m 1 − m 4 − m 5 n 2 + n 4 − n 6 = m 2 + m 4 − m 6 ; n 3 + n 5 + n 6 = m 3 + m 5 + m 6 . (9.19)
the A i ones, are slow to converge affecting therefore the precision of the extrapolated values. For the most favourable case β 2 8π = 1 10 the extrapolated values are A 1 =49.9 , A 2 = 1.44 S 1 = − 20.1 , S 2 = −1.79 . (9.22)For the first two coefficients A 1 and A 2 we have used the VBS extrapolation method over the set of finite sums with 25 < N ≤ 40, while the other coefficients S 1 and S 2 have been determined with the BST extrapolation method with convergence parameter ω = 1 over finite sums with N ≤ 7. The need for difference extrapolation methods is due to the difference in the rate of convergence of the series. With our choice the given extrapolated values are the most stable with respect to N . A coincise introduction to the above extrapolation methods can be found in[39].The numerical integration of the TBA equations gives us the running central charge C(r, m/m 1 ) with high precision and therefore the coefficients of the expansion determined with a standard fitting procedure. By matching the perturbative computation with the first two coefficients a 2 and a 4 we can determine now the two functions k + (m/m 1 ), k − (m/m 1 ) as solution of the following second order algebraic system 2 + S 2 ) .
That is, no counter term is necessary to make the action supersymmetric and integrable away from β = 0.
In our conventions, the fundamental has spin one.
This is usually called the "Toulouse" limit in the context of impurity problems[33]
Acknowledgments:We thank N. Andrei F. Lesage, and Al. Zamolodchikov for many useful discussions.Let's consider the case β 2 < 2π, which includes the attractive regime. The only UV divergence of perturbation theory is the one that occurs at second order. All the other perturbative contributions are UV finite. The third order correction is zero because the unperturbed three points correlation function of the interaction is zero by charge neutrality. In order to compute the fourth order correction we map the w-strip onto the z-plane, z = exp(i2πw/R), and we express by Wick theorem the unperturbed four points correlation functionto which we have to subtract the disconnected termIt is useful to rewrite the correlation function as the sum of two piecesand23+ c.c. |z 12 | |z 34 | |z 13 | |z 14 | |z 23 | |z 24 | β 2 2π + Perm.(9.8)and to compute the perturbation integral as the sum of the two corresponding integrals (G 1 − D) and G 2 , both separately UV-finite. As a result, the fourth order term is the sum C (4) = C1 + C (4) 2
C. Kane, M. Fisher, Phys. Rev. B46 (1992) 15233.
C. Nayak, M. P. Fisher, A. W. W. Ludwig and H. H. Lin, cond-mat/9710305.
H. H. Lin, L. Balents and M. P. Fisher, cond-mat/9801285.
V. A. Fateev, Nucl. Phys. B473 (1996) 509.
A. P. Bukhvostov, L. N. Lipatov, Nucl. Phys. B180 (1981) 116.
F. Lesage, H. Saleur, P. Simonetti, cond-mat/9707131.
D. Bernard, A. Leclair, Comm. Math. Phys. 142 (1991) 99.
P. Griffin, D. Nemeschanksy, Nucl. Phys. B323 (1989) 545.
See for instance A. C. Hewson, "The Kondo problem to heavy fermions", Cambridge Studies in Magnetism, Cambridge (1993) and references therein; N. Andrei and C. Destri, Phys. Rev. Lett. 52 (1984) 364;
I. Affleck and A. W. W. Ludwig, Nucl. Phys. B360 (1991) 641.
V. Emery, S. Kivelson, Phys. Rev. B47 (1992) 10812.
M. Fabrizio, A. O. Gogolin, cond-mat/9407104.
F. Lesage, H. Saleur, P. Simonetti, cond-mat/9703220.
N. Andrei, A. Jerez, cond-mat/9412054.
N. Andrei, M. Douglas, A. Jerez, cond-mat/9502082.
N. Andrei, P. Zinn-Justin, cond-mat/9801158.
I. Affleck, M. Oshikawa, H. Saleur, in preparation.
V. A. Fateev, A. B. Zamolodchikov, Sov. Phys. JETP 62 (1985) 215.
C. Ahn, D. Bernard, A. Leclair, Nucl. Phys. B346 (1990) 409.
Al. B. Zamolodchikov, Nucl. Phys. B385 (1991) 497.
V. Korepin, Comm. Math. Phys. 76 (1980) 165.
A. B. Zamolodchikov and Al. B. Zamolodchikov, Annals of Physics 120 (1979) 253.
A. M. Tsvelick and P. B. Wiegmann, Adv. in Physics 32 (1983) 331.
M. Fowler, Phys. Rev. B26 (1982) 2514.
V. A. Fateev, Int. J. Mod. Phys. A6 (1991) 2109.
S. Sengupta, P. Majumdar, Phys. Rev. B33 (1986) 3138.
N. Yu Reshetikhin, H. Saleur, Nucl. Phys. B419 (1994) 507.
H. J. de Vega, L. Mezincescu and R. I. Nepomechie, J. Mod. Phys. B8 (1994) 3473-3485.
H. J. de Vega, L. Mezincescu and R. I. Nepomechie, Phys. Rev. B49 (1994) 13223.
S. R. Aladin and M. J. Martins, J. Phys. A: Math. Gen. 26 (1993) L529.
S. Goshal, A. B. Zamolodchikov, Int. J. Mod. Phys. A9 (1994) 3841; Erratum-ibid. 4353.
P. Fendley, H. Saleur, Phys. Rev. Lett. 75 (1992) 4495.
V. Bazhanov, A. Lukyanov, A. B. Zamolodchikov, Commun. Math. Phys. 177 (1996).
G. Toulouse, C. R. Acad. Sci. 268 (1969) 1200.
M. Fabrizio, A. Gogolin, P. Nozieres, cond-mat/9412118.
A. W. W. Ludwig and J. L. Cardy, Nucl. Phys. B285 [FS19] (1987) 687.
Vl. S. Dotsenko and V. A. Fateev, Nucl. Phys. B240 [FS12] (1984) 312.
Vl. S. Dotsenko, Nucl. Phys. B314 (1989) 687.
H. Saleur and C. Itzykson, J. Stat. Phys. 48 (1987) 449.
P. Christe, M. Henkel, "Introduction to conformal invariance and its applications to critical phenomena", Lecture Notes in Physics, volume 16, Springer-Verlag.
| []
|
[]
| []
| []
| []
| Recharging a plug-in electric vehicle (PEV) is more time-consuming than refueling an internal combustion engine vehicle. As a result, charging stations may face serious congestion problems during peak traffic hours in the near future with the rapid growth of PEV population. Considering that drivers' time costs are usually expensive, charging congestion will be a dominant factor that affect a charging station's quality of service. Hence, it is indispensable to conduct adequate congestion analysis when designing charging stations in order to guarantee acceptable quality of service in the future. This paper proposes a data-driven approach for charging congestion analysis of PEV charging stations. Based on a data-driven PEV charging station planning model, we adopt the queueing theory to model and analyze the charging congestion phenomenon in these planning results. We simulate and analyze the proposed method for charging stations servicing shared-use electric taxis in the central area of Beijing leveraging real-world taxi travel data. | null | [
"https://arxiv.org/pdf/1712.07300v1.pdf"
]
| 37,243,015 | 1712.07300 | ed6547a4a7735fb38f62d7124853038b6d6b2619 |
This paper has been submitted to IEEE PES General Meeting 2018.
Index Terms—Plug-in electric vehicles, charging station planning, charging congestion, queueing theory, data-driven approach.
Recharging a plug-in electric vehicle (PEV) is more time-consuming than refueling an internal combustion engine vehicle. As a result, charging stations may face serious congestion problems during peak traffic hours in the near future with the rapid growth of PEV population. Considering that drivers' time costs are usually expensive, charging congestion will be a dominant factor that affect a charging station's quality of service. Hence, it is indispensable to conduct adequate congestion analysis when designing charging stations in order to guarantee acceptable quality of service in the future. This paper proposes a data-driven approach for charging congestion analysis of PEV charging stations. Based on a data-driven PEV charging station planning model, we adopt the queueing theory to model and analyze the charging congestion phenomenon in these planning results. We simulate and analyze the proposed method for charging stations servicing shared-use electric taxis in the central area of Beijing leveraging real-world taxi travel data.
I. INTRODUCTION
As a cleaner mode of transport, plug-in electric vehicles (PEVs) have long been considered a promising tool to combat the energy crisis and climate change. Hence, governments around the world have released extensive incentive policies to popularize them [1].
Different from internal combustion engine vehicles, PEVs need more time to refuel (recharge) so that charging congestion might occur at PEV charging stations. However, this factor is not considered adequately in most literature on PEV charging stations planning [2]- [7]. In the limited number of papers referring to the charging congestion, the overall charging process, including arriving, waiting and charging, is usually modeled by the queueing theory. For example, in [8] and [9], M/M/s queueing systems are used, while an M/G/s/k queueing system is adopted in [10]. Authors of [11] leverage an M/M/s/k queueing model to estimate the probability of electric taxis being charged at their dwell places, but the accurate model is approximated by means of regression and logarithmic transformation.
To the best of our knowledge, there is few paper focusing on the specific and detailed analysis of charging congestion in PEV charging stations. Thus, in this paper, we combine the queueing theory and the real-world taxi travel data to study this problem. The main procedures and contributions of the paper are summarized below.
1) Extract the PEV charging demands from the taxi travel data in the central area of Beijing;
2) Provide a typical median-based location model for PEV charging station planning;
3) Use the queueing theory to model PEV charging congestion, calculate the mean charging waiting time and waiting probability of PEV charging station; 4) Analyze the charging congestion under different charging station planning results and among different charging stations.
The remainder of the paper is organized as follows. Section II describes the taxi travel data and forecasts the PEV charging demands. In Section III, a basic PEV charging station planning method is provided and the corresponding results are shown. In Section IV, the calculation and analysis of charging congestion is elaborately presented. Finally, conclusions are drawn in Section V.
II. DATA-DRIVEN PEV CHARGING DEMANDS FORECASTING
First, we introduce our data set for this research. The data include 29709 taxis' travel records in the central area, around within the fifth ring, of Beijing from July 1st to July 31st in 2016, which were collected by smart phones or on-board devices. The taxis fleet recorded in the data account for 44% of all the taxis (about 67,000 in total [12]) in Beijing. For each travel record, the information contains the taxi ID, the time and the position (in longitude and latitude). Table I gives a sample of the taxi travel records in the data set.
Based on the data, we further forecast the PEV charging demands. In this research, we assume that ten percent of the taxis in Beijing are replaced by plug-in hybrid electric vehicles, which are still capable of driving on petroleum after the battery power is exhausted or falls below a threshold. Hence the travel behavior of electric taxis can be assumed to be similar to that of traditional taxis [13]- [15]. Considering that recharging a battery is more time-consuming than refueling an internal combustion engine vehicle, we regard dwelling times of at least 30 minutes as available recharging time windows. In light of the specifications of some plug-in hybrid electric vehicles on the market [16], [17], the vehicles' battery capacity and electric range are set to 10 kWh and 50 km, respectively. Besides, the rated power of the chargers to be deployed is supposed to be 10 kW. Then, the charging demand for each dwelling of more than 30 minutes can be calculated from the vehicle miles traveled since the previous charge. Finally, we can obtain the charging demand point positions and their charging demand weights. Fig. 1 shows the charging demand point distribution of a typical day.
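As a concrete illustration of the demand-extraction rule described above, the following is a minimal sketch under the stated assumptions (10 kWh battery, 50 km electric range, 30-minute dwell threshold); the function and constant names are ours, not part of the paper's actual processing pipeline.

```python
# Sketch: derive the charging demand of one dwell from taxi travel records.
BATTERY_KWH = 10.0      # assumed battery capacity
RANGE_KM = 50.0         # assumed electric range
MIN_DWELL_MIN = 30.0    # minimum dwell to count as a charging window

def charging_demand(km_since_last_charge, dwell_minutes):
    """Return the energy demand (kWh) at this dwell location, or 0 if the
    dwell is too short to be a usable recharging window."""
    if dwell_minutes < MIN_DWELL_MIN:
        return 0.0
    # Energy consumed since the last charge, capped at the battery capacity.
    return min(BATTERY_KWH, km_since_last_charge / RANGE_KM * BATTERY_KWH)

# Example: a taxi that drove 32 km and then parked for 45 minutes.
print(charging_demand(32.0, 45.0))  # 6.4 kWh of demand at that location
```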
III. DATA-DRIVEN CHARGING STATION PLANNING METHOD
Here, as the basis for the charging congestion analysis in the later section, we provide a typical median-based location model for PEV charging stations. In this model, p charging stations are located to minimize the total charging-demand-weighted distance between charging demand points and their nearest charging stations. The p-median model is formulated as below.

min  Σ_{i∈V} Σ_{j∈U} D_i L_ij A_ij   (1)

subject to:
Σ_{j∈U} A_ij = 1,  ∀ i ∈ V   (2)
A_ij − B_j ≤ 0,  ∀ i ∈ V, j ∈ U   (3)
Σ_{j∈U} B_j = p   (4)
B_j ∈ {0, 1},  ∀ j ∈ U   (5)
A_ij ∈ {0, 1},  ∀ i ∈ V, j ∈ U   (6)

In the above model, D_i is the charging demand at location i; L_ij is the trip distance between location i and location j; A_ij is an assignment variable, which equals 1 if the PEV at location i is assigned to the PEV charging station at location j, and 0 otherwise; B_j is a deployment configuration variable, which equals 1 if we site a PEV charging station at location j and 0 otherwise; p is the total number of PEV charging stations to deploy; U is the set of candidate locations of PEV charging stations and V is the set of charging demand points. The objective function (1) minimizes the total charging-demand-weighted distance for PEVs driving to the stations, which describes the convenience of charging services. Constraint (2) ensures that all the charging demands are assigned, while constraint (3) restricts assignments of charging demands to locations with a PEV charging station deployed. Constraint (4) states that the total number of PEV charging stations to be deployed is p. Constraints (5) and (6) state that A_ij and B_j are binary variables. Constraint (6) can be relaxed to (7) without any sacrifice of optimality, because for any given PEV charging station deployment, charging demands will be assigned to the closest station to achieve the minimum objective:

0 ≤ A_ij ≤ 1,  ∀ i ∈ V, j ∈ U   (7)

The formulated model (1)-(5), (7) is a mixed integer linear programming (MILP) model, which can be solved by deterministic branch-and-bound methods. As for the number and the distribution of candidate locations of PEV charging stations, i.e., U, a relatively large number, e.g., 500, and a uniform distribution are suggested under general scenarios. When p = 30, Fig. 2(a) presents the results of PEV charging station deployment and the corresponding charging demand assignment, and Fig. 3(a) depicts the comparison of the number of chargers among the PEV charging stations; the corresponding results for p = 60 are given in Figs. 2(b) and 3(b).
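The sketch below shows one way the relaxed p-median model (1)-(5), (7) could be written with the PuLP modeling library; the paper does not name a solver, so PuLP/CBC and the toy demand and distance data are purely illustrative.

```python
# Sketch of the relaxed p-median model (1)-(5),(7) using PuLP (assumed tooling).
import pulp

D = {0: 5.0, 1: 3.0, 2: 8.0}                  # demand weight D_i at each demand point
L = {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 2.0,   # trip distance L[i, j]
     (1, 1): 1.5, (2, 0): 3.0, (2, 1): 0.5}
V, U, p = list(D), [0, 1], 1                  # demand points, candidate sites, stations

prob = pulp.LpProblem("p_median", pulp.LpMinimize)
A = pulp.LpVariable.dicts("A", [(i, j) for i in V for j in U], 0, 1)   # relaxed (7)
B = pulp.LpVariable.dicts("B", U, cat="Binary")                        # (5)

prob += pulp.lpSum(D[i] * L[i, j] * A[i, j] for i in V for j in U)     # objective (1)
for i in V:
    prob += pulp.lpSum(A[i, j] for j in U) == 1                        # (2)
    for j in U:
        prob += A[i, j] <= B[j]                                        # (3)
prob += pulp.lpSum(B[j] for j in U) == p                               # (4)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in U if B[j].value() > 0.5])   # chosen station sites
```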
IV. CHARGING CONGESTION ANALYSIS USING QUEUEING MODEL
For a PEV charging station, the arrival, waiting and charging of PEVs can be modeled mathematically using queueing theory [8]-[10]. According to [8]-[10], the arrival process of PEVs can be considered as a Poisson process, and the service time, i.e., the time to charge each PEV, is supposed to follow the negative exponential distribution. However, in practice, the charging time of PEVs will be affected by various factors, such as the vehicle miles traveled [18], the parking duration [19] and the battery capacity [15]. As a result, the service time is inexplicitly distributed. So, herein, we apply the general distribution to describe the service time of PEVs [10].
Based on the above considerations, the PEV queue in a PEV charging station can be modelled as an M/G/s/k queueing system (k ≥ s), where M represents that the time between PEV arrivals to the queue obeys the negative exponential distribution, i.e., the arrival process of PEVs is a Poisson process, G represents that the service time of PEVs obeys the general distribution, s denotes the number of chargers, and k is the total capacity of the queueing system, i.e., the sum of the number of chargers of the PEV charging station and the capacity of the waiting spaces. For simplicity, in this paper, the waiting spaces are assumed to be sufficient, i.e., k → ∞, and the M/G/s queueing system is thereby adopted. Note that the queueing discipline is first come first served (FCFS), i.e., the PEVs are served in the order they arrived, and the size of the calling source, i.e., the population from which the PEVs come, is assumed to be infinite because the total PEV population is large enough that the arrival rate of PEVs with charging demands will not fluctuate anomalously.
A. Mean Waiting Time
Armed with the queueing model of a PEV charging station, we are now equipped to consider the calculation of the waiting time of PEV charging. Leveraging the approximation for an M/G/s queueing system developed in [20] and [21], we can approximately compute the mean waiting time of PEVs, expressed in (8)-(12) below. When p = 30 and p = 60, Fig. 4 shows how the mean waiting time changes as the total number of chargers varies. From the figure, it can be seen that 1) more chargers bring less mean waiting time; 2) for a given number of chargers, a smaller number of stations achieves less mean waiting time, because centralized chargers are shared by more PEVs and their utilization is higher. Note that a drawback of having fewer stations is that the distance for a PEV to travel to a nearby station becomes longer. For a given number of total chargers (1600 chargers here), the distributions of the number of stations with respect to the mean waiting time of stations when p = 30 and p = 60 are respectively presented in Figs. 5 and 6. It can be easily observed that more stations lead to longer mean waiting times when p = 60. Actually, for all the stations, the utilization factors are nearly the same; that is to say, the proportions of the number of PEVs with charging demands to the number of chargers are almost equal. The reason why the mean waiting time differs among stations is primarily the different numbers of PEVs with charging demands, i.e., different λ and s, for different stations. According to (8)-(12), for a fixed utilization factor, the eventual mean waiting time also depends on s and λ, and the former plays a dominant role.
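For readers who want to evaluate the M/M/s component of this approximation numerically, the following is a minimal sketch of the Erlang-C-based mean waiting time that the M/G/s interpolation builds on; the function names and example numbers are ours, not the authors'.

```python
# Sketch: Erlang-C based mean waiting time of an M/M/s queue,
# valid only when lam < s * mu (utilization factor below 1).
from math import factorial

def erlang_c(s, lam, mu):
    """Delay probability: probability that an arriving PEV has to wait."""
    a = lam / mu                         # offered load
    rho = a / s                          # utilization factor
    top = a**s / (factorial(s) * (1 - rho))
    bottom = sum(a**z / factorial(z) for z in range(s)) + top
    return top / bottom

def mean_wait_mms(s, lam, mu):
    """Mean waiting time W_{M/M/s} = C / (s*mu - lam)."""
    return erlang_c(s, lam, mu) / (s * mu - lam)

# Example: 50 chargers, 40 arrivals per hour, mean charging time of 1 hour.
print(mean_wait_mms(50, 40.0, 1.0))
```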
B. Waiting Probability
Let N denote the number of PEVs either waiting or being charged at the station; the waiting probability P(N ≥ s) is computed via (13)-(18) below. When p = 30 and p = 60, we plot the waiting probability variation curves with respect to the total number of chargers, as shown in Fig. 7. The results are similar to those of the mean waiting time, and can be interpreted by the same reasoning as in Subsection IV.A. Also, letting the number of total chargers be 1600, the distributions of the number of stations with respect to the waiting probability of stations when p = 30 and p = 60 are respectively presented in Figs. 8 and 9. It can be observed that, compared with the results in Fig. 8, in Fig. 9 the stations with relatively larger waiting probability account for a higher percentage. The analysis of the different waiting probabilities among the different stations parallels that for the mean waiting time.
C. Mean Driving Time to the Station
In Subsection IV-A, we saw that for a fixed charger number, fewer stations achieve less waiting time. However, fewer stations in a given area cause more inconvenience for PEV users to recharge their cars. Therefore, there is a tradeoff between waiting time and driving time to the station. We calculate the mean driving time to the station when the charging station number is 30 and 60, respectively. The corresponding results are 2.3 minutes and 1.5 minutes. Combining Figure 6, it can be observed that as the total charger number increases, the difference in mean waiting time between the 60-station scenario and the 30-station scenario tends to close. Thus, when the total charger number is limited, for example, less than 1660, it is better to build 30 stations rather than 60 stations. On the contrary, more stations are preferred when the investment in charging infrastructure is sufficient to install more chargers.
V. CONCLUSIONS AND FUTURE WORK
In this paper, we combine the queueing theory and realworld taxi travel data in the central area of Beijing to analyze the charging congestion of PEV charging stations. The results show that 1) the mean waiting time and the waiting probability decrease as the total number of the chargers of all the PEV charging stations increases; 2) even though the same number of chargers are deployed, different number of charging stations significantly affects the mean waiting time and the waiting probability of PEVs; 3) in charging stations with the almost same proportion of the number of PEVs to the number of chargers, the difference of the number of chargers may bring distinctly different mean waiting time and waiting probabilities; 4) if the investment for charging infrastructure is sufficient, i.e., more charger can be installed, chargers should be located dispersedly; otherwise, more centralized deployment is better.
In future work, we plan to apply the charging congestion analysis into the PEV charging station planning.
VI. REFERENCES
This work was supported in part by the National Natural Science Foundation of China under Grant 51477082. H. Chen, H. Zhang, Z. Hu and H. Luo are with the Department of Electrical Engineering, Tsinghua University, Beijing, 100084, P. R. China (email: [email protected]). Y. Liang is with Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai, 201804, P. R. China. Y. Wang is with the Department of Civil and Environmental Engineering, University of Washington, Seattle, WA, 98195, USA.
Fig. 3. Comparisons of the number of chargers among PEV charging stations, where the radii of the circles represent the number of chargers.
Fig. 4. Mean waiting time curves as the total number of chargers changes.
Fig. 5. Distribution of the number of stations with respect to the mean waiting time of stations when the total number of chargers is 1600 and p = 30.
Fig. 7. Waiting probability curves as the total number of chargers changes.
Fig. 8. Distribution of the number of stations with respect to the waiting probability of stations when the total number of chargers is 1600 and p = 30.
Fig. 9. Distribution of the number of stations with respect to the waiting probability of stations when the total number of chargers is 1600 and p = 60.

Plug-in Electric Vehicle Charging Congestion Analysis Using Taxi Travel Data in the Central Area of Beijing
Huimiao Chen, Student Member, IEEE, Hongcai Zhang, Student Member, IEEE, Zechun Hu, Senior Member, IEEE, Yunyi Liang, Haocheng Luo, Yinhai Wang
TABLE I. TAXI TRAVEL RECORD SAMPLE

Taxi ID    Time              Longitude     Latitude
26491      20160704141051    116.426285    39.921867

Fig. 1. Charging demand point distribution of a typical day (axes: longitude vs. latitude).
Fig. 2. Results of PEV charging station deployment and charging demand assignment, where the asterisks are the positions of PEV charging stations and the dots of different colors represent the charging demand assignment; panel (a) shows p = 30 and panel (b) shows p = 60.
W_{M/G/s} ≈ ( (1 + c²) W_{M/M/s} W_{M/D/s} ) / ( 2 c² W_{M/D/s} + (1 − c²) W_{M/M/s} )   (8)

where

W_{M/M/s} = ( (λ/μ)^s μ ) / ( (s−1)! (sμ − λ)² ) · [ Σ_{z=0}^{s−1} (λ/μ)^z / z! + (λ/μ)^s / ( (s−1)! (s − λ/μ) ) ]^{−1}   (9)

W_{M/D/s} ≈ (1/2) [ 1 + (1 − ρ)(s − 1) H_s ] W_{M/M/s}   (10)

H_s = ( √(4 + 5s) − 2 ) / ( 16 ρ s )   (11)

c = σμ,   ρ = λ / (sμ)   (12)

In (8)-(12), W_{M/G/s} is the mean waiting time of an M/G/s queueing system, and W_{M/M/s} and W_{M/D/s} are respectively the mean waiting times of the corresponding M/M/s and M/D/s queueing systems; λ denotes the arrival rate of PEVs; μ and σ are the reciprocal of the mean charging time of PEVs and the standard deviation of the charging time, respectively; and c is the coefficient of variation of the charging time. Note that (8)-(10) hold when the PEV arrival rate is less than the system transmission capacity, i.e., the utilization factor ρ = λ/(sμ) < 1. Interested readers can refer to [20] and [21] for the details.

While calculating the mean waiting time, the PEV arrival rate λ is a constant, i.e., the arrival process of PEVs is a homogeneous Poisson process. But actually, the arrival rate significantly depends on the number of PEVs with charging demand near the PEV charging station, which varies with time. Thus, in the real-world situation, the PEV arrivals should be modeled as a non-homogeneous Poisson process with a time-varying rate λ(t). In this research, we discretely regard λ within an hour as a constant and focus on the mean waiting time in the peak hour.
Let P_N denote the probability that N PEVs are at the station; then P(N ≥ s) is the waiting probability, i.e., the probability that an arriving PEV needs to wait. For an M/M/s queueing system, it is well known that

P_{M/M/s}(N) = { (λ/μ)^N P_0 / N!,  N = 0, 1, ..., s−1 ;   (1 − ρ) ρ^{N−s} C,  N ≥ s }   (13)

where

P_0 = [ Σ_{z=0}^{s−1} (λ/μ)^z / z! + (λ/μ)^s / ( (s−1)! (s − λ/μ) ) ]^{−1}   (14)

and C is the delay probability,

C = (sμ − λ) W_{M/M/s}.   (15)

For an M/G/s queueing system, a geometric approximation based on (13) is suggested to calculate P_{M/G/s}(N) [22], shown as below:

P_{M/G/s}(N) = { (λ/μ)^N P_0 / N!,  N = 0, 1, ..., s−1 ;   (1 − β) β^{N−s} C,  N ≥ s }   (16)

where the geometric parameter β satisfies

β / (1 − β) = ( ρ / (1 − ρ) ) · ( W_{M/G/s} / W_{M/M/s} ).   (17)

Equation (17) can be derived easily from Little's formula [23] and Σ_N P_{M/G/s}(N) = 1. According to (16), the waiting probability P_{M/G/s}(N ≥ s) can be calculated by

P_{M/G/s}(N ≥ s) = 1 − Σ_{N=0}^{s−1} (λ/μ)^N P_0 / N!.   (18)
Y. Song, X. Yang, and Z. Lu, "Integration of plug-in hybrid and electric vehicles: Experience from China," in Proc. Power Energy Soc. Gen. Meeting, Minneapolis, MN, USA, Jul. 2010.
Y. Nie and M. Ghamami, "A corridor-centric approach to planning electric vehicle charging infrastructure," Transport. Res. B: Meth., vol. 47, pp. 172-190, Nov. 2013.
Z. Liu, F. Wen, and G. Ledwich, "Optimal planning of electric-vehicle charging stations in distribution systems," IEEE Trans. Power Del., vol. 28, no. 1, pp. 102-110, Jan. 2013.
F. He, D. Wu, Y. Yin, and Y. Guan, "Optimal deployment of public charging stations for plug-in hybrid electric vehicles," Transp. Res. B: Meth., vol. 47, pp. 87-101, Jan. 2013.
H. Zhang, Z. Hu, Z. Xu and Y. Song, "An Integrated Planning Framework for Different Types of PEV Charging Facilities in Urban Area," IEEE Trans. on Smart Grid, vol. 7, no. 5, pp. 2273-2284, Sep. 2016.
G. Wang, Z. Xu, F. Wen, and K. Wong, "Traffic-constrained multiobjective planning of electric-vehicle charging stations," IEEE Trans. Power Del., vol. 28, no. 4, pp. 2363-2372, Oct. 2013.
H. Chen et al., "Design and Planning of a Multiple-charger Multiple-port Charging System for PEV Charging Station," IEEE Trans. on Smart Grid, to be published, doi: 10.1109/TSG.2017.2735636.
S. Bae and A. Kwasinski, "Spatial and temporal model of electric vehicle charging demand," IEEE Trans. Smart Grid, vol. 3, no. 1, pp. 394-403, July 2012.
G. Li and X. Zhang, "Modeling of plug-in hybrid electric vehicle charging demand in probabilistic power flow calculations," IEEE Trans. Smart Grid, vol. 3, no. 1, pp. 492-499, Feb. 2012.
H. Zhang, S. J. Moura, Z. Hu, and Y. Song, "PEV Fast-Charging Station Siting and Sizing on Coupled Transportation and Power Networks," IEEE Trans. Smart Grid, to be published, doi: 10.1109/TSG.2016.2614939.
J. Yang, J. Dong, and L. Hu, "A data-driven optimization-based approach for siting and sizing of electric taxi charging stations," Transpor. Res. Part C: Emer., vol. 77, pp. 462-477, Apr. 2017.
Beijing Trip. (2015). Taxi [Online].
J. Dong, C. Liu, and Z. Lin, "Charging infrastructure planning for promoting battery electric vehicles: An activity-based approach using multiday travel data," Transport. Res. C: Emer., vol. 38, pp. 44-55, 2014.
H. Cai, X. Jia, A. S. F. Chiu, X. Hu, and M. Xu, "Siting public electric vehicle charging stations in Beijing using big-data informed travel patterns of the taxi fleet," Transport. Res. D: Tr. E., vol. 33, pp. 39-46, 2014.
M. Li, Y. Jia, Z. Shen, and F. He, "Improving the electrification rate of the vehicle miles traveled in Beijing: A data-driven approach," Transport. Res. A: Pol., vol. 97, pp. 106-120, 2017.
Ford. (Jan. 2017). 2017 Fusion Hybrid S. [Online]. Available: http://www.ford.com/cars/fusion/2017/models/fusion-hybrid-s/, accessed Feb. 17, 2017.
Wikipedia. (Jan. 2017). BYD Qin. [Online]. Available: https://en.wikipedia.org/wiki/BYD_Qin#cite_note-Delayed-3, accessed Feb. 17, 2017.
S. Shafiee, M. Fotuhi-Firuzabad, and M. Rastegar, "Investigating the impacts of plug-in hybrid electric vehicles on power distribution systems," IEEE Trans. Smart Grid, vol. 4, no. 3, pp. 1351-1360, Apr. 2013.
Z. Xu, Z. Hu, Y. Song, W. Zhao and Y. Zhang, "Coordination of PEVs charging across multiple aggregators," Appl. Energy, vol. 136, pp. 582-589, Aug. 2014.
T. Kimura, "A two-moment approximation for the mean waiting time in the GI/G/s queue," Manage. Sci., vol. 32, no. 6, pp. 751-763, June 1986.
T. Kimura, "Approximations for multi-server queues: System interpolations," Queueing Syst., vol. 17, no. 3, pp. 347-382, Feb. 1994.
T. Kimura, "A transform-free approximation for the finite capacity M/G/s queue," Oper. Res., vol. 44, no. 6, pp. 984-988, 1996.
B. V. Gnedenko and I. N. Kovalenko, "Introduction to queuing theory (second edition)," Birkhäuser Boston, 1989.
| []
|
[
"Bombus Species Image Classification",
"Bombus Species Image Classification"
]
| [
"George Lavezzi [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nKansas State University Manhattan\nKSUSA\n",
"Venkat Margapuri \nDepartment of Computer Science\nKansas State University Manhattan\nKSUSA\n",
"Robert Stewart [email protected] \nDepartment of Computer Science\nKansas State University Manhattan\nKSUSA\n",
"Dan Wagner [email protected] \nKansas State University Manhattan\nKSUSA\n"
]
| [
"Department of Computer Science\nDepartment of Computer Science\nKansas State University Manhattan\nKSUSA",
"Department of Computer Science\nKansas State University Manhattan\nKSUSA",
"Department of Computer Science\nKansas State University Manhattan\nKSUSA",
"Kansas State University Manhattan\nKSUSA"
]
| []
| Entomologists, Ecologists and others struggle to rapidly and accurately identify the species of bumble bees they encounter in their field work and research. The current process requires the bees to be mounted, then physically shipped to a taxonomic expert for proper categorization. We investigated whether an image classification system derived from transfer learning can do this task. We used Google's Inception, Oxford's VGG16 and VGG19 and Microsoft's ResNet 50. We found Inception and VGG classifiers were able to make some progress at identifying bumble bee species from the available data, whereas ResNet was not. Individual classifiers achieved accuracies of up to 23% for single species identification and 44% "top-3" labels, where a composite model performed better, 27% and 50%. We feel the performance was most hampered by our limited data set of 5,000-plus labeled images of 29 species, with individual species represented by 59 -315 images. | null | [
"https://arxiv.org/pdf/2006.11374v1.pdf"
]
| 219,966,950 | 2006.11374 | 598a739eb3ea2ea30c029db7c9e3f50feafea88a |
Bombus Species Image Classification
George Lavezzi [email protected]
Department of Computer Science
Department of Computer Science
Kansas State University Manhattan
KSUSA
Venkat Margapuri
Department of Computer Science
Kansas State University Manhattan
KSUSA
Robert Stewart [email protected]
Department of Computer Science
Kansas State University Manhattan
KSUSA
Dan Wagner [email protected]
Kansas State University Manhattan
KSUSA
Bombus Species Image Classification
bumble beeimage classificationselected modelInceptionVGG16VGG19CNN
Entomologists, Ecologists and others struggle to rapidly and accurately identify the species of bumble bees they encounter in their field work and research. The current process requires the bees to be mounted, then physically shipped to a taxonomic expert for proper categorization. We investigated whether an image classification system derived from transfer learning can do this task. We used Google's Inception, Oxford's VGG16 and VGG19 and Microsoft's ResNet 50. We found Inception and VGG classifiers were able to make some progress at identifying bumble bee species from the available data, whereas ResNet was not. Individual classifiers achieved accuracies of up to 23% for single species identification and 44% "top-3" labels, where a composite model performed better, 27% and 50%. We feel the performance was most hampered by our limited data set of 5,000-plus labeled images of 29 species, with individual species represented by 59 -315 images.
I. INTRODUCTION
Dr. Brian Spiesman, from Kansas State University's College of Agriculture's Entomology Department, has identified a need for the rapid, accurate identification of bumble bees by species from images taken in the field by researchers. The current identification process involves capturing the bees, returning from the field, mounting the bees on pin boards, then shipping them to taxonomic experts for proper identification. This is both an expensive and time-consuming process, often requiring months from bee collection to proper identification; thus delaying the pace at which research can be conducted. A trained classifier, particularly one working from images in the wild (as opposed to dried, pinned and mounted) which can properly identify bumble bee species from images would be of tremendous help.
Contemporaneously, several pre-trained convolutional neural networks are available for transfer learning image classification tasks, such as Google's Inception, Oxford's VGG16 and VGG19, and Microsoft's ResNet 50. This offers the opportunity to compare their performance on the bumble bee task and opens the possibility of a composite model solution.
Several of these models are implemented in the TensorFlow machine learning framework, which was used to conduct this project.
II. RELATED WORK
A. Brief History of Image Classification
In the 1960s, Papert is credited with some of the earliest work in this area where image recognition and characterization are based on distinct feature identification (edges, textures, curve etc.) [1]. Techniques to identify and classify these features continued apace but were hampered by limited computational power and memory. In the 1980s, several algorithms were introduced (e.g. Canny Edge Detection) to improve this feature detection [2]. Deep learning techniques began to make their presence felt for feature extraction and pattern recognition in the 2000s with the advancement in processing power and memory capacity [3]. A CNN image segmentation won its first challenge in 2012 and dominated the field for several years thereafter [4].
B. CNN for Image Classification
Image classification identifies the presence of an item of interest (a member of a class) in the picture in question. A classifier which recognizes both cats and dogs may classify an image with both a cat and a dog as one or the other based on some degree of "cat-ness" (or "dog-ness") computed by the network; this calculation may not have an exact human understandable analog.
A CNN is composed of two or more connected layers of neurons. At least one of the layers is convolutional, using a "window" (receptive field) to map a set of inputs, through the convolution operation to the neurons in the receiving layer. A given neuron's output is then determined by its convoluted input, a weighting value, bias value and activation function. Thus, a node's (neuron's) behavior looks akin to Figure 1, where the x# are the convolutional results of the previous layer (or input). The network itself may be like Figure 2, but with a different number of layers and without picturing the convolutional functions between layers.
Figure 2. Representative CNN Layers [5]
The activation function is typically a differentiable nonlinear function, such as the sigmoid or a rectified linear unit (ReLU). The output layers are typically the class labels themselves, with class selection based on a maximum or one-hot selection.
A CNN will have one or more convolution layers which look at collections of outputs from the previous layer (inputs), like the values of all the pixels adjacent to the pixel of interest and convolves (combines/filters) them. Such a convolution may filter, pool, etc. the incoming information as well as change its dimensionality. There can also be skip layers, upscaling layers, etc. to provide the "structure" or "encourage", or if you will, the abstractions that are appropriate to the classification task. The selection and ordering of layers appears to be based as much on empirical experience as theoretical footings.
Early layers (near the input layer) detect feature-analogs such as edges. Mid-layers are analogous to more complex features, such as color-histograms. Later layers (near the output) recognize objects. However, none of these layers necessarily has a human cognitive analog.
Krizhevsky, Sutskever, and Hinton used a CNN to classify over a million images in 2010. CNNs' capacities can be controlled by varying their depth and breadth, and they tend to make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Figure 3 shows a portion (1 GPU's worth) of the CNN used by [6].
C. CNNs and Transfer Learning
The types of features learned by early layers of a CNN trained on image data tend to have generalizable characteristics, whereas the latter layers tend to be more specific to the actual objects being classified [8]. This enables one to "graft" a pretrained set of initial layers onto a blank or to-be-trained set of late layers to speed the training of the complete CNN. This approach is known as transfer learning-the features the "early layers" have been trained on are transferred to the new problem set.
It has become common practice to use generalized, pretrained early feature detection layers, trained on hundreds of thousands to over a million images, connected to target network for "top-off" or "customization" training. This enables the target network to be trained more quickly with fewer images.
D. Bee Classification
We encountered only two previous attempts at bee-image classification: Dr. Spiesman's unpublished work using just the mounted and pinned forewings of bees (which resulted in 89% single-species accuracy); and a crowd-sourced competition hosted by DrivenData in 2015 to classify bees by genus. A solution for DrivenData's challenge using Google's Inception achieved a 99% AUC score on images of bees taken in the wild [14].
III. PROBLEM STATEMENT
Identification of bumble bee species is difficult, requiring collection of live specimens in the wild. These specimens are mounted and physically shipped to a taxonomic expert for correct species categorization. This process takes a large amount of time from both the collectors and the expert; and is expensive. We propose developing an image based classifier whose goal is to accurately identify bumble bee species in an efficient manner to expedite the procedure.
IV. TECHNICAL APPROACH
Using data provided by Dr. Spiesman, we developed a script which produced standardized training, augmented training, and test datasets for use in training the selected pre-trained classification engines, informed by several [9,10,11]. Each researcher used these sets to train their selected model and pursue some level of parameter fine tuning in a TensorFlow 2.0 framework. We will also build composite models.
We then characterized the performance of our individual and composite models with respect to single-species accuracy and top-3 species grouping accuracy. As we are interested in bumble bee species accuracy, the test data does not contain non-bumble-bee images; we did not want to skew the accuracy in case we achieved good performance in differentiating by genus (as was obtained by DrivenData) but poor performance in species differentiation.
We chose the VGG19, VGG16, ResNet50 and InceptionV3 pre-trained image classifiers for this experiment. All of these were obtained from TensorFlow Hub. The general approach was to vary the number of additional hidden layers added after retraining, the number of nodes in each added layer, the learning rate, dropout, batch normalization and other techniques to control validation overfitting.
V. EXPERIMENTAL SETUP
The bumble bee data for this experiment, provided by Dr. Spiesman, consists of over 5,000 images classified into 29 species. An additional "classification" of non-bumble bees, consisting of roughly 200 labeled honey bee images from Kaggle [12], was added so the data set would have both positive and negative examples to aid in learning generalizations. The images are predominately of bees in the wild and therefore contain random backgrounds and bee orientations. Some or most of the bee is often obscured. Additionally, the images lack standard size and resolution and are not evenly distributed by class.
We created three standardized data sets: training, testing, augmented training. Image augmentation was performed by using a random combination of: rotation, contrast manipulation, salt and peppering and adding obscuring blocks (randomly "zeroing out" a box of pixels in the image.) Roughly 25% of the images were augmented in the augmented training set. The segmentation of training data into training and validation was left to each researcher. Models were trained on both training sets for comparisons.
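The following is a small sketch of the kind of augmentation described (random rotation, contrast manipulation, salt-and-pepper noise, and an occluding block) applied to an H x W x 3 image array; the parameter ranges and helper name are illustrative, not the exact values used in our pipeline.

```python
# Sketch: random augmentation of one uint8 image array (H, W, 3).
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    out = img.astype(np.float32)
    # Random rotation (limited to 90-degree steps here for simplicity).
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # Contrast manipulation around the mean intensity.
    factor = rng.uniform(0.7, 1.3)
    out = (out - out.mean()) * factor + out.mean()
    # Salt-and-pepper noise on roughly 1% of the pixels.
    mask = rng.random(out.shape[:2]) < 0.01
    out[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))[:, None]
    # Obscuring block: zero out a random box of pixels.
    h, w = out.shape[:2]
    y, x = int(rng.integers(0, h // 2)), int(rng.integers(0, w // 2))
    out[y:y + h // 4, x:x + w // 4] = 0.0
    return np.clip(out, 0, 255).astype(np.uint8)
```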
A. VGG 19
The VGG19 [13] model was imported from Keras' built-in models with weights from the ImageNet dataset. The network was trained on a 16GB Intel I-7 hexacore CPU node with an integrated graphics card: all training was run on the processor rather than the GPU. Each layer was copied from the existing model into a new model except for the final three fully connected layers, including the categorization/output layer: this layer was modified to reflect our reduced number of categories (i.e. 1000 reduced to 30). The two FC layers before the categorization layer had their node size reduced due to learning stagnation on early iterations: this value worked well and did not improve as the hidden layer size was decreased further. Learning rate was experimentally altered and ranged from 0.00001 to 0.001 using a batch size of 64. Dropout was added between each FC layer to help with any overfitting and had a probability of 0.5.
The dataset that was used for training was split with 85% training and 15% validation data. These were fed to the model as generators; the training generator was shuffled on each set of training (5, 10, and 10 epochs) while the validation set remained constant for a consistent point of comparison. The model trained for up to 25 epochs with tweaking to prevent overfitting. Iterations of the model were run on both the normal and augmented datasets, with the former exhibiting better performance and faster learning than the latter. In general, the augmented dataset did not help: with the different resolutions and orientations of bees in the images, the dataset already exhibited a degree of augmentation.
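A minimal sketch of the transfer-learning setup described above (frozen VGG19 convolutional base, two 2048-node fully connected layers with 0.5 dropout, 30-way softmax, Adam with a small learning rate) is shown below; it is our reconstruction, not the actual training script.

```python
# Sketch of the VGG19 transfer-learning setup described in this section.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained feature extractor frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(2048, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2048, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(30, activation="softmax"),  # 29 bumble bee species + honey bee
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.TopKCategoricalAccuracy(k=3)])
```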
B. VGG16
The VGG16 model was imported from the canned architectures provided by Keras as part of its Applications module. The imported model comes with pre-trained weights from the ImageNet dataset. The model was trained in a CPU and a GPU environment where the CPU environment was an Intel i7 CPU running Windows 10 and the GPU environment was an Intel Core i7-6700k quad core processor, with an 8GB RTX 2080 graphics card, running Ubuntu 18.04.02 and TensorFlow 2.0 (nightly version).
VGG16 requires the images to be of size 224 x 224 x 3 pixels (pixel width x pixel height x RGB channels). Several architectures of the model with varying hyperparameter values were evaluated and are as shown in table 1. A Sequential model was built using each of the layers of the imported model with the exception of the final layer. In addition, between 1 and 3 fully connected layers were added to the models. None of the pre-trained layers were trained. Instead, only the newly added fully connected layers were trained.
Of the data in the training dataset, 80% of the data was used for training and 20% of the data was used for validation. All of the experimented architectures were trained for about 20 -25 epochs after which the models started to overfit. Applying Learning Rate Decay did not help alleviate the problem. The use of the augmented dataset on the model only resulted in mediocre results.
C. RESNET 50
ResNet50 was imported from Tensorflow 2.0. The model was trained with an Intel Core i7-6700k quad core processor, 8GB RTX-2080, running on Ubuntu 18.04.02 and the GPUenabled Tensorflow 2.0 (nightly version). ResNet50 requires all input images to be of size 224 x 224 x 3 (pixel width, pixel height, RGB values).
Throughout the experiment, we attempted numerous variations to hyperparameter values (e.g. learning rates, number of fully-connected layers, etc.), weight initialization, and batch amounts. In addition to varying hyperparameters, we also attempted different optimizers, namely Adam and Standard Gradient Descent (SGD). For SGD, we also tested with weightdecay and momentum both enabled and disabled.
Table 2. Hyperparameters for ResNet50
Ultimately, for the composite model, we decided to use zero additional hidden layers and instead have a GlobalAveragePooling2D layer before the final output layer.
The output layer consists of 30 nodes, representing the 30 distinct classes. Our final ResNet model had randomly initialized weights, used the Adam optimizer, with a learning rate of 5e-4, categorical cross-entropy as the loss function, and softmax as the activation function. We used an 80/20 trainingvalidation split, with a batch size of 64 images, and trained for 15 epochs. Dropout, weight-decay, and momentum were not used for this model. The inspiration for these models can be found in [18] and [19].
D. InceptionV3
The Inception based classifier was trained on a 20 GB intel I-7 quad core with a 6GB GTX-1060 GPU, running Windows 10 and GPU enabled TensorFlow 2.0. InceptionV3 requires all images to be 299 x 299 pixels in size and native TensorFlow sizing functions were used to shrink/stretch each image as the data was loaded. The following structures and hyper parameters were varied in an effort to fine tune the model. An 85/15 train validation split was used with 10-25 epochs of training being normal for each model. The limited size of the GPU memory forced batch sizes of less than 16 (12 was used). The models tended to badly over fit, even when drop out is used. Learning rate decay did not help when validation loss plateaued. Better results were obtained against both validation and test data sets when the model was trained with the normal (un augmented) data. The software for this model was strongly influenced by the TensorFlow Hub Authors [17].
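Since the Inception classifier followed the TensorFlow Hub retraining pattern [17], a minimal equivalent using hub.KerasLayer might look like the following; the module handle URL and the compile settings are assumptions based on the hyperparameters reported here, not the authors' exact code.

```python
# Sketch of an InceptionV3 feature-vector model pulled from TensorFlow Hub.
import tensorflow as tf
import tensorflow_hub as hub

HANDLE = "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/4"  # assumed handle

model = tf.keras.Sequential([
    hub.KerasLayer(HANDLE, trainable=False, input_shape=(299, 299, 3)),
    tf.keras.layers.Dense(1536, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1536, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(30, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```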
E. Composite Model
We combined various combinations of the best trained modes into a composite model, by summing their softmax outputs and selecting the largest resultant values. Different combinations of "best model" were tried to see if a such a simple composite model can improve performance. Most of the models performed better on the normal (unaugmented) data training set. We hypothesize that the training set contains sufficient "noise", with its different orientations, resolutions and sizes, that good generalization is obtain without the need of "fuzzing" the images.
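A sketch of this composite scheme is shown below: each trained model's softmax vector is summed and the highest (or top-3) categories are selected. The function name is ours, and each model is assumed to receive its own appropriately resized image batch.

```python
# Sketch: combine several trained classifiers by summing their softmax outputs.
import numpy as np

def composite_predict(models, batches, top_k=3):
    """models: list of trained classifiers outputting 30-way softmax vectors.
    batches: list of image batches, one per model (resized/preprocessed
    for that model's expected input size)."""
    summed = sum(m.predict(x) for m, x in zip(models, batches))   # (batch, 30)
    best = summed.argmax(axis=1)                                  # single-label prediction
    top = np.argsort(summed, axis=1)[:, ::-1][:, :top_k]          # "top-3" labels
    return best, top
```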
A. Best Individual Models 1) VGG19
In the case of VGG19, the model trained the best with the two FC layers having 2048 nodes each, dropout 0.5, learning rate decay starting at 0.00001 and decaying by 0.96 every 100 epochs. ADAM was used with the decay rate and error was calculated via categorical cross entropy. Training stopped after 10 epochs on the final model due to consistent overfitting.
2) VGG16
The best VGG16 architecture had three additional fully connected layers each with 2048 nodes, a dropout of 0.3 between each of the fully connected layers, an optimizer of ADAM and a learning rate of 0.0001. 20 epochs of training was performed on the model before the model began to show signs of overfitting. In the following plots, the blue lines represent training data and the orange lines, validation.
3) ResNet 50
ResNet50's best loss and accuracy plots, figures 11 and 12, use blue for training data and orange for validation. As can be seen, this is a clear indication of overfitting. Furthermore, the model's accuracy on the validation set never exceeded 3.33%. All hyperparameter tuning turned out to be equally poor.
This was the principal reason for performing so many variations of the hyperparameters. Even with regularization methods in place and a small learning rate, the models never seemed to break out of the local minima they reached. We also attempted smaller blocks, as seen in [19], to reduce the possibility of over-relying on pre-learned features. The end result still did not change. We also froze and unfroze layers to determine if training from scratch would give better results. Still, the 3.33% validation accuracy remained unmoved. [19] remarks that utilizing pre-trained models for transfer learning depends on the size of the dataset and its correlation to the features learned from the images on which the model was trained. Furthermore, our dataset did not seem to be correlated to the objects and features used to train ResNet [20]. We hypothesize that ResNet performed poorly on our given dataset because the dataset is too small and ResNet's transferred features are not relevant.
To test this theory, we ran ResNet on the CIFAR dataset with a learning rate of 5e-4 for 200 epochs, using the categorical cross-entropy loss function and the Adam optimizer [21]. The final test accuracy was 91.9%.
4) InceptionV3
The best Inception model used 2 additional hidden layers of 1536 nodes, with dropouts of 0.5, a learning rate of 0.00005, and the ADAM optimizer with a categorical cross-entropy loss function. Training was halted at 21 epochs. Inception and VGG19 performed better than VGG16 and ResNet. Inception's base classifier is trained on over 10 million images, many of which include different orientations and partial obstruction, much like our bee data set; this may account for its comparatively higher performance with a little fine tuning.
B. Composite Model
We tried various combinations of composite models (a summing of each model's softmax output then select the highest category (ies)) and found that the composite model outperformed the best individual model.
C. Confusion Matrix
The confusion matrix from our best composite model is located in the Appendix. We note that only two (0.3%) bumble bees were mischaracterized as honey bees. Additionally, recall and precision were poor, see Appendix.
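The per-species confusion matrix, recall, and precision reported in the Appendix can be reproduced with scikit-learn as sketched below; the label arrays here are random placeholders standing in for the test-set ground truth and the composite model's predictions.

```python
# Sketch: per-species confusion matrix, recall, and precision with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 30, size=200)   # placeholder test labels (0..29)
y_pred = rng.integers(0, 30, size=200)   # placeholder composite-model predictions

cm = confusion_matrix(y_true, y_pred, labels=list(range(30)))
recall = recall_score(y_true, y_pred, average=None, zero_division=0)
precision = precision_score(y_true, y_pred, average=None, zero_division=0)
print(cm.shape, recall[:3], precision[:3])
```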
VII. CONCLUSION AND FUTURE WORK
Based on the image set, we did not get great accuracy for either single-species or top-3 classification from individual or composite models. While we achieved eight-times better accuracy than sheer guessing, this is probably not better than a skilled amateur can accomplish.
We found Inception performed better than VGG, and that ResNet is not well suited for this particular transfer learning task. This is not a statement of ResNet's suitability for all transfer learning tasks. Our results at species classification, based on natural bee images, are significantly worse than DrivenData's results from the same type of images. However, the observable differences between genera (honey vs. bumble) may not be as difficult a problem as detecting more subtle intra-genus species differences. "Honeybees have a clear distinction between head and abdomen, bumblebees are 'all of one piece.' Honeybees also have two clear sets of wings: a larger set in front and a smaller set in back [16]". Notably, we have a very low rate of misidentifying bumble bees as honey bees.
Figure 15. Differences between Honey and Bumble Bees[15]
A. Need for More Data
Our first desire would be to acquire many more (an order of magnitude more) labeled images. Our models begin rapidly overtraining, indicating there is not enough variation to present a large learning challenge. Our data was reasonably distributed, but at best a class was represented by 351 images and at worst 59. We feel this encourages the models to try to memorize the training data.
Next, we would look for unobstructed images of bees. A common entomological practice is to pin and mount insects for display and study. If we could source a large number of profiles, top and front images of previously mounted and identified bumble bees, it could aid learning species differences.
We observed that the composite model has a pronounced tendency to mistakenly categorize images as those where it had larger training sets, see Figure 16.
We note that few false positives occur when the training base size was less than 150 images. We suspect the classifiers did not learn generalizable features for these species and hypothesize that the misclassifications would be more randomly distributed if all species had over 150 images.
All of the "zero" values for precision and recall, caused by a lack of true positive classifications, came from species with training data sets below 150 images, see figures in the Appendix.
B. A Top-3 Loss Function
Then we could investigate or build a custom "top three" loss function. We feel this may achieve better results than cobbling together the top-3 from a strict summation of the individual classifier's softmax activations.
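One way such a "top-three" objective could be prototyped is as a top-k metric used for model selection plus a simple differentiable surrogate loss; the snippet below is a sketch of both ideas, not an implementation we built or evaluated.

```python
# Sketch: a top-3 metric and a simple differentiable top-3 surrogate loss.
import tensorflow as tf

top3_metric = tf.keras.metrics.TopKCategoricalAccuracy(k=3)

def top3_surrogate_loss(y_true, y_pred):
    """Penalize the true class only by how far its predicted probability falls
    below the 3rd-highest prediction (zero loss once it is in the top 3)."""
    true_prob = tf.reduce_sum(y_true * y_pred, axis=-1)       # prob of the true class
    third_best = tf.math.top_k(y_pred, k=3).values[:, -1]     # 3rd-highest probability
    return tf.nn.relu(third_best - true_prob)
```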
C. Different Type of Composite Model
Finally, we envision a different type of composite model based on the proposition that different pretrained models have different strengths at identifying the important species differentiating features. We would build an encoder from the trained models, and then feed their concatenated outputs into a new neural network which can then train based on the learned features of the classifier-based encoder model.
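A sketch of that envisioned architecture follows: each pre-trained backbone acts as a frozen encoder, their feature vectors are concatenated, and a small classification head is trained on the combined features. The backbones, layer sizes, and resizing choice are illustrative assumptions.

```python
# Sketch: frozen multi-backbone encoder with a trainable classification head.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19, InceptionV3

inp = layers.Input(shape=(299, 299, 3))
vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
inc = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False
inc.trainable = False

# VGG19 expects 224x224 inputs, so resize inside the graph for that branch.
vgg_feat = vgg(layers.Lambda(lambda t: tf.image.resize(t, (224, 224)))(inp))
inc_feat = inc(inp)

features = layers.Concatenate()([vgg_feat, inc_feat])   # combined learned features
x = layers.Dense(1024, activation="relu")(features)
x = layers.Dropout(0.5)(x)
out = layers.Dense(30, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```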
Figure 1. Single Neuron Activation [1]
Figure 3. CNN used for image Classification
Figure 4. Bumble Bee Species Distribution
Figure 5. Image and Augmented Image
Figure 6. Conceptual
Figure 7. VGG19 Accuracy Plot
Figure 8. VGG19 Loss Plot
Figure 9. VGG16 Accuracy Plot
Figure 10. VGG16 Loss Plot
Figure 11. ResNet50 Loss Plot
Figure 12. ResNet50 Accuracy Plot
Figure 13. InceptionV3 Loss Plot
Figure 14. InceptionV3 Accuracy Plot
Figure 17. Alternate False Positives
Figure 18. Best Composite (VGG19, VGG16, Inception) Model Confusion Matrix
Figure 19. Recall vs Training Image Number
Figure 20. Precision vs Training Image Number
Table 1. Hyperparameters for VGG16

Hyper Parameter/Structure        Experimental Values
Number hidden trainable layers   1 - 3
Nodes per trained layers         64 - 2048
Optimizers                       Adam, SGD
Learning Rate                    0.01 - 0.00001
Learning Rate Decay              Yes
Drop out (b/w each FC layer)     0.0 - 0.5
Table 3. Hyperparameters for InceptionV3

Hyper Parameter/Structure        Value Range
Number hidden trainable layers   1 - 3
Nodes per trained layers         128 - 2048
Learning Rate                    0.001 - 0.000001
Drop out                         0.0 - 0.75
Normalization                    Attempted - did not help
Table 4. Best Model Performances
Model                  Single Class Acc   Top-3 Acc
VGG19                  19.7%              40.2%
VGG16                  15.7%              39%
ResNet50               0.0%               7.4%
InceptionV3            23.6%              44.5%
Inc + VGG19            25.5%              50.3%
Inc + VGG19 + VGG16    27.5%              50.4%
All combined           25.5%              40.6%
Table 5. Recall and Precision by Species
Species         Recall   Precision
Affinis         0.0%     0.0%
Appositus       0.0%     0.0%
Auricornus      38.9%    15.2%
Bifarius        43.8%    12.1%
Bimaculatus     15.6%    4.7%
Borealis        7.1%     16.7%
Californicus    0.0%     0.0%
Centralis       0.0%     0.0%
Citrinus        12.5%    50.0%
Fernaldae       0.0%     0.0%
Fervidus        44.4%    6.4%
Flavifrons      0.0%     0.0%
Fraternus       53.8%    31.8%
Griseocolis     65.0%    11.5%
Huntii          18.8%    18.8%
Impatiens       18.5%    20.8%
Insularis       0.0%     0.0%
Melanopygus     41.9%    5.5%
Mixtus          19.2%    22.7%
Nevadensis      50.0%    18.1%
Occidentalis    0.0%     0.0%
Pensylvanicus   48.1%    21.6%
Perplexus       50.0%    16.3%
Rufocinctus     33.3%    19.2%
Sonorus         28.6%    18.8%
Ternarius       21.7%    5.4%
Terricola       0.0%     25.0%
Vagans          0.0%     0.0%
Vosnesenskii    11.1%    16.7%
Seymour Papert, "The Summer Vision Project", MIT, July 1966. Online: https://dspace.mit.edu/bitstream/handle/1721.1/6125/AIM-100.pdf?sequence=2 (retrieved 4 Dec 2018).
John Canny, "A Computational Approach to Edge Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, 1986.
Richard Szeliski, Computer Vision: Algorithms and Applications, Springer Science & Business Media, pp. 10-16, 30 September 2010. Online: https://books.google.com/books?id=bXzAlkODwa8C&pg=PA17&lpg=PA27&focus=viewport (retrieved 4 Dec 2018).
H. Ajmal, S. Rehman, U. Farooq, et al., "Convolutional Neural Network Based Image Segmentation: A Review", in Proceedings of SPIE, Pattern Recognition and Tracking XXIX, 2018.
Aditya Sharma, "Understanding Activation Functions in Deep Learning", Learn OpenCV, October 2017. Online: https://www.learnopencv.com/understanding-activation-functions-in-deep-learning/ (retrieved 12 Dec 2018).
William Hsu, Lecture Notes, CIS 730 Artificial Intelligence, Kansas State University, Manhattan, KS, 2018, unpublished.
A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", in Advances in Neural Information Processing Systems 25, 2012.
R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Region-based Convolutional Networks for Accurate Object Detection and Segmentation", http://islab.ulsan.ac.kr/files/announcement/513/rcnn_pami.pdf (retrieved 6 Dec 2018).
A. Damien, "Build an image dataset", https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/5_DataManagement/build_an_image_dataset.py
W. Bulten, "Simple and efficient data augmentations using the TensorFlow tf.Data and Dataset API", https://www.wouterbulten.nl/blog/tech/data-augmentation-using-tensorflow-data-dataset/
TensorFlow, "Load Image", https://www.tensorflow.org/tutorials/load_data/images
J. Yang, "The BeeImage Dataset: Annotated Honey Bee Images", https://www.kaggle.com/jenny18/honey-bee-annotated-images
K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition", https://arxiv.org/abs/1409.1556, 7-9 May 2015.
DrivenData, "Naive Bees Classifier", https://www.drivendata.org/competitions/8/naive-bees-classifier/page/122/, 2019.
Alex Wild, "How to tell the difference between honey bees and bumble bees", 2011, http://www.myrmecos.net/2011/10/11/how-to-tell-the-difference-between-honey-bees-and-bumble-bees/
Student Conservation Association, "Bumblebees vs. Honeybees: What's the Difference, and Why Does it Matter?", https://www.thesca.org/connect/blog/bumblebees-vs-honeybees-what%E2%80%99s-difference-and-why-does-it-matter
TensorFlow Authors, "TF Hub for TF2: Image Module Retraining", https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb#scrollTo=dlauq-4FWGZM
P. Dwivedi, "Understanding and Coding a ResNet in Keras", 2019, https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33
S. Gupta, "Transfer Learning Introduction", HackerEarth, https://www.hackerearth.com/practice/machine-learning/transfer-learning/transfer-learning-intro/tutorial/
GitHub, "Image Classification using Residual Networks", https://gist.github.com/nudles/e7c739b12f4409953bb498d5dadb4543
Keras Documentation, "Trains a ResNet on the CIFAR10 dataset", https://keras.io/examples/cifar10_resnet/
Deeplizard, "Fine-tune VGG16 Image Classifier with Keras | Part 1: Build", 22 Nov 2017, https://youtu.be/oDHpqu52soI
Deeplizard, "Fine-tune VGG16 Image Classifier with Keras | Part 2: Train", 22 Nov 2017, https://youtu.be/INaX55V1zpY
| [
"https://github.com/aymericdamien/TensorFlow-"
]
|
[
"Search for ultralight scalar dark matter with NANOGrav pulsar timing arrays",
"Search for ultralight scalar dark matter with NANOGrav pulsar timing arrays"
]
| [
"Ryo Kato \nDepartment of Physics\nKobe University\nRokkodai 1-1657-8501KobeJapan\n",
"Jiro Soda \nDepartment of Physics\nKobe University\nRokkodai 1-1657-8501KobeJapan\n"
]
| [
"Department of Physics\nKobe University\nRokkodai 1-1657-8501KobeJapan",
"Department of Physics\nKobe University\nRokkodai 1-1657-8501KobeJapan"
]
| []
| An ultralight scalar field is a candidate for the dark matter. The ultralight scalar dark matter with mass around 10 −23 eV induces oscillations of the pulse arrival time in the sensitive frequency range of the pulsar timing arrays. We search for the ultralight scalar dark matter using the North American Nanohertz Observatory for Gravitational Waves 11-year Data Set. As a result of the Bayesian analysis, no significant evidence for the presence of the ultralight scalar dark matter is found. Therefore, the 95% confidence upper limit is given to the signal induced by the ultralight scalar dark matter. In comparison with the published Bayesian upper limits on the amplitude of the ultralight scalar dark matter obtained by Bayesian analysis using the Parkes Pulsar Timing Array 12-year data set (Porayko et al. 2018), we find three times stronger upper limit in the frequency range from 10 −8.34 to 10 −8.19 Hz which corresponds to the mass range from 9.45 × 10 −24 to 1.34 × 10 −23 eV. In terms of the energy density of the dark matter, we find that the energy density near the Earth is less than 7 GeV cm 3 in the range from 10 −8.55 to 10 −8.01 Hz (from 5.83 × 10 −24 to 2.02 × 10 −23 eV). The strongest upper limit on the the energy density is given by 2 GeV cm 3 at a frequency 10 −8.28 Hz (corresponding to a mass 1.09 × 10 −23 eV). We also confirm that the existence of the signal induced by the ultralight scalar dark matter can not be excluded if the solar system ephemeris error is not included in the model of the observation data. Moreover, if we analyze noises other than the signal of the ultralight scalar dark matter in advance, we find that the noise of the pulsar PSR J1909-3744 becomes smaller as expected but the noise of the other pulsars becomes larger. | 10.1088/1475-7516/2020/09/036 | [
"https://arxiv.org/pdf/1904.09143v1.pdf"
]
| 125,972,419 | 1904.09143 | 7ebd1e4c0570a92e74ccfe658458d254ef7bf363 |
Search for ultralight scalar dark matter with NANOGrav pulsar timing arrays
Ryo Kato
Department of Physics
Kobe University
Rokkodai 1-1657-8501KobeJapan
Jiro Soda
Department of Physics
Kobe University
Rokkodai 1-1657-8501KobeJapan
Search for ultralight scalar dark matter with NANOGrav pulsar timing arrays
(Dated: April 22, 2019)
An ultralight scalar field is a candidate for the dark matter. The ultralight scalar dark matter with mass around 10 −23 eV induces oscillations of the pulse arrival time in the sensitive frequency range of the pulsar timing arrays. We search for the ultralight scalar dark matter using the North American Nanohertz Observatory for Gravitational Waves 11-year Data Set. As a result of the Bayesian analysis, no significant evidence for the presence of the ultralight scalar dark matter is found. Therefore, the 95% confidence upper limit is given on the signal induced by the ultralight scalar dark matter. In comparison with the published Bayesian upper limits on the amplitude of the ultralight scalar dark matter obtained by Bayesian analysis using the Parkes Pulsar Timing Array 12-year data set (Porayko et al. 2018), we find a three times stronger upper limit in the frequency range from 10 −8.34 to 10 −8.19 Hz, which corresponds to the mass range from 9.45 × 10 −24 to 1.34 × 10 −23 eV. In terms of the energy density of the dark matter, we find that the energy density near the Earth is less than 7 GeV/cm³ in the range from 10 −8.55 to 10 −8.01 Hz (from 5.83 × 10 −24 to 2.02 × 10 −23 eV). The strongest upper limit on the energy density is given by 2 GeV/cm³ at a frequency of 10 −8.28 Hz (corresponding to a mass of 1.09 × 10 −23 eV). We also confirm that the existence of the signal induced by the ultralight scalar dark matter cannot be excluded if the solar system ephemeris error is not included in the model of the observation data. Moreover, if we analyze noises other than the signal of the ultralight scalar dark matter in advance, we find that the noise of the pulsar PSR J1909-3744 becomes smaller as expected but the noise of the other pulsars becomes larger.
I. INTRODUCTION
The dark matter problem is clearly one of the most important issues in modern cosmology. Recently, motivated by string theory, an ultralight scalar dark matter has been intensively studied [1,2]. In particular, an ultralight scalar field with mass 10 −23 eV can behave like the cold dark matter (CDM) on cosmological scales and resolve a cusp problem [3,4]. In this article, we call it simply the fuzzy dark matter (FDM). The FDM can be treated as a classical scalar field because the occupation number of the FDM accounting for the energy density of the dark matter is very large. The main difference between FDM and CDM is that the pressure of the FDM is coherently oscillating, while that of CDM almost vanishes. Khmelnitsky and Rubakov have pointed out that the effect of oscillating pressure might be detected with the pulsar timing arrays (PTAs) [5]. Indeed, the oscillation of the pressure induces the oscillation of the gravitational potential, and as a result, it induces the oscillation of the arrival time of the pulse passing through the gravitational potential.
It would be worth noting that there exists an experimental duality between gravitational wave and scalar dark matter detections. More precisely, the detection method for gravitational waves is useful for scalar field dark matter, and vice versa. The idea of Khmelnitsky and Rubakov inspired us to use the gravitational wave interferometers for detecting scalar dark matter [6]. Recently, the importance of the reverse direction has been promoted and a novel constraint on GHz gravitational waves was obtained [7]. Hence, it is important to investigate the duality thoroughly.
An attempt to search for long wavelength gravitational waves with the PTAs composed of long-term observation of many pulsars was proposed in the articles [8][9][10]. Nowadays, the PTAs are most sensitive to the gravitational waves with a few nanohertz frequency. There are three major pulsar timing projects aimed at observing the pulsars and searching for the gravitational waves: the European Pulsar Timing Array (EPTA) [11], the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) [12], and the Parkes Pulsar Timing Array (PPTA) [13]. The collaboration of the three projects is called the International Pulsar Timing Array (IPTA) [14]. The point is that we can utilize the PTAs for searching the ultralight scalar dark matter.
Porayko and Postnov [15] gave upper limits for the FDM with the Bayesian analysis using the NANOGrav 5-year Data Set. Moreover, Porayko et al. [16] gave upper limits for the FDM with the Bayesian and the Frequentist analyses using the PPTA 12-year Data Set. In this article, following the previous articles, we search for the FDM by the Bayesian analysis in the time domain using the NANOGrav 11-year Data Set. We quantitatively investigate whether the ultralight scalar dark matter is detectable or not using the Bayesian model selection approach. We clarify the prior dependence of constraints on the amplitude of the FDM and obtain three times stronger constraints on the amplitude of the FDM in the frequency range from 10 −8.34 to 10 −8.19 Hz. We also discuss how the results of the Bayesian analysis depend on the solar system ephemeris noise in the model describing the observation data.
This article is organized as follows. In Section II we describe a model of the FDM signal. In Section III we briefly review the Bayesian statistics and explain how to use it for our analysis. In Section IV we describe the model, the data, and the function used in the Bayesian analysis. In Section V we briefly review the MCMC simulation. In Section VI we describe the analysis of the white noise that is performed before the main analysis. In Section VII we summarize the results of the Bayesian analysis using the NANOGrav 11-year data set. In particular, we show that no evidence of the FDM is obtained and instead we give an upper limit on the amplitude of the FDM. The last Section is devoted to the conclusion. In Appendix A we describe how accurately the FDM signal can be detected by the Bayesian analysis using simulated signals. In this article, we will use the units c = ℏ = 1.
II. FDM SIGNAL
As we mentioned in the introduction, the oscillation of the scalar field with the mass m induces the oscillation of the gravitational potential and the oscillation of the arrival time of the pulse passing through the gravitational potential. The oscillation of the arrival time of the pulse induced by the FDM is given by [17]
s(t) = -\frac{1}{2\pi f}\left[\Psi(x_e)\sin\big(2\pi f t + \alpha(x_e)\big) - \Psi(x_p)\sin\big(2\pi f (t - D) + \alpha(x_p)\big)\right],   (2.1)
where f = m/\pi is the frequency, D is the distance between the pulsar and the Earth, x_e and x_p are the positions of the Earth and the pulsar, respectively, and α denotes the phase. Here, we used the gravitational potential
\Psi(x) \equiv \frac{\pi G \rho(x)}{m^2},   (2.2)
where ρ(x) is the energy density of the dark matter. Therefore, the signal of the FDM can be observed as the periodic signal with the frequency determined by the mass of the FDM. The parameters used in the Bayesian estimation are defined as follows:
s(t) = -\frac{\Psi}{2\pi f}\left[\sin(2\pi f t + \alpha_e) - \sin(2\pi f t + \alpha_p)\right],   (2.3)
where we assumed \Psi \equiv \Psi(x_e) = \Psi(x_p) and defined \alpha_e \equiv \alpha(x_e), \alpha_p \equiv \alpha(x_p) - 2\pi f D.   (2.4) Here, since we do not aim to estimate the distance, we put together the phase α(x_p) and the distance D. In fact, the distance currently has an uncertainty of tens to hundreds of parsecs, which is too large to determine the phase [18]. Since, as is mentioned in the article [16], the distance D between the Earth and the pulsar is not so large, it is reasonable to assume that the amplitudes at the Earth Ψ(x_e) and at the pulsar Ψ(x_p) are equal. Regarding the numerical value of Ψ, we have Eq. (2.5), where ρ = 0.4 GeV/cm³ is the estimated energy density of the dark matter at the position of the Earth [19].
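As a small illustration of Eq. (2.3), the sketch below evaluates the Earth-term-minus-pulsar-term residual; the amplitude, frequency, and phases used in the example are placeholders, not fitted values.

```python
import numpy as np

def fdm_residual(t, psi, f, alpha_e, alpha_p):
    """Timing residual induced by the FDM, Eq. (2.3):
    s(t) = -Psi/(2*pi*f) * [sin(2*pi*f*t + alpha_e) - sin(2*pi*f*t + alpha_p)]."""
    return -psi / (2.0 * np.pi * f) * (
        np.sin(2.0 * np.pi * f * t + alpha_e) - np.sin(2.0 * np.pi * f * t + alpha_p)
    )

# Example: amplitude 1e-15 at f = 10^-8.3 Hz over an 11-year span (placeholder values).
t = np.linspace(0.0, 11.0 * 365.25 * 86400.0, 500)   # time in seconds
s = fdm_residual(t, psi=1e-15, f=10**-8.3, alpha_e=0.3, alpha_p=1.7)
```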
III. BAYESIAN PARAMETER ESTIMATION AND MODEL COMPARISON
In this section we review the Bayesian parameter estimation and the model comparison. For further details about the Bayesian data analysis, see for example [20][21][22].
The purpose of the Bayesian parameter estimation is to estimate the posterior probability distribution p(θ D) of the parameters θ given the data D. Having the observed data, we can update our belief about the parameters using Bayes' rule, namely
p(\theta|D) = \frac{p(D|\theta)\, p(\theta)}{p(D)}.   (3.1)
In the above expression, the posterior probability distribution is interpreted as the strength of belief in the parameters based on the data, and p(θ ) is the prior probability distribution, which is interpreted as the strength of belief in the parameters without the data. Then p(D θ ) is the likelihood function, which is the probability of the data given the parameters. Lastly, p(D) is the evidence, which is the probability of the data. Using the law of total probability, the evidence is given by
p(D) = \int_{\Omega} d\theta\, p(D|\theta)\, p(\theta),   (3.2)
where Ω denotes the parameter space. For the purpose of the parameter estimation, the evidence can be regarded as a normalization constant, because it does not involve the parameter. It is reasonable to explicitly include in Eq. (3.1) the model M which assigns a meaning to the parameters. Given a model, we can rewrite Eq. (3.1):
p(\theta|D, M) = \frac{p(D|\theta, M)\, p(\theta|M)}{p(D|M)},   (3.3)   where   p(D|M) = \int_{\Omega} d\theta\, p(D|\theta, M)\, p(\theta|M).   (3.4)
More generally, considering a hierarchical model in which the parameters themselves depend on further parameters, the prior probability distribution becomes p(θ, η|M), where η is called a hyperparameter, that is, a parameter of the parameter θ. Applying the product rule for the conditional probability, the prior probability distribution can be written as
p(\theta, \eta|M) = p(\theta|\eta, M)\, p(\eta|M).   (3.5)
Then, using Bayes' rule Eq. (3.3), the posterior probability distribution would be written as
p(\theta, \eta|D, M) = \frac{p(D|\theta, \eta, M)\, p(\theta, \eta|M)}{p(D|M)} = \frac{p(D|\theta, M)\, p(\theta, \eta|M)}{p(D|M)}.   (3.6)
The above equation tells us that the hyperparameter only affects the posterior probability distribution through parameters, that is, the likelihood function does not depend on the hyperparameter. Although the model used in this study contains many parameters and hyperparameters, we are interested in only the amplitude of the FDM. Therefore, the posterior probability distribution is integrated over the parameters and the hyperparameters except for the amplitude of the FDM:
p(A|D, M) = \int_{\Omega'} p(A, \boldsymbol{\theta}', \boldsymbol{\eta}|D, M)\, d\boldsymbol{\theta}'\, d\boldsymbol{\eta},   (3.7)
where \boldsymbol{\theta}' is the vector of the parameters except for the amplitude of the FDM, \boldsymbol{\eta} is the vector of the hyperparameters, and \Omega' denotes the parameter space for \boldsymbol{\theta}' and \boldsymbol{\eta}. This procedure is called marginalization. Using the posterior probability distribution for the amplitude of the FDM, we define the upper limit R by

0.95 = \int_{0}^{R} p(A|D, M)\, dA.   (3.8)

The above equation means that the probability that the amplitude of the FDM is less than or equal to R is 95%. The purpose of parameter estimation in this article is to obtain this upper limit R. It is not practical to calculate the posterior probability distribution for a lot of parameters, because the multiple integration in the denominator is generally difficult. Even if one can calculate the posterior distribution, one must calculate an integral like Eq. (3.7) to discuss the probability of a specific parameter. One way to avoid these problems is to use the Markov Chain Monte Carlo (MCMC) method to generate samples from the posterior probability distribution instead of calculating the posterior probability distribution itself. Since the unnormalized posterior probability distribution is used in MCMC, we do not need to calculate the normalization constant, that is, the evidence. In order to discuss a specific parameter after using the MCMC method, we simply select the samples of that parameter from the derived samples.
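In practice, the one-sided limit of Eq. (3.8) is read off directly from the MCMC samples of the amplitude; the variable names in this short sketch are illustrative.

```python
import numpy as np

def upper_limit(samples, credibility=0.95):
    """95% one-sided Bayesian upper limit R: the value below which
    `credibility` of the posterior samples of the amplitude lie."""
    return np.percentile(samples, 100.0 * credibility)

# e.g. R = upper_limit(chain[burn_in:, psi_index]) after discarding the burn-in samples.
```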
Analogous to the parameter estimation, we can also update our belief about the model through the data with Bayes' rule:
p(M|D) = \frac{p(D|M)\, p(M)}{p(D)}.   (3.9)
If we have two competing models M_1 and M_2, it is common in Bayesian model comparison to consider the ratio of Eq. (3.4) evaluated for the two models. The ratio
\frac{p(M_1|D)}{p(M_2|D)} = \frac{p(M_1)}{p(M_2)}\, \frac{p(D|M_1)}{p(D|M_2)} \equiv \frac{p(M_1)}{p(M_2)}\, B_{12},   (3.10)
is called the odds ratio; the first ratio on the right-hand side is the prior odds ratio and the second ratio is the Bayes factor. The purpose of the model comparison procedure is to calculate the Bayes factor according to Eq. (3.4), and therefore the evidence becomes critically important, unlike in the case of the parameter estimation. However, as described in the following paragraph, if the two models compared are nested, it is not necessary to calculate Eq. (3.4). Table I gives the interpretation of the Bayes factor in terms of the strength of the evidence. The second column of Table I refers to the probability p(M_1|D) under the assumption that the prior odds ratio is equal to unity: p(M_1) = p(M_2) = 0.5 [23][24][25]. In this article, since we have no prior knowledge of the models, we set the prior odds ratio to 1. Therefore we can use the probability in the second column of Table I. For nested models, where the two models contain common parameters and one model has at least one additional parameter [26], the calculation of the Bayes factor is significantly simplified. We compare the model M_1, in which the parameters include the amplitude of the FDM, and the model M_2, in which the amplitude of the FDM is a fixed value Ψ_0 and the other parameters are the same as those in the model M_1. In the case of nested models, we can use the Savage-Dickey density ratio to calculate the Bayes factor [24,[26][27][28], namely,
B_{12} = \frac{p(\Psi = \Psi_0|M_1)}{p(\Psi = \Psi_0|D, M_1)},   (3.11)
where we assumed the statistical independence between the amplitude of the FDM and the other parameters given the model M 1 and assumed that the prior probability distribution of the parameters are the same for both models except for the amplitude of the FDM, that is,
p(\Psi, \theta'|M_1) = p(\Psi|M_1)\, p(\theta'|M_2).   (3.12)
From the equation Eq. (3.11), it can be seen that this Bayes factor requires only the prior and the posterior probability distribution for Ψ at Ψ 0 under the model M 1 instead of the evidences of each model. Since the prior probability distribution is given before the parameter estimation and the samples of the posterior probability distribution are obtained from the result of the parameter estimation, it is possible to calculate this Bayes factor immediately after the parameter estimation. Specifically, we calculate Eq. (3.11) for multiple small bins around the fixed value Ψ 0 , then the average is used as the Bayes factor, and the unbiased standard deviation is used as the error bar. In this article, we use the lower limit in the log-uniform distribution for the amplitude of the FDM as the fixed value Ψ 0 . Since this lower limit is sufficiently small, the model M 2 can be regarded as a model with no FDM, and the Bayes factor can be used to know which model, with or without the FDM, is preferred.
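A sketch of this binned Savage-Dickey estimate is given below, under the assumptions that the prior on Ψ is log-uniform between the stated bounds and that the array holds posterior draws of log10 Ψ; bin width and bin count are illustrative.

```python
import numpy as np

def savage_dickey(log10_psi_samples, log10_psi_min, log10_psi_max,
                  bin_width=0.1, nbins=5):
    """Estimate B_12 = p(Psi = Psi_0 | M_1) / p(Psi = Psi_0 | D, M_1) near the
    lower prior bound, averaging over several small bins (Eq. 3.11)."""
    prior_density = 1.0 / (log10_psi_max - log10_psi_min)   # log-uniform prior density
    ratios = []
    for i in range(nbins):
        lo = log10_psi_min + i * bin_width
        hi = lo + bin_width
        frac = np.mean((log10_psi_samples >= lo) & (log10_psi_samples < hi))
        posterior_density = frac / bin_width
        if posterior_density > 0:
            ratios.append(prior_density / posterior_density)
    # Mean as the Bayes factor, unbiased standard deviation as the error bar.
    return np.mean(ratios), np.std(ratios, ddof=1)
```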
IV. BAYESIAN ANALYSIS IN THE TIME DOMAIN
In this section, we explain the data D, the model M, and the parameters θ used in the Bayesian analysis, and define the posterior probability distribution p(θ|D, M), the likelihood function p(D|θ, M), and the prior probability distribution p(θ|M).
A. Data
We used the NANOGrav 11-year data set [18] and chose six pulsars: PSRs J0613-0200, J1012+5307, J1600-3053, J1713+0747, J1744-1134, and J1909-3744. In this dataset, these pulsars have relatively good time-of-arrival (TOA) precision and long observation time, which would be suitable for detecting the signal of the FDM which becomes larger as the frequency becomes lower.
The data D we use for the Bayesian analysis are timing residuals, which are calculated by subtracting the timing model from the TOAs [18,29,30]. In order to obtain the timing residuals, we use libstempo, the Python interface to the TEMPO2 [31] timing package. For the parameter files, which include the timing model parameters, and for the timing files, which include the TOAs and their uncertainties, we used the identical data set except for the parameter file of PSR J1713+0747. In the parameter file of PSR J1713+0747, we changed only the parameter EPHEM from DE430 [32] to DE436 [33], where this parameter specifies which ephemeris is to be used. Then we used libstempo to fit the timing parameters of PSR J1713+0747 and created a new parameter file. We verified that the change in ephemeris did not make much difference to the timing parameters. We iterated the parameter fitting five times, which would be sufficient for the parameters to converge to certain values. All of our Bayesian analysis was done using this new parameter file of PSR J1713+0747.
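A minimal libstempo sketch of this step is shown below; the file names are placeholders, and while calls such as tempopulsar, fit, and residuals follow the library's documented interface, the exact options may differ from what was actually run.

```python
import libstempo as lt

# Load the TEMPO2 parameter (.par) and TOA (.tim) files for one pulsar.
psr = lt.tempopulsar(parfile="J1713+0747.par", timfile="J1713+0747.tim")

# Refit the timing model a few times so the parameters converge.
for _ in range(5):
    psr.fit()

toas = psr.toas()            # TOAs in MJD
residuals = psr.residuals()  # timing residuals in seconds
errors = psr.toaerrs         # TOA uncertainties in microseconds
```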
B. Model
Following the papers [30,34], the timing residuals δt for each pulsar can be written as follows:

\delta t = s + n_{TM} + n_{red} + n_{SSE} + n_{white},   (4.1)
where these variables are N_TOA-dimensional vectors and N_TOA denotes the number of TOAs of the pulsar. In the Bayesian framework, Eq. (4.1) is the model M for the residuals δt, which are the data D. Each term on the right-hand side is described below. The first term s is the FDM signal, which is given by Eq. (2.3). The second term n_TM is the noise due to inaccuracies of the timing model, which is represented by n_TM = M ε. Here M is an N_TOA × N_TM design matrix whose rows describe the dependence of the pulsar timing residuals on the respective timing model parameters, where N_TM is the number of the timing model parameters, and ε is an N_TM-dimensional vector denoting small offsets of the timing model parameters. We will refer to this noise as the TM noise. We obtain the design matrix using TEMPO2 via libstempo, and the timing model parameters used are listed in [18]. The third term n_red is the red noise, whose power spectral density has most of its power at low frequencies in a given data set. The red noise is known to have achromatic (observing-frequency-independent) and chromatic (observing-frequency-dependent) components [35]. The achromatic components are thought to be caused by a random walk in one of the pulsar spin parameters [36][37][38][39] and by contributions to the TOAs from an asteroid belt around the pulsar [40]. The chromatic components are thought to be caused by the pulse propagating through the ionized interstellar medium if the dispersion measure of the timing model does not fully describe this effect [18]. These components therefore would be induced either by diffractive and refractive interstellar effects [35,41] not included in the timing model or by unmodeled propagation effects. Although the origins of red noise are various, a simple power-law spectrum is often used as the power spectral density. Under the assumption of a stationary Gaussian process, the power spectral density P(f) can be written as
P(f) = \frac{A_{red}^2}{12\pi^2}\left(\frac{f}{f_{yr}}\right)^{3-\gamma_{red}} f^{-3},   (4.3)
where f is a red noise frequency, f yr is 1yr −1 , A red is a dimensionless amplitude of the red noise, and γ red is a spectral index of the red noise. Note that this parameterization is the analogy of the power-law model for the stochastic gravitational wave background [42,43]. In order to improve computational efficiency, the red noise was described by the Fourier series rather than by analytical solution of the covariance matrix calculated from the power spectral density Eq. (4.3) [44][45][46]. In particular, by defining red noise with Fourier series expansion, it is possible to use TM noise and red noise in a unified description when analytical marginalization of the posterior probability distribution is performed [30,47]. We use the same formulation in the next section. Therefore, the red noise in component form is defined as
n_{red,i} = \sum_{j=1}^{N_{red}}\left[a_j\cos\left(\frac{2\pi j t_i}{T}\right) + b_j\sin\left(\frac{2\pi j t_i}{T}\right)\right],   (4.4)
where n_{red,i} is the red noise at t_i, the i-th TOA, a_j and b_j are the Fourier series coefficients, N_red is the number of frequencies used, and T is the total observation time span, which is unique to each pulsar. Then, like the second term n_TM, the red noise is represented by
n_{red} = F a,   (4.5)
where F is an N_TOA × 2N_red matrix which has columns of alternating cosine and sine functions, and a is a 2N_red-dimensional vector which has the coefficients corresponding to the cosine and sine functions, that is, in component form,
F_{ik} = \begin{cases} \cos\left(\frac{2\pi k t_i}{T}\right), & (k\ \mathrm{odd}) \\ \sin\left(\frac{2\pi (k-1) t_i}{T}\right), & (k\ \mathrm{even}) \end{cases}, \qquad a_k = \begin{cases} a_k, & (k\ \mathrm{odd}) \\ b_{k-1}, & (k\ \mathrm{even}) \end{cases},   (4.6)
where k runs from 1 to 2N_red. Assuming the independence of each Fourier series coefficient, the relation between the Fourier series coefficients and the power spectral density Eq. (4.3) is defined as
\langle a_k a_{k'} \rangle = \begin{cases} P_{k/T}\, \Delta f\, \delta_{k,k'}, & (k\ \mathrm{odd}) \\ P_{(k-1)/T}\, \Delta f\, \delta_{k,k'}, & (k\ \mathrm{even}) \end{cases} \equiv \Xi_k,   (4.7)
where ⟨...⟩ denotes an ensemble average, Δf is the frequency resolution, which is about 1/T, and δ_{k,k'} is the Kronecker delta. With this expression, the relation between the cross-correlation function of the red noise, C_red, and the power spectral density is given by
C_{red,i,i'} = \langle n_{red,i}\, n_{red,i'} \rangle = \sum_{k}^{2N_{red}} \sum_{k'}^{2N_{red}} F_{i,k}\, \langle a_k a_{k'} \rangle\, F_{i',k'} = \sum_{j}^{N_{red}} P_{j/T}\, \cos\left(\frac{2\pi j (t_i - t_{i'})}{T}\right) \Delta f.   (4.8)
This relation is expected from the Wiener-Khinchin theorem for a stationary process. In this article we use N_red = 30. The fourth term n_SSE is the noise due to inaccuracies of the Solar System ephemeris (SSE), which is used to convert the TOAs at the geocenter to those at the Solar System barycenter (SSB). We will refer to this noise as the SSE noise. It is known that SSE errors affect upper limits and Bayes factors for the amplitude of the stochastic gravitational wave background [34]. The stochastic gravitational wave background can in principle be distinguished from the SSE errors by using a two-point correlation analysis [48]; on the other hand, the FDM signal cannot be distinguished from the SSE errors using the correlation analysis, because the correlation function characterizing the FDM signal is not defined. Therefore, the presence of the SSE errors would have a stronger influence on the analysis of the FDM signal than in the case of the stochastic gravitational wave background. Following [34], we assume that the SSE errors only affect the Rømer delay Δ_R, which is the vacuum light travel time between the geocenter and the SSB. Therefore, the Rømer delay at t_i is
\Delta_{R,i} = r_i \cdot R_i,   (4.9)
where r_i is the vector from the geocenter to the SSB, and R_i is the unit vector from the SSB to the pulsar barycenter [29]. In the case that the position shift of the SSB is induced by the error of a planet mass in the SSE, this shift changes the vector r_i, so that the induced residuals n^{mass}_{SSE,i} at t_i can be written as [49]

n^{mass}_{SSE,i} = -\delta M\, (b_i \cdot R_i),   (4.10)
where δM is the error of the planet mass in units of the solar mass M_⊙, and b_i is the vector from the planet barycenter to the SSB. The planets whose mass errors we consider are Jupiter, Saturn, Uranus, and Neptune. As in the above case, the error of a planet orbit in the SSE induces the residuals

n^{orbit}_{SSE,i} = -M\left(\sum_{\mu=1}^{6} \frac{\partial b_i}{\partial a_\mu}\, \delta a_\mu\right) \cdot R_i,   (4.11)
where M is the planet mass in solar mass units, a_μ are the set-III parameters [50], which are composed of six parameters and characterize an osculating elliptical orbit at a given osculation epoch, and δa_μ are small offsets of the set-III parameters. We have to consider the error of the orbit of Jupiter. We also consider a rotation of the vector r_i around the ecliptic pole,
n^{rotation}_{SSE,i} = \big(r_i - R_z(\theta)\, r_i\big) \cdot R_i, \qquad R_z(\theta) \equiv \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \theta \equiv \delta_z \left(\frac{\mathrm{second}}{\mathrm{year}}\right)(t_i - t_0),   (4.12)
where R_z(θ) is a rotation matrix, δ_z is a rotation rate with units of rad/year, and t_0 is the offset of time. Among the noises mentioned above, the dominant contribution to the residuals comes from Jupiter, because Jupiter has a large mass and is thought to have a relatively large orbital error compared to Saturn [51]. Uranus and Neptune also have large uncertainties, but their orbital periods are sufficiently longer than the observation time of the pulsars, hence the induced residuals are proportional to the time and are absorbed by the fitting of the timing model for the intrinsic pulsar spin periods [52,53]. Thus, the noise due to inaccuracies of the SSE reads

n_{SSE} = n^{mass,J}_{SSE} + n^{mass,S}_{SSE} + n^{mass,U}_{SSE} + n^{mass,N}_{SSE} + n^{orbit,J}_{SSE} + n^{rotation}_{SSE},   (4.13)
where n^{mass,J}_{SSE}, n^{mass,S}_{SSE}, n^{mass,U}_{SSE}, and n^{mass,N}_{SSE} are the noises due to the mass errors of Jupiter, Saturn, Uranus, and Neptune, respectively, n^{orbit,J}_{SSE} is the noise due to the orbit error of Jupiter, and n^{rotation}_{SSE} is the noise due to the rotation rate around the ecliptic pole. We used the values and the data implemented in ENTERPRISE (Enhanced Numerical Toolbox Enabling a Robust PulsaR Inference SuitE), which is a pulsar timing analysis code. Thus, the value of Jupiter's mass M_J is the value of the IAU 2009 system of astronomical constants [54], the value of t_0 corresponds to MJD 55197, and the data of ∂b_i/∂a_μ are the same as in ENTERPRISE. Note that, in the data of ∂b_i/∂a_μ, a principal component analysis (PCA) was performed on the six ∂b_i/∂a_μ, so that the small offsets δa_μ do not correspond to the set-III parameters a_μ themselves but to parameters based on the PCA basis. In the calculation of the shift of r_i due to the SSE errors, to reduce N_TOA for efficient computation, b_i and r_i are averaged within the TOAs obtained in one observation with one combination of receiver and backend system, and the data of ∂b_i/∂a_μ are interpolated onto the corresponding averaged TOAs. After that, assuming that the value of the SSE noise is the same within the TOAs obtained in one observation with one combination of receiver and backend system, the shift of r_i due to the SSE errors is calculated. We obtained the unit vector R_i from the SSB to the pulsar barycenter using TEMPO2 via libstempo.
The last term n_white is loosely referred to as the white noise. Assuming that this noise follows a Gaussian distribution, we characterize it by a correlation function:
C_{white} = \langle n_{white}\, n_{white}^{T} \rangle = C_{EFAC} + C_{EQUAD} + C_{ECORR},   (4.14)
where C_EFAC, C_EQUAD, and C_ECORR are the correlation functions for the EFAC, EQUAD, and ECORR parameters, respectively. Each term on the right-hand side is described below. When sorting the TOAs by the combination of receiver and backend system used, the first term C_EFAC can be written as follows:

C_{EFAC} = \mathrm{diag}\big(e_1^2 W_1,\ e_2^2 W_2,\ \ldots,\ e_{N_{back}}^2 W_{N_{back}}\big),   (4.15)

where N_back denotes the number of combinations of receivers and backend systems, a is the subscript running over N_back, e_a is called an EFAC parameter, W_a is an N_TOA_a × N_TOA_a diagonal matrix composed of the TOA measurement uncertainties obtained with the a-th combination, and N_TOA_a denotes the number of TOAs obtained with the a-th combination. From the above equation, it can be seen that the EFAC parameters depend on the a-th combination and change the size of the error bars of the TOAs. This noise characterizes systematic errors of the TOA measurement uncertainties. As in the case of the EFAC parameters, the second term C_EQUAD can be written as follows:

C_{EQUAD} = \mathrm{diag}\big(q_1^2 I_1,\ q_2^2 I_2,\ \ldots,\ q_{N_{back}}^2 I_{N_{back}}\big),   (4.16)
where q_a is the EQUAD parameter and I_a is the N_TOA_a × N_TOA_a identity matrix. This noise is an additional white Gaussian noise. When sorting the TOAs in the order of observation for each combination separately, the last term C_ECORR can be written as follows:

C_{ECORR} = \mathrm{diag}\big(J_1,\ J_2,\ \ldots,\ J_{N_{back}}\big), \qquad J_a \equiv \mathrm{diag}\big(u_{a1} j_a^2 u_{a1}^T,\ u_{a2} j_a^2 u_{a2}^T,\ \ldots,\ u_{aN_{obs,a}} j_a^2 u_{aN_{obs,a}}^T\big), \qquad u_{ab} \equiv (1, 1, \ldots, 1)^T,   (4.17)

where N_obs,a denotes the number of observations using the a-th combination, b is the subscript running over N_obs,a, j_a is the ECORR parameter, u_{ab} is the N_TOA_ab-dimensional vector whose components are all one, and N_TOA_ab denotes the number of TOAs obtained within the b-th observation using the a-th combination. This noise expresses that there is a correlation between the TOAs obtained during one observation and no correlation between the TOAs obtained in different observations. It characterizes pulse jitter caused by stochastic amplitude and phase variations in the pulse, which correlates within a certain frequency band and does not correlate in time [41].
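The sketch below assembles the white-noise covariance of Eqs. (4.14)-(4.17) for the TOAs of a single receiver/backend combination; the EFAC, EQUAD, and ECORR values and the epoch grouping are placeholders, not fitted parameters.

```python
import numpy as np

def white_covariance(sigma, epoch_index, efac, equad, ecorr):
    """C_white = EFAC^2 * diag(sigma^2) + EQUAD^2 * I + ECORR^2 * (ones block per epoch)
    for one receiver/backend combination (sigma: TOA uncertainties in seconds)."""
    n = len(sigma)
    C = np.diag((efac * sigma) ** 2) + equad**2 * np.eye(n)
    for epoch in np.unique(epoch_index):        # TOAs sharing an observation epoch
        idx = np.where(epoch_index == epoch)[0]
        C[np.ix_(idx, idx)] += ecorr**2         # fully correlated within the epoch
    return C
```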
To summarize, the parameters θ for the Bayesian data analysis are Ψ, f, α_e, and α_p of the FDM signal, ε of the TM noise, a of the red noise, δM, δa_μ, and δ_z of the SSE noise, and e_a, q_a, and j_a of the white noise. Note that the parameters Ψ, f, α_e, δM, δa_μ, and δ_z are common to all pulsars. The red noise defined in this section is a hierarchical model in the Bayesian framework, and A_red and γ_red are called hyperparameters, which are the parameters of the parameter a. In addition, the TM noise defined in this section is not a hierarchical model, but it is defined in the same way as the red noise. Therefore the parameters ε of the TM noise follow a Gaussian distribution, and the variance-covariance matrix in component form becomes:
\langle \varepsilon_l\, \varepsilon_{l'} \rangle = \Phi_l\, \delta_{l,l'},   (4.18)
where l is the subscript running over N_TM, and Φ_l are the hyperparameters, that is, the parameters of the parameter ε_l. The reason for doing this is to use the TM noise and the red noise in a unified description, as mentioned in the explanation of the red noise. Then we can avoid problems in marginalizing the posterior probability distribution with a uniform prior.
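As an illustration of how these pieces fit together, the numpy sketch below builds the Fourier design matrix of Eq. (4.6) with a simplified harmonic indexing, the diagonal elements Ξ of Eq. (4.7) from the power-law spectrum of Eq. (4.3), and the combined basis used in the next section; TOAs in seconds and the large TM variance are assumptions consistent with the text, not the authors' code.

```python
import numpy as np

def fourier_design_matrix(toas, n_freq=30):
    """F (N_TOA x 2*N_red): alternating cosine/sine columns at harmonics j/T."""
    T_span = toas.max() - toas.min()
    freqs = np.arange(1, n_freq + 1) / T_span
    F = np.zeros((len(toas), 2 * n_freq))
    F[:, 0::2] = np.cos(2 * np.pi * np.outer(toas, freqs))
    F[:, 1::2] = np.sin(2 * np.pi * np.outer(toas, freqs))
    return F, freqs

def red_noise_prior(freqs, A_red, gamma_red, T_span):
    """Diagonal elements Xi: P(f) * delta_f with P(f) from Eq. (4.3)."""
    f_yr = 1.0 / (365.25 * 86400.0)                      # 1/yr in Hz (TOAs in seconds)
    P = A_red**2 / (12 * np.pi**2) * (freqs / f_yr) ** (3.0 - gamma_red) * freqs**-3.0
    return np.repeat(P / T_span, 2)                      # same variance for cos and sin terms

# Combined basis T = (M  F) and diagonal prior (Phi_1..Phi_l, Xi_1..Xi_k);
# M, toas, A_red, gamma_red are placeholders here.
# F, freqs = fourier_design_matrix(toas)
# T_mat = np.hstack([M, F])
# B_diag = np.concatenate([np.full(M.shape[1], 1e80),               # large TM variance
#                          red_noise_prior(freqs, A_red, gamma_red,
#                                          toas.max() - toas.min())])
```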
C. Likelihood Function and Posterior Probability Distribution
In this section we derive the likelihood function and the posterior probability distribution used in the Bayesian estimation. We basically follow the article [34].
If the model M for the data D is given, the likelihood function p(D|θ, M) can be obtained. In the model M for the residuals δt given by Eq. (4.1), the white noise carries the statistical uncertainty, while the other terms are determined by the given parameters. In this case, since the white noise has a Gaussian distribution, the likelihood for each pulsar can be written as [20]

p(\delta t|\theta, M) = \frac{1}{\sqrt{\det(2\pi C_{white})}} \exp\left[-\frac{1}{2}\big(\delta t - s - n_{TM} - n_{red} - n_{SSE}\big)^T C_{white}^{-1} \big(\delta t - s - n_{TM} - n_{red} - n_{SSE}\big)\right].   (4.19)
The likelihood function for all pulsars can be written by multiplying the likelihood functions of the individual pulsars, because it is considered that there is no correlation between the residuals of different pulsars. If one wants to estimate all the parameters, one can use this likelihood function. In most analyses with PTAs, the parameters ε of the TM noise and the parameters a of the red noise are eliminated by marginalizing the posterior probability distribution before analyzing the data. Since the parameters ε and a are unique to each pulsar, the marginalization can be done independently for each pulsar, so we can calculate the posterior probability distribution in the case of a single pulsar. Our formulation of the marginalization is the same as that in the papers [30,47]. Following these papers, the likelihood function Eq. (4.19) is rewritten as
p(\delta t|\theta, M) = \frac{1}{\sqrt{\det(2\pi C_{white})}} \exp\left[-\frac{1}{2}(\delta r - T b)^T C_{white}^{-1} (\delta r - T b)\right], \qquad \delta r \equiv \delta t - s - n_{SSE}, \quad T \equiv (M\ \ F), \quad b \equiv \begin{pmatrix} \varepsilon \\ a \end{pmatrix},   (4.20)
where δr is defined only to simplify notation, T is the N_TOA × (N_TM + 2N_red) matrix in which the matrices M and F are concatenated along the row axis, and b is the (N_TM + 2N_red)-dimensional vector in which ε and a are concatenated along the column axis. Since each noise was assumed to be Gaussian, the prior probability distribution for the parameter b can be obtained by using Eq. (3.5) as follows:
p(b, \eta|M) = p(b|\eta, M)\, p(\eta|M) = \frac{1}{\sqrt{\det(2\pi B)}} \exp\left(-\frac{1}{2} b^T B^{-1} b\right) p(\eta|M), \qquad B \equiv \mathrm{diag}(\Phi_1, \Phi_2, \cdots, \Phi_l, \Xi_1, \Xi_2, \cdots, \Xi_k),   (4.21)
where B is a (N_TM + 2N_red) × (N_TM + 2N_red) diagonal matrix whose diagonal elements Φ_l and Ξ_k are defined by Eq. (4.18) and Eq. (4.7), respectively, and η denotes the hyperparameters Φ_l, A_red, and γ_red. Note that we assume the statistical independence of the parameters and the hyperparameters. Then, using Eq. (3.6), the posterior probability distribution can be written as
p(\theta, \eta|\delta t, M) = \frac{p(\delta t|\theta, M)\, p(\theta, \eta|M)}{p(D|M)} = \frac{p(\delta t|\varphi, b, M)\, p(b, \eta|M)\, p(\varphi|M)}{p(D|M)}
= \frac{1}{\sqrt{(2\pi)^{N_{TOA}+N_{TM}+2N_{red}} \det(C_{white}) \det(B)}} \exp\left[-\frac{1}{2}\left\{(\delta r - T b)^T C_{white}^{-1} (\delta r - T b) + b^T B^{-1} b\right\}\right] \times \frac{p(\eta|M)\, p(\varphi|M)}{p(D|M)},   (4.22)
where φ is the vector of all the parameters except for the parameters b, which have no hyperparameters. In order to marginalize over the parameters b, we complete the square in the exponent:
(\delta r - T b)^T C_{white}^{-1} (\delta r - T b) + b^T B^{-1} b = \delta r^T C_{white}^{-1} \delta r - \hat{b}^T \left(T^T C_{white}^{-1} T + B^{-1}\right) \hat{b} + \left(b - \hat{b}\right)^T \left(T^T C_{white}^{-1} T + B^{-1}\right) \left(b - \hat{b}\right),   (4.23)
where
\hat{b} \equiv \left(T^T C_{white}^{-1} T + B^{-1}\right)^{-1} T^T C_{white}^{-1} \delta r.   (4.24)
As a result, only the last term depends on the parameters b, and the Gaussian integration can be performed as

\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2}\left(b - \hat{b}\right)^T \left(T^T C_{white}^{-1} T + B^{-1}\right) \left(b - \hat{b}\right)\right] db = \sqrt{\frac{(2\pi)^{N_{TM}+2N_{red}}}{\det\left(T^T C_{white}^{-1} T + B^{-1}\right)}}.   (4.25)
The marginalized posterior probability distribution can therefore be calculated as follows:

p(\varphi, \eta|\delta t, M) = \int_{-\infty}^{\infty} p(\theta, \eta|\delta t, M)\, db
= \frac{1}{\sqrt{(2\pi)^{N_{TOA}} \det(C_{white}) \det(B) \det\left(T^T C_{white}^{-1} T + B^{-1}\right)}}
\times \exp\left[-\frac{1}{2}\left\{\delta r^T C_{white}^{-1} \delta r - \left(T^T C_{white}^{-1} \delta r\right)^T \left(T^T C_{white}^{-1} T + B^{-1}\right)^{-1} T^T C_{white}^{-1} \delta r\right\}\right] \times \frac{p(\eta|M)\, p(\varphi|M)}{p(D|M)}.   (4.26)
The TM noise was defined as a hierarchical model in the previous section, but there is no prior knowledge about the parameters ε. In order to take this into account, it is further assumed that the value of each hyperparameter Φ_l is much larger than the possible variances in the PTA analysis. In this case, similar prior values are given over the range of possible values for each parameter, which means that there is no special value as prior information for each parameter. The prior probability distribution of the hyperparameter Φ_l is
p(\Phi_l|M) = \delta(\Phi_l - m_l),   (4.27)
where m_l is an extremely large value. Then, the marginalization over the parameters Φ_l can be performed:
p(\varphi, A_{red}, \gamma_{red}|\delta t, M) = \int_{-\infty}^{\infty} p(\varphi, \eta|\delta t, M)\, d\Phi
= \frac{1}{\sqrt{(2\pi)^{N_{TOA}} \det(C_{white}) \det(B) \det\left(T^T C_{white}^{-1} T + B^{-1}\right)}}
\times \exp\left[-\frac{1}{2}\left\{\delta r^T C_{white}^{-1} \delta r - \left(T^T C_{white}^{-1} \delta r\right)^T \left(T^T C_{white}^{-1} T + B^{-1}\right)^{-1} T^T C_{white}^{-1} \delta r\right\}\right] \times \frac{p(A_{red}, \gamma_{red}|M)\, p(\varphi|M)}{p(D|M)},   (4.28)
where
B = \mathrm{diag}(m_1, m_2, \cdots, m_l, \Xi_1, \Xi_2, \cdots, \Xi_k).   (4.29)
Note that, in the absence of knowledge of the parameters, a uniform distribution is often used as a prior probability distribution. In order to perform the marginalization, it is reasonable to assume the uniform prior p(θ|M) ∝ 1 in the range θ ∈ (−∞, ∞). However, this distribution is not a probability distribution, because it cannot be normalized. Such a prior distribution is called an improper prior distribution, and special attention must be paid when we use it [22]. Since we did not want to use an improper prior distribution, we used a normal distribution with very large variance for the TM parameters. This distribution is proper and can be regarded as an approximation of the uniform distribution. We set each hyperparameter value m_l to 10^80, which is sufficiently large for the PTA analysis. As mentioned earlier, this marginalized posterior probability distribution is for a single pulsar. The marginalized posterior probability distribution using multiple pulsars can be obtained by multiplying the above expression for each pulsar, excluding the prior probability distributions of the parameters common to all pulsars, and then multiplying by the prior probability distributions of the parameters common to all pulsars.
When actually calculating the posterior distribution, how to calculate the determinant and the inverse of a matrix is important for reducing the computation time, so we briefly describe the calculation. In the case of C_white, since C_white is a block diagonal matrix, the determinant and the inverse can be calculated independently for each block. For each block, the matrix determinant lemma and the Sherman-Morrison formula can be used. In the case of B, since B is a diagonal matrix, the determinant is the product of the diagonal elements. In the case of T^T C_white^{-1} T + B^{-1}, the Cholesky decomposition, which expresses a matrix as the product of an upper triangular matrix and its transpose, can be applied. Using the triangular factor, the equation (T^T C_white^{-1} T + B^{-1}) x = T^T C_white^{-1} δr is solved for x rather than computing the inverse of T^T C_white^{-1} T + B^{-1} itself, and the determinant is obtained from the product of the squared diagonal elements of the triangular factor.
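The linear-algebra pattern just described (Cholesky solves instead of explicit inverses) can be sketched as follows for the bracketed term in Eq. (4.26); the matrix names follow the text, while the function itself is illustrative and assumes C_white^{-1} has already been formed via its block structure.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def marginalized_chi2_and_logdet(dr, T, Cwhite_inv, logdet_Cwhite, B_diag):
    """Return the exponent and log-determinant pieces of Eq. (4.26)."""
    TNT = T.T @ Cwhite_inv @ T                 # T^T C^-1 T
    TNr = T.T @ Cwhite_inv @ dr                # T^T C^-1 dr
    Sigma = TNT + np.diag(1.0 / B_diag)        # T^T C^-1 T + B^-1
    cf = cho_factor(Sigma)                     # Cholesky factorization of Sigma
    x = cho_solve(cf, TNr)                     # solves Sigma x = TNr without inverting Sigma
    chi2 = dr @ Cwhite_inv @ dr - TNr @ x
    logdet_Sigma = 2.0 * np.sum(np.log(np.diag(cf[0])))   # det from squared diagonal of factor
    logdet_B = np.sum(np.log(B_diag))
    return -0.5 * chi2, -0.5 * (logdet_Cwhite + logdet_B + logdet_Sigma)
```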
D. Prior Probability Distribution
In this section, we describe the prior probability distributions. We use specific knowledge only for the mass errors of each planet as prior information. Using the law of propagation of uncertainty, the variances of δM_J, δM_S, and δM_N are calculated from the IAU 2009 system of astronomical constants, and the variance of δM_U is calculated from the values in the article [55], which is newer than the IAU 2009 system of astronomical constants. Then we assume a normal distribution for the mass error of each planet and apply the obtained variances. For parameters without specific knowledge, we use a log-uniform distribution for parameters which need to be searched over several orders of magnitude with only positive values, and we use a uniform distribution for the other parameters. The ranges of the log-uniform distribution and the uniform distribution are taken sufficiently wide compared with the values that the parameters would take. The parameters and their prior probability distributions used in this article are given in Table II.
Regarding the amplitude of the FDM, we especially consider both cases of the uniform distribution and the log-uniform distribution, as in the article [34]. The uniform distribution is used to give upper limits, and the log-uniform distribution is used for the model comparison; the reason for this is as follows. If there is a FDM signal, for example by inserting it into the data, the prior probability distribution is updated to a posterior probability distribution having a peak at the correct values of the parameters of the FDM. In this case, both prior probability distributions give similar posterior probability distributions. In practice, however, it is not known whether there is a FDM signal in the data, and even if the data are used, the posterior probability distribution may not be updated much from the prior probability distributions of the parameters of the FDM. In this practical case, the posterior probability distribution is affected by the shape of the prior probability distribution of the amplitude of the FDM. Considering the shape of the prior probability distribution, the log-uniform distribution allows smaller amplitudes of the FDM than the uniform distribution, so that the upper limit obtained using Eq. (3.8) also decreases accordingly. Consequently, if one wants to give a conservative upper limit, the log-uniform distribution is not suitable. Furthermore, when we actually analyze the data used in this article, the posterior distribution obtained by using the log-uniform distribution often has support all the way down to the lower limit given to the log-uniform distribution. This means that the upper limit depends on the lower limit of the log-uniform distribution, so that if the lower limit is decreased, the upper limit can be reduced. This is another reason why the log-uniform distribution is not suitable for giving an upper limit. On the other hand, this property of the log-uniform distribution is preferable for computing the Bayes factor (3.11), because the Bayes factor often gives a finite value with the fixed value Ψ_0 = 10^{-18}, which is a very small value used as the lower limit of the prior probability distribution.
As is done in [34,45,56], we analyze the white noise in advance, before the main analysis. The resulting MCMC chains are used to calculate the value that maximizes the one-dimensional posterior probability distribution of each white noise parameter, where this value is called the maximum a posteriori (MAP) value. The main analysis is performed by fixing the white noise parameters to their MAP values. See Section VI for the pre-analysis.
V. MARKOV CHAIN MONTE CARLO SIMULATION
The MCMC simulation can be used to generate samples from the posterior probability distribution. The MCMC method we used is called parallel tempering. In the parallel tempering method, a concept of temperature is introduced, and MCMC simulations at different temperatures are executed in parallel. The advantage of parallel tempering is that it is possible to reduce the tendency of the samples of the posterior distribution to be trapped in a local minimum, compared to the Metropolis-Hastings method, which is one of the most famous MCMC methods [20]. We carry out the analysis using four temperatures T = 1.00, 4.64, 21.5, 100, with the PTMCMCSampler as included in PAL2 [58], which is a Bayesian inference package for PTAs. Regarding models not implemented in PAL2, the FDM signal is implemented like the continuous gravitational waves and the SSE noise is implemented like any other noise. Following the article [45], we use adaptive Metropolis [59], single-component adaptive Metropolis [60], and differential evolution [61] as proposal algorithms, which are used to generate the next samples using past samples. Furthermore, we also use a simple proposal algorithm that generates the next sample of each parameter from a proposal distribution which is the same as the corresponding prior distribution. All of these proposal algorithms are used in a single MCMC simulation, and which one is used is chosen randomly for each proposal in the MCMC simulation. In this article, we use the values written in PAL2 for each variable used in PTMCMCSampler, unless specifically mentioned.
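A toy illustration of the parallel-tempering idea follows: plain Metropolis chains at the four temperatures with occasional swaps between neighbours. This is a didactic sketch only, not the PTMCMCSampler implementation actually used, and the step size and swap schedule are arbitrary.

```python
import numpy as np

def parallel_tempering(log_post, x0, temps=(1.00, 4.64, 21.5, 100.0),
                       n_iter=10000, step=0.1, swap_every=10):
    """Run one Metropolis chain per temperature; hot chains explore widely,
    and occasional swaps let the T=1 chain escape local maxima."""
    rng = np.random.default_rng(0)
    chains = [np.array(x0, dtype=float) for _ in temps]
    logp = [log_post(c) for c in chains]
    samples = []
    for it in range(n_iter):
        for i, T in enumerate(temps):
            prop = chains[i] + step * rng.standard_normal(chains[i].shape)
            lp = log_post(prop)
            if np.log(rng.uniform()) < (lp - logp[i]) / T:    # tempered acceptance
                chains[i], logp[i] = prop, lp
        if it % swap_every == 0:                              # propose a neighbour swap
            i = rng.integers(len(temps) - 1)
            dbeta = 1.0 / temps[i] - 1.0 / temps[i + 1]
            if np.log(rng.uniform()) < dbeta * (logp[i + 1] - logp[i]):
                chains[i], chains[i + 1] = chains[i + 1], chains[i]
                logp[i], logp[i + 1] = logp[i + 1], logp[i]
        samples.append(chains[0].copy())                      # keep only the T=1 chain
    return np.array(samples)
```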
8. \det(A + u v^T) = (1 + v^T A^{-1} u)\,\det(A)
9. (A + u v^T)^{-1} = A^{-1} - \dfrac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}
VI. PRE-ANALYSIS
As is usual [34,45,56], in order to obtain the MAP values of the parameters of the white noise, we analyze the white noise first before the main analysis. By doing this, in the main analysis, the number of free parameters can be reduced, and the inverse matrix and determinant of the white noise mentioned in the section IV C only need to be calculated once at the beginning of MCMC simulation. In the pre-analysis, we performed independent analysis for each pulsar, and we used the model which contains the red noise in addition to white noise. We ran the MCMC simulation with 10 6 iterations and removed the first 25% as a burn-in period, where the burn-in period is the period during which samples have not yet been obtained from the target distribution.
The reason for including the red noise in the model is that the red noise is a stochastic noise like the white noise, and it can become white noise when the spectral index becomes zero. However, if the one-dimensional posterior probability distribution of a white noise parameter has a sharp peak, it was confirmed that the MAP value of the parameter does not change very much regardless of the presence or absence of the red noise model. In particular, it is known that the red noise of PSR J1909-3744 can take a wide range of parameter values [18], but the above result was still obtained. Therefore, this result suggests that the white noise can be analyzed in advance.
10. https://github.com/jellis18/PTMCMCSampler
11. https://github.com/jellis18/PAL2
VII. RESULT
In this section we describe the upper limits on the amplitude of the FDM and how much the FDM signal is absorbed by other noises. All our results were calculated using six pulsars: PSRs J0613-0200, J1012+5307, J1600-3053, J1713+0747, J1744-1134, and J1909-3744 in the NANOGrav 11-year data set.
A. Upper limits
We calculated the 95% confidence upper limits on the amplitude of the FDM by the Bayesian analysis. We ran all the MCMC simulations with 10^6 iterations and removed the first 25% as a burn-in period. As the prior probability distribution of the amplitude of the FDM we considered two cases: the uniform prior and the log-uniform prior. The uniform prior was used to place the conservative upper limits, and the log-uniform prior was used to calculate the Bayes factors, where the upper limits were calculated using Eq. (3.8) and the Bayes factors were calculated using Eq. (3.11). In order to see the effect of including the SSE noise in the model on the results, we also calculated the upper limits and the Bayes factors when the SSE noise is not included in the model. See Appendix A for how accurately the FDM can be detected by our Bayesian analysis.
In Figure 1, we show the upper limits and the Bayes factors for the amplitude Ψ as a function of the frequency f and the FDM mass m. The relation between the frequency f and the FDM mass m is given by Eq. (2.6). First, in the upper plot, the black solid and dashed lines denote the upper limits using the uniform prior and the log-uniform prior, respectively. Here we plotted the results obtained using the log-uniform prior, but as we mentioned in Section IV D, we regard the results obtained with the uniform prior as conservative upper limits. The red solid and dashed lines denote the upper limits obtained when the SSE noise was not included in the model, using the uniform prior and the log-uniform prior, respectively. The bold black line denotes the upper limit of the Bayesian analysis obtained in [16] (taken from Figure 3 therein). The green line denotes the predicted amplitude of the FDM given by Eq. (2.5) with ρ = 0.4 GeV/cm³. Note that this does not mean that the FDM signal lies on all of this line; it would be observed at one point on this line, depending on the mass of the FDM. The purple vertical lines denote the inverses of the observation times of the pulsars and correspond to PSRs J1744-1134, J1012+5307, J1909-3744, J1713+0747, J0613-0200, and J1600-3053 in order from the left. We regard the leftmost purple vertical line as the lower limit of the frequency at which the PTA is sensitive to the signal of the FDM. Therefore, we do not discuss the results obtained in the shaded region of the plot. One simple reason why the inverse of the observation time is the lower limit of the frequency is that it would be difficult to detect a signal with a period longer than the observation time. A slightly more specific reason is that some of the signal at the lowest frequency is removed by fitting the pulsar spin periods when creating the residuals [52,53]. Furthermore, in the model used in this article, the TM noise is included to take into account the uncertainty of the fitting. The TM noise corresponding to the pulsar spin periods induces uncertainty in the analysis of the FDM signal at the lowest frequency, because we marginalized the posterior probability distribution using the uninformative prior for the parameters ε. Next, in the bottom plot, the black and red dots denote the mean value of the Bayes factor using the model with and without the SSE noise, respectively. The unbiased standard deviation is used for the error bars. Only to make the plot easier to read, when the Bayes factor exceeds 20, it is represented by an upper triangle and the mean value and the unbiased standard deviation of the Bayes factor are written above it.
First we consider the red curves, obtained when the SSE noise is not included in the model. The upper limits for the log-uniform prior are somewhat stronger than those for the uniform prior, but the difference is small. The reason the difference is small is that the Bayes factor exceeds 3 when the frequency is 10^-8.19 Hz (1.34 × 10^-23 eV) or lower. According to Table I, a Bayes factor exceeding 3 means that there is a signal that is somewhat similar to the FDM signal. Therefore, whichever prior probability distribution is used, the posterior probability distribution tends to be large at the parameter values of that signal, and as a result the upper limits do not change much. Also, at the frequencies 10^-8.52 and 10^-8.46 Hz (6.24 × 10^-24 and 7.17 × 10^-24 eV), the Bayes factor exceeds 20. Thus, at these frequencies, the presence of an FDM-like signal is strongly supported, although it should be noted that the PTA loses sensitivity there. We have found signals similar to the FDM signal over a relatively wide frequency range, but it is hard to believe that this is the FDM signal: the inferred amplitude is about an order of magnitude larger than the expected amplitude, so what is found in this region cannot be the FDM signal. Thus, it turns out that a waveform similar to the FDM signal (e.g., the noise induced by Jupiter in the SSE noise, as expected, and/or possibly a signal from gravitational waves) has to be included in the model in addition.
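Incidentally, the p(M1|d) column of Table I follows from the Bayes factor under equal prior model odds, p(M1|d) = B12/(1 + B12). The quick check below is an illustrative snippet, not part of the analysis code; it reproduces the boundaries quoted in the table.

```python
def model_probability(bayes_factor: float) -> float:
    """Posterior probability of model M1 assuming equal prior odds: p(M1|d) = B12 / (1 + B12)."""
    return bayes_factor / (1.0 + bayes_factor)

for b in (1, 3, 20, 150):
    # Prints 0.5, 0.75, 0.952, 0.993: the boundaries listed in Table I.
    print(b, round(model_probability(b), 3))
```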
Next, we consider the black curves, obtained when the SSE noise is included in the model. This is our main result. Compared with the case where the SSE noise is not included in the model, the upper limits on the amplitude of the FDM obtained using the uniform prior do not change much. However, the difference between the upper limits obtained from the two prior probability distributions is large. The reason is that all the Bayes factors are smaller than 3 and in most cases they do not exceed 1, for the opposite reason to that mentioned in the previous paragraph. A Bayes factor less than 1 means that the model without the FDM signal, with probability p(M2|d), is favored over the model with it, p(M1|d). Therefore, in this case we conclude that no FDM signal has been detected. At frequencies of 10^-8.46 Hz (7.17 × 10^-24 eV) or lower, the Bayes factor is larger than 1, but it is too early to claim the existence of the FDM signal, because this is close to the lower limit of the frequency range. Thus, we conclude that we have not obtained significant evidence for the FDM, and therefore we interpret the result as a 95% confidence upper limit on the amplitude of the FDM. In comparison with the published Bayesian upper limits on the amplitude of the FDM using the PPTA 12-year data set [16], i.e. comparing the black and the bold black lines, we found that stronger upper limits were obtained in the frequency range from 10^-8.34 to 10^-8.19 Hz (from 9.45 × 10^-24 to 1.34 × 10^-23 eV). In this range, up to three times stronger upper limits were obtained, and in other regions about the same upper limits were obtained.

It is also important to consider the upper limit on the energy density of the dark matter near the Earth rather than the amplitude of the FDM signal. Thus, we convert the amplitude of the FDM signal into the energy density using Eq. (2.5), and the result is plotted in Figure 2, where the bold black line denotes the upper limit on the energy density from the Bayesian analysis in [16] (taken from Figure 4 therein). As can be seen from Figure 2, our main upper limit, represented by the black line, is 7 GeV cm^-3 or less over the analyzed range from 10^-8.55 to 10^-8.01 Hz (from 5.83 × 10^-24 to 2.02 × 10^-23 eV). The strongest upper limit on the energy density is 2 GeV cm^-3, at the frequency 10^-8.28 Hz (1.09 × 10^-23 eV).
B. Fixed noise analysis
We first analyzed the red noise and the SSE noise alone, and then calculated the upper limits on the amplitude of the FDM using the obtained MAP values of the noise parameters. We ran the MCMC simulation with 10^6 iterations for the analysis of the red noise and the SSE noise, and with 10^5 iterations for the analysis of the FDM signal; in both cases we removed the first 25% as a burn-in period. As in the previous section, we considered both the uniform and the log-uniform distribution as the prior probability distribution of the amplitude of the FDM. The results are plotted in Figure 3, which is a plot similar to Figure 1. The solid and dashed lines indicate that the uniform and the log-uniform prior were used, respectively. This analysis is not correct, because the FDM signal, the red noise, and the noise induced by Jupiter in the SSE noise have similar waveforms, so none of these components can be analyzed separately in advance. The red noise is a random process, but it is known to mimic a periodic waveform at the lowest frequency in the case of a steep power law; see, for example, [62] and references therein. The noise induced by the mass error of Jupiter has a frequency corresponding to the inverse of Jupiter's orbital period, and the noises induced by the errors in Jupiter's orbital elements also appear at that frequency or at twice that frequency. Since Jupiter's orbital period is 11.86 yr, Jupiter causes noise with frequencies close to the lowest frequency in the 11-year data set we used. For these reasons, we consider that this analysis is not suitable for placing an upper limit on the amplitude of the FDM, and we do not regard the obtained results as upper limits on the amplitude of the FDM. The purpose of this analysis is rather to see how much of the FDM signal can be absorbed by the red noise and the SSE noise.
It can be seen from Figure 3 that the values of the upper limits are drastically smaller than those obtained in the previous section. In particular, when the log-uniform prior is used, the upper limits are, surprisingly, smaller than the predicted amplitude in some ranges. As for the Bayes factors, they are all smaller than 1, which is consistent with the fact that the upper limits are strongly influenced by the prior probability distribution. From this result, we infer that the FDM signal is well absorbed by the red noise and the SSE noise.
In order to investigate the impact of analyzing the red noise and the SSE noise first, we generated simulated noise using the MAP values of the SSE noise, and calculated the Lomb-Scargle periodogram of the timing residuals after subtracting it; the Lomb-Scargle periodogram can be used to search for periodic signals in non-uniformly spaced time series data [63,64]. The reason for not subtracting the red noise from the original timing residuals is that it is difficult to reproduce the noise contained in the actual data: any red noise we generate is only one realization of a stochastic process. For comparison, we also calculated the Lomb-Scargle periodogram of the original timing residuals and of simulated timing residuals induced by the red noise only. To calculate the Lomb-Scargle periodogram, we used Astropy [65,66], which is a Python package for astronomy. Because the original residuals have a short observation span and therefore lack the frequency resolution needed to represent the red noise, we generated the red noise from simulated observation data with an observation time of 10^5 days and one data point per day. The model used to create the red noise is the same one as described in Section IV B, and we set the number of frequencies N_red to 10^4.
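For reference, the periodogram computation itself takes only a few lines with Astropy's LombScargle class. The sketch below is self-contained and uses fabricated, unevenly sampled residuals rather than the actual NANOGrav data; in the analysis described above, the inputs would be the original, red-noise-only, or SSE-subtracted residuals of each pulsar.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)

# Fabricated, unevenly sampled observation epochs over ~11 years [s]
# and residuals [s] containing a periodic term plus white noise.
t = np.sort(rng.uniform(0.0, 11 * 365.25 * 86400.0, size=500))
residuals = 2e-7 * np.sin(2 * np.pi * 3e-8 * t) + 1e-7 * rng.standard_normal(t.size)

# Lomb-Scargle power on a grid of frequencies covering the band of interest.
freqs = np.logspace(-9.0, -7.0, 500)          # Hz
power = LombScargle(t, residuals).power(freqs)

print(freqs[np.argmax(power)])   # the peak should sit near the injected 3e-8 Hz
```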
In Figure 4, we show the Lomb-Scargle periodograms, where the horizontal axis is the frequency and the vertical axis is the Lomb-Scargle power. The black, red, and blue lines represent the periodogram of the original timing residuals, of the red noise only, and of the timing residuals after subtracting the SSE noise, respectively. For PSR J1744-1134, a plot focused on the components other than the red noise is also displayed below the main panel, because the red noise is large and the other lines are difficult to see. The purple vertical line represents the inverse of the observation time, and we do not discuss the shaded region to the left of this line.
First, with regard to the black line representing the periodogram of the original timing residuals, it can be seen that the power at frequencies below 10^-8 Hz is larger than the power at frequencies above 10^-8 Hz. Therefore, there is noise in the low-frequency region, and the analysis of this article is meaningful. Second, with regard to the red line representing the periodogram of the red noise only, the red noise can be considered to characterize the low-frequency noise in PSRs J0613-0200 and J1012+5307, but not in the other pulsars. In particular, the amplitude of the red noise is large in PSR J1744-1134, so in the analysis in which the MAP value of the red noise is fixed, this pulsar will contribute little to the result for the amplitude of the FDM. The reason why the red noise can be determined properly in PSRs J0613-0200 and J1012+5307 is that, according to the analysis of only the red noise and the SSE noise, the posterior distribution of the red noise parameters has a sharp peak for these pulsars. For the other pulsars, the posterior probability distribution of the red noise parameters has a peak spread over a wide region of parameter space, which indicates that the MAP value has little meaning. Finally, with regard to the blue line representing the original timing residuals after subtracting the SSE noise, the low-frequency noise is reduced in PSRs J0613-0200 and J1909-3744. This is not the case for the other pulsars, for which the noise increases rather than decreases. If the SSE noise were properly determined, the noise would be expected to decrease for all pulsars; hence, this result seems strange. Incidentally, we see that noise is induced at the frequency 3 × 10^-8 Hz, which is caused by the rotation rate around the ecliptic pole. The reason is that the fitting of the pulsar position parameters is performed when the timing residuals are created; due to the part of the design matrix that corresponds to this fitting, the pulsars lose sensitivity at the frequency 3 × 10^-8 Hz, and the analysis of the rotation rate around the ecliptic pole does not work well. From the above, we conclude that the reason why the upper limits are smaller in the fixed-noise analysis than in the main analysis is mainly due to PSR J1909-3744, because only this pulsar has small red noise and has its low-frequency noise removed by the SSE noise. Conversely, the other pulsars do not improve much, which may suggest that the fixed-noise analysis is not successful. However, these issues are beyond the scope of this article.
VIII. CONCLUSION
We searched for the FDM by performing a Bayesian analysis in the time domain using the NANOGrav 11-year data set. In Section VII A, we found that the probability of detection of the FDM signal was less than 75% over the whole frequency range. Therefore, we could not obtain any significant evidence for the FDM. Instead, we obtained the 95% confidence upper limit on the amplitude of the FDM. The upper limit on the amplitude of the FDM was about one order of magnitude larger than the theoretically expected amplitude. Compared with the published Bayesian upper limit on the FDM using the PPTA 12-year data set [16], we found that our upper limit was up to 3 times stronger than the previous study in the frequency range from 10^-8.34 to 10^-8.19 Hz (9.45 × 10^-24 to 1.34 × 10^-23 eV in terms of the FDM mass). In other regions, we obtained a similar upper limit on the amplitude of the FDM. Since the amplitude of the FDM can be converted into the energy density of the dark matter near the Earth, it is straightforward to obtain the upper limit on the energy density. The upper limit on the energy density was lower than 7 GeV cm^-3 over the analyzed range from 10^-8.55 to 10^-8.01 Hz (from 5.83 × 10^-24 to 2.02 × 10^-23 eV). In particular, at a frequency of 10^-8.28 Hz (a mass of 1.09 × 10^-23 eV), we obtained the strongest upper limit of 2 GeV cm^-3. In addition to the main analysis, we also investigated the case where the SSE noise was not included in the model. In this case, we showed that we cannot exclude the existence of the FDM, because the probability that the FDM should be included in the model was more than 75% at frequencies of 10^-8.19 Hz or below. We also showed that the upper limit did not change much with or without the SSE noise in the model; that is, the upper limit obtained in this case was also about an order of magnitude larger than the theoretically expected amplitude. Therefore, although the signal of the FDM cannot be excluded in this case, we can say that the PTA was not sensitive enough to detect the FDM. This indicates that noise with a waveform similar to the signal of the FDM (e.g., the noise induced by Jupiter in the SSE noise, as expected, and/or possibly a signal from gravitational waves) should be included in the model.
In Section VII B, by analyzing the noise in advance, we examined how much of the FDM signal is absorbed. In this case, we found that the probability that the FDM should be included in the model was much lower than 50% over the whole frequency range. Compared to our main analysis, the upper limit on the amplitude of the FDM became very small. Note that it is inappropriate to analyze only the noise in advance, and we do not regard this as an actual upper limit on the FDM. Rather, the result indicates that the FDM signal is absorbed very efficiently when the noise is analyzed in advance. We therefore generated simulated noise from the parameters obtained in that preliminary analysis, and investigated whether removing it from the observed data reduces the power in the low-frequency region of the data. We found that only the power of PSR J1909-3744 became smaller, and we conclude that only this pulsar contributed to the improvement of the sensitivity. For the other pulsars, we found that the power increased instead. This result seems to indicate that the preliminary noise analysis is not successful, but further discussion is beyond the scope of this article.
In order to investigate whether the signal can actually be detected by the MCMC simulation, we test the MCMC simulation on data composed of simulated signals. We made the following two data sets:

Data1: $\delta\mathbf{t} = \mathbf{s} + \mathbf{n}_{\rm red} + \mathbf{n}_{\rm SSE} + \mathbf{n}_{\rm equad}$, with $f = 10^{-8.0}$ Hz,

Data2: $\delta\mathbf{t} = \mathbf{s} + \mathbf{n}_{\rm red} + \mathbf{n}_{\rm SSE} + \mathbf{n}_{\rm equad}$, with $f = 10^{-8.55}$ Hz,
where Data1 has the FDM frequency of 10^-8.0 Hz, and Data2 has the FDM frequency of 10^-8.55 Hz. The frequencies 10^-8.0 and 10^-8.55 Hz are, respectively, the highest and lowest frequencies at which we calculated the upper limit on the amplitude of the FDM. The frequency 10^-8.0 Hz is expected to be easy to distinguish from the other noise components, while 10^-8.55 Hz is not. We do not list all the parameter values we used, but the RMS value is 10^-4 s for the FDM signal, the red noise, and the SSE noise, and 10^-6 s for the equad. The model we used is
Model: $\delta\mathbf{t} = \mathbf{s} + \mathbf{n}_{\rm TM} + \mathbf{n}_{\rm red} + \mathbf{n}_{\rm SSE} + \mathbf{n}_{\rm equad}$.
As in Section IV C, the posterior probability distribution is marginalized over the parameters of the red noise and the TM noise. The timing fit has not been performed on these data, but the TM noise is added in order to investigate the decrease in sensitivity due to the design matrix. The ranges of the prior probability distributions given in Section IV D are adjusted so that they contain the values of the simulated-data parameters. The prior probability distribution of the amplitude of the FDM is the log-uniform distribution. The parameter of the equad noise is fixed to the MAP value obtained by a pre-analysis in which both the data and the model include only the equad noise. The MCMC simulation is performed with 10^6 iterations, and the first 25% is removed as a burn-in period.
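The following toy construction illustrates the kind of injected data described above; it is not the paper's simulation code. The red-noise and SSE components are omitted (their detailed models in Secs. IV B and IV C are not reproduced here), the FDM signal is stood in for by a plain sinusoid, and only the RMS values quoted in the text (10^-4 s for the signal, 10^-6 s for the equad) are used.

```python
import numpy as np

rng = np.random.default_rng(2)

def scale_to_rms(x: np.ndarray, target_rms: float) -> np.ndarray:
    """Rescale a component so that its RMS matches the quoted value."""
    return x * target_rms / np.sqrt(np.mean(x ** 2))

# Toy stand-in for Data1 (frequency 10^-8.0 Hz): sinusoidal "FDM" term plus
# white "equad" noise, each scaled to the RMS values quoted in the text.
t = np.sort(rng.uniform(0.0, 11 * 365.25 * 86400.0, size=1000))   # epochs [s]
f_fdm = 10 ** -8.0                                                 # Hz

signal = scale_to_rms(np.sin(2 * np.pi * f_fdm * t + 0.3), 1e-4)   # RMS 1e-4 s
equad = scale_to_rms(rng.standard_normal(t.size), 1e-6)            # RMS 1e-6 s
residuals = signal + equad
print(f"total RMS: {residuals.std():.2e} s")
```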
In order to confirm that our implementation works correctly, we also examine the case where the TM noise is not included in the model, except for the part of the TM noise that is constant in time. The reason for keeping only the constant-in-time part of the TM noise is to account for the subtraction of the mean when creating the data. Since the posterior probability distribution did not converge with 10^6 iterations in this case, we fix the red noise and the SSE noise to their MAP values. For the MAP values, the red noise and the SSE noise are analyzed independently by creating noise data corresponding to each noise component. In this pre-analysis, the data and the model include white noise, but the parameter of the white noise is fixed. Figure 5 shows the posterior probability distribution of the frequency and the amplitude of the FDM; since it would be redundant to plot the other parameters, the posterior distribution has been marginalized over them. The top and bottom plots are with and without the design matrix, respectively. The plots on the left use Data1, and those on the right use Data2. The two-dimensional contour plots represent the joint posterior probability distribution of the two parameters, and the solid and dashed lines represent the 68% and the 95% credible regions, respectively. The one-dimensional plots represent the posterior probability distribution marginalized over one of the parameters, and the value above each panel is the MAP value. The blue vertical and horizontal lines denote the values of the simulated-data parameters.
For the case where the TM noise is included in the model, it can be seen that the signal of the FDM can be detected when the frequency is 10^-8.0 Hz. On the other hand, when the frequency is 10^-8.55 Hz, the FDM cannot be detected: the frequency posterior has no peak anywhere in the prior range, and the amplitude posterior has a peak but the MAP value is not accurately determined. Since the amplitude posterior remains finite down to the lower limit of the prior probability distribution, the Bayes factor can be calculated; it is found to be less than 1, so the model which does not include the FDM is favored over the model which includes it. By including the TM noise in the model we thus found that the low-frequency FDM signal was not detected in the data we made, but we believe this is not a problem for the purpose of placing upper limits. For the case where the TM noise is not included in the model, the signal of the FDM is detected at either frequency. The uncertainty of the pre-analysis of the red noise and the SSE noise creates a bias, but the MAP values and the values of the simulated-data parameters are very close. Therefore, we conclude that our implementation works correctly.
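The MAP values printed above the one-dimensional posteriors in Figure 5 can be read off the marginalized samples with any density estimate. The sketch below uses a simple histogram on placeholder samples, which may differ from whatever estimator was actually used in the analysis.

```python
import numpy as np

def map_from_samples(samples: np.ndarray, bins: int = 100) -> float:
    """Approximate the MAP of a 1-D marginalized posterior by the densest histogram bin."""
    counts, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

# Placeholder samples peaked near log10(f) = -8.0, mimicking the Data1 case.
rng = np.random.default_rng(3)
print(map_from_samples(rng.normal(-8.0, 0.03, size=50_000)))
```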
Eq. (3.8), which defines the 95% confidence upper limit on the amplitude: $\int_{0}^{A_{\rm up}} p(A \mid D, M)\, dA = 0.95$.
FIG. 1: Top: The 95% upper limits on the amplitude of the FDM, Ψ, using the NANOGrav 11-year data set. As the prior probability distribution of the amplitude of the FDM, the uniform prior was used for the black solid line and the log-uniform prior for the black dashed line. The red lines are the upper limits obtained when the SSE noise is not included in the model describing the observed data; the solid and dashed lines indicate that the uniform and the log-uniform prior were used, respectively. The bold black line is the upper limit obtained by the Bayesian analysis of the PPTA data (taken from Figure 3 in [16]). The green line is the predicted amplitude of the FDM. The purple vertical lines are the inverses of the observation times of the pulsars. We do not discuss the results obtained in the shaded region. Bottom: The values of the Bayes factor obtained when using the log-uniform prior. The black and red points correspond to the model with and without the SSE noise, respectively. To improve the visibility of the plot, when the value of the Bayes factor exceeds 20, an upward triangle is used.
FIG. 2: Top: The 95% upper limits on the energy density of the FDM, ρ, using the NANOGrav 11-year data set. This plot is the same as Figure 1 except that the amplitude is converted to the energy density. The bold black line denotes the upper limit obtained by the Bayesian analysis of the PPTA data (taken from Figure 4 in [16]). Bottom: The values of the Bayes factor obtained when using the log-uniform prior; this panel is the same as in Figure 1.
FIG. 3: Similar plot to Figure 1. We do not regard this plot as giving upper limits on the amplitude of the FDM, because it was obtained by an inappropriate analysis.
FIG. 4: The Lomb-Scargle periodograms for PSRs J0613-0200, J1012+5307, J1600-3053, J1713+0747, J1744-1134, and J1909-3744. The black, red, and blue lines denote the Lomb-Scargle periodogram of the original timing residuals, of the red noise only, and of the original timing residuals after subtracting the SSE noise, respectively. For PSR J1744-1134, the red noise is large, so a panel with a different scale is shown below the main panel so that the other lines can be seen.
FIG. 5: The posterior probability distribution of the frequency and the amplitude of the FDM. The top and bottom plots are for the model with and without the TM noise, respectively. The left and right plots are for the data with frequencies 10^-8.0 and 10^-8.55 Hz, respectively. The one-dimensional panels show the posterior probability distribution marginalized over either the frequency or the amplitude, and the value above each panel denotes the MAP value. The two-dimensional contour plots show the joint posterior probability distribution, and the solid and dashed lines represent the 68% and the 95% credible regions, respectively. The blue vertical and horizontal lines denote the values of the simulated-data parameters.
TABLE I: Interpretation of the Bayes factor B12 and the corresponding probability p(M1|d).

B12       p(M1|d)        Evidence in favor of M1 against M2
1-3       0.500-0.750    Not worth more than a bare mention
3-20      0.750-0.952    Positive
20-150    0.952-0.993    Strong
>150      >0.993         Very strong
In order to perform the parallel tempering, we use the software package PTMCMCSampler [57] via PAL2.

TABLE II: Prior probability distributions.

FDM signal
  Ψ            amplitude                  Uniform[10^-18, 10^-11] (for upper limit); logUniform[-18, -11] (for model comparison)
  f [Hz]       frequency                  logUniform[log f - 0.015, log f + 0.015] (a)
  α_e [rad]    phase at Earth             Uniform[0, 2π]
  α_p [rad]    phase at pulsar            Uniform[0, 2π]
red noise
  A_red        amplitude                  logUniform[-20, -11]
  γ            spectral index             Uniform[0.02, 6.98]
SSE noise
  δM_J [M_⊙]     mass error of Jupiter                                N(0, 1.55 × 10^-11)
  δM_S [M_⊙]     mass error of Saturn                                 N(0, 8.17 × 10^-12)
  δM_U [M_⊙]     mass error of Uranus                                 N(0, 5.72 × 10^-11)
  δM_N [M_⊙]     mass error of Neptune                                N(0, 7.96 × 10^-11)
  δa_J^μ         small offsets of parameters based on the PCA bases   Uniform[-0.05, 0.05]
  δz [rad/year]  rotation rate around the ecliptic pole               Uniform[-10^-9, 10^-9]
white noise
  e_a          EFAC parameter             Uniform[0.001, 10] (for pre-analysis)
  q_a [s]      EQUAD parameter            logUniform[-10, -4] (for pre-analysis)
  j_a [s]      ECORR parameter            logUniform[-8.5, -4] (for pre-analysis)

(a) f is the center frequency of the bin: f = {10^-9, 10^-8.97, ..., 10^-8.01}.
1 http://vallis.github.io/libstempo
2 https://bitbucket.org/psrsoft/tempo2.git
3 We confirmed that each pulsar's value of the chi-square and the degrees of freedom, which can be derived with TEMPO2, are consistent with the values listed in the file 'stats 11y 20180226.dat'; this file is included in the data set and can be used to check whether TEMPO2 is properly built. Therefore, TEMPO2 was installed as expected.
4 https://github.com/nanograv/enterprise
5 For the pulsars we used, we confirmed that there was no overlap between the observations of the different receivers and backend systems. Therefore, each observation can be divided appropriately.
The statistical uncertainty of the TM noise and the red noise is parametrized not by the parameters but by the hyperparameters. Therefore, as can be seen from Eq. (3.6), when constructing the likelihood function, the TM noise and the red noise are determined by the given parameters.
As a matter of fact, we know that the equation $\mathbf{n}_{\rm TM} = \mathbf{M}\boldsymbol{\varepsilon}$ is correct when $\boldsymbol{\varepsilon}$ is small enough, and we can obtain uncertainties of the timing model parameters. Therefore, we might have to use these uncertainties as the variances of the Gaussian prior distribution.
12 http://www.astropy.org
. Peter Svrcek, Edward Witten, Axions In String Theory. JHEP. 0651Peter Svrcek and Edward Witten. Axions In String Theory. JHEP, 06:051, 2006.
. Asimina Arvanitaki, Savas Dimopoulos, Sergei Dubovsky, Nemanja Kaloper, John March-Russell, String Axiverse. Phys. Rev. 81123530Asimina Arvanitaki, Savas Dimopoulos, Sergei Dubovsky, Nemanja Kaloper, and John March-Russell. String Axiverse. Phys. Rev., D81:123530, 2010.
Fuzzy cold dark matter: The wave properties of ultralight particles. Wayne Hu, Rennan Barkana, Andrei Gruzinov, Phys. Rev. Lett. 85Wayne Hu, Rennan Barkana, and Andrei Gruzinov. Fuzzy cold dark matter: The wave properties of ultralight particles. Phys. Rev. Lett., 85:1158-1161, Aug 2000.
. J E David, Marsh, Axion Cosmology. Phys. Rept. 643David J. E. Marsh. Axion Cosmology. Phys. Rept., 643:1-79, 2016.
Pulsar timing signal from ultralight scalar dark matter. Andrei Khmelnitsky, Valery Rubakov, Journal of Cosmology and Astroparticle Physics. 02Andrei Khmelnitsky and Valery Rubakov. Pulsar timing signal from ultralight scalar dark matter. Journal of Cosmology and Astroparticle Physics, 2014(02):019-019, feb 2014.
Detecting ultralight axion dark matter wind with laser interferometers. Arata Aoki, Jiro Soda, Int. J. Mod. Phys. 26071750063Arata Aoki and Jiro Soda. Detecting ultralight axion dark matter wind with laser interferometers. Int. J. Mod. Phys., D26(07):1750063, 2016.
Probing GHz Gravitational Waves with Graviton-magnon Resonance. Asuka Ito, Tomonori Ikeda, Kentaro Miuchi, Jiro Soda, Asuka Ito, Tomonori Ikeda, Kentaro Miuchi, and Jiro Soda. Probing GHz Gravitational Waves with Graviton-magnon Resonance. 1903.04843, 2019.
Pulsar timing measurements and the search for gravitational waves. S Detweiler, Astrophys. J. 234S. Detweiler. Pulsar timing measurements and the search for gravitational waves. Astrophys. J. , 234:1100-1104, December 1979.
Timing a Millisecond Pulsar Array. Roger W Romani, SpringerNetherlandsRoger W. Romani. Timing a Millisecond Pulsar Array, pages 113-117. Springer Netherlands, 1989.
Constructing a pulsar timing array. R S Foster, D C Backer, Astrophys. J. 361R. S. Foster and D. C. Backer. Constructing a pulsar timing array. Astrophys. J. , 361:300-308, September 1990.
The european pulsar timing array and the large european array for pulsars. Michael Kramer, J David, Champion, Classical and Quantum Gravity. 3022224009Michael Kramer and David J Champion. The european pulsar timing array and the large european array for pulsars. Classical and Quantum Gravity, 30(22):224009, nov 2013.
The north american nanohertz observatory for gravitational waves. M A Mclaughlin, Classical and Quantum Gravity. 3022224008M A McLaughlin. The north american nanohertz observatory for gravitational waves. Classical and Quantum Gravity, 30(22):224008, nov 2013.
The parkes pulsar timing array project. R N Manchester, G Hobbs, M Bailes, W A Coles, W Van Straten, M J Keith, R M Shannon, N D R Bhat, A Brown, S G Burke-Spolaor, Publications of the Astronomical Society of Australia. 3017R. N. Manchester, G. Hobbs, M. Bailes, W. A. Coles, W. van Straten, M. J. Keith, R. M. Shannon, N. D. R. Bhat, A. Brown, S. G. Burke-Spolaor, and et al. The parkes pulsar timing array project. Publications of the Astronomical Society of Australia, 30:e017, 2013.
The international pulsar timing array project: using pulsars as a gravitational wave detector. G Hobbs, Archibald, Arzoumanian, Backer, N D R Bailes, M Bhat, Burgay, D Burke-Spolaor, Champion, Cognard, J Coles, P Cordes, G Demorest, R Desvignes, L D Ferdman, P Finn, M Freire, J Gonzalez, Hessels, G Hotan, F Janssen, Jenet, Jessner, Jordan, M Kaspi, V Kramer, Kondratiev, Lazio, K J Lazaridis, Lee, Levin, D Lommen, R Lorimer, Lynch, Lyne, M Manchester, D Mclaughlin, Nice, M Oslowski, Pilia, M Possenti, Purver, Ransom, Reynolds, Sanidas, Sarkissian, Sesana, Shannon, Siemens, Stairs, Stappers, G Stinebring, Theureau, Van Haasteren, J P W Van Straten, D R B Verbiest, X P Yardley, You, Classical and Quantum Gravity. 27884013G Hobbs, A Archibald, Z Arzoumanian, D Backer, M Bailes, N D R Bhat, M Burgay, S Burke-Spolaor, D Champion, I Cognard, W Coles, J Cordes, P Demorest, G Desvignes, R D Ferdman, L Finn, P Freire, M Gonzalez, J Hessels, A Hotan, G Janssen, F Jenet, A Jessner, C Jordan, V Kaspi, M Kramer, V Kondratiev, J Lazio, K Lazaridis, K J Lee, Y Levin, A Lommen, D Lorimer, R Lynch, A Lyne, R Manchester, M McLaughlin, D Nice, S Oslowski, M Pilia, A Possenti, M Purver, S Ransom, J Reynolds, S Sanidas, J Sarkissian, A Sesana, R Shannon, X Siemens, I Stairs, B Stappers, D Stinebring, G Theureau, R van Haasteren, W van Straten, J P W Verbiest, D R B Yardley, and X P You. The international pulsar timing array project: using pulsars as a gravitational wave detector. Classical and Quantum Gravity, 27(8):084013, apr 2010.
Constraints on ultralight scalar dark matter from pulsar timing. N K Porayko, K A Postnov, Phys. Rev. D. 9062008N. K. Porayko and K. A. Postnov. Constraints on ultralight scalar dark matter from pulsar timing. Phys. Rev. D, 90:062008, Sep 2014.
Parkes Pulsar Timing Array constraints on ultralight scalar-field dark matter. K Nataliya, Porayko, Phys. Rev. 9810102002Nataliya K. Porayko et al. Parkes Pulsar Timing Array constraints on ultralight scalar-field dark matter. Phys. Rev., D98(10):102002, 2018.
Pulsar timing signal from ultralight scalar dark matter. Andrei Khmelnitsky, Valery Rubakov, JCAP. 19Andrei Khmelnitsky and Valery Rubakov. Pulsar timing signal from ultralight scalar dark matter. JCAP, 1402:019, 2014.
The NANOGrav 11-year Data Set: High-precision timing of 45 Millisecond Pulsars. Zaven Arzoumanian, Astrophys. J. Suppl. 235237Zaven Arzoumanian et al. The NANOGrav 11-year Data Set: High-precision timing of 45 Millisecond Pulsars. Astrophys. J. Suppl., 235(2):37, 2018.
The dark matter halo of the milky way, AD 2013. Fabrizio Nesti, Paolo Salucci, Journal of Cosmology and Astroparticle Physics. 07Fabrizio Nesti and Paolo Salucci. The dark matter halo of the milky way, AD 2013. Journal of Cosmology and Astroparticle Physics, 2013(07):016-016, jul 2013.
Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with Mathematica Support. Phil Gregory, Cambridge University PressPhil Gregory. Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with Mathematica Support. Cam- bridge University Press, 2005.
Doing Bayesian Data Analysis: A Tutorial Introduction with R. J Kruschke, Elsevier ScienceJ. Kruschke. Doing Bayesian Data Analysis: A Tutorial Introduction with R. Elsevier Science, 2010.
Bayesian Data Analysis, Third Edition. Chapman & Hall/CRC Texts in Statistical Science. A Gelman, J B Carlin, H S Stern, D B Dunson, A Vehtari, D B Rubin, Taylor & FrancisA. Gelman, J.B. Carlin, H.S. Stern, D.B. Dunson, A. Vehtari, and D.B. Rubin. Bayesian Data Analysis, Third Edition. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 2013.
The Theory of Probability. H Jeffreys, Clarenson Press2nd editionH. Jeffreys. The Theory of Probability. Clarenson Press, 2nd edition, 1948.
Bayes factors. Robert E Kass, Adrian E Raftery, Journal of the American Statistical Association. 90430Robert E. Kass and Adrian E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773-795, 1995.
Bayesian model selection in social research. Adrian E Raftery, Sociological Methodology. 25Adrian E. Raftery. Bayesian model selection in social research. Sociological Methodology, 25:111-163, 1995.
Kendall's advanced theory of statistics. A O'hagan, M Kendall, ArnoldA. O'Hagan and M. Kendall. Kendall's advanced theory of statistics: Vol. IIB. Edward Arnold, 1994.
The weighted likelihood ratio, linear hypotheses on normal location parameters. James M Dickey, The Annals of Mathematical Statistics. 421James M. Dickey. The weighted likelihood ratio, linear hypotheses on normal location parameters. The Annals of Mathematical Statistics, 42(1):204-223, 1971.
Computing bayes factors using a generalization of the savage-dickey density ratio. Isabella Verdinelli, Larry Wasserman, Journal of the American Statistical Association. 90430Isabella Verdinelli and Larry Wasserman. Computing bayes factors using a generalization of the savage-dickey density ratio. Journal of the American Statistical Association, 90(430):614-618, 1995.
Tempo2, a new pulsar timing package. 2. The timing model and precision estimates. Russell T Edwards, G B Hobbs, R N Manchester, Mon. Not. Roy. Astron. Soc. 372Russell T. Edwards, G. B. Hobbs, and R. N. Manchester. Tempo2, a new pulsar timing package. 2. The timing model and precision estimates. Mon. Not. Roy. Astron. Soc., 372:1549-1574, 2006.
The NANOGrav Collaboration: Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, et al. The NANOGrav Nine-year Data Set: Observations, Arrival Time Measurements, and Analysis of 37 Millisecond Pulsars. Astrophys. J., 813:65, November 2015.
tempo2, a new pulsar-timing package i. an overview. G B Hobbs, R T Edwards, R N Manchester, Monthly Notices of the Royal Astronomical Society. 3692G. B. Hobbs, R. T. Edwards, and R. N. Manchester. tempo2, a new pulsar-timing package i. an overview. Monthly Notices of the Royal Astronomical Society, 369(2):655-672, 2006.
The Planetary and Lunar Ephemerides DE430 and DE431. W M Folkner, J G Williams, D H Boggs, R S Park, P Kuchynka, 196Interplanetary Network Progress ReportW. M. Folkner, J. G. Williams, D. H. Boggs, R. S. Park, and P. Kuchynka. The Planetary and Lunar Ephemerides DE430 and DE431. Interplanetary Network Progress Report, 196:1-81, February 2014.
JPL planetary and Lunar ephemeris DE436. W M Folkner, R S Park, Jet Propulsion LaboratoryW. M. Folkner and R. S. Park. JPL planetary and Lunar ephemeris DE436. Jet Propulsion Laboratory, September 2016.
The NANOGrav 11-year Data Set: Pulsar-timing Constraints On The Stochastic Gravitational-wave Background. Z Arzoumanian, Astrophys. J. 859147Z. Arzoumanian et al. The NANOGrav 11-year Data Set: Pulsar-timing Constraints On The Stochastic Gravitational-wave Background. Astrophys. J., 859(1):47, 2018.
The NANOGrav Nine-year Data Set: Excess Noise in Millisecond Pulsar Arrival Times. M T Lam, J M Cordes, S Chatterjee, Z Arzoumanian, K Crowter, P B Demorest, T Dolch, J A Ellis, R D Ferdman, E Fonseca, M E Gonzalez, G Jones, M L Jones, L Levin, D R Madison, M A Mclaughlin, D J Nice, T T Pennucci, S M Ransom, R M Shannon, X Siemens, I H Stairs, K Stovall, J K Swiggum, W W Zhu, Astrophys. J. 83435M. T. Lam, J. M. Cordes, S. Chatterjee, Z. Arzoumanian, K. Crowter, P. B. Demorest, T. Dolch, J. A. Ellis, R. D. Ferdman, E. Fonseca, M. E. Gonzalez, G. Jones, M. L. Jones, L. Levin, D. R. Madison, M. A. McLaughlin, D. J. Nice, T. T. Pennucci, S. M. Ransom, R. M. Shannon, X. Siemens, I. H. Stairs, K. Stovall, J. K. Swiggum, and W. W. Zhu. The NANOGrav Nine-year Data Set: Excess Noise in Millisecond Pulsar Arrival Times. Astrophys. J. , 834:35, January 2017.
Optical Timing of the Crab Pulsar, NP 0532. P E Boynton, E J Groth, D P Hutchinson, G P NanosJr, R B Partridge, D T Wilkinson, Astrophys. J. 175217P. E. Boynton, E. J. Groth, D. P. Hutchinson, G. P. Nanos, Jr., R. B. Partridge, and D. T. Wilkinson. Optical Timing of the Crab Pulsar, NP 0532. Astrophys. J. , 175:217, July 1972.
Timing of the Crab Pulsar III. The Slowing Down and the Nature of the Random Process. E J Groth, The Astrophysical Journal Supplement Series. 29E. J. Groth. Timing of the Crab Pulsar III. The Slowing Down and the Nature of the Random Process. The Astrophysical Journal Supplement Series, 29, November 1975.
Assessing the role of spin noise in the precision timing of millisecond pulsars. M Ryan, James M Shannon, Cordes, The Astrophysical Journal. 72521607Ryan M. Shannon and James M. Cordes. Assessing the role of spin noise in the precision timing of millisecond pulsars. The Astrophysical Journal, 725(2):1607, 2010.
Switched magnetospheric regulation of pulsar spin-down. Andrew Lyne, George Hobbs, Michael Kramer, Ingrid Stairs, Ben Stappers, Science. 3295990Andrew Lyne, George Hobbs, Michael Kramer, Ingrid Stairs, and Ben Stappers. Switched magnetospheric regulation of pulsar spin-down. Science, 329(5990):408-412, 2010.
An Asteroid Belt Interpretation for the Timing Variations of the Millisecond Pulsar B1937+21. R M Shannon, J M Cordes, T S Metcalfe, T J W Lazio, I Cognard, G Desvignes, G H Janssen, A Jessner, M Kramer, K Lazaridis, M B Purver, B W Stappers, G Theureau, Astrophys. J. 7665R. M. Shannon, J. M. Cordes, T. S. Metcalfe, T. J. W. Lazio, I. Cognard, G. Desvignes, G. H. Janssen, A. Jessner, M. Kramer, K. Lazaridis, M. B. Purver, B. W. Stappers, and G. Theureau. An Asteroid Belt Interpretation for the Timing Variations of the Millisecond Pulsar B1937+21. Astrophys. J. , 766:5, March 2013.
The NANOGrav Nine-year Data Set: Noise Budget for Pulsar Arrival Times on Intraday Timescales. M T Lam, J M Cordes, S Chatterjee, Z Arzoumanian, K Crowter, P B Demorest, T Dolch, J A Ellis, R D Ferdman, E F Fonseca, M E Gonzalez, G Jones, M L Jones, L Levin, D R Madison, M A Mclaughlin, D J Nice, T T Pennucci, S M Ransom, X Siemens, I H Stairs, K Stovall, J K Swiggum, W W Zhu, Astrophys. J. 819155M. T. Lam, J. M. Cordes, S. Chatterjee, Z. Arzoumanian, K. Crowter, P. B. Demorest, T. Dolch, J. A. Ellis, R. D. Ferdman, E. F. Fonseca, M. E. Gonzalez, G. Jones, M. L. Jones, L. Levin, D. R. Madison, M. A. McLaughlin, D. J. Nice, T. T. Pennucci, S. M. Ransom, X. Siemens, I. H. Stairs, K. Stovall, J. K. Swiggum, and W. W. Zhu. The NANOGrav Nine-year Data Set: Noise Budget for Pulsar Arrival Times on Intraday Timescales. Astrophys. J. , 819:155, March 2016.
Upper bounds on the low-frequency stochastic gravitational wave background from pulsar timing observations: Current limits and future prospects. F A Jenet, G B Hobbs, W Van Straten, R N Manchester, M Bailes, J P W Verbiest, R T Edwards, A W Hotan, J M Sarkissian, S M Ord, The Astrophysical Journal. 65321571F. A. Jenet, G. B. Hobbs, W. van Straten, R. N. Manchester, M. Bailes, J. P. W. Verbiest, R. T. Edwards, A. W. Hotan, J. M. Sarkissian, and S. M. Ord. Upper bounds on the low-frequency stochastic gravitational wave background from pulsar timing observations: Current limits and future prospects. The Astrophysical Journal, 653(2):1571, 2006.
TEMPO2, a new pulsar timing package. III: Gravitational wave simulation. G Hobbs, F Jenet, K J Lee, J P W Verbiest, D Yardley, R Manchester, A Lommen, W Coles, R Edwards, C Shettigara, Mon. Not. Roy. Astron. Soc. 3941945G. Hobbs, F. Jenet, K. J. Lee, J. P. W. Verbiest, D. Yardley, R. Manchester, A. Lommen, W. Coles, R. Edwards, and C. Shettigara. TEMPO2, a new pulsar timing package. III: Gravitational wave simulation. Mon. Not. Roy. Astron. Soc., 394:1945, 2009.
Hyper-efficient model-independent Bayesian method for the analysis of pulsar timing data. P Lindley Lentati, M P Alexander, S Hobson, J Taylor, S T Gair, R Balan, Van Haasteren, Phys. Rev. 8710104021Lindley Lentati, P. Alexander, M. P. Hobson, S. Taylor, J. Gair, S. T. Balan, and R. van Haasteren. Hyper-efficient model-independent Bayesian method for the analysis of pulsar timing data. Phys. Rev., D87(10):104021, 2013.
Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, et al. (NANOGrav Collaboration). Gravitational waves from individual supermassive black hole binaries in circular orbits: Limits from the North American Nanohertz Observatory for Gravitational Waves. The Astrophysical Journal, 794(2):141, 2014.
Searching for Gravitational Waves Using Pulsar Timing Arrays. J Ellis, The Univ. Wisconsin-MilwaukeePhD thesisEllis J. Searching for Gravitational Waves Using Pulsar Timing Arrays. PhD thesis, The Univ. Wisconsin-Milwaukee, 2014.
All correlations must die: Assessing the significance of a stochastic gravitational-wave background in pulsar-timing arrays. S R Taylor, L Lentati, S Babak, P Brem, J R Gair, A Sesana, A Vecchio, Phys. Rev. 95442002S. R. Taylor, L. Lentati, S. Babak, P. Brem, J. R. Gair, A. Sesana, and A. Vecchio. All correlations must die: Assessing the significance of a stochastic gravitational-wave background in pulsar-timing arrays. Phys. Rev., D95(4):042002, 2017.
Upper limits on the isotropic gravitational radiation background from pulsar timing analysis. R W Hellings, G S Downs, The Astrophysical Journal, Letters. 265R. W. Hellings and G. S. Downs. Upper limits on the isotropic gravitational radiation background from pulsar timing analysis. The Astrophysical Journal, Letters, 265:L39-L42, February 1983.
Measuring the mass of solar system planets using pulsar timing. D J Champion, G B Hobbs, R N Manchester, R T Edwards, D C Backer, M Bailes, N D R Bhat, S Burke-Spolaor, W Coles, P B Demorest, R D Ferdman, W M Folkner, A W Hotan, M Kramer, A N Lommen, D J Nice, M B Purver, J M Sarkissian, I H Stairs, W Van Straten, J P W Verbiest, D R B Yardley, The Astrophysical Journal Letters. 7202201D. J. Champion, G. B. Hobbs, R. N. Manchester, R. T. Edwards, D. C. Backer, M. Bailes, N. D. R. Bhat, S. Burke-Spolaor, W. Coles, P. B. Demorest, R. D. Ferdman, W. M. Folkner, A. W. Hotan, M. Kramer, A.N. Lommen, D. J. Nice, M. B. Purver, J. M. Sarkissian, I. H. Stairs, W. van Straten, J. P. W. Verbiest, and D.R.B. Yardley. Measuring the mass of solar system planets using pulsar timing. The Astrophysical Journal Letters, 720(2):L201, 2010.
Methods of celestial mechanics. D Brouwer, G M Clemence, Academic PressD. Brouwer and G.M. Clemence. Methods of celestial mechanics. Academic Press, 1961.
Solar System Ephemerides. T , Joseph W Lazio, S Bhaskaran, C Cutler, W M Folkner, R S Park, J A Ellis, T Ely, S R Taylor, M Vallisneri, Pulsar Timing, Gravitational Waves, & Navigation. IAU Symp. 337T. Joseph W. Lazio, S. Bhaskaran, C. Cutler, W. M. Folkner, R. S. Park, J. A. Ellis, T. Ely, S. R. Taylor, and M. Vallisneri. Solar System Ephemerides, Pulsar Timing, Gravitational Waves, & Navigation. IAU Symp., 337:150-153, 2017.
Arrival-time analysis for a millisecond pulsar. R Blandford, R Narayan, R W Romani, Journal of Astrophysics and Astronomy. 5R. Blandford, R. Narayan, and R. W. Romani. Arrival-time analysis for a millisecond pulsar. Journal of Astrophysics and Astronomy, 5:369-388, December 1984.
The Gravitational-Wave Discovery Space of Pulsar Timing Arrays. Curt Cutler, Sarah Burke-Spolaor, Michele Vallisneri, Joseph Lazio, Walid Majid, Phys. Rev. 89442003Curt Cutler, Sarah Burke-Spolaor, Michele Vallisneri, Joseph Lazio, and Walid Majid. The Gravitational-Wave Discovery Space of Pulsar Timing Arrays. Phys. Rev., D89(4):042003, 2014.
Brian Luzum, Nicole Capitaine, Agnès Fienga, William Folkner, Toshio Fukushima, James Hilton, Catherine Hohenkerk, George Krasinsky, Gérard Petit, Elena Pitjeva, Michael Soffel, Patrick Wallace, The iau 2009 system of astronomical constants: the report of the iau working group on numerical standards for fundamental astronomy. 110293Brian Luzum, Nicole Capitaine, Agnès Fienga, William Folkner, Toshio Fukushima, James Hilton, Catherine Hohenkerk, George Krasin- sky, Gérard Petit, Elena Pitjeva, Michael Soffel, and Patrick Wallace. The iau 2009 system of astronomical constants: the report of the iau working group on numerical standards for fundamental astronomy. Celestial Mechanics and Dynamical Astronomy, 110(4):293, Jul 2011.
The orbits of the uranian satellites and rings, the gravity field of the uranian system, and the orientation of the pole of uranus. R A Jacobson, The Astronomical Journal. 148576R. A. Jacobson. The orbits of the uranian satellites and rings, the gravity field of the uranian system, and the orientation of the pole of uranus. The Astronomical Journal, 148(5):76, 2014.
Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, et al. (The NANOGrav Collaboration). The NANOGrav nine-year data set: Limits on the isotropic stochastic gravitational wave background. The Astrophysical Journal, 821(1):13, 2016.
Justin Ellis, Rutger Van Haasteren, jellis18/ptmcmcsampler: Official release. Justin Ellis and Rutger van Haasteren. jellis18/ptmcmcsampler: Official release, October 2017.
. Justin Ellis, Rutger Van Haasteren, jellis18/pal2: Pal2Justin Ellis and Rutger van Haasteren. jellis18/pal2: Pal2, January 2017.
An adaptive metropolis algorithm. Heikki Haario, Eero Saksman, Johanna Tamminen, Bernoulli. 72Heikki Haario, Eero Saksman, and Johanna Tamminen. An adaptive metropolis algorithm. Bernoulli, 7(2):223-242, 2001.
Componentwise adaptation for high dimensional mcmc. Heikki Haario, Eero Saksman, Johanna Tamminen, Computational Statistics. 202Heikki Haario, Eero Saksman, and Johanna Tamminen. Componentwise adaptation for high dimensional mcmc. Computational Statistics, 20(2):265-273, 2005.
A markov chain monte carlo version of the genetic algorithm differential evolution: easy bayesian computing for real parameter spaces. Ter Cajo, Braak, Statistics and Computing. 163Cajo JF Ter Braak. A markov chain monte carlo version of the genetic algorithm differential evolution: easy bayesian computing for real parameter spaces. Statistics and Computing, 16(3):239-249, 2006.
False periodicities in quasar time-domain surveys. S Vaughan, P Uttley, A G Markowitz, D Huppenkothen, M J Middleton, W N Alston, J D Scargle, W M Farr, Mon. Not. Roy. Astron. Soc. 4613S. Vaughan, P. Uttley, A. G. Markowitz, D. Huppenkothen, M. J. Middleton, W. N. Alston, J. D. Scargle, and W. M. Farr. False periodicities in quasar time-domain surveys. Mon. Not. Roy. Astron. Soc., 461(3):3145-3152, 2016.
Least-squares frequency analysis of unequally spaced data. N R Lomb, Astrophysics & Space Science. 39N. R. Lomb. Least-squares frequency analysis of unequally spaced data. Astrophysics & Space Science, 39:447-462, February 1976.
Studies in astronomical time series analysis. II -Statistical aspects of spectral analysis of unevenly spaced data. J D Scargle, Astrophys. J. 263J. D. Scargle. Studies in astronomical time series analysis. II -Statistical aspects of spectral analysis of unevenly spaced data. Astrophys. J. , 263:835-853, December 1982.
Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, et al. Astropy: A community Python package for astronomy. A&A, 558:A33, October 2013.
Astropy Collaboration, A. M. Price-Whelan, B. M. Sipőcz, et al. The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package. The Astronomical Journal, 156:123, September 2018.
| [
"https://github.com/jellis18/PTMCMCSampler",
"https://github.com/jellis18/PAL2",
"https://github.com/nanograv/enterprise5"
]
|
[]
| [
"Tomasz Weiss [email protected] \nInstitute of Mathematics\nEinstein Institute of Mathematics\nHebrew University of Jerusalem\nAkademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel\n",
"Boaz Tsaban [email protected] \nInstitute of Mathematics\nEinstein Institute of Mathematics\nHebrew University of Jerusalem\nAkademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel\n"
]
| [
"Institute of Mathematics\nEinstein Institute of Mathematics\nHebrew University of Jerusalem\nAkademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel",
"Institute of Mathematics\nEinstein Institute of Mathematics\nHebrew University of Jerusalem\nAkademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel"
]
| []
| The Hausdorff dimension of a product X×Y can be strictly greater than that of Y , even when the Hausdorff dimension of X is zero. But when X is countable, the Hausdorff dimensions of Y and X × Y are the same. Diagonalizations of covers define a natural hierarchy of properties which are weaker than "being countable" and stronger than "having Hausdorff dimension zero". Fremlin asked whether it is enough for X to have the strongest property in this hierarchy (namely, being a γ-set) in order to assure that the Hausdorff dimensions of Y and X × Y are the same.We give a negative answer: Assuming the Continuum Hypothesis, there exists a γ-set X ⊆ R and a set Y ⊆ R with Hausdorff dimension zero, such that the Hausdorff dimension of X +Y (a Lipschitz image of X ×Y ) is maximal, that is, 1. However, we show that for the notion of a strong γ-set the answer is positive. Some related problems remain open. | 10.1285/i15900932v22n2p83 | [
"https://arxiv.org/pdf/math/0212009v5.pdf"
]
| 15,030,734 | math/0212009 | 665077501d52959526ae8371056666d304a42176 |
17 May 2007
Tomasz Weiss [email protected]
Institute of Mathematics
Einstein Institute of Mathematics
Hebrew University of Jerusalem
Akademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel
Boaz Tsaban [email protected]
Institute of Mathematics
Einstein Institute of Mathematics
Hebrew University of Jerusalem
Akademia Podlaska, Givat Ram08-119, 91904Siedlce, JerusalemPoland, Israel
17 May 2007. TOPOLOGICAL DIAGONALIZATIONS AND HAUSDORFF DIMENSION. 1991 Mathematics Subject Classification: Primary 03E75; Secondary 37F20, 26A03. Key words and phrases: Hausdorff dimension, Gerlits-Nagy γ property, Galvin-Miller strong γ property.
The Hausdorff dimension of a product X×Y can be strictly greater than that of Y , even when the Hausdorff dimension of X is zero. But when X is countable, the Hausdorff dimensions of Y and X × Y are the same. Diagonalizations of covers define a natural hierarchy of properties which are weaker than "being countable" and stronger than "having Hausdorff dimension zero". Fremlin asked whether it is enough for X to have the strongest property in this hierarchy (namely, being a γ-set) in order to assure that the Hausdorff dimensions of Y and X × Y are the same.We give a negative answer: Assuming the Continuum Hypothesis, there exists a γ-set X ⊆ R and a set Y ⊆ R with Hausdorff dimension zero, such that the Hausdorff dimension of X +Y (a Lipschitz image of X ×Y ) is maximal, that is, 1. However, we show that for the notion of a strong γ-set the answer is positive. Some related problems remain open.
Introduction
The Hausdorff dimension of a subset of R^k is a derivative of the notion of Hausdorff measures [4]. However, for our purposes it will be more convenient to use the following equivalent definition. Denote the diameter of a subset A of R^k by diam(A). The Hausdorff dimension of a set X ⊆ R^k, dim(X), is the infimum of all positive δ such that for each positive ε there exists a cover {I_n}_{n∈N} of X with Σ_{n∈N} diam(I_n)^δ < ε.
From the many properties of Hausdorff dimension, we will need the following easy ones. Lemma 1.
(1) If X ⊆ Y ⊆ R^k, then dim(X) ≤ dim(Y).
(2) Assume that X_1, X_2, ... are subsets of R^k such that dim(X_n) = δ for each n. Then dim(∪_n X_n) = δ.
(3) Assume that X ⊆ R^k and Y ⊆ R^m is such that there exists a Lipschitz surjection φ : X → Y. Then dim(X) ≥ dim(Y).
(4) For each X ⊆ R^k and Y ⊆ R^m, dim(X × Y) ≥ dim(X) + dim(Y).
Equality need not hold in item (4) of the last lemma. In particular, one can construct a set X with Hausdorff dimension zero and a set Y such that dim(X × Y ) > dim(Y ). On the other hand, when X is countable, X × Y is a union of countably many copies of Y , and therefore (1) dim(X × Y ) = dim(Y ).
Having Hausdorff dimension zero can be thought of as a notion of smallness. Being countable is another notion of smallness, and we know that the first notion is not restrictive enough to make Equation 1 hold, but the second is. Notions of smallness for sets of real numbers have a long history and many applications; see, e.g., [11]. We will consider some notions which are weaker than being countable and stronger than having Hausdorff dimension zero.
According to Borel [3], a set X ⊆ R k has strong measure zero if for each sequence of positive reals {ǫ n } n∈N , there exists a cover {I n } n∈N of X such that diam(I n ) < ǫ n for all n. Clearly strong measure zero implies Hausdorff dimension zero. It does not require any special assumptions in order to see that the converse is false. A perfect set can be mapped onto the unit interval by a uniformly continuous function and therefore cannot have strong measure zero.
Proposition 2 (folklore). There exists a perfect set of reals X with Hausdorff dimension zero.
Proof. For 0 < λ < 1, denote by C(λ) the Cantor set obtained by starting with the unit interval, and at each step removing from the middle of each interval a subinterval of size λ times the size of the interval (So that C(1/3) is the canonical middle-third Cantor set, which has Hausdorff dimension log 2/ log 3.) It is easy to see that if λ n ր 1, then dim(C(λ n )) ց 0.
Thus, define a special Cantor set C({λ n } n∈N ) by starting with the unit interval, and at step n removing from the middle of each interval a subinterval of size λ n times the size of the interval. For each n, C({λ n } n∈N ) is contained in a union of 2 n (shrunk) copies of C(λ n ), and therefore dim(C({λ n } n∈N )) ≤ dim(C(λ n )).
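The covering-sum bound used in this proof can be illustrated numerically. The following Python sketch (not part of the original argument) assumes the arbitrary schedule λ_n = 1 − 4^{−n}, which increases to 1, and tracks the logarithm of the covering sum 2^N · (interval length)^δ.

```python
import math

def log2_covering_sum(delta, n_steps):
    """log2 of (number of intervals) * (interval length)**delta after n_steps removal steps."""
    log2_len = 0.0
    for n in range(1, n_steps + 1):
        piece = 4.0 ** (-n) / 2.0        # (1 - lambda_n)/2 with lambda_n = 1 - 4**(-n)
        log2_len += math.log2(piece)     # each surviving interval shrinks by this relative factor
    return n_steps + delta * log2_len    # log2(2^N * length^delta)

for delta in (0.5, 0.1, 0.05):
    print(delta, [round(log2_covering_sum(delta, N), 1) for N in (5, 10, 20, 40)])
# for every fixed delta > 0 the covering sum eventually tends to 0,
# so the Hausdorff dimension of C({lambda_n}) is 0
```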
As every countable set has strong measure zero, the latter notion can be thought of as an "approximation" of countability. In fact, Borel conjectured in [3] that every strong measure zero set is countable, and it turns out that the usual axioms of mathematics (ZFC) are not strong enough to prove or disprove this conjecture: Assuming the Continuum Hypothesis there exists an uncountable strong measure zero set (namely, a Luzin set), but Laver [10] proved that one cannot prove the existence of such an object from the usual axioms of mathematics.
The property of strong measure zero (which depends on the metric) has a natural topological counterpart. A topological space X has Rothberger's property C ′′ [13] if for each sequence {U n } n∈N of covers of X there is a sequence {U n } n∈N such that for each n U n ∈ U n , and {U n } n∈N is a cover of X. Using Scheepers' notation [15], this property is a particular instance of the following selection hypothesis (where U and V are any collections of covers of X):
S 1 (U, V): For each sequence {U n } n∈N of members of U, there is a sequence {U n } n∈N
such that U n ∈ U n for each n, and {U n } n∈N ∈ V.
Let O denote the collection of all open covers of X. Then the property considered by Rothberger is S 1 (O, O). Fremlin and Miller [5] proved that a set X ⊆ R k satisfies S 1 (O, O) if, and only if, X has strong measure zero with respect to each metric which generates the standard topology on R k .
But even Rothberger's property for X is not strong enough to have Equation 1 hold: It is well-known that every Luzin set satisfies Rothberger's property (and, in particular, has Hausdorff dimension zero).
Lemma 3. The mapping (x, y) → x + y from R 2 to R is Lipschitz.
Proof. Observe that for nonnegative reals a and b, (a − b)^2 ≥ 0 and therefore a^2 + b^2 ≥ 2ab. Consequently,
a + b = sqrt(a^2 + 2ab + b^2) ≤ sqrt(2(a^2 + b^2)) = sqrt(2) · sqrt(a^2 + b^2).
Thus, |(x_1 + y_1) − (x_2 + y_2)| ≤ sqrt(2) · sqrt((x_1 − x_2)^2 + (y_1 − y_2)^2) for all (x_1, y_1), (x_2, y_2) ∈ R^2.
Assuming the Continuum Hypothesis, there exists a Luzin set L ⊆ R such that
L + L, a Lipschitz image of L × L, is equal to R [9].
We therefore consider some stronger properties. An open cover U of X is an ω-cover of X if each finite subset of X is contained in some member of the cover, but X is not contained in any member of U. U is a γ-cover of X if it is infinite, and each element of X belongs to all but finitely many members of U. Let Ω and Γ denote the collections of open ω-covers and γ-covers of X, respectively. Then Γ ⊆ Ω ⊆ O, and these three classes of covers introduce 9 properties of the form S 1 (U, V). If we remove the trivial ones and check for equivalences [9,20], then it turns out that only six of these properties are really distinct, and only three of them imply Hausdorff dimension zero:
S 1 (Ω, Γ) → S 1 (Ω, Ω) → S 1 (O, O).
The properties S 1 (Ω, Γ) and S 1 (Ω, Ω) were also studied before. S 1 (Ω, Ω) was studied by Sakai [14], and S 1 (Ω, Γ) was studied by Gerlits and Nagy in [8]: A topological space X is a γ-set if each ω-cover of X contains a γ-cover of X. Gerlits and Nagy proved that X is a γ-set if, and only if, X satisfies S 1 (Ω, Γ). It is not difficult to see that every countable space is a γ-set. But this property is not trivial: Assuming the Continuum Hypothesis, there exist uncountable γ-sets [7]. S 1 (Ω, Ω) is closed under taking finite powers [9], thus the Luzin set we used to see that Equation 1 need not hold when X satisfies S 1 (O, O) does not rule out the possibility that this Equation holds when X satisfies S 1 (Ω, Ω). However, in [2] it is shown that assuming the Continuum Hypothesis, there exist Luzin sets L 0 and L 1 satisfying S 1 (Ω, Ω), such that L 0 + L 1 = R. Thus, the only remaining candidate for a nontrivial property of X where Equation 1 holds is S 1 (Ω, Γ) (γ-sets). Fremlin (personal communication) asked whether Equation 1 is indeed provable in this case. We give a negative answer, but show that for a yet stricter (but nontrivial) property which was considered in the literature, the answer is positive.
The notion of a strong γ-set was introduced in [7]. However, we will adopt the following simple characterization from [20] as our formal definition. Assume that {U n } n∈N is a sequence of collections of covers of a space X, and that V is a collection of covers of X. Define the following selection hypothesis. S 1 ({U n } n∈N , V): For each sequence {U n } n∈N where U n ∈ U n for each n, there is a sequence {U n } n∈N such that U n ∈ U n for each n, and {U n } n∈N ∈ V. A cover U of a space X is an n-cover if each n-element subset of X is contained in some member of U. For each n denote by O n the collection of all open n-covers of a space X. Then X is a strong γ-set if X satisfies S 1 ({O n } n∈N , Γ).
In most cases S 1 ({O n } n∈N , V) is equivalent to S 1 (Ω, V) [20], but not in the case V = Γ: It is known that for a strong γ-set G ⊆ {0, 1} N and each A ⊆ {0, 1} N of measure zero, G ⊕ A has measure zero too [7]; this can be contrasted with Theorem 5 below. In Section 3 we show that Equation 1 is provable in the case that X is a strong γ-set, establishing another difference between the notions of γ-sets and strong γ-sets, and giving a positive answer to Fremlin's question under a stronger assumption on X.
The product of a γ-set and a set of Hausdorff dimension zero
Theorem 4. Assuming the Continuum Hypothesis (or just p = c), there exist a γ-set X ⊆ R and a set Y ⊆ R with Hausdorff dimension zero such that the Hausdorff dimension of the algebraic sum
X + Y = {x + y : x ∈ X, y ∈ Y } (a Lipschitz image of X × Y in R) is 1. In particular, dim(X × Y ) ≥ 1.
Our theorem will follow from the following related theorem. This theorem involves the Cantor space {0, 1}^N of infinite binary sequences. The Cantor space is equipped with the product topology and with the product measure. Observe that the assumption in Theorem 5 holds whenever Σ_n 2^{−(k_{n+1}−k_n)} converges.

Lemma 6. There exists an increasing sequence of natural numbers {k_n}_{n∈N} such that Σ_n 2^{−(k_{n+1}−k_n)} converges, and such that for the sequence {B_n}_{n∈N} defined by
B_n = { Σ_{i∈N} f(i)/2^{i+1} : f ∈ {−1, 0, 1}^N and f↾[k_n, k_{n+1}) ≡ 0 }
for each n, the set
Y = ∩_{m∈ω} ∪_{n≥m} B_n
has Hausdorff dimension zero.
Proof. Fix a sequence p_n of positive reals which converges to 0. Let k_0 = 0. Given k_n, find k_{n+1} satisfying 3^{k_n} · (1/2)^{p_n(k_{n+1}−2)} ≤ 1/2^n. Clearly, every B_n is contained in a union of 3^{k_n} intervals such that each of the intervals has diameter 1/2^{k_{n+1}−2}. For each positive δ and ε, choose m such that Σ_{n≥m} 1/2^n < ε and such that p_n < δ for all n ≥ m. Now, Y is a subset of ∪_{n≥m} B_n, and
Σ_{n≥m} 3^{k_n} (1/2^{k_{n+1}−2})^δ < Σ_{n≥m} 3^{k_n} (1/2^{k_{n+1}−2})^{p_n} < Σ_{n≥m} 1/2^n < ε.
Thus, the Hausdorff dimension of Y is zero.
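The recursion used in this proof is easy to realize numerically. The following Python sketch (with the arbitrary choice p_n = 1/(n + 1), which converges to 0) constructs such a sequence k_n and checks that the series Σ_n 2^{−(k_{n+1}−k_n)} stays small.

```python
import math

def build_k(num_blocks=6):
    """Build k_0 < k_1 < ... so that 3**k_n * (1/2)**(p_n * (k_{n+1} - 2)) <= 2**(-n)."""
    k = [0]
    for n in range(num_blocks):
        p_n = 1.0 / (n + 1)                                   # assumed sequence p_n -> 0
        k_next = 2 + math.ceil((k[-1] * math.log2(3) + n) / p_n)
        k.append(max(k_next, k[-1] + 1))                      # keep the sequence increasing
    return k

k = build_k()
print(k)
print(sum(2.0 ** -(k[n + 1] - k[n]) for n in range(len(k) - 1)))   # stays well below 1
```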
The following lemma concludes the proof of Theorem 4.
Lemma 7.
There exists a γ-set X ⊆ R and a set Y ⊆ R with Hausdorff dimension zero such that X + Y = R. In particular, dim(X + Y ) = 1.
Proof. Choose a sequence {k_n}_{n∈N} and a set Y as in Lemma 6. Then Σ_n 2^{−(k_{n+1}−k_n)} converges, and the corresponding set A defined in Theorem 5 has measure zero. Thus, there exists a γ-set G such that G ⊕ A = {0, 1}^N. Define Φ : {0, 1}^N → R by Φ(f) = Σ_{i∈N} f(i)/2^{i+1}. As Φ is continuous, X = Φ[G] is a γ-set of reals. Assume that z is a member of the interval [0, 1], and let f ∈ {0, 1}^N be such that z = Σ_i f(i)/2^{i+1}. Then f = g ⊕ a for appropriate g ∈ G and a ∈ A. Define h ∈ {−1, 0, 1}^N by h(i) = f(i) − g(i). For infinitely many n, a↾[k_n, k_{n+1}) ≡ 0 and therefore f↾[k_n, k_{n+1}) ≡ g↾[k_n, k_{n+1}), that is, h↾[k_n, k_{n+1}) ≡ 0 for infinitely many n. Thus, y = Σ_i h(i)/2^{i+1} ∈ Y, and for x = Φ(g),
x + y = Σ_{i∈N} g(i)/2^{i+1} + Σ_{i∈N} h(i)/2^{i+1} = Σ_{i∈N} (g(i) + h(i))/2^{i+1} = Σ_{i∈N} f(i)/2^{i+1} = z.
This shows that [0, 1] ⊆ X + Y . Consequently, X + (Y + Q) = (X + Y ) + Q = R. Now, observe that Y + Q has Hausdorff dimension zero since Y has.
3. The product of a strong γ-set and a set of Hausdorff dimension zero
Theorem 8. Assume that X ⊆ R k is a strong γ-set. Then for each Y ⊆ R l , dim(X × Y ) = dim(Y ).
Proof. The proof for this is similar to that of Theorem 7 in [7]. It is enough to show that dim(X × Y ) ≤ dim(Y ).
Lemma 9. Assume that Y ⊆ R^l is such that dim(Y) < δ. Then for each positive ε there exists a large cover {I_n}_{n∈N} of Y (i.e., such that each y ∈ Y is a member of infinitely many sets I_n) such that Σ_n diam(I_n)^δ < ε.
Proof. For each m choose a cover {I^m_n}_{n∈N} of Y such that Σ_n diam(I^m_n)^δ < ε/2^m. Then {I^m_n : m, n ∈ N} is a large cover of Y, and Σ_{m,n} diam(I^m_n)^δ < Σ_m ε/2^m = ε.
Lemma 10. Assume that Y ⊆ R^l is such that dim(Y) < δ. Then for each sequence {ε_n}_{n∈N} of positive reals there exists a large cover {A_n}_{n∈N} of Y such that for each n, A_n is a union of finitely many sets, I^n_1, ..., I^n_{m_n}, such that Σ_j diam(I^n_j)^δ < ε_n.
Proof. Assume that {ε_n}_{n∈N} is a sequence of positive reals. By Lemma 9, there exists a large cover {I_n}_{n∈N} of Y such that Σ_n diam(I_n)^δ < ε_1. For each n let k_n = min{m : Σ_{j≥m} diam(I_j)^δ < ε_n}. Take
A_n = ∪_{j=k_n}^{k_{n+1}−1} I_j.
Fix δ > dim(Y) and ε > 0. Choose a sequence {ε_n}_{n∈N} of positive reals such that Σ_n 2nε_n < ε, and use Lemma 10 to get the corresponding large cover {A_n}_{n∈N}. For each n we define an n-cover U_n of X as follows. Let F be an n-element subset of X. For each x ∈ F, find an open interval I_x such that x ∈ I_x and Σ_{j=1}^{m_n} diam(I_x × I^n_j)^δ < 2ε_n. Let U_F = ∪_{x∈F} I_x, and set U_n = {U_F : F is an n-element subset of X}.
As X is a strong γ-set, there exist elements U_{F_n} ∈ U_n, n ∈ N, such that {U_{F_n}}_{n∈N} is a γ-cover of X. Consequently,
X × Y ⊆ ∪_{n∈N} (U_{F_n} × A_n) ⊆ ∪_{n∈N} ∪_{x∈F_n} ∪_{j=1}^{m_n} (I_x × I^n_j),
and
Σ_{n∈N} Σ_{x∈F_n} Σ_{j=1}^{m_n} diam(I_x × I^n_j)^δ < Σ_n n · 2ε_n < ε.
Open problems
There are ways to strengthen the notion of γ-sets other than moving to strong γ-sets. Let B Ω and B Γ denote the collections of countable Borel ω-covers and γ-covers of X, respectively. As every open ω-cover of a set of reals contains a countable ω-subcover [9], we have that Ω ⊆ B Ω and therefore S 1 (B Ω , B Γ ) implies S 1 (Ω, Γ). The converse is not true [17].
Problem 11. Assume that X ⊆ R satisfies S 1 (B Ω , B Γ ). Is it true that for each Y ⊆ R, dim(X × Y ) = dim(Y )?
We conjecture that assuming the Continuum Hypothesis, the answer to this problem is negative. We therefore introduce the following problem. For infinite sets of natural numbers A, B, we write A ⊆ * B if A \ B is finite. Assume that F is a family of infinite sets of natural numbers. A set P is a pseudointersection of F if it is infinite, and for each B ∈ F , P ⊆ * B. F is centered if each finite subcollection of F has a pseudointersection. Let p denote the minimal cardinality of a centered family which does not have a pseudointersection. In [17] it is proved that p is also the minimal cardinality of a set of reals which does not satisfy S 1 (B Ω , B Γ ).
Problem 12. Assume that the cardinality of X is smaller than p. Is it true that
for each Y ⊆ R, dim(X × Y ) = dim(Y )?
Another interesting open problem involves the following notion [18,19]. A cover U of X is a τ-cover of X if it is a large cover, and for each x, y ∈ X, one of the sets {U ∈ U : x ∈ U and y ∉ U} or {U ∈ U : y ∈ U and x ∉ U} is finite. Let T denote the collection of open τ-covers of X. Then Γ ⊆ T ⊆ Ω, therefore S 1 ({O n } n∈N , Γ) implies S 1 ({O n } n∈N , T).
Problem 13. Assume that X ⊆ R satisfies S 1 ({O n } n∈N , T). Is it true that for each Y ⊆ R, dim(X × Y ) = dim(Y )?
It is conjectured that S 1 ({O n } n∈N , T) is strictly stronger than S 1 (Ω, T) [20]. If this conjecture is false, then the results in this paper imply a negative answer to Problem 13.
Another type of problems is the following: We have seen that the assumption that X is a γ-set and Y has Hausdorff dimension zero is not enough in order to prove that X × Y has Hausdorff dimension zero. We also saw that if X satisfies a stronger property (strong γ-set), then dim(X × Y ) = dim(Y ) for all Y . Another approach to get a positive answer would be to strengthen the assumption on Y rather than X.
If we assume that Y has strong measure zero, then a positive answer follows from a result of Scheepers [16] (see also [21]), asserting that if X is a strong measure zero metric space which also has the Hurewicz property, then for each strong measure zero metric space Y , X × Y has strong measure zero. Indeed, if X is a γ-set then it has the required properties.
Finally, the following question of Krawczyk remains open.
Problem 14. Is it consistent (relative to ZFC) that there are uncountable γ-sets but for each γ-set X and each set Y , dim(X × Y ) = dim(Y )?
Theorem 5 (Bartoszyński and Recław [1]). Assume the Continuum Hypothesis (or just p = c). Fix an increasing sequence {k_n}_{n∈N} of natural numbers, and for each n define
A_n = {f ∈ {0, 1}^N : f↾[k_n, k_{n+1}) ≡ 0}.
If the set A = ∩_{m∈N} ∪_{n≥m} A_n has measure zero, then there exists a γ-set G ⊆ {0, 1}^N such that the algebraic sum G ⊕ A is equal to {0, 1}^N (where ⊕ denotes the modulo 2 coordinatewise addition).
[1] T. Bartoszyński and I. Recław, Not every γ-set is strongly meager, Contemporary Mathematics 192 (1996), 25-29.
[2] T. Bartoszyński, S. Shelah, and B. Tsaban, Additivity properties of topological diagonalizations, The Journal of Symbolic Logic 68 (2003), 1254-1260.
[3] É. Borel, Sur la classification des ensembles de mesure nulle, Bulletin de la Société Mathématique de France 47 (1919), 97-125.
[4] K. Falconer, The geometry of fractal sets, Cambridge University Press, 1990.
[5] D. H. Fremlin and A. W. Miller, On some properties of Hurewicz, Menger and Rothberger, Fundamenta Mathematica 129 (1988), 17-33.
[6] F. Galvin, J. Mycielski, and R. Solovay, Strong measure zero sets, Notices of the American Mathematical Society (1973), A-280.
[7] F. Galvin and A. W. Miller, γ-sets and other singular sets of real numbers, Topology and its Applications 17 (1984), 145-155.
[8] J. Gerlits and Zs. Nagy, Some properties of C(X), I, Topology and its Applications 14 (1982), 151-161.
[9] W. Just, A. W. Miller, M. Scheepers, and P. Szeptycki, Combinatorics of open covers II, Topology and its Applications 73 (1996), 241-266.
[10] R. Laver, On the consistency of Borel's conjecture, Acta Mathematica 137 (1976), 151-169.
[11] A. W. Miller, Special subsets of the real line, in: Handbook of Set Theoretic Topology (eds. K. Kunen and J. E. Vaughan), 201-233, North Holland, Amsterdam, 1984.
[12] A. Nowik, M. Scheepers, and T. Weiss, The algebraic sum of sets of real numbers with strong measure zero sets, The Journal of Symbolic Logic 63 (1998), 301-324.
[13] F. Rothberger, Sur des familles indénombrables de suites de nombres naturels, et les problèmes concernant la propriété C, Proceedings of the Cambridge Philosophical Society 37 (1941), 109-126.
[14] M. Sakai, Property C′′ and function spaces, Proceedings of the American Mathematical Society 104 (1988), 917-919.
[15] M. Scheepers, Combinatorics of open covers I: Ramsey theory, Topology and its Applications 69 (1996), 31-62.
[16] M. Scheepers, Finite powers of strong measure zero sets, The Journal of Symbolic Logic 64 (1999), 1295-1306.
[17] M. Scheepers and B. Tsaban, The combinatorics of Borel covers, Topology and its Applications 121 (2002), 357-382. http://arxiv.org/abs/math.GN/0302322
[18] B. Tsaban, A topological interpretation of t, Real Analysis Exchange 25 (1999/2000), 391-404. http://arxiv.org/abs/math.LO/9705209
[19] B. Tsaban, Selection principles and the minimal tower problem, Note di Matematica 22 (2003), 53-81. http://arxiv.org/abs/math.LO/0105045
[20] B. Tsaban, Strong γ-sets and other singular spaces, Topology and its Applications 153 (2005), 620-639. http://arxiv.org/abs/math.LO/0208057
[21] B. Tsaban and T. Weiss, Products of special sets of real numbers, Real Analysis Exchange 30 (2004/5), 819-836.
| []
|
[
"Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis",
"Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis"
]
| [
"Anindya Bhaduri \nDepartment of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA\n",
"Yanyan He [email protected] \nDepartment of Mathematics\nNew Mexico Institute of Mining and Technology\nSocorroNMUSA\n",
"Michael D Shields \nDepartment of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA\n",
"Lori Graham-Brady \nDepartment of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA\n",
"Robert M Kirby \nSchool of Computing\nUniversity of Utah\nSalt Lake CityUTUSA\n"
]
| [
"Department of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA",
"Department of Mathematics\nNew Mexico Institute of Mining and Technology\nSocorroNMUSA",
"Department of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA",
"Department of Civil Engineering\nJohns Hopkins University\nBaltimoreMDUSA",
"School of Computing\nUniversity of Utah\nSalt Lake CityUTUSA"
]
| []
| Presence of a high-dimensional stochastic parameter space with discontinuities poses major computational challenges in analyzing and quantifying the effects of the uncertainties in a physical system. In this paper, we propose a stochastic collocation method with adaptive mesh refinement (SCAMR) to deal with high dimensional stochastic systems with discontinuities. Specifically, the proposed approach uses generalized polynomial chaos (gPC) expansion with Legendre polynomial basis and solves for the gPC coefficients using the least squares method. It also implements an adaptive mesh (element) refinement strategy which checks for abrupt variations in the output based on the second order gPC approximation error to track discontinuities or non-smoothness. In addition, the proposed method involves a criterion for checking possible dimensionality reduction and consequently, the decomposition of the full-dimensional problem to a number of lower-dimensional subproblems. Specifically, this criterion checks all the existing interactions between input dimensions of a specific problem based on the high-dimensional model representation (HDMR) method, and therefore automatically provides the subproblems which only involve interacting dimensions. The efficiency of the approach is demonstrated using both smooth and non-smooth function examples with input dimensions up to 300, and the 1 Corresponding author.approach is compared against other existing algorithms. | 10.1016/j.jcp.2018.06.003 | [
"https://arxiv.org/pdf/1709.04584v1.pdf"
]
| 49,544,833 | 1709.04584 | 45a8a63992bb9dc5927c536451ee4e4e0d219ba0 |
Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis
14 Sep 2017
Anindya Bhaduri
Department of Civil Engineering
Johns Hopkins University
BaltimoreMDUSA
Yanyan He [email protected]
Department of Mathematics
New Mexico Institute of Mining and Technology
SocorroNMUSA
Michael D Shields
Department of Civil Engineering
Johns Hopkins University
BaltimoreMDUSA
Lori Graham-Brady
Department of Civil Engineering
Johns Hopkins University
BaltimoreMDUSA
Robert M Kirby
School of Computing
University of Utah
Salt Lake CityUTUSA
Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis
14 Sep 2017. Preprint submitted to Elsevier, September 15, 2017. Keywords: generalized polynomial chaos, stochastic collocation, adaptive mesh refinement, interaction check.
Presence of a high-dimensional stochastic parameter space with discontinuities poses major computational challenges in analyzing and quantifying the effects of the uncertainties in a physical system. In this paper, we propose a stochastic collocation method with adaptive mesh refinement (SCAMR) to deal with high dimensional stochastic systems with discontinuities. Specifically, the proposed approach uses generalized polynomial chaos (gPC) expansion with Legendre polynomial basis and solves for the gPC coefficients using the least squares method. It also implements an adaptive mesh (element) refinement strategy which checks for abrupt variations in the output based on the second order gPC approximation error to track discontinuities or non-smoothness. In addition, the proposed method involves a criterion for checking possible dimensionality reduction and consequently, the decomposition of the full-dimensional problem to a number of lower-dimensional subproblems. Specifically, this criterion checks all the existing interactions between input dimensions of a specific problem based on the high-dimensional model representation (HDMR) method, and therefore automatically provides the subproblems which only involve interacting dimensions. The efficiency of the approach is demonstrated using both smooth and non-smooth function examples with input dimensions up to 300, and the 1 Corresponding author.approach is compared against other existing algorithms.
Introduction
Computer-based simulations are widely used for predicting the behavior of physical systems. However, due to uncertainties in the system and the simulation process, such as the inherently stochastic nature of some system parameters, boundary conditions or excitations and a lack of understanding of the true physics, predictions inevitably deviate from reality. Therefore, understanding and quantifying the uncertainty in simulations is necessary in order to incorporate potential variability into these predictions.
One of the main aspects of uncertainty quantification (UQ) is uncertainty propagation, also called forward UQ. It aims to quantify uncertainty in the model outputs that results from uncertainty in the model inputs, which are usually represented using random variables with an associated probability distribution. The goal is therefore to estimate the response surface, probability density function (PDF) or statistical moments for the model outputs efficiently.
Probabilistic approaches have been relatively well-developed for forward UQ.
For example, the most popular technique is the Monte Carlo method, which is robust, simple to understand, easy to implement, and typically serves as a baseline against which other methods are compared. However, it may require a large number of model evaluations to reach the desired accuracy due to its slow convergence rate.
Other efficient methods have been proposed to achieve a higher convergence rate and consequently reduce the computational cost. Polynomial chaos (PC) expansion is one such method which represents the output of interest by the expansion of orthogonal polynomials (with respect to positive weight measure) in the stochastic input space. It is based on the homogeneous chaos theory by Wiener [1] where a Gaussian process was essentially expressed by a set of Hermite polynomials. Ghanem and Spanos [2] have coupled this approach with finite element methods to effectively model uncertainty in solid mechanics problems. The generalized polynomial chaos (gPC) [3,4] method makes use of different types of orthogonal polynomials in the Askey scheme [5] as the bases to approximate random functions/processes. It is capable of reaching fast convergence for smooth functions when the PDF of the random variables is identical to the weighting function of the orthogonal polynomials from the Askey scheme. This idea has been further extended to arbitrary random distributions [6,7]. The gPC coefficients in the above works are determined by performing Galerkin projection on the model equations. Its intrusive nature requires the modification of the deterministic simulation code, which could be a difficult and time-consuming task.
By contrast, non-intrusive methods use the deterministic simulation code directly without requiring any modifications, which makes them more applicable to complex systems. For example, Xiu [8] proposed a gPC scheme based on the stochastic collocation method, where the gPC coefficients are obtained using the discrete projection approach. Babuska et.al. [9] used Gauss quadrature points to sample low dimensional random spaces and perform tensor product interpolation using 1-D basis functions. Tensor grid approaches suffer from the so-called 'curse of dimensionality' [10] as there is an exponential rise in the required number of full model evaluations with the increase in dimensionality of the input space. To alleviate this problem to some extent, sparse grid [11,12] based interpolations [13,14] have been performed with the global Lagrange polynomial basis as the interpolant in the random space. However, these global approaches may not be suitable for tracking local steepness or discontinuities in the random space, and the approximation may fail to converge to the true value.
To deal with non-smooth functions, multi-element schemes have been proposed for both intrusive and non-intrusive methods. Wan and Karniadakis [15] developed a multi-element generalized polynomial chaos (MEgPC) scheme based on the stochastic Galerkin method to handle the issue of discontinuities in the output response and long-term integration of stochastic differential equations. This approach adaptively splits the actual input domain into smaller subdomains by calculating the relative error in variance along each dimension and maintaining a relatively low polynomial order (less than 10) in critical subdomains. However, as an intrusive approach, it requires modification of the deterministic simulation code. Foo et. al. [16] introduced the non-intrusive multi-element probabilistic collocation method (MEPCM) with Lagrange polynomial basis to efficiently treat problems characterized by strong non-linearities or discontinuities and long-term integration. The criterion for adaptively splitting the input domain is similar to that in the MEgPC scheme.
Both the Galerkin and collocation versions of the multi-element gPC scheme are still dimension-dependent, since both the number of subdomains and the number of terms in the gPC expansion increase rapidly with the increase in dimensionality of the stochastic input. To mitigate the issue of high computational cost associated with the element decomposition in high dimensional problems, Foo and Karniadakis [17] developed the MEPCM-A method, which combines the MEPCM with the high dimensional model representation (HDMR) [18]. The HDMR represents a function as a hierarchical additive combination of lower dimensional functions starting from a one-dimensional input space to a full-dimensional input space. A way to estimate the correlation functions is to use the cut-HDMR approach [19]. In the MEPCM-A approach, a highdimensional stochastic problem is reduced to a series of low-dimensional problems by truncating the terms in the HDMR up to a certain dimensionality, ν, followed by the application of the MEPCM approach to each of these subproblems with maximum dimensionality ν. Parameter ν is generally chosen to be small enough compared to the high dimensionality of the original problem that element decomposition is not computationally prohibitive. Another important parameter in the MEPCM implementation is the number of points, µ, in the interpolation rule. Parameters ν and µ are pre-fixed without regard to the actual order of interaction among the input parameters. For problems with high nominal dimensions but low effective dimensions (i.e. only a few input variables strongly influence the response), the method proves to be efficient. However, the choice of a proper value for ν of the subproblems needs more exploration.
In addition, once ν is prescribed, all the interaction terms up to order ν in the HDMR are considered. Consequently, for complex systems with strong input interactions, ν may be chosen to be large for satisfactory error estimates and thus the number as well as the dimensionality of the subproblems could become prohibitively large. Even with a small value of ν, the number of interaction terms can become very large for very high dimensional problems. Moreover, the model output may not be sensitive to some interaction terms with order upto ν, and thus a significant number of unnecessary sub-problems are considered which increases the computational cost.
Approaches [20,21,9] based on local bases have also been proposed to deal with non-smoothness in the random space. Klimke and Wohlmuth [22] developed a sparse grid collocation interpolation scheme based on piecewise linear basis functions, which has the ability to resolve discontinuities in the response surface but suffers from slow convergence rates because of global refinement of the sparse grid. The approach is based on hierarchical sparse grid points where points are added in successive depth levels. The error indicator is known as the hierarchical surplus and acts as a stopping criterion for the algorithm. Ma and Zabaras [23] used a similar approach called adaptive sparse grid collocation (ASGC) but also incorporated an adaptive strategy that enables a local sparse grid refinement around the discontinuity region, which helps enhance the convergence rate. The ASGC approach checks the hierarchical surplus values at each point in the current depth level and creates new points in the next depth level only in the neighborhood of points whose surplus error exceeds the tolerance value. The approach is restricted to uniform grid points because of the adaptivity criterion. For the purpose of tracking discontinuities, ASGC uses piecewise linear basis functions. This may lead to slow convergence in the regions where the approximating response surface is smooth. To tackle high dimensional stochastic problems, Ma and Zabaras [24] further combined the ASGC approach with HDMR (HDMR-ASGC), so that the adaptive refinement is applied to a set of lower-dimensional subproblems rather than to the full-dimensional space.

In this paper, we propose a method of stochastic collocation with adaptive mesh refinement (SCAMR). Specifically, the proposed approach uses generalized polynomial chaos (gPC) expansion with Legendre polynomial basis and solves for the gPC coefficients using the least squares method. It also implements an adaptive mesh (element) refinement strategy to track any discontinuities or non-smoothness in the output. The adaptive criteria associated with the mesh refinement strategy check for abrupt variations in the output based on the observed error from a second order gPC approximation. SCAMR further introduces a criterion for possible dimensionality reduction, allowing for decomposition of the full-dimensional problem to a number of lower-dimensional subproblems. This criterion checks all the existing interactions between input dimensions of a specific problem based on HDMR, and consequently provides the subproblems which only involve interacting dimensions.
The paper is organized as follows: Section 2 presents the general framework for a stochastic problem. In Section 3, we discuss the proposed method of stochastic collocation with adaptive mesh refinement in detail. In Section 4, we demonstrate the effectiveness and efficiency of the proposed approach using various numerical examples compared to the ASGC, the HDMR-ASGC as well as the MEPCM-A approach. We finally conclude the paper with a discussion in Section 5.
Problem Definition
Let the triplet (Ω, F, P) represent a complete probability space, where Ω corresponds to the sample space of outcomes, F ⊂ 2 Ω is the σ-algebra of measurable events in Ω, and P : F → [0, 1] is the probability measure. Let ξ = {ξ 1 (ω), ξ 2 (ω), . . . , ξ n (ω)} : Ω → Ξ ∈ R n be a set of n independent random variables, which characterize the uncertainty in the system. In the current work, we assume that the random variables ξ i follow uniform distribution
with a constant PDF p(ξ) = ρ_ξ; ξ ∈ [a_1, b_1] × [a_2, b_2] × ... × [a_n, b_n]. Let x ∈ D ⊂ R^d (d ∈ {1, 2, 3}) be the spatial variable, and t ∈ (0, T] (T > 0) be the temporal variable.
Consider a general partial differential equation
u_t(x, t, ξ) = L(u; x, t, ξ),   in D × (0, T] × Ξ,
B(u; x, t, ξ) = 0,              on ∂D × [0, T] × Ξ,
u = u_0,                        on D × {t = 0} × Ξ,     (1)
where B is the operator for the boundary conditions, L is the differential operator, D is the spatial domain, and u = u 0 is the initial condition. The problem is assumed to be well-posed in parameter space Ξ. The model output u(x, t, ξ) is the quantity of our interest. For the convenience of notation, we do not consider the dependence of solution on the spatial and time variables x and t, and only discuss the problem for any fixed x ∈ D and t ∈ (0, T ]. As mentioned in [25], this is standard in the UQ literature. Our goal is to quantify the uncertainty in the quantity of interest u(·, ξ) : Ξ → R, due to the uncertainty in the input variables ξ. Without loss of generality, we consider scalar model output.
Stochastic Collocation with Adaptive Mesh Refinement
In this section, we propose a stochastic collocation method with adaptive mesh refinement (SCAMR). Specifically, SCAMR adopts a mesh refinement scheme with a proposed criteria that checks for discontinuities or abrupt vari-ations in the response surface, as well as interactions between different input dimensions. Details are provided in the following subsections.
Generalized Polynomial Chaos Based Stochastic Collocation
Let u(ξ) ∈ L 2 (Ξ) be a square-integrable function of the n-dimensional random vector ξ which can be represented using the generalized polynomial chaos expansion as
u(ξ(ω)) = Σ_{i=0}^{∞} û_i Φ_i(ξ(ω)),     (2)
where û_i are the gPC coefficients and Φ_i are the Legendre polynomials for uniform ξ [3].
For numerical calculations, the series is truncated to N + 1 terms to approximate the exact output u(ξ(ω)) with polynomial order p
u_p(ξ(ω)) = Σ_{i=0}^{N} û_i Φ_i(ξ(ω)),   N + 1 = (n + p)! / (n! p!),     (3)
where
û_i = (1 / E[Φ_i^2]) ∫_Ξ u(ξ) Φ_i(ξ) ρ(ξ) dξ.     (4)
With collocation methods, the gPC coefficients û_i can be obtained using discrete projection as
û_i = (1 / E[Φ_i^2]) Σ_{j=1}^{M} u(ξ_j) Φ_i(ξ_j) α_j,   i = 0, 1, ..., N,     (5)
where {ξ_j, α_j}_{j=1}^{M} are sets of quadrature points and their corresponding weights. Another collocation method for estimating the gPC coefficients utilizes interpolation on the pairs {ξ_j, u(ξ_j)}_{j=1}^{N+1}. The gPC coefficient vector û = {û_0, ..., û_N} is estimated by solving the following linear system
Σ_{i=0}^{N} û_i Φ_i(ξ_j) = u(ξ_j),   ∀ j = 1, 2, ..., N + 1.
The interpolation method may not produce a proper approximation if u(ξ j ) is corrupted by observational or measurement errors. The projection method, on the other hand, produces the best approximation in the weighted L 2 norm [26].
However, the quadrature nodes used in the discrete projection method have restrictions, such as the structure of the nodes and the number of the nodes.
To allow more flexibility, in terms of the location and the number of nodes, we estimate the vector of gPC coefficients by solving the following least squares problem using M ( > N + 1) sets of points:
û = arg min_{ũ} ‖ Σ_{i=0}^{N} ũ_i Φ_i(ξ) − u(ξ) ‖_2     (6)
where ũ = {ũ_0, ũ_1, ..., ũ_N} is an arbitrary gPC coefficient vector which converges to the desired vector û = {û_0, û_1, ..., û_N} through the minimization in Eq. (6). Consequently, the approximated output u_p is estimated using Eq.
(3). It is to be noted here that the set of M points may have an unstructured arrangement in the input space.
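The following minimal Python sketch (not the authors' code) illustrates this least-squares estimation of the gPC coefficients for a one-dimensional input with a Legendre basis; the stand-in model function, sample size and polynomial order are arbitrary choices made for illustration.

```python
import numpy as np
from numpy.polynomial import legendre as L

def fit_gpc_1d(xi, u, p):
    """Least-squares fit of a degree-p Legendre (gPC) expansion for samples xi in [-1, 1]."""
    Phi = L.legvander(xi, p)                       # M x (p+1) design matrix, Phi[j, i] = P_i(xi_j)
    coeffs, *_ = np.linalg.lstsq(Phi, u, rcond=None)
    return coeffs                                  # estimated coefficients u_hat_0, ..., u_hat_p

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=50)               # M = 50 unstructured sample points
u = np.exp(0.7 * xi)                               # stand-in for the expensive model output
c = fit_gpc_1d(xi, u, p=2)
print(np.max(np.abs(L.legval(xi, c) - u)))         # max error of the kind used in criteria (9)/(24)
```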
Decomposition of Random Space
In this section, we introduce the standard decomposition method for the random input space, where the L_2 error of the global approximation has been proven to be bounded by the local L_2 approximation errors in the elements [15]. We assume a hypercube input domain in the present work. Without loss of generality, we consider the original stochastic space as Ξ = [−1, 1]^n. It is then decomposed into n_e non-overlapping and space-filling elements Ξ^k: ∪_{k=1}^{n_e} Ξ^k = Ξ and Ξ^m ∩ Ξ^k = ∅ for m ≠ k, with m, k ∈ {1, 2, ..., n_e}. If a_i^k and b_i^k denote the minimum and maximum bounds of element Ξ^k along dimension i (1 ≤ i ≤ n), Ξ^k is the tensor product given by
Ξ^k = [a_1^k, b_1^k) × [a_2^k, b_2^k) × ... × [a_n^k, b_n^k).     (7)
Let the local input random vector in each element be defined as ξ^k = [ξ_1^k, ξ_2^k, ..., ξ_n^k]. For the purpose of applying the gPC formulation on each element locally, the local random vector can be transformed to a new random vector η ∈ [−1, 1]^n such that η = F^k(ξ^k) = [η_1, η_2, ..., η_n]. The transformation is a simple scaling relationship between the [−1, 1]^n domain and the particular Ξ^k domain:
F^k : η_i = −1 + (2 / (b_i^k − a_i^k)) (ξ_i^k − a_i^k),   ∀ i = 1, 2, ..., n.     (8)
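The element-to-reference mapping of Eq. (8) and its inverse are straightforward to implement; the Python sketch below is an illustration (not the authors' code) of one possible form.

```python
import numpy as np

def to_reference(xi_k, a_k, b_k):
    """Map a point xi_k inside element [a_k, b_k) (per dimension) to eta in [-1, 1]^n, Eq. (8)."""
    xi_k, a_k, b_k = map(np.asarray, (xi_k, a_k, b_k))
    return -1.0 + 2.0 * (xi_k - a_k) / (b_k - a_k)

def from_reference(eta, a_k, b_k):
    """Inverse map, used to place reference sample points inside a given element."""
    eta, a_k, b_k = map(np.asarray, (eta, a_k, b_k))
    return a_k + 0.5 * (eta + 1.0) * (b_k - a_k)

# example element Xi^k = [0, 0.5) x [0.25, 1.0)
print(to_reference([0.25, 0.25], [0.0, 0.25], [0.5, 1.0]))   # -> [ 0. -1.]
```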
Adaptive Criteria
The SCAMR algorithm uses adaptive approaches for two purposes: detection of abrupt variations in the output function for non-smoothness and reduction of the high-dimensional input parameter space to a subset of interacting dimensions. Each of these are described in the following subsections.
Criterion for Detecting Abrupt Variation in One Dimension
In the current work, we propose to use first or second order Legendre polynomial approximations to detect abrupt variations of the output along each dimension of the current domain [a_1, b_1] × ... × [a_n, b_n]. For the i-th dimension, consider a set of m centerline sample points ξ^(i) = {ξ_1^(i), ξ_2^(i), ..., ξ_m^(i)}, where each n-dimensional point is ξ_j^(i) = {(a_1 + b_1)/2, ..., (a_{i−1} + b_{i−1})/2, z_j, (a_{i+1} + b_{i+1})/2, ..., (a_n + b_n)/2}, ∀ j ∈ {1, 2, ..., m}. Let u^(i) = {u_1^(i), u_2^(i), ..., u_m^(i)} be the corresponding set of m exact outputs and u_p^(i) = {u_{p,1}^(i), u_{p,2}^(i), ..., u_{p,m}^(i)} be the corresponding 1-D second-order gPC approximation along the i-th dimension for the current domain. The model output can then be reasonably approximated as quadratic if
‖u_p^(i) − u^(i)‖_∞ < ε_1,     (9)
where ε_1 is an error tolerance parameter. If criterion (9) is not satisfied, the i-th dimension is considered critical. All the critical dimensions are then stored in descending order of the error magnitude obtained from criterion (9), and the domain is further decomposed along the center of the two most critical dimensions. The domain subdivision is repeated for every newly formed element until the stopping criteria are satisfied.
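A possible implementation of this centerline check is sketched below in Python; the toy model, the number of centerline points m and the use of a least-squares Legendre fit are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np
from numpy.polynomial import legendre as L

def dimension_error(model, a, b, i, m=9, order=2):
    """Max deviation of the centerline outputs along dimension i from a 1-D order-2
    Legendre fit; this is the quantity compared with eps_1 in criterion (9)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    center = 0.5 * (a + b)
    z = np.linspace(a[i], b[i], m)                   # centerline samples along dimension i
    pts = np.tile(center, (m, 1))
    pts[:, i] = z
    u = np.array([model(p) for p in pts])            # exact outputs u^(i)
    eta = -1.0 + 2.0 * (z - a[i]) / (b[i] - a[i])    # map to the reference interval [-1, 1]
    coeffs = L.legfit(eta, u, order)
    return np.max(np.abs(L.legval(eta, coeffs) - u))

model = lambda x: np.tanh(20.0 * (x[0] - 0.3)) + x[1]    # toy model, sharp variation in dim 0
print([dimension_error(model, [0, 0], [1, 1], i) for i in range(2)])
# dimension 0 shows a much larger error and would be flagged as critical
```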
Criterion for Dimensionality Reduction
The second criterion helps in achieving dimensionality reduction. It decomposes the original full-dimensional problem to a number of lower dimensional problems by identifying the absence of interactions between input dimensions with respect to the output of interest. This criterion is checked at two levels and takes advantage of the significant gains in computational efficiency by dealing with low-dimensional functions.
First level criterion. At the first level, a dimension i is assumed non-interacting with others if
‖u^(i) − u_c‖_∞ < ε_1,     (10)
where u^(i) is the centerline output vector along the i-th dimension (introduced earlier) and u_c is the exact output value at the center point of the input domain Ξ. By implementing this first level criterion, the full-dimensional problem will be decomposed into one r (≤ n)-dimensional problem and n − r one-dimensional problems, where the one-dimensional problems depend on the input random variables which do not interact with others.
Second level criterion. At the second level, we further decompose the r-dimensional problem into a number of lower-dimensional sub-problems by verifying the pairwise interactions between the remaining input dimensions, based on the high dimensional model representation (HDMR). The HDMR of a function f(Y) of the n-dimensional input Y is given by
f(Y) = f_0 + Σ_{i=1}^{n} f_i(Y_i) + Σ_{1≤i_1<i_2≤n} f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) + ... + Σ_{1≤i_1<...<i_s≤n} f_{i_1...i_s}(Y_{i_1}, ..., Y_{i_s}) + ... + f_{12...n}(Y_1, Y_2, ..., Y_n)     (11)
where f_0 is a constant zeroth order function, f_i(·) denotes a one-dimensional function, f_{i_1 i_2}(·) is a two-dimensional function and so on.
As seen from Eq. (11), the HDMR breaks down the function f(Y) into individual contributions from all possible orders of interactions among the dimensions. For example, f_i(Y_i) represents how input Y_i influences f(Y) keeping the other input dimensions fixed. The third term f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) represents the combined contribution of inputs Y_{i_1} and Y_{i_2} towards f(Y) after their individual contributions have been accounted for through f_i(Y_i). All dimensions except Y_{i_1} and Y_{i_2} are kept fixed in this case. Similarly, f_{12...n}(Y_1, Y_2, ..., Y_n) denotes the residual contribution of all input dimensions acting together. In the cut-HDMR approach, the component functions [31] are defined with respect to a reference point (the cut center) c and are given by:
f_0 = f(c)     (12)
f_i(Y_i) = f(Y_i, c_{i}) − f_0,   ∀ i ∈ {1, 2, ..., n}     (13)
f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) = f(Y_{i_1}, Y_{i_2}, c_{i_1,i_2}) − f_{i_1}(Y_{i_1}) − f_{i_2}(Y_{i_2}) − f_0,   ∀ i_1, i_2 ∈ {1, 2, ..., n}, such that i_1 < i_2     (14)
f_{i_1 i_2 i_3}(Y_{i_1}, Y_{i_2}, Y_{i_3}) = f(Y_{i_1}, Y_{i_2}, Y_{i_3}, c_{i_1,i_2,i_3}) − f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) − f_{i_1 i_3}(Y_{i_1}, Y_{i_3}) − f_{i_2 i_3}(Y_{i_2}, Y_{i_3}) − f_{i_1}(Y_{i_1}) − f_{i_2}(Y_{i_2}) − f_{i_3}(Y_{i_3}) − f_0,   ∀ i_1 < i_2 < i_3     (15)
...
f_{12...n}(Y_1, Y_2, ..., Y_n) = f(Y) − f_0 − Σ_{i=1}^{n} f_i(Y_i) − Σ_{1≤i_1<i_2≤n} f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) − ... − Σ_{1≤i_1<...<i_{n−1}≤n} f_{i_1...i_{n−1}}(Y_{i_1}, ..., Y_{i_{n−1}})     (16)
where c_{i} = c\{Y_i}, c_{i_1,i_2} = c\{Y_{i_1}, Y_{i_2}}, c_{i_1,i_2,i_3} = c\{Y_{i_1}, Y_{i_2}, Y_{i_3}}. Substituting Eq. (13) into Eq. (14), the second order component function can be expressed in terms of function evaluations alone as
f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) = f(Y_{i_1}, Y_{i_2}, c_{i_1,i_2}) − f(Y_{i_1}, c_{i_1}) − f(Y_{i_2}, c_{i_2}) + f_0.     (17)
For a given error tolerance ε_2, dimensions i_1 and i_2 can be considered non-interacting if the second order component function f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) is considered negligible, i.e., f_{i_1 i_2}(Y_{i_1}, Y_{i_2}) ≤ ε_2. This implies
f(Y_{i_1}, Y_{i_2}, c_{i_1,i_2}) − f(Y_{i_1}, c_{i_1}) − f(Y_{i_2}, c_{i_2}) + f_0 ≤ ε_2.     (18)
Eq. (18) is the pairwise non-interaction criterion.
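The cut-HDMR quantities entering this criterion require only a handful of model evaluations. The short Python sketch below (an illustration; the toy model and cut center are assumptions) evaluates f_0, f_{i_1}, f_{i_2} and f_{i_1 i_2} of Eqs. (12)-(14) via Eq. (17) for one pair of dimensions.

```python
import numpy as np

def hdmr_components(model, c, i1, i2, y1, y2):
    """Cut-HDMR terms of Eqs. (12)-(14) at (y1, y2) for dimensions i1, i2,
    with all other inputs held at the cut center c."""
    c = np.asarray(c, float)
    f0 = model(c)                                    # Eq. (12)
    p1, p2, p12 = c.copy(), c.copy(), c.copy()
    p1[i1], p2[i2] = y1, y2
    p12[i1], p12[i2] = y1, y2
    f_i1 = model(p1) - f0                            # Eq. (13)
    f_i2 = model(p2) - f0
    f_i1i2 = model(p12) - f_i1 - f_i2 - f0           # Eq. (14) == Eq. (17)
    return f0, f_i1, f_i2, f_i1i2

model = lambda x: 1.0 + x[0] + 2.0 * x[1] + 3.0 * x[0] * x[1]
print(hdmr_components(model, c=[0.0, 0.0], i1=0, i2=1, y1=0.5, y2=0.5))
# -> (1.0, 0.5, 1.0, 0.75); the last entry is exactly the 3*x0*x1 interaction contribution
```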
Let us take a two-dimensional input domain as an example (see Fig. 1), where the input domain is projected from a higher n-dimensional input space with all the dimensions fixed at the mean of their respective ranges except those two dimensions (i_1 and i_2). The cut center is given by c = {0, 0, ..., 0} and corresponds to the point O in Fig. 1. For example, assume the exact value at point A = (a_{i_1}, a_{i_2}, c_{i_1,i_2}) is g^A_{i_1 i_2}, and the approximated value at A assuming non-interaction is given by g^{approx,A}_{i_1 i_2} = g^A_{i_1} + g^A_{i_2} − g_0. The output values g^A_{i_1} and g^A_{i_2} correspond to the input points A_1 = (a_{i_1}, c_{i_1}) and A_2 = (a_{i_2}, c_{i_2}), which are the orthogonal projections of A on the axes i_1 and i_2 passing through the point O, and g_0 is the corresponding output value at O. Let g^{true}_{i_1 i_2} be the true output vector corresponding to the square points in Fig. 1 and g^{approx}_{i_1 i_2} be the corresponding approximate output vector obtained from the outputs at the circular points, such that g^{approx}_{i_1 i_2} = g_{i_1} + g_{i_2} − g_0. Then, Eq. (18) is considered satisfied if
‖g^{true}_{i_1 i_2} − g^{approx}_{i_1 i_2}‖_∞ ≤ ε_2.     (19)
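A grid-based version of this check, in the spirit of Eq. (19), can be sketched in Python as follows; the toy model and the grid size m are assumptions made purely for illustration.

```python
import numpy as np

def pair_interaction_error(model, a, b, i1, i2, m=5):
    """Max |f(Y_i1, Y_i2, c) - f(Y_i1, c) - f(Y_i2, c) + f_0| over an m x m grid;
    this is the quantity compared with eps_2 in criteria (18)-(19)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = 0.5 * (a + b)                                  # cut center at the domain center
    f0 = model(c)
    err = 0.0
    for v1 in np.linspace(a[i1], b[i1], m):
        for v2 in np.linspace(a[i2], b[i2], m):
            p12, p1, p2 = c.copy(), c.copy(), c.copy()
            p12[[i1, i2]] = v1, v2
            p1[i1], p2[i2] = v1, v2
            approx = model(p1) + model(p2) - f0        # g_i1 + g_i2 - g_0
            err = max(err, abs(model(p12) - approx))
    return err

model = lambda x: x[0] * x[1] + np.sin(x[2])           # dims 0 and 1 interact, dim 2 does not
print(pair_interaction_error(model, [-1] * 3, [1] * 3, 0, 1))   # large  -> interacting
print(pair_interaction_error(model, [-1] * 3, [1] * 3, 0, 2))   # ~ 0    -> non-interacting
```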
As mentioned earlier, using the knowledge about each of the pairwise (2-dimensional) interactions, we derive all the possible higher dimensional interactions; for example, for a 5-dimensional stochastic function the pairwise checks determine which groups of dimensions must be treated together.

Sub-dimensional representation. After checking criteria in Eqs. (10) and (18), the dimensions are first grouped into the collection
R = {R_1, R_2, R_3, ..., R_{N_R}} with |R_1| = r, |R_j| = 1 (∀ j = 2, ..., N_R), ∪_{i=1}^{N_R} R_i = D, ∩_{i=1}^{N_R} R_i = Ø and N_R = n − r + 1,
where D denotes the full set of dimension indices, R_1 collects the r dimensions retained by the first level criterion and the remaining singleton sets contain the non-interacting dimensions. In the next step, using criterion (18) within R_1, the mutually interacting dimensions are merged into a collection of index sets S = {S_1, S_2, ..., S_{N_S}}; since these sets may overlap, a further collection T = {T_1, ..., T_{N_T}} is formed from the non-empty intersections between the sets in S, with a low-dimensional function associated with each set in T. These additional low dimensional functions can be called "corrective" dimension index sets introduced in order to account for the overlapping in S. Each of the "corrective" sets has an associated constant factor U_j (∀ j = 1, 2, ..., N_T), which equals the difference between the frequency of its occurrence in S and the frequency of its occurrence in T. The frequency of occurrence of an index set in S or T is the number of times the index set features in S or T by itself or as a subset of a larger index set. There is also a constant factor V associated with f_0, the function value at the cut center. In case of no overlapping of elements in S, i.e., ∩_{i=1}^{N_S} S_i = Ø, then T = {Ø} and N_T = 0. The function can thus have an HDMR-like representation and is given by
f(Y_1, Y_2, ..., Y_n) = Σ_{i=1}^{N_S} h_i(Y_{S_i}, c_{S_i}) − Σ_{j=1}^{N_T} U_j p_j(Y_{T_j}, c_{T_j}) − V f_0,     (20)
where Y_{S_i} is the set of input variables with the elements in S_i as the indices, Y_{T_j} is the set of input variables with the elements in T_j as the indices, h_i(·) is an |S_i|-dimensional function, p_j(·) is a |T_j|-dimensional function, and U_j and V are integer constants where V = N_S − Σ_{j=1}^{N_T} U_j − 1.
As an example, consider an 8-dimensional function f(Y). It is assumed that from criterion (10), each of the last three dimensions (6, 7 and 8) is identified to be non-interacting with the remaining (n − 1) = 7 dimensions. We thus have the following set of non-interacting groups of dimensions:
R = {{1, 2, . . . , 5}, {6}, {7}, {8}},
and the function can now be described by:
f(Y) = f(Y_1, Y_2, ..., Y_8) = g_0(Y_1, Y_2, ..., Y_5, c_{1,2,...,5}) + h_1(Y_6, c_{6}) + h_2(Y_7, c_{7}) + h_3(Y_8, c_{8}) − 3 f_0.     (21)
Eq. (21) thus shows that the 8-dimensional problem has been reduced to a maximum dimensionality of r = 5 using the first level check. Criterion (18) is then tested on the r (= 5)-dimensional system with all (5 choose 2) = 10 pairs. Suppose the interaction check identifies the interacting groups {1, 2, 3} and {1, 4}, while dimension 5 interacts with no other dimension, so that S = {{6}, {7}, {8}, {1, 2, 3}, {1, 4}, {5}}. Let T be a collection of sets, which are the non-empty intersections between S_i and S_j. We then have
T = {{1}}
with U = [1] and V = 4. The function g 0 () will now be given by:
g_0(Y_1, Y_2, ..., Y_5) = h_4(Y_1, Y_2, Y_3, c_{1,2,3}) + h_5(Y_1, Y_4, c_{1,4}) + h_6(Y_5, c_{5}) − p_1(Y_1, c_{1}) − f_0
                        = h_4(Y_{S_4}, c_{S_4}) + h_5(Y_{S_5}, c_{S_5}) + h_6(Y_{S_6}, c_{S_6}) − p_1(Y_{T_1}, c_{T_1}) − f_0     (22)
Thus function f (Y ) is given by:
f(Y) = h_4(Y_{S_4}, c_{S_4}) + h_5(Y_{S_5}, c_{S_5}) + h_6(Y_{S_6}, c_{S_6}) − p_1(Y_{T_1}, c_{T_1}) − f_0 + h_1(Y_6, c_{6}) + h_2(Y_7, c_{7}) + h_3(Y_8, c_{8}) − 3 f_0
     = h_1(Y_{S_1}, c_{S_1}) + h_2(Y_{S_2}, c_{S_2}) + h_3(Y_{S_3}, c_{S_3}) + h_4(Y_{S_4}, c_{S_4}) + h_5(Y_{S_5}, c_{S_5}) + h_6(Y_{S_6}, c_{S_6}) − p_1(Y_{T_1}, c_{T_1}) − 4 f_0
     = Σ_{i=1}^{6} h_i(Y_{S_i}, c_{S_i}) − Σ_{j=1}^{1} p_j(Y_{T_j}, c_{T_j}) − 4 f_0     (23)
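The assembly of such a sub-dimensional representation can be illustrated with a short Python sketch. Here the component functions h_i and p_j are simply cut evaluations of a toy model (in SCAMR they would be replaced by element-wise gPC surrogates), and the model itself is an assumed example whose interaction structure matches the sets S, T, U and V above (with 0-based indices).

```python
import numpy as np

def assemble(model, c, S, T, U, V, y):
    """Evaluate the HDMR-like representation of Eq. (20): sum of |S_i|-dimensional cut
    evaluations, minus the corrective |T_j|-dimensional terms, minus V * f_0."""
    c, y = np.asarray(c, float), np.asarray(y, float)
    def cut_eval(idx):                          # model with only the listed inputs active
        p = c.copy()
        p[list(idx)] = y[list(idx)]
        return model(p)
    total = sum(cut_eval(Si) for Si in S)
    total -= sum(Uj * cut_eval(Tj) for Uj, Tj in zip(U, T))
    total -= V * model(c)
    return total

# toy 8-D model with interaction groups {0,1,2}, {0,3} and singletons, mirroring the example
model = lambda x: 2.0 + x[0] + x[0]*x[1]*x[2] + x[0]*x[3] + np.sin(x[4]) + x[5] + x[6]**2 + x[7]
S = [[5], [6], [7], [0, 1, 2], [0, 3], [4]]     # S_1, ..., S_6 of the 8-D example, 0-based
T = [[0]]                                       # the overlap {1} of the example, 0-based
U, V = [1], 4
y = np.linspace(0.1, 0.8, 8)
print(assemble(model, np.zeros(8), S, T, U, V, y), model(y))   # the two values agree
```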
gPC Approximation Error
Let u_p denote the second order gPC approximation in the current element and u the corresponding exact solution vector at the sample points (the detailed construction is given below). The approximation is considered satisfactory if

||u_p - u||_\infty < \epsilon_1.   (24)
If criterion (24) is not satisfied, the domain is further subdivided into smaller elements along the center of its two most critical dimensions.
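A minimal sketch of the element-level second order gPC fit and the sup-norm test of criterion (24) is given below. It assumes a total-degree-2 Legendre basis on the element and, for brevity, replaces the sparse-grid/unstructured point sets by random sample points; tolerances and test functions are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_2d_deg2_basis(x, lo, hi):
    """Total-degree-<=2 Legendre basis on the element [lo, hi)^2 at points x (M, 2)."""
    t = 2.0 * (x - lo) / (hi - lo) - 1.0          # map element to [-1, 1]^2
    cols = []
    for i, j in [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]:
        Li = legval(t[:, 0], [0] * i + [1])       # Legendre polynomial P_i
        Lj = legval(t[:, 1], [0] * j + [1])       # Legendre polynomial P_j
        cols.append(Li * Lj)
    return np.column_stack(cols)

def second_order_gpc_ok(f, lo, hi, eps1, n_pts=25, seed=0):
    """Least-squares 2nd-order gPC fit and sup-norm check analogous to criterion (24)."""
    x = np.random.default_rng(seed).uniform(lo, hi, size=(n_pts, 2))  # stand-in nodes
    u = f(x)
    A = legendre_2d_deg2_basis(x, lo, hi)
    coeff, *_ = np.linalg.lstsq(A, u, rcond=None)
    return np.max(np.abs(A @ coeff - u)) < eps1, coeff

smooth = lambda x: x[:, 0]**2 + x[:, 1]**2                              # exactly quadratic
kinky  = lambda x: 1.0 / (np.abs(0.3 - x[:, 0]**2 - x[:, 1]**2) + 0.1)  # f7-like
print(second_order_gpc_ok(smooth, 0.0, 1.0, 1e-8)[0])   # True  -> element converged
print(second_order_gpc_ok(kinky,  0.0, 1.0, 1e-2)[0])   # False -> subdivide element
```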
Numerical Implementation
The proposed algorithm is discussed below:
Initialization and stopping criteria. The dimension n of the problem is first determined by the number of input random parameters considered in the model problem. N_iter is the maximum number of iterations in the adaptive mesh refinement algorithm. V_min is a minimum hyper-volume fraction of the non-converged elements below which the subdivision into smaller elements is stopped. When N_iter is reached or the total hyper-volume fraction of the non-converged elements is less than V_min, the remaining non-converged elements are approximated by a first order gPC expansion and the algorithm terminates. The error tolerance parameters \epsilon_1 and \epsilon_2 are related to criteria (9), (10), (18) and (24).
Decreasing the chosen tolerance parameters decreases the approximation error, but increases the computational cost, since more full model evaluations are required.
Checking global smoothness and possible dimensionality reduction.
This step initiates with the implementation of a first order gPC approximation
in the original n-dimensional input space, with the gPC coefficients evaluated using the discrete projection method given by Eq. (5). If the accuracy criterion (24) is satisfied, this approximation is accepted and the algorithm skips to the surrogate value extraction step; otherwise the one-dimensional abrupt variation check and the dimensionality reduction and interaction checks (criteria (9), (10) and (18)) are performed, and a second order gPC approximation is then constructed in the full space. The function values needed for this second order fit have already been computed during the interaction check; therefore, there is no extra computational cost involved for function evaluations in this step. The accuracy of the approximation is tested using criterion (24). If the criterion is satisfied, the second order gPC approximation is considered satisfactory and the algorithm skips to the surrogate value generation step. Otherwise, we go to the next step, adaptive mesh refinement.

The iteration count Iter starts there. For each of the E_{P_i} elements formed in subproblem P_i in a certain iteration, an abrupt variation check is performed, as was done on the original n-dimensional domain. If the second-order approximation criterion (9) is not met, the element E^j_{P_i} (j in {1, 2, ..., E_{P_i}}) is again subdivided into subelements along its two most critical dimensions. Satisfaction of criterion (9) implies there are no abrupt variations in the current element. This leads to checking criterion (24) for the second order gPC approximation in the whole element. If that criterion is met, the element E^j_{P_i} is said to have converged for the given tolerance \epsilon_1 and can be suitably approximated by a second order gPC approximation. The polynomial order, the coefficient vector and the range of the converged element are then stored for future surrogate retrieval. If criterion (24) is not satisfied, the element is also subdivided into smaller elements. A summary of all the above steps is given in Algorithm 1.
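For orientation only, the following is a heavily simplified 2-D skeleton of the refinement loop (no sparse grids, no dimensionality reduction, no reuse of predecessor points): an element is kept if a least-squares quadratic surrogate meets the sup-norm tolerance, and is otherwise split through its center, down to a minimum element volume. All names and tolerances are illustrative.

```python
import numpy as np

def fit_quad(f, lo, hi, n=16, seed=0):
    """Least-squares total-degree-2 fit on [lo, hi); returns (coefficients, sup-norm residual)."""
    x = np.random.default_rng(seed).uniform(lo, hi, size=(n, 2))
    A = np.column_stack([np.ones(n), x[:, 0], x[:, 1],
                         x[:, 0]**2, x[:, 0] * x[:, 1], x[:, 1]**2])
    u = f(x)
    c, *_ = np.linalg.lstsq(A, u, rcond=None)
    return c, np.max(np.abs(A @ c - u))

def refine(f, lo, hi, eps1, vmin, elements):
    """Keep the element if the quadratic surrogate meets the tolerance,
    otherwise split it along the centers of both dimensions and recurse."""
    c, err = fit_quad(f, lo, hi)
    if err < eps1 or np.prod(hi - lo) < vmin:       # converged element or stopping criterion
        elements.append((lo.copy(), hi.copy(), c))
        return
    mid = 0.5 * (lo + hi)
    for i in (0, 1):
        for j in (0, 1):
            new_lo = np.array([lo[0] if i == 0 else mid[0], lo[1] if j == 0 else mid[1]])
            new_hi = np.array([mid[0] if i == 0 else hi[0], mid[1] if j == 0 else hi[1]])
            refine(f, new_lo, new_hi, eps1, vmin, elements)

f7 = lambda x: 1.0 / (np.abs(0.3 - x[:, 0]**2 - x[:, 1]**2) + 0.1)
elements = []
refine(f7, np.zeros(2), np.ones(2), eps1=0.05, vmin=1e-4, elements=elements)
print(len(elements), "converged elements")   # elements cluster around the singular curve
```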
Numerical Results
In this section, SCAMR is applied to a variety of smooth as well as discontinuous functions, with input dimensions as high as 300. Through these examples, its performance is tested against existing efficient algorithms, such as ASGC [23], HDMR-ASGC [24] and MEPCM-A [17].
Demonstration of SCAMR Performance
We first demonstrate the effectiveness and efficiency of the proposed SCAMR method using simple smooth functions with random input spaces of different dimensions. Then, we will focus on functions with non-smoothness or discontinuities in random space, as well as a high-dimensional stochastic elliptic problem.
Our results are compared to those from the ASGC method, since both approaches use low order polynomials as a basis and both use adaptivity to track discontinuities. Specifically, we compare the root mean squared error calculated using N = 10^5 randomly generated samples, given by

\epsilon = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} ( f(x_i) - \hat f(x_i) )^2 },   (25)

where f is the exact function and \hat f is the numerical approximation using ASGC or SCAMR.
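A direct implementation of the error measure of Eq. (25) is straightforward; the sketch below estimates it from 10^5 uniform samples for an arbitrary stand-in surrogate, for illustration only.

```python
import numpy as np

def rms_error(f_exact, f_approx, dim, n=10**5, seed=0):
    """Root-mean-square error of Eq. (25), estimated from n uniform samples on [0, 1]^dim."""
    x = np.random.default_rng(seed).uniform(0.0, 1.0, size=(n, dim))
    return np.sqrt(np.mean((f_exact(x) - f_approx(x))**2))

f1 = lambda x: x[:, 0]**2 + x[:, 1]**2
surrogate = lambda x: x[:, 0] + x[:, 1] - 0.5      # deliberately crude first-order stand-in
print(rms_error(f1, surrogate, dim=2))
```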
Performance of SCAMR on Smooth Functions
We first implement the proposed method on a few simple smooth functions with random inputs in different dimensions. The two-dimensional test functions are quadratic and sine functions defined as follows.
f_1(x_1, x_2) = x_1^2 + x_2^2,   (26)
f_2(x_1, x_2) = \sin(4x_1) \sin(4x_2),   (27)
where x i are i.i.d. uniform random variables in [0, 1] (i = 1, 2). The exact functions are provided in Fig. 2(a,b) for f 1 and f 2 respectively. Clearly, the product of sine functions f 2 exhibits more abrupt variations than the summation of quadratic functions f 1 in the [0, 1] 2 domain; therefore, one would expect slower convergence of the numerical approximation for f 2 . The numerical errors of SCAMR method are provided in Fig. 2(c,d), and compared to those from ASGC method. From the results, one can observe that i) both SCAMR and ASGC methods have slower convergence for f 2 compared to f 1 as we expected, and ii) our proposed SCAMR approach converges faster than ASGC for both the functions.
We extend the two-dimensional quadratic and sine functions to four and ten dimensions as follows. Fig. 3 shows that SCAMR converges faster than ASGC for all four smooth functions.
f_3(x_1, x_2, x_3, x_4) = \sum_{i=1}^{4} x_i^2,   (28)
f_4(x_1, x_2, x_3, x_4) = \sum_{i=1}^{4} \sin(4x_i),   (29)
f_5(x_1, x_2, x_3, x_4) = \sin(4x_1)\sin(4x_2) + \sin(4x_3)\sin(4x_4),   (30)
f_6(x_1, x_2, ..., x_{10}) = \sum_{i=1}^{10} \sin(4x_i),   (31)
Having tested the SCAMR approach on smooth functions with random inputs in different dimensions, we will next discuss its performance on non-smooth functions.
Performance of SCAMR on Functions with Line Singularity
Here we adopt the same 2D function with line singularity as in [23].
f_7(x_1, x_2) = \frac{1}{|0.3 - x_1^2 - x_2^2| + 0.1}.   (32)
The function is plotted in Fig. 4. Clearly, the function has a C^1 discontinuity going across both the x_1 and x_2 directions. The 4D and 10D extensions of the above function are defined as

f_8(x_1, x_2, x_3, x_4) = \frac{1}{|0.3 - x_1^2 - x_2^2| + 0.1} + \sum_{i=3}^{4} x_i,   (33)
f_9(x_1, x_2, ..., x_{10}) = \frac{1}{|0.3 - x_1^2 - x_2^2| + 0.1} + \sum_{i=3}^{10} x_i.   (34)

SCAMR is also tested on another 2-D function, this one with a C^0 discontinuity as in [32]:

f_{10}(x_1, x_2) = 0, if x_1 >= 0.5 or x_2 >= 0.5; \sin(\pi x_1)\sin(\pi x_2), otherwise.
The function is plotted in Fig. 6.
Similarly, we extend it to 4-D and 10-D functions with a discontinuity as follows:

f_{11}(x_1, x_2, x_3, x_4) = \sum_{i=3}^{4} x_i, if x_1 >= 0.5 or x_2 >= 0.5; \sin(\pi x_1)\sin(\pi x_2) + \sum_{i=3}^{4} x_i, otherwise,

with the analogous 10-D extension f_{12}. The error analysis of the numerical approximations is shown in Fig. 7(b-d); they are compared to those from the ASGC method, and similar conclusions to the previous example can be drawn.
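For reference, the discontinuous test functions f_10 and f_11 can be coded as follows (a sketch; note that the added dimensions enter only additively and therefore do not interact with the discontinuous pair (x_1, x_2)).

```python
import numpy as np

def f10(x):
    """2-D function with a C0 discontinuity along x1 = 0.5 and x2 = 0.5."""
    out = np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])
    out[(x[:, 0] >= 0.5) | (x[:, 1] >= 0.5)] = 0.0
    return out

def f11(x):
    """4-D extension: the extra dimensions are purely additive."""
    return f10(x[:, :2]) + x[:, 2] + x[:, 3]

x = np.random.default_rng(1).uniform(0.0, 1.0, size=(5, 4))
print(f11(x))
```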
SCAMR in a Stochastic Elliptic Problem
Finally, we apply the SCAMR approach to a stochastic elliptic problem as in [14,23]. The model problem is given as
-\nabla \cdot ( a_n(\omega, x) \nabla u(\omega, x, y) ) = f(x, y), in D \times \Gamma,
u(\omega, x, y) = 0, on \partial D \times \Gamma,   (35)
where the spatial variable (x, y) \in D = [0, 1]^2, the random variable \omega \in \Gamma, and f(x, y) = \cos(x)\sin(y).
The diffusion coefficient a n (ω, x) is assumed to be a random field that can be approximated in a finite n-dimensional stochastic space as:
\log(a_n(\omega, x) - 0.5) = 1 + Y_1(\omega) (\sqrt{\pi} L / 2)^{1/2} + \sum_{i=2}^{n} \xi_i \phi_i(x) Y_i(\omega),   (36)
where Y_i(\omega), i = 1, 2, ..., n, are independent random variables which are uniformly distributed in [-\sqrt{3}, \sqrt{3}], and

\xi_i = (\sqrt{\pi} L)^{1/2} \exp( -(\lfloor i/2 \rfloor \pi L)^2 / 8 ), for i > 1.   (37)

The error analysis of the numerical approximations is provided in Fig. 9(a-e) for n = 2, 11, 25, 50, 75, respectively.
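The sketch below generates one realization of the diffusion coefficient of Eqs. (36)-(37). Only \xi_i is given explicitly in the text; the eigenfunctions \phi_i are assumed here to be the usual alternating sine/cosine modes used in [14, 23], so that part is an assumption of this illustration rather than a statement of the paper.

```python
import numpy as np

def a_n(x, Y, Lc=0.5):
    """One realization of the diffusion coefficient of Eq. (36) on x in [0, 1].

    Y holds n i.i.d. samples from U[-sqrt(3), sqrt(3)].  The form of phi_i
    (sin for even i, cos for odd i) is an assumed convention, as in [14, 23]."""
    n = len(Y)
    Lp = max(1.0, 2.0 * Lc)
    L = Lc / Lp
    log_a = 1.0 + Y[0] * np.sqrt(np.sqrt(np.pi) * L / 2.0)
    for i in range(2, n + 1):
        xi = np.sqrt(np.sqrt(np.pi) * L) * np.exp(-((np.floor(i / 2) * np.pi * L) ** 2) / 8.0)
        if i % 2 == 0:
            phi = np.sin(np.floor(i / 2) * np.pi * x / Lp)
        else:
            phi = np.cos(np.floor(i / 2) * np.pi * x / Lp)
        log_a += xi * phi * Y[i - 1]
    return 0.5 + np.exp(log_a)      # a_n = 0.5 + exp(...), strictly positive

x = np.linspace(0.0, 1.0, 5)
Y = np.random.default_rng(0).uniform(-np.sqrt(3), np.sqrt(3), size=11)
print(a_n(x, Y))
```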
The numerical approximations from SCAMR are compared to those from the ASGC method. From the figure, one can observe that the numerical approximation from SCAMR converges faster for very low dimension such as n = 2, but it achieves similar convergence rates for large dimensions such as n = 25, 50, 75.
The reason is that the tail terms of Eq. (36) for n > 25 could be negligible due to the fast decay of the eigenvalues \xi_i. As with the previous examples, SCAMR converges faster than or at a similar rate to ASGC for this problem.
Comparison to HDMR Guided Algorithms for High Dimensional Problems
To further illustrate the efficiency of SCAMR regarding the model reduction criterion, we implement our proposed approach for more high-dimensional problems and compare the results to those from HDMR-ASGC and MEPCM-A methods.
A 10-dimensional function is considered to compare the efficiency of SCAMR and HDMR-ASGC [24]. The error estimate used here is the normalized L 2 interpolation error given by
\epsilon = \sqrt{ \frac{ \sum_{i=1}^{N} (f(x_i) - \hat f(x_i))^2 }{ \sum_{i=1}^{N} f(x_i)^2 } },   (38)
where f is the exact function, \hat f is the numerical approximation using HDMR-ASGC or SCAMR, and N = 10^6 randomly generated samples are used.
A high dimensional integration problem is then used as an example to compare the SCAMR and the MEPCM-A methods. The error estimate used here is the mean relative error [17] given by
\epsilon = \frac{ |I_{exact} - I_{approx}| }{ I_{exact} },   (39)
where I exact is the true mean of the problem and I approx is the numerical approximation of the mean using either MEPCM-A or SCAMR.
Comparison to HDMR-ASGC
We consider a 10-D function [24] given by

f_{13}(x) = \frac{1}{1 + \sum_{i=1}^{10} \alpha_i x_i},   (40)

with the parameters \alpha_i and the inputs x_i specified below. Table 1 shows a comparison of the normalized L_2 interpolation error and the number of points needed for the HDMR-ASGC and the SCAMR approaches. It can be seen from the results that SCAMR proves to be more efficient than HDMR-ASGC in approximating f_13. The HDMR-ASGC results are read directly from Fig. 8 (right) in [24]. Identification of the low effective dimensions using the interaction check in the SCAMR approach is achieved at a lower computational cost than the corresponding check in HDMR-ASGC [24]. The subsequent surrogate construction of the sub-dimensional problems also requires fewer samples when using the second order gPC approximation in SCAMR than the linear basis interpolation in the HDMR-ASGC approach. For example, the number of points needed for an L_2 error of approximately 6 x 10^-5 is around 1575 in the case of HDMR-ASGC, while the number of points needed for an L_2 error of 2.2921 x 10^-5 using SCAMR is 407.
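To illustrate the low effective dimensionality of f_13, the sketch below uses the parameter choices stated later in the text (\alpha_i = 0.1/2^{i-1}, x_i = \sigma y_i with \sigma = 2 and y_i uniform on [-\sqrt{3}, \sqrt{3}]) and reports a crude one-dimension-at-a-time variance proxy, which decays by roughly a factor of four per dimension. This is an illustration, not part of the method.

```python
import numpy as np

alpha = 0.1 / 2.0**np.arange(10)            # alpha_i = 0.1 / 2^(i-1)
sigma = 2.0
f13 = lambda y: 1.0 / (1.0 + (alpha * sigma * y).sum(axis=-1))

rng = np.random.default_rng(0)
y = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(10**5, 10))
for i in range(10):
    z = np.zeros_like(y)
    z[:, i] = y[:, i]                       # vary dimension i only
    print(i + 1, np.var(f13(z)))            # drops by ~4x per dimension
```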
Comparison to MEPCM-A
We consider a discontinuous Genz function given by:
f_{14}(x) = 0, if x_1 >= 0.5 or x_2 >= 0.5; \exp( \sum_{i=1}^{n} c_i x_i ), otherwise,
where c_i = e^{-35 i/(n-1)}. Using SCAMR, we evaluate the high dimensional integral I_{approx} = \int \hat f_{14}(x) dx, where \hat f_{14} is the numerical approximation to f_{14}.
The relative mean error is then calculated and compared with the MEPCM-A results in Table 6 of [17] for different dimensions n = 100, 200, and 300. It can be seen from the form of the function f_14 that the importance of the dimensions decreases exponentially with increasing dimension index. Thus this is an example where the function has a high nominal dimensionality but a low effective dimensionality, depending on the error tolerance. Table 2 shows a comparison of the mean relative error and the number of points needed for the MEPCM-A approach and the SCAMR approach. For the SCAMR approach, mean value extraction is performed by generating weighted Clenshaw-Curtis sparse grid points in each of the elements in each subproblem. Then local means are calculated for each subproblem by assigning weights to each element according to its hypervolume. Local means are finally combined together to get the global mean. It can be seen from the results that SCAMR proves to be very efficient in identifying the low effective dimensions. In MEPCM-A, the effective dimensions depend on the parameter \nu. Even though \nu is chosen to be small (\nu = 2 or 3), the number of terms in HDMR becomes very large for high nominal dimensions. SCAMR thus achieves much better precision with fewer points than the MEPCM-A approach. For example, for the 300-dimensional case, the number of points needed for a relative error of 0.09 is around 123 million in the case of MEPCM-A, while the number of points needed for a relative mean error of around 0.02 using SCAMR is around 22,000.
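As a sanity check on the reference value entering Eq. (39), note that the exact mean of f_14 factorizes into one-dimensional integrals, since the discontinuity only restricts x_1 and x_2 to [0, 0.5). The sketch below compares this closed form with a plain Monte Carlo estimate; sample sizes are illustrative.

```python
import numpy as np

def genz_mean_exact(n):
    """Exact mean of f14 over [0,1]^n: the integral factorizes into 1-D pieces."""
    i = np.arange(1, n + 1)
    c = np.exp(-35.0 * i / (n - 1))
    one_d = np.where(i <= 2, np.expm1(0.5 * c) / c, np.expm1(c) / c)
    return np.prod(one_d)

def genz_mean_mc(n, n_samples=5 * 10**4, seed=0):
    """Plain Monte Carlo estimate, for comparison as in Eq. (39)."""
    x = np.random.default_rng(seed).uniform(0.0, 1.0, size=(n_samples, n))
    c = np.exp(-35.0 * np.arange(1, n + 1) / (n - 1))
    vals = np.exp(x @ c)
    vals[(x[:, 0] >= 0.5) | (x[:, 1] >= 0.5)] = 0.0
    return vals.mean()

n = 100
I_exact, I_mc = genz_mean_exact(n), genz_mean_mc(n)
print(I_exact, I_mc, abs(I_exact - I_mc) / I_exact)   # relative error as in Eq. (39)
```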
Conclusion
In this paper, an efficient stochastic collocation method with adaptive mesh refinement has been proposed. Specifically, this approach utilizes generalized polynomial chaos as the basis and solves for the gPC coefficients using the least squares method, which provides more flexibility in the number and locations of function evaluations. It also implements adaptive mesh refinement to track discontinuities, with the adaptive criteria of the mesh refinement checking for abrupt variations in the output based on the error measured from a second order gPC approximation. In addition, this approach uses a criterion to check for possible dimensionality reduction and decomposes the full-dimensional problem into a number of lower-dimensional subproblems, based on the HDMR method. Therefore, for a specific problem, the highest dimensionality of the subproblems, which involve interacting dimensions, is automatically provided. The effectiveness of this method has been shown using different low and high dimensional, smooth and non-smooth examples. It is noticeable that this approach is particularly efficient for high nominal dimensional problems, like the stochastic elliptic problem with a large number of terms in the diffusivity coefficient, where a significant number of dimensions can be less important (low effective dimensions) and thus non-interacting with other, more important dimensions. However, if the dimensions are all coupled in their contribution towards the output of interest, then the efficiency of this method decreases with the increase in the dimensionality of the problem, especially when the response surface is highly non-linear. This is because of the generation of a large number of high dimensional subdomains, where new input points are to be generated according to the sparse grid quadrature. When there is significant non-linearity, the subdomains generally do not converge with the low-order gPC approximation and hence split up into further smaller domains.
The authors of [24] combined a dimension-adaptive version of HDMR with ASGC (HDMR-ASGC). Initially, the importance of the component functions in HDMR is estimated through a weight measure, which is expressed as the integral value of a component function of a certain order with respect to the sum of the integral values of all lower order component functions. Component functions with weight measures higher than a predefined error threshold are the ones considered important. ASGC is then applied to each of the lower dimensional sub-problems corresponding to the important component functions. The error indicator used in HDMR-ASGC is a function of the integral value of the basis function as well as the hierarchical surplus. It is different from the original ASGC approach [23], which uses only the surplus value as the error indicator.
mials to efficiently approximate any general response function with local abruptness or discontinuities. In any domain where the function deviates significantly from a second order polynomial approximation, we decompose the domain further. Specifically, we consider the output variation along the centerline (straight line passing through the center of the domain) along each dimension one at a time with the rest of the dimensions fixed at their midpoints. For example, let Γ be a given n-dimensional domain (element) such that Γ = [a 1 , b 1 ) × [a 2 , b 2 ) × ..... × [a n , b n ). For the i-th dimension, let z = {z 1 , . . . , z m } be m Chebyshev points of depth level l in the range [a i , b i ) such that m = 2 l + 1. In this study, depth level l = 2 is taken and hence m = 5. Then the set of input points along the centerline in the i-th dimension is ξ
the contribution of all inputs taken together towards f (Y ) after having accounted for all lower dimensional function contributions. Cut-HDMR [28, 29] is an efficient technique for estimating the component functions in f (Y ) which involves evaluating f (Y ) on lines, planes and hyperplanes (or cuts) passing through a cut center c which is a point in the input variable space. The choice of c is important as it influences the convergence of the HDMR expansion. It has been shown [30] that a suitable choice of c can be the mean of the input random vector.
For sets A and B, A\B denotes a set with only those elements in A that are not included in B.
Figure 1: Square points denote the new points introduced for the interaction check between two dimensions; the cut center is denoted by point O. All the square points in Fig. 1 are used to test for interaction between the two dimensions. The exact values at all the square points are calculated by full model evaluations and compared with the values at those points obtained assuming both dimensions are non-interacting.
Assuming that only the pairs within {Y_1, Y_2, Y_3} out of the total \binom{5}{2} = 10 pairs are found to be interacting based on the criterion Eq. (18), we decompose the full five-dimensional problem into a three-dimensional problem in the space of {Y_1, Y_2, Y_3} and two one-dimensional problems in the spaces of Y_4 and Y_5, respectively.
, an n-dimensional problem can be potentially reduced to a set of lower dimensional problems as mentioned in the beginning of this section. We discuss next the effects of applying the two criteria in successive steps and how to represent the full-dimensional function in terms of a number of lower dimensional functions. At first, using criterion(10), an n-dimensional input domain Ξ of dimension index set D = {1, 2, ..., n} can be potentially reduced to a group of N R non-interacting lower dimensional input domains of dimension index set
, the R 1 sub-dimensional problem can be further reduced to a group of N Q lower dimensional input domains withdimension index set Q = {Q 1 , Q 2 , ......, Q N Q } such that ∪ N Q i=1 Q i = R 1 . Thus,in total, an n-dimensional problem can be reduced to N S lower dimensional input domains with dimension index set S = {R 2 , R 3 , ....., R N R , Q 1 , Q 2 , ...., Q N Q } = {S 1 , S 2 , ....., S N S } such that N S = N R + N Q − 1. The N S index sets can be overlapping such that ∩ N S i=1 S i = Ø. In case of overlapping, common dimension indices will be present among different elements in S. These common dimension indices form N T additional low dimensional domains of dimension set T = {T 1 , T 2 , ..., T N T }, where T = {S i ∩ S l } \Ø, ∀i, l ∈ {1, ...., N S } such that i < l.
given by I = {{1, 2}, {1, 3}, {2, 3}, {1, 4}}. Using information from the set I,R 1 = {1, 2, .. . , 5} is reduced to the following dimension set Q:Q = {Q 1 , Q 2 , Q 3 } = {{1, 2, 3}, {1, 4}, {5}}We note that the presence of the 3-dimensional interaction {1, 2, 3} have been derived from the interacting pairs {1, 2}, {1, 3} and {2, 3}. This is how higher level interactions are derived from pairwise interaction results. Dimension set S will then be given by S = {S 1 , S 2 , S 3 , S 4 , S 5 , S 6 } = {{6}, {7}, {8}, {1, 2, 3}, {1, 4}, {5}}
Let us consider a d-dimensional domain where 1 ≤ d ≤ n. Let ξ a = {ξ a,1 , ξ a,2 , . . . , ξ a,m } be an array of m Clenshaw-Curtis sparse grid points in dimension d of depth level 2. There may also exist an additional array of q unstructured points ξ b = {ξ b,1 , ξ b,2 , . . . , ξ b,q } which have been previously evaluated. They correspond to sparse grid points in all "predecessor" elements that are contained in the current domain. Let u p be the second-order gPC approximation for the current domain corresponding to input points ξ where the gPC coefficients are calculated by solving a least squares problem given by Eq.(6) such that ξ = {ξ a , ξ b } and q + m = M . Assuming u is the corresponding exact solution vector, the domain can be suitably approximated by the second-order gPC approximation if
( 5 )
5using Clenshaw-Curtis sparse grid points of depth level 1. The accuracy of the approximation is tested using criterion(24). If the criterion is not satisfied, we go to the step of performing a one-dimensional (1-D) abrupt variation check. Otherwise, the first order gPC approximation is considered satisfactory and the algorithm skips to the surrogate value extraction step.The 1-D abrupt variation check is now performed on the input domain to identify the influence of each dimension towards the output of interest. Criterion (9) is used to identify the critical dimensions while criterion (10) helps to reduce the n-dimensional problem to a number of problems with a maximum of r dimensions where r < n. The interaction check is performed next, again on the global input domain using criterion(18) to further reduce the maximumdimensionality to w(< r) where w = max(|S i |), ∀S i ∈ S.If any of the dimensions are found to be critical based on the criterion of global abrupt variation, we directly go to the step of adaptive mesh refinement. Otherwise, a second order gPC approximation is now performed in the original n-dimensional input space using the discrete projection method. The function at the Clenshaw-Curtis sparse grid points of depth level 2 used for this approximation has already been evaluated in previous step of interaction check.
Adaptive mesh refinement. This part of the algorithm in general deals with (N S +N T ) low dimensional subproblems as mentioned in section 3.3.2. For a subproblem P i (1 ≤ i ≤ N S + N T )}), the algorithm initiates with the subdivision of the original domains into elements along its two most critical dimensions.
Algorithm 1: Summarized steps

Initialization
  Set n, N_iter, V_min, \epsilon_1 and \epsilon_2.
Global checks and dimensionality reduction
  perform first order gPC approximation using Eq. (5)
  if ||u_p - u||_\infty < \epsilon_1 (see Eq. (24)) then
    go to the Surrogate value extraction step
  else
    perform abrupt variation check using criterion (9)
    perform dimensionality reduction using criteria (10) and (18)
    if ||u_p^{(i)} - u^{(i)}||_\infty < \epsilon_1 (see Eq. (9)) for all dimensions then
      perform second order gPC approximation using Eq. (5)
      if ||u_p - u||_\infty < \epsilon_1 (see Eq. (24)) then
        go to the Surrogate value extraction step
      else
        go to the Adaptive mesh refinement step
      end if
    end if
  end if
Adaptive mesh refinement
  for each element of each of the sub-dimensional problems do
    check abrupt variations using criterion (9)
    if criterion (9) is satisfied then
      check gPC approximation using criterion (24)
      if criterion (24) is not satisfied then
        subdivide the element along the center of its two most critical dimensions
      end if
    else
      subdivide the element along the center of its two most critical dimensions
    end if
  end for
Surrogate value extraction
  extract output values corresponding to query inputs from the approximate model obtained.
This procedure is performed for all E Pi elements and all the new subelements formed undergo similar operations at the next iteration Iter = Iter + 1. At the end of each iteration, the hyper volume V of the subelements created and the number of iterations Iter are compared with the corresponding critical values V min and N iter respectively to check if either of the two stopping criteria is met. If the stopping condition gets satisified, then all the remaining subelements are approximated by a first order gPC approximation. After meeting the stopping criteria, the next subproblem is taken up and we repeat the process of characterizing it. Surrogate value extraction. After having characterized the n-dimensional problem through the various steps mentioned, the final step is to generate output values corresponding to arbitrary query input points in the n-dimensional domain and also output statistics, such as, mean. Output value estimation corresponding to a query input involves locating the element in which the query point lies in each subproblem. The stored information for that element is then retrieved to generate the local surrogate output values in each subproblem, which are then combined together to get the global output value. Mean value estimation is performed by evaluating the integration in each of the elements in each subproblem. For each subproblem, the global mean is calculated by the weighted average of local means corresponding to each element, and the weight is based on the ratio of the hyper-volume of the elements and the hyper-volume of the whole domain.
Figure 2: Results for 2D smooth functions: (a) exact function output for f_1; (b) exact function output for f_2; (c) error of estimated f_1 using SCAMR and the ASGC method; and (d) error of estimated f_2 using SCAMR and the ASGC method.
where x_i are i.i.d. uniform random variables in [0, 1] (i = 1, 2, ..., 10). The functions f_3, f_4 and f_6 are independent of the interaction terms between the inputs, while f_5 depends on some interaction terms between the inputs. The numerical errors of both the SCAMR and the ASGC methods are provided in Fig. 3 with respect to the number of function evaluations. The numerical approximation from both methods converges slower as the complexity of the function increases, such as from a polynomial function to a sine function, from an additive function to a multiplicative function, or from a lower dimensional (4-D) function to a higher dimensional (10-D) function.
Figure 3: Error analysis of SCAMR and ASGC methods for 4D and 10D smooth functions: (a) 4D f_3, (b) 4D f_4, (c) 4D f_5, and (d) 10D f_6.
Figure 4: Surface plot of function f_7(x_1, x_2).
where x i are i.i.d. uniform random variables in [0, 1] (i = 1, 2, . . . , 10). Notice that the added dimensions in f 8 and f 9 are not interactive with x 1 and x 2 . Therefore one would expect that the computational cost will not increase dramatically as the dimension increases. The proposed SCAMR approach is implemented for the above 2-D, 4-D and 10-D functions. The locations of function evaluations for the 2-D function f 7 are plotted in Fig. 5a. The plot shows that the line singularity is well captured by the approach and more function evaluations are required in the area of line singularity as expected. The error analysis of the numerical approximations are provided in Fig. 5(b-d) for functions f 7 , f 8 and f 9 , respectively. From the figure, one can observe that the convergence rates of SCAMR are similar for the three functions with different dimensions as expected. The SCAMR approach converges faster than ASGC for all three functions. 4.1.3. Performance of SCAMR on Functions with C 0 discontinuity
Figure 5: Input domain and error analysis for functions with line singularity: (a) input domain for function f_7, (b) numerical error as a function of the number of samples for 2D f_7, (c) numerical error as a function of the number of samples for 4D f_8, and (d) numerical error as a function of the number of samples for 10D f_9.
Figure 6: Surface plot of function f_{10}(x_1, x_2).
Figure 7: Input domain and error analysis for functions with discontinuity: (a) input domain for function f_{10}, (b) numerical error for 2D f_{10}, (c) numerical error for 4D f_{11}, and (d) numerical error for 10D f_{12}.

f_{12}(x) = \sum_{i=3}^{10} x_i, if x_1 >= 0.5 or x_2 >= 0.5; \sin(\pi x_1)\sin(\pi x_2) + \sum_{i=3}^{10} x_i, otherwise,

where x = {x_1, x_2, ..., x_{10}}. The proposed SCAMR approach is implemented for these 2-D, 4-D and 10-D functions. The function evaluation locations for the 2-D function f_{10} are plotted in Fig. 7a, and the error analysis of the numerical approximation from SCAMR for f_{10}, f_{11} and f_{12} is provided in Fig. 7(b-d).
), if i is odd where L p = max{1, 2L c }, and L = Lc Lp where L c = 0.5 is the correlation length. Without loss of generality, we consider the uncertainty in the output at a fixed point in space x = y = 0.5, which is the center of the spatial domain.
Figure 8 displays two realizations of the output contour in the spatial domain for n = 50, using the deterministic code of the elliptic problem. The proposed SCAMR approach is implemented for the stochastic elliptic problem with different dimensions n in the random space. The error analysis of the numerical approximations is provided in Fig. 9.
Figure 8: Two realizations of the output u for n = 50 and correlation length L_c = 0.5.
Figure 9: Error analysis of the stochastic elliptic problem with (a) n = 2, (b) n = 11, (c) n = 25, (d) n = 50, (e) n = 75 dimensions, for correlation length L_c = 0.5.
where parameters α i = 0.1/2 i−1 , random input x i = σy i and y i are i.i.d. uni-form random variables in − √ 3, √ 3 , i ∈ {1, 2, . . . , 10}. Parameter σ is relatedto the standard deviation of the input and for this example, σ = 2. The weights drop drastically with increase in dimensions and hence the number of effective dimensions is low compared to 10 nominal dimensions.
Table 1: HDMR-ASGC and SCAMR error and cost comparison for function f_13

              HDMR-ASGC                         SCAMR
  L_2 error         Number of points    L_2 error           Number of points
  ~ 9 x 10^-3       ~ 200                --                  --
  ~ 1 x 10^-3       ~ 700                3.9163 x 10^-4      101
  ~ 1 x 10^-4       ~ 1144               8.3553 x 10^-5      165
  ~ 6 x 10^-5       ~ 1575               2.2921 x 10^-5      407
Table 2: MEPCM-A and SCAMR error and cost comparison for function f_14

         MEPCM-A                              SCAMR
   n     Relative error   Number of points    Relative error   Number of points
  100    O(1)             103                 2.1308           201
         0.0197           20,801              0.0026           2777
         0.0098           4,677,148           0.0017           5909
  200    O(1)             203                 2.3705           401
         0.067            81,601              0.0105           5121
         0.047            36,714,298          0.0021           8397
  300    O(1)             303                 2.5435           601
         0.12             182,401             0.7468           2507
         0.09             123,111,448         0.01985          21,904
pairwise interactions in the r-dimensional domain. All higher dimensional interactions between the input dimensions are derived from the pairwise interaction results. This second level criterion is derived from the HDMR representation[19,27] and the details are provided in the following.Pairwise non-interaction criterion derivation. Let f (Y ) = f (Y 1 , Y 2 , ...., Y n ) be an n-dimensional function. Following the notation in[24], the general expression of the High Dimensional Model Representation (HDMR) for the function
The homogeneous chaos. N Wiener, American Journal of Mathematics. 604N. Wiener, The homogeneous chaos, American Journal of Mathematics 60 (4) (1938) 897-936.
Stochastic finite elements: a spectral approach, Courier Corporation. R G Ghanem, P D Spanos, R. G. Ghanem, P. D. Spanos, Stochastic finite elements: a spectral ap- proach, Courier Corporation, 2003.
The wiener-askey polynomial chaos for stochastic differential equations. D Xiu, G E Karniadakis, SIAM journal on scientific computing. 242D. Xiu, G. E. Karniadakis, The wiener-askey polynomial chaos for stochas- tic differential equations, SIAM journal on scientific computing 24 (2) (2002) 619-644.
Modeling uncertainty in flow simulations via generalized polynomial chaos. D Xiu, G E Karniadakis, Journal of computational physics. 1871D. Xiu, G. E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, Journal of computational physics 187 (1) (2003) 137-167.
Some basic hypergeometric polynomials that generalize jacobi polynomials memoirs amer. R Askey, J Wilson, Math. Soc. AMSProvidence RI 319R. Askey, J. Wilson, Some basic hypergeometric polynomials that gener- alize jacobi polynomials memoirs amer, Math. Soc. AMS Providence RI 319.
Modeling arbitrary uncertainties using gramschmidt polynomial chaos. J A Witteveen, H Bijl, 44th AIAA aerospace sciences meeting and exhibit. 896J. A. Witteveen, H. Bijl, Modeling arbitrary uncertainties using gram- schmidt polynomial chaos, in: 44th AIAA aerospace sciences meeting and exhibit, 2006, p. 896.
Beyond wiener-askey expansions: handling arbitrary pdfs. X Wan, G E Karniadakis, Journal of Scientific Computing. 271X. Wan, G. E. Karniadakis, Beyond wiener-askey expansions: handling arbitrary pdfs, Journal of Scientific Computing 27 (1) (2006) 455-464.
Efficient collocational approach for parametric uncertainty analysis. D Xiu, Commun. Comput. Phys. 22D. Xiu, Efficient collocational approach for parametric uncertainty analysis, Commun. Comput. Phys 2 (2) (2007) 293-309.
A stochastic collocation method for elliptic partial differential equations with random input data. I Babuška, F Nobile, R Tempone, SIAM Journal on Numerical Analysis. 453I. Babuška, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Journal on Numerical Analysis 45 (3) (2007) 1005-1034.
Advances in multidimensional integration. R Cools, Journal of Computational and Applied Mathematics. 1491R. Cools, Advances in multidimensional integration, Journal of Computa- tional and Applied Mathematics 149 (1) (2002) 1-12.
Quadrature and interpolation formulas for tensor products of certain classes of functions. S Smolyak, Soviet Math. Dokl. 4S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, in: Soviet Math. Dokl., Vol. 4, 1963, pp. 240- 243.
H.-J Bungartz, M Griebel, Sparse grids. 13H.-J. Bungartz, M. Griebel, Sparse grids, Acta numerica 13 (2004) 147-269.
High-order collocation methods for differential equations with random inputs. D Xiu, J S Hesthaven, SIAM Journal on Scientific Computing. 273D. Xiu, J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM Journal on Scientific Computing 27 (3) (2005) 1118-1139.
A sparse grid stochastic collocation method for partial differential equations with random input data. F Nobile, R Tempone, C G Webster, SIAM Journal on Numerical Analysis. 465F. Nobile, R. Tempone, C. G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM Journal on Numerical Analysis 46 (5) (2008) 2309-2345.
An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. X Wan, G E Karniadakis, Journal of Computational Physics. 2092X. Wan, G. E. Karniadakis, An adaptive multi-element generalized poly- nomial chaos method for stochastic differential equations, Journal of Com- putational Physics 209 (2) (2005) 617-642.
The multi-element probabilistic collocation method (me-pcm): Error analysis and applications. J Foo, X Wan, G E Karniadakis, Journal of Computational Physics. 22722J. Foo, X. Wan, G. E. Karniadakis, The multi-element probabilistic col- location method (me-pcm): Error analysis and applications, Journal of Computational Physics 227 (22) (2008) 9572-9595.
Multi-element probabilistic collocation method in high dimensions. J Foo, G E Karniadakis, Journal of Computational Physics. 2295J. Foo, G. E. Karniadakis, Multi-element probabilistic collocation method in high dimensions, Journal of Computational Physics 229 (5) (2010) 1536- 1557.
I M Sobol, Theorems and examples on high dimensional model representation. 79I. M. Sobol, Theorems and examples on high dimensional model represen- tation, Reliability Engineering & System Safety 79 (2) (2003) 187-193.
Efficient input-output model representations. H Rabitz, Ö F Aliş, J Shorter, K Shim, Computer Physics Communications. 1171-2H. Rabitz,Ö. F. Aliş, J. Shorter, K. Shim, Efficient input-output model representations, Computer Physics Communications 117 (1-2) (1999) 11- 20.
Uncertainty propagation using wiener-haar expansions. O Le Maıtre, O Knio, H Najm, R Ghanem, Journal of computational Physics. 1971O. Le Maıtre, O. Knio, H. Najm, R. Ghanem, Uncertainty propagation using wiener-haar expansions, Journal of computational Physics 197 (1) (2004) 28-57.
A stochastic collocation algorithm for uncertainty analysis. L Mathelin, M Y Hussaini, T A Zang, L. Mathelin, M. Y. Hussaini, T. A. Zang, A stochastic collocation algorithm for uncertainty analysis.
Algorithm 847: spinterp: Piecewise multilinear hierarchical sparse grid interpolation in matlab. A Klimke, B Wohlmuth, ACM Transactions on Mathematical Software (TOMS). 314A. Klimke, B. Wohlmuth, Algorithm 847: spinterp: Piecewise multilin- ear hierarchical sparse grid interpolation in matlab, ACM Transactions on Mathematical Software (TOMS) 31 (4) (2005) 561-579.
An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations. X Ma, N Zabaras, Journal of Computational Physics. 2288X. Ma, N. Zabaras, An adaptive hierarchical sparse grid collocation algo- rithm for the solution of stochastic differential equations, Journal of Com- putational Physics 228 (8) (2009) 3084-3113.
An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations. X Ma, N Zabaras, Journal of Computational Physics. 22910X. Ma, N. Zabaras, An adaptive high-dimensional stochastic model repre- sentation technique for the solution of stochastic partial differential equa- tions, Journal of Computational Physics 229 (10) (2010) 3884-3915.
An efficient method for uncertainty propagation using fuzzy sets. X Chen, Y He, D Xiu, SIAM Journal on Scientific Computing. 376X. Chen, Y. He, D. Xiu, An efficient method for uncertainty propaga- tion using fuzzy sets, SIAM Journal on Scientific Computing 37 (6) (2015) A2488-A2507.
Numerical methods for stochastic computations: a spectral method approach. D Xiu, Princeton university pressD. Xiu, Numerical methods for stochastic computations: a spectral method approach, Princeton university press, 2010.
General foundations of high-dimensional model representations. H Rabitz, Ö F Aliş, Journal of Mathematical Chemistry. 252-3H. Rabitz,Ö. F. Aliş, General foundations of high-dimensional model rep- resentations, Journal of Mathematical Chemistry 25 (2-3) (1999) 197-233.
High dimensional model representations generated from low dimensional data samples. i. mp-cut-hdmr. G Li, S.-W Wang, C Rosenthal, H Rabitz, Journal of Mathematical Chemistry. 301G. Li, S.-W. Wang, C. Rosenthal, H. Rabitz, High dimensional model rep- resentations generated from low dimensional data samples. i. mp-cut-hdmr, Journal of Mathematical Chemistry 30 (1) (2001) 1-30.
High dimensional model representations. G Li, C Rosenthal, H Rabitz, The Journal of Physical Chemistry A. 10533G. Li, C. Rosenthal, H. Rabitz, High dimensional model representations, The Journal of Physical Chemistry A 105 (33) (2001) 7765-7777.
A generalized dimension-reduction method for multidimensional integration in stochastic mechanics. H Xu, S Rahman, International Journal for Numerical Methods in Engineering. 6112H. Xu, S. Rahman, A generalized dimension-reduction method for multi- dimensional integration in stochastic mechanics, International Journal for Numerical Methods in Engineering 61 (12) (2004) 1992-2019.
Metamodeling for high dimensional simulation-based design problems. S Shan, G G Wang, Journal of Mechanical Design. 132551009S. Shan, G. G. Wang, Metamodeling for high dimensional simulation-based design problems, Journal of Mechanical Design 132 (5) (2010) 051009.
A domain adaptive stochastic collocation approach for analysis of mems under uncertainties. N Agarwal, N R Aluru, Journal of Computational Physics. 22820N. Agarwal, N. R. Aluru, A domain adaptive stochastic collocation ap- proach for analysis of mems under uncertainties, Journal of Computational Physics 228 (20) (2009) 7662-7688.
| []
|
[
"Supersymmetry breaking metastable vacua in runaway quiver gauge theories",
"Supersymmetry breaking metastable vacua in runaway quiver gauge theories"
]
| [
"I García-Etxebarria ",
"F Saad ",
"A M Uranga ",
"\nPH-TH Division\nInstituto de Física Teórica C-XVI\nCERN CH-1211Geneva 23Switzerland\n",
"\nUniversidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain\n"
]
| [
"PH-TH Division\nInstituto de Física Teórica C-XVI\nCERN CH-1211Geneva 23Switzerland",
"Universidad Autónoma de Madrid\n28049Cantoblanco, MadridSpain"
]
| []
| In this paper we consider quiver gauge theories with fractional branes whose infrared dynamics removes the classical supersymmetric vacua (DSB branes). We show that addition of flavors to these theories (via additional non-compact branes) leads to local meta-stable supersymmetry breaking minima, closely related to those of SQCD with massive flavors.We simplify the study of the one-loop lifting of the accidental classical flat directions by direct computation of the pseudomoduli masses via Feynman diagrams. This new approach allows to obtain analytic results for all these theories. This work extends the results for the dP 1 theory in hep-th/0607218. The new approach allows to generalize the computation to general examples of DSB branes, and for arbitrary values of the superpotential couplings. | 10.1088/1126-6708/2007/05/047 | [
"https://arxiv.org/pdf/0704.0166v2.pdf"
]
| 15,953,756 | 0704.0166 | 1941c6c33e75376ffd25bfc6788666aabffbe695 |
Supersymmetry breaking metastable vacua in runaway quiver gauge theories
12 Apr 2007
I García-Etxebarria
F Saad
A M Uranga
PH-TH Division
Instituto de Física Teórica C-XVI
CERN CH-1211Geneva 23Switzerland
Universidad Autónoma de Madrid
28049Cantoblanco, MadridSpain
Supersymmetry breaking metastable vacua in runaway quiver gauge theories
12 Apr 2007arXiv:0704.0166v2 [hep-th]
In this paper we consider quiver gauge theories with fractional branes whose infrared dynamics removes the classical supersymmetric vacua (DSB branes). We show that addition of flavors to these theories (via additional non-compact branes) leads to local meta-stable supersymmetry breaking minima, closely related to those of SQCD with massive flavors.We simplify the study of the one-loop lifting of the accidental classical flat directions by direct computation of the pseudomoduli masses via Feynman diagrams. This new approach allows to obtain analytic results for all these theories. This work extends the results for the dP 1 theory in hep-th/0607218. The new approach allows to generalize the computation to general examples of DSB branes, and for arbitrary values of the superpotential couplings.
Introduction
Systems of D-branes at singularities provide a very interesting setup to realize and study diverse non-perturbative gauge dynamics phenomena in string theory. In the context of N = 1 supersymmetric gauge field theories, systems of D3-branes at Calabi-Yau singularities lead to interesting families of tractable 4d strongly coupled conformal field theories, which extend the AdS/CFT correspondence [1,2,3] to theories with reduced (super)symmetry [4,5,6] and enable non-trivial precision tests of the correspondence (see for instance [7,8]). Addition of fractional branes leads to families of non-conformal gauge theories, with intricate RG flows involving cascades of Seiberg dualities [9,10,11,12,13], and strong dynamics effects in the infrared.
For instance, fractional branes associated to complex deformations of the singular geometry (denoted deformation fractional branes in [12]), correspond to supersymmetric confinement of one or several gauge factors in the gauge theory [9,12]. The generic case of fractional branes associated to obstructed complex deformations (denoted DSB branes in [12]), corresponds to gauge theories developing a non-perturbative Affleck-Dine-Seiberg superpotential, which removes the classical supersymmetric vacua [14,15,16]. As shown in [15] (see also [17,18]), assuming canonical Kahler potential leads to a runaway potential for the theory, along a baryonic direction. A natural suggestion to stop this runaway has been proposed for the particular example of the dP 1 theory (the theory on fractional branes at the complex cone over dP 1 ) in [19]. It was shown that, upon the addition of D7-branes to the configuration (which introduce massive flavors), the theory develops a meta-stable minimum (closely related to the Intriligator-Seiberg-Shih (ISS) model [20]), parametrically long-lived against decay to the runaway regime (see [21] for an alternative suggestion to stop the runaway, in compact models).
In this paper we show that the appearance of meta-stable minima in gauge theories on DSB fractional branes, in the presence of additional massless flavors, is much more general (and possibly valid in full generality). We use the tools of [15] to introduce D7-branes on general toric singularities, and give masses to the corresponding flavors.
Since quiver gauge theories are rather involved, we develop new techniques to efficiently analyze the one-loop stability of the meta-stable minima, via the direct computation of Feynman diagrams. These tools can be used to argue that the results plausibly hold for general systems of DSB fractional branes at toric singularities. It is very satisfactory to verify the correspondence between the existence of meta-stable vacua and the geometric property of having obstructed complex deformations.
The present work thus enlarges the class of string models realizing dynamical supersymmetry breaking in meta-stable vacua (see [22,23,24,25,26] for other proposed realizations, and [27,28,29] for models of dynamical supersymmetry breaking in orientifold theories). Although we will not discuss it in the present paper, these results can be applied to the construction of models of gauge mediation in string theory as in [30] (based on the additional tools in [31]), in analogy with [32]. This is another motivation for the present work.
The paper is organized as follows. In Section 2 we review the ISS model, evaluating one-loop pseudomoduli masses directly in terms of Feynman diagrams. In Section 3 we study the theory of DSB branes at the dP 1 and dP 2 singularities upon the addition of flavors, and we find that metastable vacua exist for these theories. In Section 4 we extend this analysis to the general case of DSB branes at toric singularities with massive flavors, and we illustrate the results by showing the existence of metastable vacua for DSB branes at some well known families of toric singularities. Finally, the Appendix provides some technical details that we have omitted from the main text in order to improve the legibility.
The ISS model revisited
In this Section we review the ISS meta-stable minima in SQCD, and propose that the analysis of the relevant piece of the one-loop potential (the quadratic terms around the maximal symmetry point) is most simply carried out by direct evaluation of Feynman diagrams. This new tool will be most useful in the study of the more involved examples of quiver gauge theories.
The ISS metastable minimum
The ISS model [20] (see also [33] for a review of these and other models) is given by N = 1 SU(N c ) theory with N f flavors, with small masses W electric = mTr φφ, (2.1) where φ andφ are the quarks of the theory. The number of colors and flavors are chosen so as to be in the free magnetic phase:
N_c + 1 <= N_f < (3/2) N_c.   (2.2)
This condition guarantees that the Seiberg dual is infrared free. This Seiberg dual is the SU(N) theory (with N = N f − N c ) with N f flavors of dual quarks q andq and the meson M. The dual superpotential is given by rewriting (2.1) in terms of the mesons and adding the usual coupling between the meson and the dual quarks:
W_magnetic = h ( Tr \tilde{q} M q - \mu^2 Tr M ),   (2.3)
where h and µ can be expressed in terms of the parameters m and Λ, and some (unknown) information about the dual Kähler metric 1 . It was also argued in [20] that it is possible to study the supersymmetry breaking minimum in the origin of (dual) field space without taking into account the gauge dynamics (their main effect in this discussion consists of restoring supersymmetry dynamically far in field space). In the following we will assume that this is always the case, and we will forget completely about the gauge dynamics of the dual.
Once we forget about gauge dynamics, studying the vacua of the dual theory becomes a matter of solving the F-term equations coming from the superpotential (2.3).
The mesonic F-term equation reads:
-F_{M_{ij}} = h \tilde{q}_i \cdot q_j - h \mu^2 \delta_{ij} = 0,   (2.4)
where i and j are flavor indices and the dot denotes color contraction. This has no solution, since the identity matrix δ ij has rank N f whileq i · q j has rank N = N f − N c .
Thus this theory breaks supersymmetry spontaneously at tree level. This mechanism for F-term supersymmetry breaking is called the rank condition.
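A quick numerical illustration of the rank-condition obstruction (restricting to real field values for simplicity, and using scipy for the minimization; values of N_f, N, h and µ are arbitrary) is sketched below: the mesonic F-term contribution to the classical potential cannot be brought below (N_f - N) h^2 µ^4, confirming that the F-terms (2.4) cannot all vanish.

```python
import numpy as np
from scipy.optimize import minimize

h, mu, Nf, N = 1.0, 1.0, 7, 2          # illustrative values with Nf > N

def V(params):
    """Mesonic F-term potential at M = 0: h^2 * || qbar.q - mu^2 * 1_{Nf} ||_F^2."""
    q = params[:N * Nf].reshape(N, Nf)
    qb = params[N * Nf:].reshape(Nf, N)
    F = h * (qb @ q) - h * mu**2 * np.eye(Nf)
    return np.sum(F**2)

res = minimize(V, np.random.default_rng(0).normal(size=2 * N * Nf), method="BFGS")
print(res.fun)                          # ~ (Nf - N) * h^2 * mu^4 = 5
print((Nf - N) * h**2 * mu**4)
```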
The classical scalar potential has a continuous set of minima, but the one-loop potential lifts all of the non-Goldstone directions, which are usually called pseudomoduli. The usual approach to study the one-loop stabilization is the computation of the complete one-loop effective potential over all pseudomoduli space via the Coleman-Weinberg formula [34]:
V = \frac{1}{64\pi^2} Tr\left( M_B^4 \log\frac{M_B^2}{\Lambda^2} - M_F^4 \log\frac{M_F^2}{\Lambda^2} \right).   (2.5)
This approach has the advantage that it allows the determination of the one-loop minimum, without a priori information about its location, and moreover it provides the full potential around it, including higher terms. However, it has the disadvantage of becoming computationally involved for theories with a large number of fields, like the quiver gauge theories of interest here.

Footnote 1: The exact expressions can be found in (5.7) in [20], but we will not need them for our analysis. We just take all masses in the electric description to be small enough for the analysis of the metastable vacuum to be reliable.

Hence, our strategy to study the one-loop stabilization in this paper is as follows:
• First we choose an ansatz for the classical minimum to become the one-loop vacuum. It is natural to propose a point of maximal enhanced symmetry (in particular, close to the origin in the space of vevs for M there exists an R-symmetry, whose breaking by gauge interactions (via anomalies) is negligible in that region). Hence the natural candidate for the one-loop minimum is
q = \tilde{q}^T = \begin{pmatrix} \mu \\ 0 \end{pmatrix},   (2.6)
with the rest of the fields set to 0. This initial ansatz for the one-loop minimum is eventually confirmed by the positive square masses at one-loop resulting from the computations described below. In our more general discussion of meta-stable minima in runaway quiver gauge theories, our ansatz for the one-loop minimum is a direct generalization of the above (and is similarly eventually confirmed by the one-loop mass computation).
• Then we expand the fields linearly around this vacuum, and identify the set of classically massless fields. We refer to these as pseudomoduli (with some abuse of language, since there could be massless fields which are not classically flat directions due to higher potential terms).

Footnote 2: Since supersymmetry is spontaneously broken, the effective potential will get renormalized by quantum effects, and thus classically massive fields might shift slightly. This appears as a one-loop tadpole which can be encoded as a small shift of µ. This will enter in the two-loop computation of the pseudomoduli masses, which is beyond the scope of the present paper.
• As a final step we compute one-loop masses for these pseudomoduli by evaluating their two-point functions via conventional Feynman diagrams, as explained in more detail in appendix A.1 and illustrated below in several examples.
The ISS model is a simple example where this technique can be illustrated. Considering the above ansatz for the vacuum, we expand the fields around this point as:
q = \begin{pmatrix} \mu + \frac{1}{\sqrt{2}}(\xi_+ + \xi_-) \\ \frac{1}{\sqrt{2}}(\rho_+ + \rho_-) \end{pmatrix}, \quad \tilde{q}^T = \begin{pmatrix} \mu + \frac{1}{\sqrt{2}}(\xi_+ - \xi_-) \\ \frac{1}{\sqrt{2}}(\rho_+ - \rho_-) \end{pmatrix}, \quad M = \begin{pmatrix} Y & Z \\ \tilde{Z}^T & \Phi \end{pmatrix},   (2.7)
where we have taken linear combinations of the fields in such a way that the bosonic mass matrix is diagonal. This will also be convenient in section 2.2, where we discuss the Goldstone bosons in greater detail.
We now expand the superpotential (2.3) to get
W = \sqrt{2}\mu\xi_+ Y + \frac{\mu}{\sqrt{2}} Z \rho_+ + \frac{\mu}{\sqrt{2}} Z \rho_- + \frac{\mu}{\sqrt{2}} \rho_+ \tilde{Z} - \frac{\mu}{\sqrt{2}} \rho_- \tilde{Z} + \frac{1}{2}\rho_+^2 \Phi - \frac{1}{2}\rho_-^2 \Phi - \mu^2 \Phi + ...,   (2.8)
where we have not displayed terms of order three or higher in the fluctuations, unless they contain Φ, since they are irrelevant for the one loop computation we will perform.
Note also that we have set h = 1 and we have removed the trace (the matricial structure is easy to restore later on, here we just set N f = 2 for simplicity). The massless bosonic fluctuations are given by Re ρ + , Im ρ − , Φ and ξ − . The first two together with Im ξ − are Goldstone bosons, as explained in section 2.2. Thus the pseudomoduli we are interested in are given by Φ and Re ξ − . Let us focus on Φ (the case of Re ξ − admits a similar discussion). In this case the relevant terms in the superpotential simplify further, and just the following superpotential contributes:
W = \mu Z \frac{1}{\sqrt{2}}(\rho_+ + \rho_-) + \mu \tilde{Z} \frac{1}{\sqrt{2}}(\rho_+ - \rho_-) + \frac{1}{2}\rho_+^2 \Phi - \frac{1}{2}\rho_-^2 \Phi - \mu^2 \Phi + ...,
which we recognize, up to a field redefinition, as the symmetric model of appendix A.2.
We can thus directly read the result
\delta m^2_\Phi = \frac{|h|^4 \mu^2}{8\pi^2} (\log 4 - 1).   (2.9)
This matches the value given in [20], which was found using the Coleman-Weinberg potential.
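As an independent cross-check of (2.9), the following sketch evaluates the Coleman-Weinberg potential (2.5) numerically for the relevant (ρ, ρ̃, Z, Z̃) sector, with the pseudomodulus Φ as a real background, and extracts the curvature by finite differences; the result reproduces |h|^4 µ^2 (log 4 - 1)/(8π^2) to a few significant digits. The mass-matrix construction follows the standard Wess-Zumino formulas and is our own illustration, not taken from the paper.

```python
import numpy as np

h, mu, Lam = 1.0, 1.0, 100.0      # coupling, scale, UV cutoff (drops out of the mass)

def mass_matrices(X):
    """Tree-level mass matrices of the fluctuations (rho, rhot, Z, Zt) with the
    pseudomodulus Phi = X as a real background.  Relevant superpotential
    (cf. Eq. (2.8) after a field redefinition):
        W = h X rho rhot + h mu Z rho + h mu Zt rhot - h mu^2 X."""
    MF = h * np.array([[0.0, X,  mu, 0.0],        # fermion mass matrix W_ij
                       [X, 0.0, 0.0, mu],
                       [mu, 0.0, 0.0, 0.0],
                       [0.0, mu, 0.0, 0.0]])
    A = MF.T @ MF                                  # W*_ik W_kj block
    B = np.zeros((4, 4))                           # W_ijk F*_k ; only F_X = -h mu^2
    B[0, 1] = B[1, 0] = -h**2 * mu**2
    return np.block([[A, B], [B, A]]), MF          # 8x8 scalar mass^2 matrix, fermion matrix

def V_cw(X):
    """Coleman-Weinberg potential (2.5); each Weyl fermion carries two d.o.f."""
    MB2, MF = mass_matrices(X)
    def term(m2):
        m2 = np.clip(m2, 0.0, None)
        nz = m2 > 1e-12
        return np.sum(m2[nz]**2 * np.log(m2[nz] / Lam**2))
    return (term(np.linalg.eigvalsh(MB2))
            - 2.0 * term(np.linalg.eigvalsh(MF.T @ MF))) / (64.0 * np.pi**2)

eps = 1e-3
m2_phi = 0.5 * (V_cw(eps) - 2.0 * V_cw(0.0) + V_cw(-eps)) / eps**2   # coeff of |Phi|^2
print(m2_phi)                                                  # should reproduce (2.9)
print(h**4 * mu**2 * (np.log(4.0) - 1.0) / (8.0 * np.pi**2))   # ~ 4.89e-3
```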
The Goldstone bosons
One aspect of our technique that merits some additional explanation concerns the Goldstone bosons. The one-loop computation of the masses for the fluctuations associated to the symmetries broken by the vacuum, using just the interactions described in appendix A.1, leads to a non-vanishing result. This puzzle is however easily solved by realizing that certain (classically massive) fields have a one-loop tadpole. This leads to a new contribution to the one-loop Goldstone two-point amplitude, given by the diagram in Figure 1. Adding this contribution the total one-loop mass for the Goldstone bosons is indeed vanishing, as expected. This tadpole does not affect the computation of the one-loop pseudomoduli masses (except for Re ξ + , but its mass remains positive)
as it is straightforward to check.
Im ξ − Im ξ − Re ξ + The structure of this cancellation can be understood by using the derivation of the Goldstone theorem for the 1PI effective potential, as we now discuss. The proof can be found in slightly more detail, together with other proofs, in [35]. Let us denote by V the 1PI effective potential. Invariance of the action under a given symmetry implies that δV δφ i ∆φ i = 0, (2.10)
where we denote by ∆φ i the variation of the field φ i under the symmetry, which will in general be a function of all the fields in the theory. Taking the derivative of this equation with respect to some other field φ k
δ 2 V δφ i δφ k ∆φ i + δV δφ i · δ∆φ i δφ k = 0. (2.11)
Let us consider how this applies to our case. At tree level, there is no tadpole and the above equation (truncated at tree level) states that for each symmetry generator broken by the vacuum, the value of ∆φ i gives a nonvanishing eigenvector of the mass matrix with zero eigenvalue. This is the classical version of the Goldstone theorem, which allows the identification of the Goldstone bosons of the theory.
For instance, in the ISS model in the previous section (for N f = 2), there are three global symmetry generators broken at the minimum described around (2.6). The SU(2) × U(1) symmetry of the potential gets broken down to a U(1) ′ , which can be understood as a combination of the original U(1) and the t z generator of SU (2). The
Goldstone bosons can be taken to be the ones associated to the three generators of SU(2), and correspond (for µ real) to Im ξ − , Im ρ − and Re ρ + , in the parametrization of the fields given by equation (2.7).
Even in the absence of tree-level tadpoles, there could still be a one-loop tadpole.
When this happens, there should also be a non-trivial contribution to the mass term for the Goldstone bosons in the one-loop 1PI potential, related to the tadpole by the one-loop version of (2.11). This relation guarantees that the mass term in the physical (i.e. Wilsonian) effective potential, which includes the 1PI contribution, plus those of the diagram in Figure 1, vanishes, as we described above.
In fact, in the ISS example, there is a non-vanishing one-loop tadpole for the real part of ξ + (and no tadpole for other fields). The calculation of the tadpole at one loop is straightforward, and we will only present here the result
iM = −i|h| 4 µ 3 (4π) 2 (2 log 2). (2.12)
The 1PI one-loop contribution to the Goldstone boson mass is also simple to calculate, giving the result iM = −i|h| 4 µ 2 (4π) 2 (log 2). (2.13) Using the variations of the relevant fields under the symmetry generator, e.g. for t z , ∆Re ξ + = −Im ξ − (2.14)
∆Im ξ − = Re ξ + + 2µ. (2.15) we find that the (2.11) is satisfied at one-loop.
δ 2 V δφ i δφ k ∆φ i + δV δφ i · δ∆φ i δφ k = m 2 Im ξ − · 2µ + (Re ξ + tadpole) · (−1) = 0. (2.16)
A very similar discussion applies to t x and t y .
The above discussion of Goldstone bosons can be similarly carried out in all examples of this paper. Hence, it will be enough to carry out the computation of the 1PI diagrams discussed in appendix A.1, and verify that they lead to positive squared masses for all classically massless fields (with Goldstone bosons rendered massless by the additional diagrams involving the tadpole).
3 Meta-stable vacua in quiver gauge theories with
DSB branes
In this section we show the existence of a meta-stable vacuum in a few examples of gauge theories on DSB branes, upon the addition of massive flavors. As already discussed in [19], the choice of fractional branes of DSB kind is crucial in the result.
The reason is that in order to have the ISS structure, and in particular supersymmetry breaking by the rank condition, one needs a node such that its Seiberg dual satisfies arising from bi-fundamentals of the original D3-brane quiver, or introduced by the D7branes), the condition is equivalent to N f,0 < N c . This is precisely the condition that an ADS superpotential is generated, and is the prototypical behavior of DSB branes [14,15,16,18].
N_f > N, with N = N_f - N_c, where N_c, N_f are the numbers of colors and flavors of the node (with the flavors arising from bi-fundamentals of the original D3-brane quiver, or introduced by the D7-branes); the condition is equivalent to N_{f,0} < N_c. This is precisely the condition that an ADS superpotential is generated, and is the prototypical behavior of DSB branes [14,15,16,18].
However, it is plausible that they do not induce a runaway behavior to infinity, since they parametrize a direction orthogonal to the fields parametrizing the runaway of DSB fractional branes.
The complex cone over dP 1
In this section we describe the most familiar example of quiver gauge theory with DSB fractional branes, the dP 1 theory. In this theory, a non-perturbative superpotential removes the classical supersymmetric vacua [14,15,16]. Assuming canonical Kähler potential the theory has a runaway behavior [15,17]. In this section, we revisit with our techniques the result in [19] that the addition of massive flavors can induce the appearance of meta-stable supersymmetry breaking minima, long-lived against tunneling to the runaway regime. As we show in coming sections, this behavior is prototypical and extends to many other theories with DSB fractional branes. The example is also representative of the computations for a general quiver coming from a brane at a toric singularity, and illustrates the usefulness of the direct Feynman diagram evaluation of one-loop masses.
Consider the dP 1 theory, realized on a set of M fractional D3-branes at the complex cone over dP 1 . In order to introduce additional flavors, we introduce sets of N f,1 D7-branes wrapping non-compact 4-cycles on the geometry and passing through the singular point. We refer the reader to [19], and also to later sections, for more details on the construction of the theory, and in particular on the introduction of the D7-branes.
Its quiver is shown in Figure 2, and its superpotential is
W = λ(X 23 X 31 Y 12 − X 23 Y 31 X 12 ) + λ ′ (Q 3iQi2 X 23 + Q 2jQj1 X 12 + Q 1kQk3 X 31 ) + m 3 Q 3iQk3 δ ik + m 2 Q 2jQi2 δ ji + m 1 Q 1kQj1 δ kj , (3.1)
where the subindices denote the groups under which the field is charged. The first line is the superpotential of the theory of fractional brane, the second line describes 77-73-37 couplings between the flavor branes and the fractional brane, and the last line gives the flavor masses. Note that there is a massless field, denoted Z 12 in [19], that does not appear in the superpotential. This is one of the decoupled fields mentioned above, and we leave its treatment as an open question. Figure 2: Extended quiver diagram for a dP 1 theory with flavors, from [19].
We are interested in gauge factors in the free magnetic phase. This is the case for the SU(3M) gauge factor in the regime
M + 1 ≤ N_{f,1} < \frac{5}{2} M .   (3.2)
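For concreteness, the window (3.2) is easy to scan numerically; the following snippet (an illustrative sketch with a hypothetical helper name, not from the paper) lists the allowed numbers of flavors for a given M:

```python
# Sketch: enumerate the flavor numbers N_f1 putting the SU(3M) node of dP1
# in the free magnetic phase, M + 1 <= N_f1 < (5/2) M, cf. (3.2).
def free_magnetic_window_dP1(M):
    return [n for n in range(M + 1, 3 * M) if n < 5 * M / 2]

print(free_magnetic_window_dP1(4))   # [5, 6, 7, 8, 9]
```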
To apply Seiberg duality on node 3, we introduce the dual mesons:
M_{21} = \frac{1}{Λ} X_{23} X_{31} ;  N_{k1} = \frac{1}{Λ} \tilde{Q}_{k3} X_{31}
M'_{21} = \frac{1}{Λ} X_{23} Y_{31} ;  N'_{k1} = \frac{1}{Λ} \tilde{Q}_{k3} Y_{31}
N_{2i} = \frac{1}{Λ} X_{23} Q_{3i} ;  Φ_{ki} = \frac{1}{Λ} \tilde{Q}_{k3} Q_{3i}   (3.3)
and we also replace the electric quarks Q 3i ,Q k3 , X 23 , X 31 , Y 31 by their magnetic duals
Q i3 , Q 3k , X 32 , X 13 , Y 13 .
The magnetic superpotential is given by rewriting the confined fields in terms of the mesons and adding the coupling between the mesons and the dual quarks,
W = h ( M 21 X 13 X 32 + M ′ 21 Y 13 X 32 + N 2iQi3 X 32 + N k1 X 13 Q 3k + N ′ k1 Y 13 Q 3k + Φ kiQi3 Q 3k ) + hµ 0 ( M 21 Y 12 − M ′ 21 X 12 ) + µ ′ Q 1k N k1 + µ ′ N 2iQi2 − hµ 2 Tr Φ + λ ′ Q 2jQj1 X 12 + m 2 Q 2iQi2 + m 1 Q 1iQi1 . (3.4)
This is the theory we want to study. In order to simplify the treatment of this example we will disregard any subleading terms in m i /µ ′ , and effectively integrate out N k1 and N 2i by substituting them by 0. This is not necessary, and indeed the computations in the next sections are exact. We do it here in order to compare results with [19].
As in the ISS model, this theory breaks supersymmetry via the rank condition. The fieldsQ i3 , Q 3k and Φ ki are the analogs of q,q and M in the ISS case discussed above.
This motivates a vacuum ansatz analogous to (2.6) and the following linear expansion:
Φ = φ 00 φ 01 φ 10 φ 11 ;Q i3 = µe θ + Q 3,1 Q 3,2 ; Q T 3i = µe −θ + Q 3,1 Q 3,2 Q k1 = Q 1,1 y ; Q 2j = Q 2,11 x Q 2,21 x ′ ; M 21 = M 21,1 M 21,2 Y 13 = (Y 13 ) ; X T 12 = X 12,1 X 12,2 ; X T 32 = X 32,1 X 32,2 Y T 12 = Y 12,1 Y 12,2 ; N ′ k1 = N ′ k1,1 z ; M ′ 21 = λ ′ hµ 0 M ′ 21,1 M ′ 21,2 X 13 = (X 13 ) .
(3.5)
Note that we have chosen to introduce the nonlinear expansion in θ in order to reproduce the results found in the literature in their exact form 3 . Note also that for the sake of clarity we have not been explicit about the ranks of the different matrices.
They can be easily worked out (or for this case, looked up in [19]), and we will restrict ourselves to the 2 flavor case where the matrix structure is trivial. As a last remark,
we are not being explicit either about the definitions of the different couplings in terms of the electric theory. This can be done easily (and as in the ISS case they involve an unknown coefficient in the Kähler potential), but in any event, the existence of the meta-stable vacua can be established for general values of the coefficients in the superpotential. Hence we skip this more detailed but not very relevant discussion.
The next step consists in expanding the superpotential and identifying the massless fields. We get the following quadratic contributions to the superpotential:
W mass = 2hµφ 00Q3,1 + hµφ 01Q3,2 + hµφ 10 Q 3,2 + hµ 0 M 21,1 Y 12,1 + hµ 0 M 21,2 Y 12,2 − λ ′ M ′ 21,1 X 12,1 − λ ′ M ′ 21,2 X 12,2 + hµN ′ k1,1 Y 13 − h 1 µQ 1,1 X 13 − h 2 µQ 2,11 X 32,1 − h 2 µQ 2,21 X 32,2 . (3.6)
The fields massless at tree level are x, x ′ , y, z, φ 11 , θ, Q 3,2 andQ 3,2 . Three of these are Goldstone bosons as described in the previous section. For real µ they are Im θ,
Re (Q 3,2 + Q 3,2 ) and Im (Q 3,2 − Q 3,2 )
. We now show that all other classically massless fields get masses at one loop (with positive squared masses).
As a first step towards finding the one-loop correction, notice that the supersymmetry breaking mechanism is extremely similar to the one in the ISS model before, in particular it comes only from the following couplings in the superpotential:
W rank = hQ 3,2Q3,2 φ 11 − hµ 2 φ 11 + . . . (3.7)
This breaks the spectrum degeneracy in the multiplets Q 3,2 andQ 3,2 at tree level, so we refer to them as the fields with broken supersymmetry.
Let us compute now the correction for the mass of x, for example. For the one-loop computation we just need the cubic terms involving one pseudomodulus and at least one of the broken supersymmetry fields, and any quadratic term involving fields present in the previous set of couplings. From the complete expansion one finds the following supersymmetry breaking sector:
W symm. = hφ 11 Q 3,2Q3,2 + hµφ 01Q3,2 + hµφ 10 Q 3,2 − hµ 2 φ 11 . (3.8)
The only cubic term involving the pseudomodulus x and the broken supersymmetry fields is
W cubic = −h 2 xQ 3,2 X 32,1 ,(3.9)
and there is a quadratic term involving the field X 32,1 W mass coupling = −h 2 µQ 2,11 X 32,1 .
(3.10)
Assembling the three previous equations, the resulting superpotential corresponds to the asymmetric model in appendix A.2, so we can directly obtain the one-loop mass for x:
δm^2_x = \frac{1}{16π^2} |h|^4 µ^2 \, C\!\left( \frac{|h_2|^2}{|h|^2} \right) .   (3.11)
Proceeding in a similar way, the one-loop masses for φ 11 , x ′ , y and z are:
δm^2_{φ_{11}} = \frac{1}{8π^2} |h|^4 µ^2 (\log 4 − 1) ,  δm^2_{x'} = \frac{1}{16π^2} |h|^4 µ^2 \, C\!\left( \frac{|h_2|^2}{|h|^2} \right) ,
δm^2_{y} = \frac{1}{16π^2} |h|^4 µ^2 \, C\!\left( \frac{|h_1|^2}{|h|^2} \right) ,  δm^2_{z} = \frac{1}{16π^2} |h|^4 µ^2 (\log 4 − 1) .   (3.12)
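A small numerical sketch (our own illustration, not code from the paper) evaluating (3.11)–(3.12) for sample couplings; it uses the loop function C(t) whose explicit form is written in appendix A.2, eq. (A.17), and the couplings h, h_1, h_2, µ below are just placeholder values:

```python
# Sketch: one-loop pseudomoduli masses (3.11)-(3.12) for sample values of the couplings.
# C(t) is the loop function of appendix A.2, eq. (A.17).
import math

def C(t):
    if abs(t - 1.0) < 1e-9:            # removable singularity at t = 1, C(1) = log 4 - 1
        return math.log(4) - 1
    return t / (2 - t) * (math.log(4) - t / (t - 1) * math.log(t))

h, h1, h2, mu = 1.0, 0.7, 0.5, 1.0     # illustrative values
pref = abs(h)**4 * mu**2 / (16 * math.pi**2)
masses = {
    'phi11': 2 * pref * (math.log(4) - 1),
    'x'    : pref * C(abs(h2)**2 / abs(h)**2),
    'xp'   : pref * C(abs(h2)**2 / abs(h)**2),
    'y'    : pref * C(abs(h1)**2 / abs(h)**2),
    'z'    : pref * (math.log(4) - 1),
}
print(masses)   # all positive, as required for a (meta)stable point
```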
There is just one pseudomodulus left, Re θ, which is qualitatively different to the others. With similar reasoning, one concludes that it is necessary to study a superpotential of the form
W = h(Xφ 1 φ 2 + µe θ φ 1 φ 3 + µe −θ φ 2 φ 4 − µ 2 X). (3.13)
Due to the non-linear parametrization, the expansion in θ shows that there is a term quadratic in θ which contributes to the one-loop mass via a vertex with two bosons and two fermions; the relevant diagram is shown in Figure 16d. The result is a vanishing mass for Im θ, as expected for a Goldstone boson (the one-loop tadpole vanishes in this case), and a non-vanishing mass for Re θ,

δm^2_{Re θ} = \frac{1}{4π^2} |h|^4 µ^4 (\log 4 − 1) .   (3.14)
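The origin of the quadratic-in-θ vertex can be made explicit with a one-line symbolic expansion (a sketch of our own, using the simplified superpotential (3.13); not code from the paper):

```python
# Sketch: expanding the non-linear parametrization mu*exp(+-theta) in (3.13) around theta = 0.
import sympy as sp

theta, mu, h, X, p1, p2, p3, p4 = sp.symbols('theta mu h X phi1 phi2 phi3 phi4')
W = h*(X*p1*p2 + mu*sp.exp(theta)*p1*p3 + mu*sp.exp(-theta)*p2*p4 - mu**2*X)
W_exp = sp.expand(W.series(theta, 0, 3).removeO())
print(sp.collect(W_exp, theta))
# The theta**2 piece, (h*mu/2)*(phi1*phi3 + phi2*phi4)*theta**2, is the quadratic-in-theta
# coupling that yields the two-boson/two-fermion vertex of Figure 16d (cf. eq. (A.7)).
```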
We conclude by mentioning that all squared masses are positive, thus confirming that the proposed point in field space is the one-loop minimum. As shown in [19], this minimum is parametrically long-lived against tunneling to the runaway regime.
Additional examples: The dP 2 case
Let us apply these techniques to new examples. In this section we consider a DSB fractional brane in the complex cone over dP_2, which provides another quiver theory with runaway behavior [15]. The quiver diagram for dP_2 is given in Figure 3, with superpotential

W = X_{34} X_{45} X_{53} − X_{53} Y_{31} X_{15} − X_{34} X_{42} Y_{23} + Y_{23} X_{31} X_{15} X_{52} + X_{42} X_{23} Y_{31} X_{14} − X_{23} X_{31} X_{14} X_{45} X_{52} .   (3.15)

We consider a set of M DSB fractional branes, corresponding to choosing ranks (M, 0, M, 0, 2M) for the corresponding gauge factors. The resulting quiver is shown in Figure 4. Following [19] and appendix B, one can introduce D7-branes leading to D3-D7 open strings providing (possibly massive) flavors for all gauge factors, and having cubic couplings with diverse D3-D3 bifundamental chiral multiplets. We obtain the quiver in Figure 5. Adding the cubic 33-37-73 coupling superpotential and the flavor masses, the complete superpotential reads

W_{total} = −λ X_{53} Y_{31} X_{15} − λ' ( Q_{1i} \tilde{Q}_{i3} Y_{31} + Q_{3j} \tilde{Q}_{j5} X_{53} + Q_{5k} \tilde{Q}_{k1} X_{15} ) + m_1 Q_{1i} \tilde{Q}_{k1} + m_2 Q_{3j} \tilde{Q}_{i3} + m_5 Q_{5k} \tilde{Q}_{j5} ,   (3.17)

where 1, 2, 3 are the gauge group indices and i, j, k are the flavor indices.
We consider the U(2M) node in the free magnetic phase, namely
M + 1 ≤ N_{f,1} < 2M .   (3.18)

After Seiberg duality the dual gauge factor is SU(N) with N = N_{f,1} − M and dynamical scale \tilde{Λ}. To get the matter content in the dual, we replace the microscopic flavors Q_{5k}, \tilde{Q}_{j5}, X_{53}, X_{15} by the dual flavors \tilde{Q}_{k5}, Q_{5j}, X_{35}, X_{51} respectively. We also have the mesons, related to the fields in the electric theory by

M_{1k} = \frac{1}{Λ} X_{15} Q_{5k} ;  \tilde{N}_{j3} = \frac{1}{Λ} \tilde{Q}_{j5} X_{53}
M_{13} = \frac{1}{Λ} X_{15} X_{53} ;  \tilde{Φ}_{jk} = \frac{1}{Λ} \tilde{Q}_{j5} Q_{5k}   (3.19)
There is a cubic superpotential coupling the mesons and the dual flavors
W mes. = h ( M 1kQk5 X 51 + M 13 X 35 X 51 +Ñ j3 X 35 Q 5j +Φ jkQk5 Q 5j ) (3.20)
where h = \tilde{Λ}/\hat{Λ}, with \hat{Λ} given by

Λ_{elect}^{3N_c − N_f} \, \tilde{Λ}^{3(N_f − N_c) − N_f} = \hat{Λ}^{N_f} ,

where Λ_{elect} is the dynamical scale of the electric theory. Writing the classical superpotential in terms of the new fields gives

W_{clas.} = − h µ_0 M_{13} Y_{31} + λ' Q_{1i} \tilde{Q}_{i3} Y_{31} + µ' \tilde{N}_{j3} Q_{3j} + µ' M_{1k} \tilde{Q}_{k1} + m_1 Q_{1i} \tilde{Q}_{k1} + m_3 Q_{3j} \tilde{Q}_{i3} − hµ^2 \, Tr \, \tilde{Φ}   (3.21)
where µ 0 = λΛ, µ ′ = λ ′ Λ, and µ 2 = −m 5Λ . So the complete superpotential in the Seiberg dual is
W dual = − h µ 0 M 13 Y 31 + λ ′ Q 1iQi3 Y 31 + µ ′Ñ j3 Q 3j + µ ′ M 1kQk1 + m 1 Q 1iQk1 + m 3 Q 3jQi3 − hµ 2 Tr Φ + h ( M 1kQk5 X 51 + M 13 X 35 X 51 +Ñ j3 X 35 Q 5j +Φ jkQk5 Q 5j ) (3.22)
This superpotential has a sector completely analogous to the ISS model, triggering supersymmetry breaking by the rank condition. This suggests the following ansatz for the point to become the one-loop vacuum
Q 5k =Q T 5k = µ 0 , (3.23)
with all other vevs set to zero. Following our technique as explained above, we expand fields at linear order around this point. Focusing on N f,1 = 2 and N c = 1 for simplicity (the general case can be easily recovered), we havẽ Q k5 = µ + δQ 5,1 δQ 5,2 ; Q 5k = (µ + δQ 5,1 ; δQ 5,2 ) ; Φ = δΦ 0,0 δΦ 0,1
δΦ 1,0 δΦ 1,1 Q k1 = δQ 1,1 δQ 1,2 ; Q 1i = (δQ 1,1 ; δQ 1,2 ) ;Q i3 = δQ 3,1 δQ 3,2 ; Q 3j = (δQ 3,1 ; δQ 3,2 ) N j3 = δÑ 3,1 δÑ 3,2 ; M 1k = (δM 1,1 ; δM 1,2 ) ; M 13 = δM 13 ; Y 31 = δY 31 ; X 51 = δX 51 X 35 = δX 35 (3.24)
Inserting this into equation (3.22) gives W dual = − h µ 0 δM 13 δY 31 + λ ′ δQ 1,1 δQ 3,1 δY 31 + λ ′ δQ 1,2 δQ 3,2 δY 31 + µ ′ δÑ 3,1 δQ 3,1 + µ ′ δÑ 3,2 δQ 3,2 + µ ′ δM 1,1 δQ 1,1 + µ ′ δM 1,2 δQ 1,2 + m 1 δQ 1,1 δQ 1,1 + m 1 δQ 1,2 δQ 1,2 + m 3 δQ 3,1 δQ 3,1 + m 3 δQ 3,2 δQ 3,2 − hµ 2 δΦ 11 + h ( µδM 1,1 δX 51 + δM 1,1 δQ 5,1 δX 51 + δM 1,2 δQ 5,2 δX 51 + δM 13 δX 35 δX 51 + µδX 35 δÑ 3,1 + δX 35 δÑ 3,1 δQ 5,1 + δX 35 δÑ 3,2 δQ 5,2 + µδQ 5,1 δΦ 00 + µδQ 5,1 δΦ 00 + δQ 5,1 δQ 5,1 δΦ 00 + µδΦ 01 δQ 5,2 + δQ 5,1 δΦ 01 δQ 5,2 + µδΦ 10 δQ 5,2 + δQ 5,1 δΦ 10 δQ 5,2 + δQ 5,2 δΦ 11 δQ 5,2 ).
We now need to identify the pseudomoduli, in other words the massless fluctuations at tree level. We focus then just on the quadratic terms in the superpotential W mass = − h µ 0 δM 13 δY 31 + µ ′ δÑ 3,1 δQ 3,1 + m 3 δQ 3,1 δQ 3,1 + hµδX 35 δÑ 3,1 + µ ′ δÑ 3,2 δQ 3,2 + m 3 δQ 3,2 δQ 3,2 + µ ′ δM 1,1 δQ 1,1 + m 1 δQ 1,1 δQ 1,1 + hµδM 1,1 δX 51 + µ ′ δM 1,2 δQ 1,2 + m 1 δQ 1,2 δQ 1,2 + hµδQ 5,1 δΦ 00 + hµδQ 5,1 δΦ 00 + hµδΦ 01 δQ 5,2 + µδΦ 10 δQ 5,2 .
(3.25)
We have displayed the superpotential so that fields mixing at the quadratic level appear in the same line. In order to identify the pseudomoduli we have to diagonalize 4 these fields. Note that the structure of the mass terms corresponds to the one in appendix C, in particular around equation (C.9). From the analysis performed there we know that upon diagonalization, fields mixing in groups of four (i.e., three mixing terms in the superpotential, for example the δM 1,1 , δQ 1,1 , δQ 1,1 , δX 51 mixing) get nonzero masses, while fields mixing in groups of three (two mixing terms in the superpotential, for example δM 1,2 , δQ 1,2 and δQ 1,2 ) give rise to two massive perturbations and a massless one, a pseudomodulus. We then just need to study the fate of the pseudomoduli. From the analysis in appendix C, the pseudomoduli coming from the mixing terms are
Y 1 = m 3 δÑ 3,2 − µ ′ δQ 3,2 , Y 2 = m 1 δM 1,2 − µ ′ δQ 1,2 , Y 3 = hµ(δQ 5,1 − δQ 5,1 ) . (3.26)
In order to continue the analysis, one just needs to change basis to the diagonal fields and notice that the one loop contributions to the pseudomoduli are described again by the asymmetric model of appendix A.2, so they receive positive definite contributions.
The exact analytic expressions can be easily found with the help of some computer algebra program, but we omit them here since they are quite unwieldy.
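The pattern behind (3.26) can be illustrated with a tiny numerical diagonalization (a sketch with made-up numbers, not the paper's computation): for three fields mixing through two superpotential mass terms, the mass matrix has exactly one null direction, and it is the combination quoted in the text.

```python
# Sketch: three fields (dM, dQ, dQbar) mixing via  mu' dM dQbar + m dQ dQbar
# (two mass terms, cf. the delta M_{1,2}, delta Q_{1,2}, delta Qbar_{1,2} example above).
import numpy as np

mup, m = 0.8, 0.3                      # illustrative values of mu' and m_1
# symmetric mass matrix in the basis (dM, dQ, dQbar)
Mmat = np.array([[0.0, 0.0, mup],
                 [0.0, 0.0, m  ],
                 [mup, m,   0.0]])
vals, vecs = np.linalg.eigh(Mmat)
null = vecs[:, np.argmin(np.abs(vals))]
print(vals)                            # one (numerically) zero eigenvalue
print(null / null[0])                  # proportional to (m, -mu', 0): the pseudomodulus Y = m dM - mu' dQ
```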
4 The general case
In the previous section we showed that several examples of quiver gauge theories on DSB fractional branes have metastable vacua once additional flavors are included.
In this section we generalize the arguments for general DSB branes. We will show how to add D7-branes in a specific manner so as to generate the appropriate cubic flavor couplings and mass terms. Once this is achieved, we describe the structure of the Seiberg dual theory. The results of our analysis show that, with the specified configuration of D7-branes, the determination of metastability is greatly simplified and only involves looking at the original superpotential. Thus, although we do not prove that DSB branes on arbitrary singularities generate metastable vacua, we show how one can determine the existence of metastability in a very simple and systematic manner. Using this analysis we show further examples of metastable vacua on systems of DSB branes.

4.1 The general argument

4.1.1 Construction of the flavored theories

Consider a general quiver gauge theory arising from branes at singularities. As we have argued previously, we focus on DSB branes, so that there is a gauge factor satisfying N_{f,0} < N_c, which can lead to supersymmetry breaking by the rank condition in its Seiberg dual. To make the general analysis more concrete, let us consider a quiver like that in Figure 6, which is characteristic enough, and let us assume that the gauge factor to be dualized corresponds to node 2. In what follows we analyze the structure of the fields and couplings in the Seiberg dual, and reduce the problem of studying the meta-stability of the theory with flavors to analyzing the structure of the theory in the absence of flavors.

The first step is the introduction of flavors in the theory. As discussed in [19], for any bi-fundamental X_{ab} of the D3-brane quiver gauge theory there exists a supersymmetric D7-brane leading to flavors Q_{bi}, \tilde{Q}_{ia} in the fundamental (antifundamental) of the b-th (a-th) gauge factor. There is also a cubic coupling X_{ab} Q_{bi} \tilde{Q}_{ia}. Let us now specify a concrete set of D7-branes to introduce flavors in our quiver gauge theory. Consider a superpotential coupling of the D3-brane quiver gauge theory, involving fields charged under the node to be dualized. This corresponds to a loop in the quiver, involving node 2, for instance X_{32} X_{21} X_{14} Y_{43} in Figure 6. For any bi-fundamental chiral multiplet in this coupling, we introduce a set of N_{f,1} of the corresponding D7-branes. This leads to a set of flavors for the different gauge factors, in a way consistent with anomaly cancellation, such as that shown in Figure 7. The description of this system of D7-branes in terms of dimer diagrams is carried out in Appendix B. The cubic couplings described above lead to the superpotential terms 5
W f lavor = λ ′ ( X 32 Q 2b Q b3 + X 21 Q 1a Q a2 + X 14 Q 4d Q d1 + Y 43 Q 3c Q c4 ) (4.1)
Finally, we introduce mass terms for all flavors of all involved gauge factors:
W mass = m 2 Q a2 Q 2b + m 3 Q b3 Q 3c + m 4 Q c4 Q 4d + m 1 Q d1 Q 1a (4.2)
These mass terms break the flavor group into a diagonal subgroup.
Seiberg duality and one-loop masses
We consider introducing a number of massive flavors such that node 2 is in the free magnetic phase, and consider its Seiberg dual. The only relevant fields in this case are those charged under gauge factor 2, as shown in Figure 8. The Seiberg dual gives us the quiver in Figure 9, where the M's are mesons with indices in the gauge groups, the R's and S's are mesons with only one index in the flavor group, and X_{ab} is a meson with both indices in the flavor groups. The original cubic superpotential and flavor mass superpotentials become

W_{flavor}^{dual} = λ' ( S^1_{3b} Q_{b3} + R^1_{a1} Q_{1a} + X_{14} Q_{4d} Q_{d1} + Y_{43} Q_{3c} Q_{c4} )
W_{mass}^{dual} = m_2 X_{ab} + m_3 Q_{b3} Q_{3c} + m_4 Q_{c4} Q_{4d} + m_1 Q_{d1} Q_{1a}   (4.3)
In addition we have the extra meson superpotential
W mesons = h ( X abQb2Q2a + R 1 a1X 12Q2a + R 2 a1Ỹ 12Q2a + S 1 3bQ b2X23 + S 2 3bQ b2Ỹ23 + S 3 3bQ b2Z23 + M 1 31X 12X23 + M 2 31X 12Ỹ23 + M 3 31X 12Z23 + M 4 31Ỹ 12X23 + M 5 31Ỹ 12Ỹ23 + M 6 31Ỹ 12Z23 ). (4.4)
The crucial point is that we always obtain terms of the kind underlined above, namely a piece of the superpotential reading m 2 X ab + hX abQb2Q2a . This leads to tree level supersymmetry breaking by the rank condition, as announced. Moreover the superpotential fits in the structure of the generalized asymmetric O'Raifeartaigh model studied in appendix A.2, with X ab ,Q b2 ,Q 2a corresponding to X, φ 1 , φ 2 respectively. The mul-tipletsQ b2 andQ 2a are split at tree level, and X ab is massive at 1-loop. From our study of the generalized asymmetric case, any field which has a cubic coupling to the supersymmetry breaking fieldsQ b2 orQ 2a is one-loop massive as well. Using the general structure of W mesons , a little thought shows that all dual quarks with no flavor index (e.g.X,Ỹ ) and all mesons with one flavor index (e.g. R or S) couple to the supersymmetry breaking fields.
Thus they all get one-loop masses (with positive squared mass). Finally, the flavors of other gauge factors (e.g. Q b3 ) are massive at tree level from W mass .
The bottom line is that the only fields which do not get mass from these interactions are the mesons with no flavor index, and the bi-fundamentals which do not get dualized (uncharged under node 2). All these fields are related to the theory in the absence of extra flavors, so they can be already stabilized at tree-level from the original superpotential. So, the criteria for a metastable vacua is that the original theory, in the absence of flavors leads, after dualization of the node with N f < N c , to masses for all these fields (or more mildly that they correspond to directions stabilized by mass terms, or perhaps higher order superpotential terms).
For example, if we apply this criterion to the dP 2 case studied previously, the original superpotential for the fractional DSB brane is
W = −λX 53 Y 31 X 15 (4.5)
so after dualization we get
W = −λM 13 Y 31 (4.6)
which makes these fields massive. Hence this fractional brane, after adding the D7-branes in the appropriate configuration, will generate a metastable vacuum with all moduli stabilized.
The argument is completely general, and leads to an enormous simplification in the study of the theories. In the next section we describe several examples. A more rigorous and elaborate proof is provided in the appendix where we take into account the matricial structure, and show that all fields, except for Goldstone bosons, get positive squared masses at tree-level or at one-loop.
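To make the bookkeeping of this criterion concrete, here is a toy helper (a hypothetical illustration of our own, not part of the paper's analysis): given the quadratic terms of the dualized superpotential, it lists which fields are left without a tree-level mass term and hence must be checked at one loop.

```python
# Sketch: toy bookkeeping of which fields lack quadratic superpotential terms after dualization.
def massless_at_tree_level(all_fields, quadratic_terms):
    """quadratic_terms: iterable of pairs of field names appearing in mass terms."""
    massive = {f for pair in quadratic_terms for f in pair}
    return sorted(set(all_fields) - massive)

# dP2 example after dualization, cf. (4.6): the meson with gauge indices and the
# surviving bi-fundamental pair up in the single term M13*Y31.
fields = ['M13', 'Y31']
print(massless_at_tree_level(fields, [('M13', 'Y31')]))   # [] -> all such fields are massive at tree level
```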
4.2 Additional examples

4.2.1 The dP 3 case
Let us consider the complex cone over dP 3 , and introduce fractional DSB branes of the kind considered in [15]. The quiver is shown in Figure 10 and the superpotential is
W = X 13 X 35 X 51 (4.7)
Node 1 has N_f < N_c, so upon addition of massive flavors and dualization it will lead to supersymmetry breaking by the rank condition. Following the procedure of the previous section, we add N_{f,1} flavors coupling to the bi-fundamentals X_{13}, X_{35} and X_{51}. Node 1 is in the free magnetic phase for P + 1 ≤ N_{f,1} < \frac{3}{2} P + \frac{1}{2}. Dualizing node 1, the above superpotential becomes
W = X 35 M 53 (4.8)
where M_{53} is the meson X_{51} X_{13}. So, following the results of the previous section, we can conclude that this DSB fractional brane generates a metastable vacuum with all pseudomoduli lifted.
Phase 1 of P dP 4
Let us consider the PdP_4 theory, and introduce the DSB fractional brane of the kind considered in [15]. The quiver is shown in Figure 11. The superpotential is

W = −X_{25} X_{51} X_{12} .

Node 1 has N_f < N_c and will lead to supersymmetry breaking by the rank condition in the dual. Following the procedure of the previous section, we add N_{f,1} flavors coupling to the bi-fundamentals X_{12}, X_{25} and X_{51}. Node 1 is in the free magnetic phase for P + 2 ≤ M + N_{f,1} < \frac{3}{2}(M + P). Dualizing node 1, the above superpotential becomes W = X_{25} M_{52}, where M_{52} is the meson X_{51} X_{12}. Again we conclude that this DSB fractional brane generates a metastable vacuum with all pseudomoduli lifted.
The Y p,q family
Consider D3-branes at the real cones over the Y^{p,q} Sasaki-Einstein manifolds [36,37,38,39], whose field theories were determined in [8]. The theory admits a fractional brane [13] of DSB kind, namely one which breaks supersymmetry and leads to runaway behavior [15,18]. The analysis of metastability upon addition of massive flavors for arbitrary Y^{p,q}'s is much more involved than in the previous examples. Already the description of the field theory on the fractional brane is complicated. Even for the simpler cases of Y^{p,1} and Y^{p,p−1} the superpotential contains many terms. In this section we do not provide a general proof of metastability, but rather consider the more modest aim of showing that all directions related to the runaway behavior in the absence of flavors are stabilized by the addition of flavors. We expect that this will guarantee full metastability, since the fields not involved in our analysis parametrize directions orthogonal to the runaway at infinity.
The dimer for Y^{p,q} is shown in Figure 12 and consists of a column of n hexagons and 2m quadrilaterals, which are just halved hexagons [18]. The labels (n, m) are related to (p, q) by

n = 2q ,  m = p − q .   (4.10)
• The Y p,1 case
The dimer for the theory on the DSB fractional brane in the Y^{p,1} case is shown in Figure 13: a periodic array of a column of two full hexagons, followed by p − 1 cut hexagons (the shaded quadrilateral has N_c = 0). As shown in [18], it is the top quadrilateral which has N_f < N_c and induces the ADS superpotential triggering the runaway. The relevant part of the dimer is shown in Figure 14, where V_1 and V_2 are the fields that run to infinity [18]. This node will lead to supersymmetry breaking by the rank condition in the dual. It is in the free magnetic phase for M + 1 ≤ N_{f,1} < pM + M/2.

Figure 12: The generic dimer for Y^{p,q}, from [18].

The piece
of the superpotential involving the V 1 and V 2 terms is
W = Y U 2 V 2 − Y U 1 V 1 . (4.11)
In the dual theory, the dual superpotential makes these fields massive. Hence, the theory has a metastable vacuum where the runaway fields are stabilized.

• The Y p,p−1 case

The analysis for Y^{p,p−1} is similar, but in this case it is the bottom quadrilateral which has the highest rank and thus gives the ADS superpotential [18]. The relevant part of the dimer is shown in Figure 15, and the runaway direction is described by the fields V_1 and V_2. Upon addition of N_{f,1} flavors, the relevant node is in the free magnetic phase for M + 1 ≤ N_{f,1} < pM + M/2. Considering the superpotential, it is straightforward to show that the runaway fields become massive. Complementing this with our analysis in the previous section, we conclude that the theory has a metastable vacuum where the runaway fields are stabilized.
We have thus shown that we can obtain metastable vacua for fractional branes at cones over the Y p,1 and Y p,p−1 geometries. Although there is no obvious generalization for arbitrary Y p,q 's, our results strongly suggest that the existence of metastable vacua extends to the complete family.
Conclusions and outlook
The present work introduces techniques and computations which suggest that the existence of metastable supersymmetry breaking vacua is a general property of quiver gauge theories on DSB fractional branes, namely fractional branes associated to obstructed complex deformations. It is very satisfactory to verify the correlation between a non-trivial dynamical property in gauge theories and a geometric property in their string theory realization. The existence of such a correlation fits nicely with the remarkable properties of gauge theories on D-branes at singularities, and the gauge/gravity correspondence for fractional branes.

Beyond the fact that our arguments do not constitute a general proof, our analysis has left a number of interesting open questions. In fact, as we have mentioned, all theories on DSB fractional branes contain one or several fields which do not appear in the superpotential. We expect the presence of these fields to have a direct physical interpretation, which has not been uncovered hitherto. It would be interesting to find a natural explanation for them.
Finally, a possible extension of our results concerns D-branes at orientifold singularities, which can lead to supersymmetry breaking and runaway as in [27]. Interestingly, in this case the field theory analysis is more challenging, since they would require Seiberg dualities of gauge factors with matter in two-index tensors. It is very possible that the string theory realization, and the geometry of the singularity provide a much more powerful tool to study the system.
Overall, we expect other surprises and interesting relations to come up from further study of D-branes at singularities.
A Technical details about the calculation via Feynman diagrams

A.1 The basic amplitudes
In the main text we are interested in computing two point functions for the pseudomoduli at one loop, and in section 2.2 also tadpole diagrams. There are just a few kinds of diagrams entering in the calculation, which we will present now for the two-point function, see Figure 16. The (real) bosonic fields are denoted by φ i and the (Weyl) fermions by ψ i . The pseudomodulus we are interested in is denoted by ϕ.
Bosonic contributions
These come from two terms in the Lagrangian. First there is a diagram coming from terms of the form (Figure 16b):
L = . . . + λ ϕ^2 φ^2 − \frac{1}{2} m^2 φ^2 ,   (A.1)

giving an amplitude (we will be using dimensional regularization)

iM = \frac{−2iλ}{(4π)^2} m^2 \left( \frac{1}{ε} − γ + 1 + \log 4π − \log m^2 \right) .   (A.2)
The other contribution comes from the diagram in Figure 16a:
L = . . . + λ ϕ φ_1 φ_2 − \frac{1}{2} m_1^2 φ_1^2 − \frac{1}{2} m_2^2 φ_2^2 ,   (A.3)

which contributes to the two point function with an amplitude:

iM = \frac{iλ^2}{(4π)^2} \left( \frac{1}{ε} − γ + \log 4π − \int_0^1 dx \, \log ∆ \right) ,   (A.4)

where here and in the following we denote ∆ ≡ x m_1^2 + (1 − x) m_2^2 .
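The Feynman-parameter integral appearing in (A.4) has a simple closed form; the following sketch (our own check, with illustrative mass values, not from the paper) verifies it numerically:

```python
# Sketch: check int_0^1 dx log(Delta), Delta = x m1^2 + (1-x) m2^2, against its closed form.
import math
from scipy.integrate import quad

m1sq, m2sq = 2.0, 0.5                                    # illustrative masses squared
num, _ = quad(lambda x: math.log(x * m1sq + (1 - x) * m2sq), 0.0, 1.0)
closed = (m1sq * math.log(m1sq) - m2sq * math.log(m2sq)) / (m1sq - m2sq) - 1.0
print(num, closed)                                        # agree to numerical precision
```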
Fermionic contributions
The relevant vertices here are again of two possible kinds, one of which is nonrenormalizable. The cubic interaction comes from terms in the Lagrangian given by the diagram in Figure 16c:
L = . . . + ϕ ( a ψ_1 ψ_2 + a^* \barψ_1 \barψ_2 ) + \frac{1}{2} m_1 (ψ_1^2 + \barψ_1^2) + \frac{1}{2} m_2 (ψ_2^2 + \barψ_2^2) .   (A.5)

We are assuming real masses for the fermions here; in the configurations we study this can always be achieved by an appropriate field redefinition. The contribution from such vertices is given by:

iM = \int_0^1 dx \left[ \frac{−2i m_1 m_2}{(4π)^2} \left( a^2 + (a^2)^* \right) \left( \frac{1}{ε} − γ + \log 4π − \log ∆ \right) − \frac{8i |a|^2}{(4π)^2} ∆ \left( \frac{1}{ε} − γ + \log 4π + \frac{1}{2} − \log ∆ \right) \right] .   (A.6)
The other fermionic contribution, which one does not need as long as one is dealing with renormalizable interactions only (but we will need in the main text when analyzing the pseudomodulus θ), is given by terms in the Lagrangian of the form (Figure 16d):
L = . . . + λ ϕ^2 (ψ^2 + \barψ^2) + \frac{1}{2} m (ψ^2 + \barψ^2) ,   (A.7)

which contributes to the total amplitude with:

iM = \frac{8iλm}{(4π)^2} m^2 \left( \frac{1}{ε} − γ + 1 + \log 4π − \log m^2 \right) .   (A.8)
A.2 The basic superpotentials
The previous amplitudes are the basic ingredients entering the computation, but in general the number of diagrams contributing to the two point amplitudes is quite big, so calculating all the contributions by hand can get quite involved in particular examples 6 . Happily, one finds that complicated models (such as dP 1 or dP 2 , studied in the main text) reduce to performing the analysis for only two different superpotentials, which we analyze in this section.
The symmetric case
We want to study in this section a superpotential of the form:
W = h(Xφ 1 φ 2 + µφ 1 φ 3 + µφ 2 φ 4 − µ 2 X). (A.9)
This model is a close cousin of the basic O'Raifeartaigh model. We are interested in the one loop contribution to the two point function of X, which is massless at tree level.
From the (F-term) bosonic potential one obtains the following terms entering the one loop computation:
V = |hXφ_2|^2 + |h|^2 µ ( Xφ_2 φ_3^* + X^* φ_2^* φ_3 ) + |h|^2 µ ( Xφ_1 φ_4^* + X^* φ_1^* φ_4 ) + |h|^2 µ^2 ( φ_1 φ_2 + φ_1^* φ_2^* ) + \sum_{i=1}^{4} |h|^2 µ^2 |φ_i|^2   (A.10)
In order to do the computation it is useful to diagonalize the mass matrix by introducing φ + and φ − such that:
φ_1 = \frac{1}{\sqrt{2}} ( φ_+ + i φ_− ) ,  φ_2 = \frac{1}{\sqrt{2}} ( φ_+ − i φ_− )   (A.11)

and φ_a , φ_b such that:

φ_3^* = \frac{1}{\sqrt{2}} ( φ_a + i φ_b ) ,  φ_4^* = \frac{1}{\sqrt{2}} ( φ_a − i φ_b ) .   (A.12)

With these redefinitions the bosonic scalar potential decouples into identical φ_+ and φ_− sectors, giving two decoupled copies of:

V = |h|^2 |X|^2 |φ_+|^2 + |h|^2 µ^2 ( |φ_+|^2 + |φ_a|^2 ) + |h|^2 µ ( X φ_+ φ_a + X^* φ_+^* φ_a^* ) − \frac{|h|^2 µ^2}{2} \left( φ_+^2 + (φ_+^2)^* \right) .   (A.13)
Calculating the amplitude consists simply of constructing the (very few) two point diagrams from the potential above and plugging the formulas above for each diagram (the fermionic part is even simpler in this case). The final answer is that in this model the one loop correction to the mass squared of X is given by:
δm^2_X = \frac{|h|^4 µ^2}{8π^2} (\log 4 − 1) .   (A.14)
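As an independent cross-check of (A.14), one can evaluate the Coleman-Weinberg potential of the model (A.9) numerically and differentiate it with respect to the pseudomodulus. The sketch below is our own illustration (the paper instead evaluates the Feynman diagrams directly); it sets h = µ = 1 and reproduces (log 4 − 1)/(8π²) ≈ 4.9 × 10⁻³:

```python
# Sketch: numerical Coleman-Weinberg check of (A.14) for the model (A.9), with h = mu = 1.
import numpy as np

h, mu = 1.0, 1.0

def spectrum(x):
    """Boson and fermion mass-squared eigenvalues for a real background X = x, phi_i = 0."""
    # Fermion mass matrix W_ij in the basis (phi1, phi2, phi3, phi4)
    Mf = h * np.array([[0, x, mu, 0],
                       [x, 0, 0, mu],
                       [mu, 0, 0, 0],
                       [0, mu, 0, 0]])
    mf2 = np.linalg.eigvalsh(Mf @ Mf)
    # Scalar mass matrix in the (phi, phi*) basis is [[A, B], [B, A]] with
    # A_ij = W_ik W*_jk and B_ij = W_ijk W*_k; the only F-term is W_X = -h mu^2.
    A = Mf @ Mf
    B = np.zeros((4, 4)); B[0, 1] = B[1, 0] = -h**2 * mu**2
    mb2 = np.concatenate([np.linalg.eigvalsh(A + B), np.linalg.eigvalsh(A - B)])
    return mb2, mf2

def V_cw(x, eps=1e-12):
    mb2, mf2 = spectrum(x)
    f = lambda m2: np.sum(m2**2 * np.log(m2 + eps))   # m^4 log m^2, regulating m^2 = 0
    return (f(mb2) - 2.0 * f(mf2)) / (64 * np.pi**2)

# m_X^2 = (1/2) d^2 V_CW / dx^2 at x = 0 (since V ~ m_X^2 |X|^2 and X is taken real)
d = 1e-3
mX2 = 0.5 * (V_cw(d) - 2 * V_cw(0.0) + V_cw(-d)) / d**2
print(mX2, (np.log(4) - 1) / (8 * np.pi**2))   # both ~ 4.89e-3
```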
The generalized asymmetric case
The next case is slightly more complicated, but will suffice to analyze completely all the models we encounter. We will be interested in the one loop contribution to the mass of the pseudomodulus Y in a theory with superpotential

W = h ( Xφ_1 φ_2 + µφ_1 φ_3 + µφ_2 φ_4 − µ^2 X ) + k ( r Y φ_1 φ_5 + µ φ_5 φ_7 ) ,   (A.15)

with k and r arbitrary complex numbers. The procedure is straightforward as above, so we will just quote the result. We obtain an amplitude given by:

iM = \frac{−i}{(4π)^2} |h^2 r µ|^2 \, C\!\left( \frac{|k|^2}{|h|^2} \right) ,   (A.16)
where we have defined C(t) as:
C(t) = \frac{t}{2 − t} \left( \log 4 − \frac{t}{t − 1} \log t \right) .   (A.17)
Note that this is a positive definite function, meaning that the one loop correction to the mass is always positive, and the pseudomoduli get stabilized for any (nonzero)
value of the parameters. Also note that the limit of vanishing t with |r| 2 t fixed (i.e., vanishing masses for φ 5 and φ 7 , but nonvanishing coupling of Y to the supersymmetry breaking sector) gives a nonvanishing contribution to the mass of Y .
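A quick numerical check of these two statements (a sketch of our own; it uses the explicit form of C(t) as reconstructed in (A.17) above, so the removable singularities at t = 1 and t = 2 are handled by hand):

```python
# Sketch: positivity of C(t) and its small-t behaviour, using C(t) from (A.17).
import math

def C(t):
    if abs(t - 1) < 1e-12:
        return math.log(4) - 1           # removable singularity at t = 1
    if abs(t - 2) < 1e-12:
        return 2 * (1 - math.log(2))     # removable singularity at t = 2
    return t / (2 - t) * (math.log(4) - t / (t - 1) * math.log(t))

assert all(C(t) > 0 for t in [x / 100 for x in range(1, 1000)])    # positive on (0, 10)
t = 1e-6
print(C(t) / t, math.log(4) / 2)   # C(t) ~ (log 4 / 2) t as t -> 0, so |r|^2 C(t) stays finite
```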
B D7-branes in the Riemann surface
The gauge theory of D3-branes at toric singularities can be encoded in a dimer diagram [40,41,42,43,44]. This corresponds to a bi-partite tiling of T 2 , where faces correspond to gauge groups, edges correspond to bi-fundamentals, and nodes correspond to superpotential terms. As an example, the dimer diagram of D3-branes on the cone over dP 2 is shown in Figure 17. As shown in [43], D3-branes on a toric singularity are mirror to D6-branes on intersecting 3-cycles in a geometry given by a fibration of a Riemann surface Σ with punctures. This Riemann surface is just a thickening of the web diagram of the toric singularity [45,46,47], with punctures associated to external legs of the web diagram. The mirror D6-branes wrap non-trivial 1-cycles on this Riemann surface, with their intersections giving rise to bi-fundamental chiral multiplets, and superpotential terms arising from closed discs bounded by the D6-branes. In [19], it was shown that D7-branes passing through the singular point can be described in the mirror Riemann surface Σ by non-compact 1-cycles which come from infinity at one puncture and go to infinity at another. Figure 18 shows the 1-cycles corresponding to some D3-and D7-branes in the Riemann surface in the geometry mirror to the complex cone over dP 2 . A D7-brane leads to flavors for the two D3-brane gauge factors whose 1-cycles are intersected by the D7-brane 1-cycle, and there is a cubic coupling among the three fields (related to the disk bounded by the three 1-cycles in the Riemann surface). Figure 19: Quiver for the dP 2 theory with M fractional branes and flavors.
As stated in Section 4, given a gauge theory of D3-branes at a toric singularity, we introduce flavors for some of the gauge factors in a specific way. We pick a term in the superpotential, and we introduce flavors for all the involved gauge factors, and coupling to all the involved bifundamental multiplets. For example, the quiver with flavors for the dP 2 theory is shown in Figure 19.
On the Riemann surface, this procedure amounts to picking a node and introducing D7-branes crossing all the edges ending on the node, see Figure 18. In this example we obtain the superpotential terms
W f lavor = λ ′ (Q 1iQi3 Y 31 + Q 3jQj5 X 53 + Q 5kQk1 X 15 ) (B.1)
In addition we introduce mass terms
W mass = m 1 Q 1iQk1 + m 2 Q 3jQi3 + m 5 Q 5kQj5 (B.2)
This procedure is completely general and applies to all gauge theories for branes at toric singularities 7 .
C Detailed proof of Section 4
Recall that in Section 4 we considered the illustrative example of the gauge theory given by the quiver in Figure 20. Since node 2 is the one we wish to dualize, the only relevant part of the diagram is shown in Figure 21. We show the Seiberg dual in Figure 22. The above choice of D7-branes, which we showed in appendix B can be applied to arbitrary toric singularities, gives us the superpotential terms
W f lavor = λ ′ ( X 32 Q 2b Q b3 + X 21 Q 1a Q a2 + X 14 Q 4d Q d1 + Y 43 Q 3c Q c4 ) W mass = m 2 Q a2 Q 2b + m 3 Q b3 Q 3c + m 4 Q c4 Q 4d + m 1 Q d1 Q 1a (C.1)
Taking the Seiberg dual of node 2 gives
W_{flavor}^{dual} = λ' ( S^1_{3b} Q_{b3} + R^1_{a1} Q_{1a} + X_{14} Q_{4d} Q_{d1} + Y_{43} Q_{3c} Q_{c4} )
W_{mass}^{dual} = m_2 X_{ab} + m_3 Q_{b3} Q_{3c} + m_4 Q_{c4} Q_{4d} + m_1 Q_{d1} Q_{1a}
W_{mesons} = h ( X_{ab} \tilde{Q}_{b2} \tilde{Q}_{2a} + R^1_{a1} \tilde{X}_{12} \tilde{Q}_{2a} + R^2_{a1} \tilde{Y}_{12} \tilde{Q}_{2a} + S^1_{3b} \tilde{Q}_{b2} \tilde{X}_{23} + S^2_{3b} \tilde{Q}_{b2} \tilde{Y}_{23} + S^3_{3b} \tilde{Q}_{b2} \tilde{Z}_{23} + M^1_{31} \tilde{X}_{12} \tilde{X}_{23} + M^2_{31} \tilde{X}_{12} \tilde{Y}_{23} + M^3_{31} \tilde{X}_{12} \tilde{Z}_{23} + M^4_{31} \tilde{Y}_{12} \tilde{X}_{23} + M^5_{31} \tilde{Y}_{12} \tilde{Y}_{23} + M^6_{31} \tilde{Y}_{12} \tilde{Z}_{23} )   (C.2)
where we have not included the original superpotential. The crucial point is that the underlined terms appear for any quiver gauge theory with flavors introduced as described in appendix B. As described in the main text, supersymmetry is broken by the rank condition due to the F-term of the dual meson associated to the massive flavors. Our vacuum ansatz is (we take N f = 2 and N c = 1 for simplicity; this does not affect our conclusions)
Q b2 = µ1 Nc 0 ;Q 2a = (µ1 Nc ; 0) (C.3)
with all other vevs set to zero. We parametrize the perturbations around this minimum
asQ b2 = µ + φ 1 φ 2 ;Q 2a = (µ + φ 3 ; φ 4 ) ; X ab = X 00 X 01 X 10 X 11 (C.4)
and the underlined terms give

h X_{ab} \tilde{Q}_{b2} \tilde{Q}_{2a} − hµ^2 X_{ab} = h X_{11} φ_2 φ_4 − hµ^2 X_{11} + hµ φ_2 X_{01} + hµ φ_4 X_{10} + hµ φ_1 X_{00} + hµ φ_3 X_{00} + h φ_1 φ_3 X_{00} + h φ_2 φ_3 X_{01} + h φ_1 φ_4 X_{10} .   (C.5)

It is important to note that all the fields in (C.4) have quadratic couplings only in the underlined term (C.5). Thus, one can safely study this term, and the conclusions are independent of the other terms in the superpotential. Diagonalizing (C.5) gives

h X_{ab} \tilde{Q}_{b2} \tilde{Q}_{2a} − hµ^2 X_{ab} = h X_{11} φ_2 φ_4 − hµ^2 X_{11} + hµ φ_2 X_{01} + hµ φ_4 X_{10} + \sqrt{2} hµ \, φ_+ X_{00} + \frac{h}{2} φ_+^2 X_{00} − \frac{h}{2} φ_−^2 X_{00} + \frac{h}{\sqrt{2}} (ξ_+ − ξ_−) φ_2 X_{01} + \frac{h}{\sqrt{2}} (ξ_+ + ξ_−) φ_4 X_{10} ,   (C.6)

where

ξ_+ = \frac{1}{\sqrt{2}} (φ_1 + φ_3) ;  ξ_− = \frac{1}{\sqrt{2}} (φ_1 − φ_3) .   (C.7)
This term is similar to the generalized asymmetric case studied in appendix A.2 with X 11 → X ; φ 4 → φ 1 ; φ 2 → φ 2 ; X 10 → φ 3 ; X 01 → φ 4 (C.8)
So here X 11 is the linear term that breaks supersymmetry, and φ 2 , φ 4 are the broken supersymmetry fields. In (C.6), the only massless fields at tree-level are X 11 and ξ − . Comparing to the ISS case in Section 2.1 shows that Im ξ − is a Goldstone boson and X 11 , Re ξ − get mass at tree-level. As for φ 2 and φ 4 , setting ρ + = 1 √ 2 (φ 2 + φ 4 ) and ρ − = 1 √ 2 (φ 2 −φ 4 ) gives us Re(ρ + ) and Im (ρ − ) massless and the rest massive. Following the discussion in Section 2.1, Re(ρ + ) and Im (ρ − ) are just the Goldstone bosons of the broken SU(N f ) symmetry 8 . We have thus shown that the dualized flavors (e.g.Q b2 , Q 2a ) and the meson with two flavor indices (e.g. X ab ) get mass at tree-level or at 1-loop unless they are Goldstone bosons. Now, we need to verify that this is the case for the remaining fields. The Seiberg dual of the original quiver diagram is shown in Figure 23. The dualized bi-fundamentals come in two classes. The first are the ones that initially (before dualizing) had cubic flavor couplings, there will always be only two of those (e.g.X 12 , X 23 ). The second are those that did not initially have cubic couplings to flavors, there is an arbitrary number of those (e.g.Ỹ 12 ,Ỹ 23 ,Z 23 ). Figure 24 shows the relevant part of the quiver for the first class. Recalling the superpotential terms (C.2), there are several possible sources of tree-level masses. For instance, these can arise in W f lavor dual and W mass dual . Also, remembering our assignation of vevs in (C.3), tree-level masses can also arise in W mesons from cubic couplings involving the broken supersymmetry fields (e.g.Q b2 ,Q 2a ). The first class of bi-fundamentals (e.g.X 12 ,X 23 ) only appear in W mesons coupled to their respective mesons (e.g. R 1 , S 1 ). In turn these mesons will ap- pear in quadratic terms in W f lavor dual coupled to flavors (e.g. S 1 3b Q b3 and R 1 a1 Q 1a ), and these flavors each appear in one term in W mass . Thus there are two sets of three terms which are coupled at tree-level and which always couple in the same way. Consider for instance the term
λ' S^1_{3b} Q_{b3} + m_3 Q_{b3} Q_{3c} + h S^1_{3b} \tilde{Q}_{b2} \tilde{X}_{23}
 = λ' (S_1 , S_2) \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} + m_1 (C_1 , C_2) \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} + h (S_1 , S_2) \begin{pmatrix} µ + φ_1 \\ φ_2 \end{pmatrix} \tilde{X}_{23}
 = λ' ( S_1 B_1 + S_2 B_2 ) + m_1 ( B_1 C_1 + B_2 C_2 ) + hµ S_1 \tilde{X}_{23} + h S_1 φ_1 \tilde{X}_{23} + h S_2 φ_2 \tilde{X}_{23}   (C.9)
where S i , B i , C i andX 23 are the perturbations around the minimum. Diagonalizing (which can be done analytically for any values of the couplings), we get that all terms except one get tree-level masses, the massless field being:
Y = m 1 S 2 − λ ′ C 2 (C.10)
This massless field has a cubic coupling to φ 2X23 and gets mass at 1-loop since φ 2 is a broken supersymmetry field, as described in appendix A.2. Figure 25 shows the relevant part of the quiver for the second class of bi-fundamentals (i.e. those that are dualized but do not have cubic flavor couplings).
These fields and their mesons only appear in one term, so will always couple in the same way. Taking as an example

h R^2_{a1} \tilde{Y}_{12} \tilde{Q}_{2a} = (R_1 , R_2) \, \tilde{Y}_{12} \begin{pmatrix} µ + φ_3 \\ φ_4 \end{pmatrix} = µ R_1 \tilde{Y}_{12} + R_1 φ_3 \tilde{Y}_{12} + R_2 φ_4 \tilde{Y}_{12} ,   (C.11)

this shows that R_1 and \tilde{Y}_{12} get tree-level masses and R_2 gets a mass at 1-loop, since it couples to the broken supersymmetry field φ_4. The only remaining fields are flavors like Q_{c4}, Q_{4d}, which do not transform in a gauge group adjacent to the dualized node (i.e. not adjacent in the quiver loop corresponding to the superpotential term used to introduce flavors). These are directly massive from the tree-level W_mass term.
So, as stated, all fields except those that appear in the original superpotential (i.e. mesons with gauge indices and bi-fundamentals which are not dualized) get masses either at tree-level or at one-loop. Thus we only need to check the dualized original superpotential to see if we have a metastable vacuum.
Figure 1 :
1Schematic tadpole contribution to the Im ξ − two point function. Both bosons and fermions run in the loop.
Figure 3 : Quiver diagram for the dP 2 theory.
Figure 4 : Quiver diagram for the dP 2 theory with M DSB fractional branes.
Figure 5 : Quiver for the dP 2 theory with M fractional branes and flavors.
Figure 6 :
6Quiver diagram used to illustrate general results. It does not correspond to any geometry in particular.
Figure 7 :
7Quiver diagram with flavors. White nodes denote flavor groups.
Figure 8 :
8Relevant part of quiver before Seiberg duality.
Figure 9 :
9Relevant part of the quiver after Seiberg duality on node 2.
Figure 10 :
10Quiver diagram for the dP 3 theory with a DSB fractional brane.
Figure 11 :
11Quiver diagram for the dP 4 theory with a DSB fractional branes.
Figure 13 :
13The dimer for Y p,1 .
Figure 14 :
14Top part of the dimer for Y p,1 . The hexagons are labeled by the ranks of the respective gauge groups.
Figure 15 :
15Bottom part of the dimer for Y p,p−1 . The hexagons are labeled by the ranks of the respective gauge groups.
Figure 16 :
16Feynman diagrams contributing to the one-loop two point function. The dashed line denotes bosons and the solid one fermions.
Figure 17 :Figure 18 :
1718Dimer diagram for D3-branes at a dP 2 singularity. Riemann surface in the geometry mirror to the complex cone over dP 2 , shown as a tiling of a T 2 with punctures (denoted by capital letters). The figure shows the noncompact 1-cycles extending between punctures, corresponding to D7-branes, and a piece of the 1-cycles that correspond to the mirror of the D3-branes.
Figure 20 :Figure 21 :
2021Quiver diagram with flavors. White nodes denote flavor groups Relevant part of quiver before Seiberg duality.
Figure 22 :
22Relevant part of the quiver after Seiberg duality on node 2.
Figure 23 :
23Quiver after Seiberg duality on node 2.
Figure 24 :
24Relevant part of dual quiver for first class of bi-fundamentals.
Figure 25 :
25Relevant part of dual quiver for second class of bi-fundamentals.
of requiring the diagonalization of the mass matrix, which very often does not admit a closed expression, e.g. for the theories we are interested in.In fact, we would like to point out that to determine the existence of a meta-stable minimum there exists a computationally much simpler approach. In our situation, we have a good ansatz for the location of the one-loop minimum, and are interested just in the one-loop pseudomoduli masses around such point. This information can be directly obtained by computing the one-loop masses via the relevant Feynman diagrams. This technique is extremely economical, and provides results in closed form in full generality, e.g. for general values of the couplings, etc. The correctness of the original ansatz forthe vacuum can eventually be confirmed by the results of the computation (namely
positive one-loop squared masses, and negligible tadpoles for the classically massive
fields 2 ).
A linear expansion would lead to identical conclusions concerning the existence of the meta-stable vacua, but to one-loop masses not directly amenable to comparison with results in the literature.
As a technical remark, let us note that it is possible to set all the mass terms to be real by an appropriate redefinition of the fields, so we are diagonalizing a real symmetric matrix.
Here we assume the same coupling, but the conclusions hold for arbitrary non-zero couplings.
The authors wrote the computer program in http://cern.ch/inaki/pm.tar.gz which helped greatly in the process of computing the given amplitudes for the relevant models.
This procedure does not apply if the superpotential (regarded as a loop in the quiver) passes twice through the node which is eventually dualized in the derivation of the metastable vacua. However we have found no example of this for any DSB fractional branes.
In the case where the flavor group is SU (2), these Goldstone bosons are associated to the generators t x and t y .
Acknowledgments

We thank S. Franco
References

[1] J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) [Int. J. Theor. Phys. 38, 1113 (1999)] [arXiv:hep-th/9711200].
[2] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B 428, 105 (1998) [arXiv:hep-th/9802109].
[3] E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998) [arXiv:hep-th/9802150].
[4] S. Kachru and E. Silverstein, Phys. Rev. Lett. 80, 4855 (1998) [arXiv:hep-th/9802183].
[5] I. R. Klebanov and E. Witten, Nucl. Phys. B 536, 199 (1998) [arXiv:hep-th/9807080].
[6] D. R. Morrison and M. R. Plesser, Adv. Theor. Math. Phys. 3, 1 (1999) [arXiv:hep-th/9810201].
[7] M. Bertolini, F. Bigazzi and A. L. Cotrone, JHEP 0412, 024 (2004) [arXiv:hep-th/0411249].
[8] S. Benvenuti, S. Franco, A. Hanany, D. Martelli and J. Sparks, JHEP 0506, 064 (2005) [arXiv:hep-th/0411264].
[9] I. R. Klebanov and M. J. Strassler, JHEP 0008, 052 (2000) [arXiv:hep-th/0007191].
[10] S. Franco, A. Hanany, Y. H. He and P. Kazakopoulos, arXiv:hep-th/0306092.
[11] S. Franco, Y. H. He, C. Herzog and J. Walcher, Phys. Rev. D 70, 046006 (2004) [arXiv:hep-th/0402120].
[12] S. Franco, A. Hanany and A. M. Uranga, JHEP 0509, 028 (2005) [arXiv:hep-th/0502113].
[13] C. P. Herzog, Q. J. Ejaz and I. R. Klebanov, JHEP 0502, 009 (2005) [arXiv:hep-th/0412193].
[14] D. Berenstein, C. P. Herzog, P. Ouyang and S. Pinansky, JHEP 0509, 084 (2005) [arXiv:hep-th/0505029].
[15] S. Franco, A. Hanany, F. Saad and A. M. Uranga, JHEP 0601, 011 (2006) [arXiv:hep-th/0505040].
[16] M. Bertolini, F. Bigazzi and A. L. Cotrone, Phys. Rev. D 72, 061902 (2005) [arXiv:hep-th/0505055].
[17] K. Intriligator and N. Seiberg, JHEP 0602, 031 (2006) [arXiv:hep-th/0512347].
[18] A. Brini and D. Forcella, arXiv:hep-th/0603245.
[19] S. Franco and A. M. Uranga, JHEP 0606, 031 (2006) [arXiv:hep-th/0604136].
[20] K. Intriligator, N. Seiberg and D. Shih, JHEP 0604, 021 (2006) [arXiv:hep-th/0602239].
[21] B. Florea, S. Kachru, J. McGreevy and N. Saulina, arXiv:hep-th/0610003.
[22] H. Ooguri and Y. Ookouchi, Phys. Lett. B 641, 323 (2006) [arXiv:hep-th/0607183].
[23] R. Argurio, M. Bertolini, S. Franco and S. Kachru, JHEP 0701, 083 (2007) [arXiv:hep-th/0610212].
[24] S. Franco, I. Garcia-Etxebarria and A. M. Uranga, JHEP 0701, 085 (2007) [arXiv:hep-th/0607218].
[25] I. Bena, E. Gorbatov, S. Hellerman, N. Seiberg and D. Shih, JHEP 0611, 088 (2006) [arXiv:hep-th/0608157].
[26] R. Argurio, M. Bertolini, S. Franco and S. Kachru, arXiv:hep-th/0703236.
[27] J. D. Lykken, E. Poppitz and S. P. Trivedi, Nucl. Phys. B 543, 105 (1999) [arXiv:hep-th/9806080].
[28] M. Wijnholt, arXiv:hep-th/0703047.
[29] Y. E. Antebi and T. Volansky, arXiv:hep-th/0703112.
[30] I. Garcia-Etxebarria, F. Saad and A. M. Uranga, JHEP 0608, 069 (2006) [arXiv:hep-th/0605166].
[31] I. Garcia-Etxebarria, F. Saad and A. M. Uranga, JHEP 0606, 055 (2006) [arXiv:hep-th/0603108].
[32] D. E. Diaconescu, B. Florea, S. Kachru and P. Svrcek, JHEP 0602, 020 (2006) [arXiv:hep-th/0512170].
[33] K. Intriligator and N. Seiberg, arXiv:hep-ph/0702069.
[34] S. R. Coleman and E. Weinberg, Phys. Rev. D 7, 1888 (1973).
[35] S. Weinberg, Cambridge, UK: Univ. Pr. (1996) 489 p.
[36] J. P. Gauntlett, D. Martelli, J. Sparks and D. Waldram, Class. Quant. Grav. 21, 4335 (2004) [arXiv:hep-th/0402153].
[37] J. P. Gauntlett, D. Martelli, J. Sparks and D. Waldram, Adv. Theor. Math. Phys. 8, 711 (2004) [arXiv:hep-th/0403002].
[38] J. P. Gauntlett, D. Martelli, J. F. Sparks and D. Waldram, Adv. Theor. Math. Phys. 8, 987 (2006) [arXiv:hep-th/0403038].
[39] D. Martelli and J. Sparks, Commun. Math. Phys. 262, 51 (2006) [arXiv:hep-th/0411238].
[40] A. Hanany and K. D. Kennaway, arXiv:hep-th/0503149.
[41] S. Franco, A. Hanany, K. D. Kennaway, D. Vegh and B. Wecht, arXiv:hep-th/0504110.
[42] A. Hanany and D. Vegh, arXiv:hep-th/0511063.
[43] B. Feng, Y. H. He, K. D. Kennaway and C. Vafa, arXiv:hep-th/0511287.
[44] S. Franco and D. Vegh, arXiv:hep-th/0601063.
[45] O. Aharony and A. Hanany, Nucl. Phys. B 504, 239 (1997) [arXiv:hep-th/9704170].
[46] O. Aharony, A. Hanany and B. Kol, JHEP 9801, 002 (1998) [arXiv:hep-th/9710116].
[47] N. C. Leung and C. Vafa, Adv. Theor. Math. Phys. 2, 91 (1998) [arXiv:hep-th/9711013].
|
[
"Insight into the OH polarimetric structure of OH 26.5+0.6",
"Insight into the OH polarimetric structure of OH 26.5+0.6"
]
| [
"S Etoka 1⋆ \nJodrell Bank Centre for Astrophysics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUK\n",
"P Diamond \nJodrell Bank Centre for Astrophysics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUK\n"
]
| [
"Jodrell Bank Centre for Astrophysics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUK",
"Jodrell Bank Centre for Astrophysics\nSchool of Physics and Astronomy\nThe University of Manchester\nM13 9PLManchesterUK"
]
| [
"Mon. Not. R. Astron. Soc"
]
| We present the first view of the magnetic field structure in the OH shell of the extreme OH/IR star OH 26.5+0.6. MERLIN interferometric observations of this object were obtained in December 1993 in full polarisation, at 1612, 1665 and 1667 MHz. The maser spots show a spheroidal distribution both at 1612 and 1667 MHz, while at 1665 MHz emission from the blue-shifted maser peak is concentrated on the stellar position, and the red-shifted peak emission exhibits a filamentary structure oriented on a SE-NW axis. The linear polarisation in both main lines is rather faint, ranging from 9 to 20% at 1665 MHz and from 0 to 30% at 1667 MHz. At 1612 MHz most maser spots exhibit a similar range of linear polarisation although those in the outermost parts of the envelope reach values as high as 66%. This is particularly apparent in the southern part of the shell. The detailed distribution of the polarisation vectors could only be obtained at 1612 MHz. The polarisation vectors show a highly structured distribution indicative of a poloidal magnetic field inclined by 40-60 • to the line of sight. The velocity distribution of the maser spots with respect to the radial distance is well explained by an isotropic outflow at constant velocity in the case of a prolate shaped spheroid envelope, also tilted about 45-65 • to the line of sight. | 10.1111/j.1365-2966.2010.16840.x | [
"https://arxiv.org/pdf/1004.2659v1.pdf"
]
| 118,561,036 | 1004.2659 | 1ccc79b6c49db60efbfb729f36c61c283b198283 |
Insight into the OH polarimetric structure of OH 26.5+0.6
15 Apr 2010 16 April 2010 16 April 2010
S Etoka 1⋆
Jodrell Bank Centre for Astrophysics
School of Physics and Astronomy
The University of Manchester
M13 9PLManchesterUK
P Diamond
Jodrell Bank Centre for Astrophysics
School of Physics and Astronomy
The University of Manchester
M13 9PLManchesterUK
Insight into the OH polarimetric structure of OH 26.5+0.6
Mon. Not. R. Astron. Soc
000, 000–000. Printed 16 April 2010 (MN LaTeX style file v2.2). Key words: polarisation – magnetic fields – stars: AGB and post-AGB – masers – circumstellar shell – stars: individual: OH 26.5+0.6
We present the first view of the magnetic field structure in the OH shell of the extreme OH/IR star OH 26.5+0.6. MERLIN interferometric observations of this object were obtained in December 1993 in full polarisation, at 1612, 1665 and 1667 MHz. The maser spots show a spheroidal distribution both at 1612 and 1667 MHz, while at 1665 MHz emission from the blue-shifted maser peak is concentrated on the stellar position, and the red-shifted peak emission exhibits a filamentary structure oriented on a SE-NW axis. The linear polarisation in both main lines is rather faint, ranging from 9 to 20% at 1665 MHz and from 0 to 30% at 1667 MHz. At 1612 MHz most maser spots exhibit a similar range of linear polarisation although those in the outermost parts of the envelope reach values as high as 66%. This is particularly apparent in the southern part of the shell. The detailed distribution of the polarisation vectors could only be obtained at 1612 MHz. The polarisation vectors show a highly structured distribution indicative of a poloidal magnetic field inclined by 40-60 • to the line of sight. The velocity distribution of the maser spots with respect to the radial distance is well explained by an isotropic outflow at constant velocity in the case of a prolate shaped spheroid envelope, also tilted about 45-65 • to the line of sight.
INTRODUCTION
After leaving the main sequence, low and intermediate mass stars experience a crucial phase in their evolution toward the white dwarf stage: the Asymptotic Giant Branch (AGB) phase. It is at the very end of this phase that the star will shed most of its mass through extensive mass loss (up to a few 10 −4 M⊙yr −1 ). The exact evolutionary sequence along the AGB to this final stage has not yet been resolved, but OH/IR stars are thought to trace the period before the proto-planetary nebula stage. At that point, the central star is completely obscured in the optical by a thick dust shell built up by mass loss, but the envelope structure can be observed through strong emission in the ground state OH maser lines and at infrared wavelengths.
While AGB stars are fairly spherical objects, asymmetries such as elliptical shapes or bipolar outflows are commonly observed at the planetary nebula stage (Corradi & Schwarz 1995).
Recently, a series of papers investigated the polarimetric structure in the intermediate and outermost parts of the circumstellar shells of evolved stars (Bains et al. 2003, Etoka & Diamond 2004, Vlemmings et al. 2005, Vlemmings & Diamond 2006). Although the origin and evolution of the magnetic field is not well understood and is currently a matter of debate (cf. Nordhaus et al. 2007 and references therein), this series of papers has shown the importance of the magnetic field in shaping the circumstellar material.
OH 26.5+0.6 (AFGL 2205; IRAS 18348−0526) is an extreme OH/IR star at a distance of 1.37±0.30 kpc (van Langevelde et al. 1990). Its current mass-loss rate has been estimated to be on the order of 5 × 10⁻⁴ M⊙ yr⁻¹ (Justtanont et al. 1996). It has been classified as a Very-Long Period Variable OH/IR star with a period of 1570 days (le Bertre 1993). Prior to that work, OH 26.5+0.6 had been imaged several times with the VLA at 1612 MHz with increasing sensitivity (Baud 1981; Bowers et al. 1983; Herman et al. 1985 and Bowers & Johnston 1990), where a complete ring-like structure is seen at virtually all velocities. It has also been imaged with MERLIN (Diamond et al. 1985), where the clumpiness of the shell was clearly revealed.
The work presented here is part II of a series of papers intending to unravel the magnetic structure around extreme OH/IR stars through observations in the ground state OH maser lines at 18 cm. The first paper of the series, Etoka & Diamond (2004, hereafter paper I), presents the magnetic field structure of the red supergiant NML Cyg at 1612 and 1667 MHz. This first work has shown that a structured polarisation distribution exists for both lines, linked with the geometry of the shell itself. This can be explained if the principal driver for the shaping of the shell is the magnetic field.
The details of the observations and data reduction process are given in Section 2. An analysis of the data is presented in Section 3. In Section 4 discussion and interpretation of the results is given, while conclusions are drawn in Section 5.
OBSERVATIONS & DATA REDUCTION
The observations were performed on the 12th December 1993 at 1612, 1665 and 1667 MHz using the 8 telescopes of MERLIN available at that time (namely Defford, Cambridge, Knockin, Wardle, Darnhall, MK2, Lovell & Tabley), giving a maximum baseline of 217 km and a resolution of 0.17 arcsec. The observations lasted 12 hours, of which three hours were spent on calibrator sources. Data were taken in full polarisation mode in order to retrieve the four Stokes parameters. A bandwidth of 0.5 MHz was recorded and divided into 512 channels at correlation, leading to a channel separation of 1 kHz, giving a velocity resolution of 0.18 km s −1 . The observing programme switched at intervals of a few minutes between the three maser lines. The continuum source 3C84 was used to derive corrections for instrumental gain variations across the bandpass. 3C286 was also observed in order to retrieve the absolute polarisation position angles and to provide the flux density reference. The data reduction followed the procedure explained in paper I, section 2.2. All the velocities given in this article are relative to the local standard of rest (LSR).
ANALYSIS
3.1 MERLIN spectra

Andersson et al. (1974) originally discovered the intense OH maser signal at 18 cm emitted by OH 26.5+0.6. This Type II OH/IR star has a maximum intensity observed in the 1612 MHz satellite line which is about 50 times greater than that in the 1665/1667 MHz mainlines.
The 1612 MHz spectrum of OH 26.5+0.6 in Stokes I, constructed from the final image, is shown in Fig. 1. The spectral profile and the peak intensity ratio between the red- and the blue-shifted peaks of I red /I blue = 2 has not changed since the detection of the source in 1973 by Andersson et al. (1974). The intensity and the profile retrieved from the final map show that we recovered most of the signal. A faint inter-peak emission can be observed in the spectrum presented by Andersson et al. in the velocity range [17-22] km s −1 and [33-35] km s −1 , not picked up by MERLIN. But the general agreement in the profile and peak flux intensity between the two sets of data indicates that the fraction of emission potentially lost in extended structures is minimal. The spectra in Stokes I, constructed from the final image at 1667 and 1665 MHz, are shown in Figs. 2 and 3 respectively. The profile and the peak intensity ratio between the red- and the blue-shifted peaks I red /I blue ≃ 0.5 has not changed for the two mainlines since they were first detected. Nevertheless, we observed a stronger intensity corresponding to an increase of 40% at 1665 MHz and 30% at 1667 MHz from that recorded by Andersson et al. (1974) twenty years earlier, likely due to variability of the presumed unsaturated maser emission. This assumption is strengthened by single-dish observations presented by Etoka & Le Squeren (2004) taken with the Nançay radio telescope only three months after these MERLIN observations. With a periodicity of nearly 1600 days this corresponds to a phase difference of just 6%. The spectrum profile observed at 1667 MHz by MERLIN is entirely consistent with the single-dish observation. The spectrum observed at 1665 MHz with the Nançay radio telescope suggests faint inter-peak emission which is not picked up by MERLIN and could be the signature of faint extended emission. But the peak flux in both mainlines is higher in the MERLIN spectra than in the single-dish observations (by about 10% at 1667 MHz and 35% at 1665 MHz). Such behaviour would be expected if both sets of observations were taken after the OH maximum, the steeper decrease of the 1665 MHz emission implying less saturated emission than that at 1667 MHz.
Maser emission extent and spot distributions
CLEANed maps of all the channels were created using the AIPS task IMAGR with a restoring beam of 0.342 × 0.283 arcsec² at 1612 MHz and 0.350 × 0.140 arcsec² at 1665 and 1667 MHz. The typical rms in Stokes I images, calculated over areas free of emission, was about 8 mJy beam−1, increasing by up to 10 times that value for the channels with the strongest intensity.
The AIPS task SAD was used to identify maser components in the individual channel maps, as explained in paper I, section 3.2.1. At 1665 MHz, the simplicity of the maps was such that a 3σ threshold was taken to retrieve the maser components. At 1612 and 1667 MHz, the maps being complex, a more stringent selection was applied for retrieving the components. A component has been accepted only if its flux density was greater than 4×rms noise of a given channel (or greater than 10×rms noise in very complex regions). Similarly to paper I, the components were then grouped into maser spots if they existed in more than three consecutive channels and with positional offsets of less than 100 mas. With the given selection criteria, 10 maser spots were identified at 1665 MHz, 81 at 1667 MHz and 277 at 1612 MHz. Tables 1 to 3 present the flux densities in Stokes parameters and polarisation properties of the maser spots fitted at 1612, 1667 and 1665 MHz respectively. The meaning of the 13 columns of these tables is as follows: column 1 gives the maser spot number. The maser spots have been numbered in a decreasing velocity order. Column 2 gives the peak LSR velocity of the maser spot. Columns 3 to 6 present the corresponding I, Q, U and V flux densities. Column 7 presents the associated linear polarisation flux density. Columns 8 and 9
give the RA and DEC offsets from the pointing position. Columns 10 to 12 give the percentage of circular, linear and total polarisation, and finally column 13 gives the angle of the polarisation vector associated with the maser spot when relevant (i.e., for P≥ 3σ). The strong difference between the number of maser spots found at 1612 and 1667 MHz (ratio of 4:1) is partly due to the 4σ cutoff. This eliminated more maser spots at 1667 MHz than it did at 1612 MHz because most of the components at 1667 MHz are faint and did not meet the criterion for 3 consecutive channels. This has an impact upon the inferred total extent of the shell at 1667 MHz, where the maser spot distribution modelling (cf. section 3.4) points to a substantially smaller radius than that suggested by the velocity integrated image in Stokes I.
Maser emission extent
The velocity-integrated images for the 1612, 1667 and 1665 MHz maser emission are presented in Figs. 4, 5 and 6 respectively. In Fig. 4, the strong blue- and red-peak contributions (corresponding to the velocity range [11:14] km s −1 and [37:42] km s −1 ) have been cut off for dynamic range purposes. The 1612 and 1667 MHz extents are about 5 arcsec, which corresponds to a linear extent of ∼7000 AU at 1.37 kpc (a numerical check of this conversion is given after the list below). At 1612 MHz, the bulk of the emission describes a ring centred about +0.5 arcsec in δ from the optical stellar position. The 1665 MHz central core emission lies within an area less than 1.5 arcsec across. Including the very faint maser spots observed East and West, the total extent of the 1665-MHz emission is still less than 4 arcsec. Figures 7 and 8 show the maps of the distribution of the emission integrated over a velocity interval of 1.27 and 1.23 km s −1 at 1612 and 1667 MHz respectively. From these figures a certain number of physical properties concerning the geometry and the dynamics of the shell can be inferred:
• the ellipsoidal nature of the shell is revealed with an axis ratio of ∼0.80 and a projected major axis position angle of 20 • ± 5 • . It is clearly apparent at 1612 MHz in the velocity range [23:18] km s −1 .
• at 1612 MHz, the maser emission distribution along the velocity channels is consistent with a radially expanding shell;
• at 1667 MHz, there is a hint of a deviation from the uniform radial expansion in the red-shifted emission since the 'central spot' expected at V=+41.7 km s −1 is not observed. Instead, a maser spot approximately 1 arcsec off-centre is observed;
• a clear asymmetry is observed, as an incomplete ring structure can be seen both at 1612 and 1667 MHz. At 1612 MHz, there is no detectable maser emission radiating from the north of the shell in the velocity range [33:20] km s −1 , while at 1667 MHz virtually no emission is observed in both the NW and SE quadrants in the same velocity range.
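As a quick numerical check of the linear scale quoted above (a minimal Python sketch; it only assumes the small-angle relation that a distance in pc multiplied by an angle in arcsec gives a size in AU, with the 1.37 kpc distance of van Langevelde et al. 1990):

# Linear size subtended by the OH shell: size [AU] = distance [pc] * angle [arcsec]
distance_pc = 1370.0     # 1.37 kpc (van Langevelde et al. 1990)
extent_arcsec = 5.0      # total 1612/1667 MHz extent measured on the maps
print(distance_pc * extent_arcsec)   # ~6850 AU, i.e. the ~7000 AU quoted in the text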
Location of the star
At the time of the observations, MERLIN data were not routinely phase-referenced. Therefore, an assumption regarding the location of the central star has to be made. As mentioned in paper I, amplification of the stellar radiation by the bluest emission of the spectrum has been observed for various types of OH/IR emitters, strongly suggesting that this feature marks the location of the star (Norris et al. 1984; Sivagnanam et al. 1990; van Langevelde et al. 2000). Therefore, the maser component belonging to the blue-shifted peak and located at the centre of the maser distributions at 1612 and 1667 MHz, at a velocity of 11.4 km s −1 and 12.2 km s −1 respectively (cf. Figs. 7 & 8), is quite likely to be over the stellar position. Similarly, the blue-shifted peak at V=12 km s −1 at 1665 MHz is taken to be centred at the stellar position.
This assumption has been followed for the analysis of the data presented in this article. But, 1612 MHz shows by far the most complex spatial velocity distribution, in which the outermost part of the distribution is dominated by blue-shifted maser spots.
Maser spot distributions
Figures 9 to 11 present the maser spot distributions observed at 1612, 1667 and 1665 MHz respectively. The maser spots show a spheroidal distribution both at 1612 and 1667 MHz. At 1667 MHz, there is a rough gradient in the velocity distribution of the maser spots, such that the red-shifted spots are found in the N-NW part of the shell while the blue-shifted masers are found in the centre and S-SE part of the shell, contrasting with the 1612 MHz structure.
At 1665 MHz, emission from the blue peak is concentrated on the stellar position and the red-peak emission exhibits a filamentary structure oriented on a SE-NW axis. This also stresses the dynamic range problem in channels where strong emission is present, which occurs particularly around 41 km s −1 but also, to a lesser extent, around 12 km s −1 .
Polarimetry
A few possible Zeeman patterns were found (cf. Fig. 12), leading to a magnetic field at the location of the OH shell of B = −3.7 ± 0.3 mG. At a similar distance, the magnetic field strength in NML Cyg has been estimated to be also about 3 mG (paper I).
The linear polarisation in both mainlines is rather faint, ranging from 9 to 20% at 1665 MHz and from 0 to 30% at 1667 MHz. At 1612 MHz, most maser spots exhibit a similar range but the outermost maser spots tend to exhibit a greater degree of linear polarisation, reaching values as high as 66%. The strongest linearly polarised components belong to the southern part of the shell.
The information concerning the magnetic field structure at the location of the OH maser emission is displayed in Figs. 9 to 11 via the polarisation vectors, which reveal the plane of the electric field of the polarised radiation. At 1667 MHz and 1665 MHz, only two maser spots had polarised flux P ≥ 3σ. At 1612 MHz, out of the 277 maser spots detected, 106 had P > 3σ. Therefore the detailed polarisation vector distribution could only be obtained for the latter transition. The vector distribution reveals a highly ordered polarisation field. Overall, the polarisation vectors show a mixture of radial and tangential distributions: the polarisation vectors in the N-NE part of the shell are radial while those in the S-SW are generally tangential. The position angle (PA) of the projected axis along which the tangential/radial separation occurs is about PA = 100° ± 10°. This is illustrated by Fig. 13, which presents a view of how the polarisation angles are related to the radial direction (PAc) of their associated maser spot at 1612 MHz. More precisely, this figure shows the deviation of the vectors of polarisation from the tangent. The general trend observed clearly shows the change in direction of the polarisation vectors with orientation. A similar dichotomy in the orientation of the polarisation vector was observed by Boboltz (1997) for the Mira star R Aqr in SiO. Goldreich et al. (1973) showed that in the limiting case of strong saturated maser emission and overlapping of the Zeeman components, a flip of 90° in the plane of polarisation occurs when the angle between the magnetic field direction and the line of sight is close to the critical angle of ∼55°. Following Elitzur (1996) we can estimate the ratio χB for the significance of the Zeeman splitting:
\chi_B = \frac{\Delta\nu_B}{\Delta\nu_D} = \frac{14\,g\,\lambda\,B}{\Delta v_D} \qquad (1)
where the Lande factor g = 0.935 for the 2Π3/2 J = 3/2 (ground state) transitions of OH. For a magnetic field strength of 3.7 mG, χB = 0.9/∆vD, which is < 1 if ∆vD > 0.9 km s −1 ; this has to be compared with the width of the line, found to be about 2 km s −1 (cf. Fig. 12). A magnetic field of the order of a few mG in the case of maser emission in the ground state of OH implies a Zeeman splitting exceeding the stimulated emission rate (i.e., gΩ > R). In addition, saturation of the 1612 MHz line implies a stimulated emission rate exceeding the decay constant (i.e., R > Γ). We therefore interpret the flip of the plane of linear polarisation observed in maser spots at 1612 MHz as due to the magnetic field being inclined by an angle close to θcrit ∼ 55° to the line of sight. Such a configuration accounts for the change of orientation of the polarisation vectors between tangential and radial as observed here. Such a flip in the polarisation angle has indeed already been observed in SiO and H2O respectively (Kemball & Diamond 1997, Kemball et al. 2009 and Vlemmings & Diamond 2006). It has never been observed in OH so far, though, since usually the Zeeman pattern is fully separated.
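As a sketch of the arithmetic behind equation (1) (assuming, following paper I, that λ is expressed in cm, B in Gauss and ∆vD in km s−1; these unit conventions are our reading of the formula, not stated explicitly here):

# Zeeman splitting significance ratio, chi_B = 14 g lambda B / Dv_D (eq. 1)
g = 0.935          # Lande factor for the OH ground-state transitions
lam_cm = 18.0      # wavelength of the OH lines, in cm
B_gauss = 3.7e-3   # 3.7 mG, from the Zeeman pair of Fig. 12
dv_d = 2.0         # observed line width, km/s (cf. Fig. 12)
chi_B = 14.0 * g * lam_cm * B_gauss / dv_d
print(chi_B)       # ~0.44, i.e. chi_B = 0.9/Dv_D, which is < 1 once Dv_D > 0.9 km/s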
Velocity distribution
The V =f(θ) distributions at 1612, 1667 and 1665 MHz are shown in Figs. 14(a,b,c), in which the maser component corresponding to the blue-shifted peak for all 3 lines has been taken to be at the stellar position (cf. Section 3.2.2) and the stellar velocity is taken to be Vstar = +27 km s −1 .
Comparison with the standard model
The simple model for a uniformly expanding spherical thin shell (Reid et al. 1977) is given by:
\left(\frac{\theta}{\theta_S}\right)^{2} + \left(\frac{V - V_{\rm star}}{V_{\rm exp}}\right)^{2} = 1 \qquad (2)
where θS is the shell radius, Vstar is the velocity of the star and Vexp the expansion velocity. Generally, this model provides a good explanation for the velocity distribution observed in OH/IR stars (Habing 1996). The two best fits for the lower and upper boundaries of the radial velocity distribution are displayed on the three figures. These are as follows (a short numerical sketch of this model is given after the list):
• at 1612 MHz: θS = 1.4 arcsec and Vexp = 12 km s −1 for the lower boundary (i.e., model 1 in Fig. 14a) and θS = 3.5 arcsec and Vexp = 15 km s −1 for the upper boundary (i.e., model 2 in Fig. 14a);
• at 1667 MHz: θS = 1.4 arcsec and Vexp = 12 km s −1 for the lower boundary (i.e., model 1 in Fig. 14b) and θS = 3.0 arcsec and Vexp = 18 km s −1 for the upper boundary (i.e., model 2 in Fig. 14b);
• at 1665 MHz: θS = 1.4 arcsec and Vexp = 12 km s −1 for the lower boundary (i.e., model 1 in Fig. 14c) and θS = 3.0 arcsec and Vexp = 16 km s −1 for the upper boundary (i.e., model 2 in Fig. 14c).

Figure 13. Difference in angle between the polarisation vector orientation and the radial direction (given by PAc − PAp) versus the position angle of the maser components (PAc) at 1612 MHz. The size of the symbol is proportional to the corresponding maser spot intensity. The dashed line represents the best fit for the general trend observed. The dotted lines delineate a deviation of 20% from the best fit.
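For reference, a minimal numerical sketch of the Reid et al. (1977) relation of equation (2), evaluated for the lower-boundary fit at 1612 MHz quoted above (the function name and the use of NumPy are ours):

import numpy as np

def v_los(theta, theta_s, v_star, v_exp):
    # Line-of-sight velocities predicted by the uniformly expanding
    # thin-shell model (eq. 2) at projected radius theta (arcsec).
    dv = v_exp * np.sqrt(1.0 - (theta / theta_s) ** 2)
    return v_star - dv, v_star + dv   # blue- and red-shifted branches

# Lower boundary at 1612 MHz: theta_S = 1.4 arcsec, V_exp = 12 km/s, V_star = +27 km/s
print(v_los(0.0, 1.4, 27.0, 12.0))   # (15.0, 39.0) km/s towards the stellar position
print(v_los(1.4, 1.4, 27.0, 12.0))   # (27.0, 27.0) km/s at the projected shell radius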
The simple model provides a reasonable explanation for the expansion of the inner shell. Also, it allows us to infer that the OH maser region has a certain thickness, which in the context of the standard model would be about 1.5-2 arcsec, and that acceleration is still taking place in the outer part of the circumstellar envelope. Both the 1612 and 1667 MHz distributions lead to similar results. But clearly, this model does not describe well the expansion of the outermost maser components. Indeed, the maximum shell radius is not observed at the stellar velocity, Vstar = +27 km s −1 , but at two values equidistant from the stellar velocity, on either side of it. Sensitivity is not the cause since maser spots were indeed found around the stellar velocity range both at 1612 and 1667 MHz (cf. Table 1 and Table 2).

Comparison with the models of Bowers (1991)

Bowers (1991) produced a series of kinematic models as a tool to analyse complex aspherical outflows observed in the circumstellar shell of evolved stars. In these models, the maser emission is uniformly distributed throughout ellipsoidal shells with various orientations to the line of sight. The effect of rotation and radial acceleration that might be present in the velocity field is also taken into consideration. These models produce a large variety of possible θ(V) and I(V) curves that may be applicable to stellar outflow. As an application of these models, the author successfully describes the shell structure of 3 different types of aspherical outflows commonly observed at the late stages of stellar evolution.
Overlaid on the 1612 MHz velocity distribution reproduced in Fig. 15 are two schematic models consistent with the distribution observed. The dotted line presents a schematic illustration of Bowers' model for an isotropic outflow at a constant velocity in the prolate case, where the inclination of the spheroid from the line of sight is i = 45°, while the continuous line is for the spheroid tilted from the line of sight by i = 65°. In the i = 65° case, a standard double-peak spectral profile is expected, while for i = 45°, for both the red and blue peaks, a double-component structure is expected, with the component closer to the stellar velocity being fainter than the external one. The intensity of the internal components increases when i decreases. There is indication of such an internal component structure in the 1612 MHz spectrum shown in Fig. 1. Consequently, both the V=f(θ) distribution and the spectral profile I=f(V) are well explained by an isotropic outflow at constant velocity in the ellipsoidal prolate case, with an inclination to the line of sight between 45° and 65°. Note that none of the other cases in the series of kinematic models presented by Bowers are able to explain simultaneously the V=f(θ) distribution and the spectral profile I=f(V) we observe.
DISCUSSION
4.1 Actual stage of evolution of OH 26.5+0.6
The work of Sevenster (2002) and Ortiz et al. (2005), based on the MSX catalogue at the mid-infrared (MIR) wavelengths 8.3, 12.1, 14.7 and 21.3 µm, showed that [8.3-14.7] vs [14.7-21.3] is better suited than the IRAS colour-colour diagram to separate AGB from post-AGB stars and that [15-21] vs [8-12] is the most efficient index to separate the four main classes of OH/IR objects: PPNs, SFRs, AGB and post-AGB stars. In the light of these results, we calculated the [8.3-12.1], [8.3-14.7] and [14.7-21.3] colour indices for OH 26.5+0.6 from the MSX measurement for this source to be 0.5669, 1.0832 and 0.0624 respectively. We then used these indices to locate OH 26.5+0.6 in the colour-colour diagrams of Ortiz et al. (2005) and Sevenster (2002). Those values put OH 26.5+0.6 in the bluer OH/IR group of Ortiz et al. (2005), according to their classification based on the [8.3-14.7] and [14.7-21.3] indices, while it places it in the bulk of the AGB stars in Sevenster's (2002) diagram, based on the [14.7-21.3] and [8.23-12.13] colour-colour indices. This indicates that OH 26.5+0.6 is definitely still on the AGB at the present time. Nevertheless, its IRAS [60-25] and [12-25] colour indices combined with its 18 cm OH maser properties attest to a thick circumstellar envelope characteristic of a rather evolved AGB star (Etoka & Le Squeren 2004). Its infrared and OH characteristics resemble those observed for red OH/IR supergiants. It is one of the brightest OH maser emitters in our Galaxy, and in order to account for its infrared SED, Justtanont et al. (1996) evaluated its Main Sequence mass to be in the order of 8 M⊙. The latter authors also estimated the current mass-loss rate of OH 26.5+0.6 to be on the order of 5 × 10⁻⁴ M⊙/yr, triggered by the onset of a superwind phase just 150 years ago.
All this indicates that this object is at the tip of the AGB. This makes OH 26.5+0.6 a particular and important object in terms of stellar evolution: a junction object between an intermediate- and a high-mass evolved object on the verge of leaving the AGB towards the planetary nebula phase.
Distance considerations
The distance of OH 26.5+0.6 was calculated by van Langevelde et al. (1990) using phase lags, which rely on the assumption of maser saturation, spherical symmetry and the thin shell model. The assumption of saturation for the OH 1612 MHz emission of Type II OH/IR stars has been demonstrated (Harvey et al. 1974, Etoka & Le Squeren 2000). Nonetheless, it is clear from our results that the OH shell of OH 26.5+0.6 deviates from strictly spherical symmetry and from the thin shell model condition. This deviation consequently has an impact on the actual distance inferred by van Langevelde et al. (1990). Indeed, as stated by those authors, the phase lag measurement relies mainly on the reddest and bluest part of the spectral profile, while the angular diameter determination from interferometric measurement relies on the velocity ranges closest to the stellar velocity. While the blue and red peaks, under the hypothesis of radial expansion, trace the front and rear caps of the shell along the line of sight, the velocity ranges responsible for the emission in the plane of the sky, that is in a perpendicular direction, are more internal. And the difference in depth is likely to be more important in the case of a thick shell. A direct consequence of the divergence from the thin shell model, but still assuming spherical geometry, is the following:
\tau(D_{\rm phase\;lag}) \;\geq\; \tau(D_{\rm int}) \qquad (3)
where τ(Dphase lag) is the travel-time difference for emission coming from the front and rear caps of the shell separated by Dphase lag, and τ(Dint) is the travel-time difference for emission coming from the total extent of the shell, Dint, as obtained from interferometric mapping (i.e., the time that would have been inferred if the observer were in the plane of the sky, seeing the same object at an angle of 90°). But because we are diverging from sphericity, with a prolate spheroid, this also implies that Dint > Dphase lag, which would have a compensating effect in this particular case. Herman et al. (1985) considered the impact of deviations from sphericity and from the thin shell model on distance determination with phase lags. They conclude that an asymmetry of ≤ 20% and a thickness of ≤ 20% would set a limit of ∼10% on the determination of the distance of OH/IR objects subject to these deviations. In our case, the more marked thickness of the shell would quite probably reduce the accuracy down to 20%. This shows the importance of better constraints on the actual shell properties (i.e., geometry and thickness) in order to get a more accurate distance determination from phase lags. Nonetheless, an uncertainty of 20%, already quoted by van Langevelde et al. (1990), would not have a major impact on the analysis presented here.
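To make the geometry behind equation (3) concrete, here is a short sketch converting a phase lag into the front-to-rear linear separation it measures (the 40-day lag used below is purely illustrative and is not a measured value for OH 26.5+0.6):

# Light travel-time difference between the front and rear caps: D_phase_lag = c * tau
c_km_s = 2.998e5         # speed of light, km/s
au_km = 1.496e8          # 1 AU in km
tau_days = 40.0          # hypothetical phase lag, for illustration only
d_phase_lag_au = c_km_s * tau_days * 86400.0 / au_km
print(d_phase_lag_au)    # ~6900 AU for a 40-day lag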
Faraday rotation
Faraday rotation could potentially rotate the polarisation vectors by a substantial angle if the radiation is propagating into an ionized medium. The change in the angle is given by:
\Delta\theta_{\rm Faraday} = RM\,\lambda^{2} \qquad (4)
where RM is the rotation measure (cf. Noutsos et al. 2008, their Equation 4), given by:

RM = 0.812 \int_{0}^{d} \left[\frac{n_{e}(s)}{\rm cm^{-3}}\right]\left[\frac{B(s)}{\mu{\rm G}}\right]\left(\frac{{\rm d}s}{\rm pc}\right) \qquad (5)

The two main sources of Faraday rotation which could have an impact on the overall polarisation vectors are foreground rotation due to propagation in the interstellar medium (ISM) and, more importantly, rotation within the shell itself due to ionized material, which would affect the linear polarisation of the red-shifted maser spots. We investigated these possible causes of Faraday rotation:
• Noutsos (private communication) calculated the RM at the location of OH 26.5+0.6 from adjacent pulsars to be RM ∼ 7 rad m −2 . Such a value would produce a Faraday rotation of about ∆θFaraday = 13° (see the numerical check below). Nonetheless, it has to be acknowledged that the uncertainty on this value is quite high due to the nature of the ISM and depends strongly on the density model adopted. In particular, adopting the electron density model (NE2001) of Cordes & Lazio (2002) would lead to a value of RM ∼ 40 rad m −2 and potentially a rotation of ∆θFaraday of ∼74°. But it also has to be noted that any general Faraday rotation occurring between the shell and the observer would not affect the general distribution observed, as it would rotate the overall polarisation vector angles by the same amount.
• Guilain & Mauron (1996) showed that for an Oxygen-rich AGB star a typical fractional electron abundance is xe ∼ 2×10 −5 . Such a value would produce a maximum differential rotation of the polarisation vectors of ∼10°. This would not change our fundamental results.
• Internal Faraday rotation from the denser central region itself is expected to be negligible since the size of the thin ionized hydrogen layer surrounding the central star has a typical thickness of less than 10 12 cm. The overall region size is typically less than 2Rstar (i.e., <3 AU) to be compared with the typical maser spot size of at least 10-20 AU. This means that in the most pessimistic scenario it would affect at the very most 30% of the emission of those red-shifted maser spots on or very near the line of sight.
Consequently, we exclude Faraday rotation as a possible cause for the dichotomy observed in the polarisation angle distribution.
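The rotation angles quoted above follow directly from equation (4); a minimal check at λ = 18 cm (the conversion from radians to degrees is the only step added here):

import math
lam_m = 0.18                            # 18 cm in metres
for rm in (7.0, 40.0):                  # rad m^-2, the two RM values discussed above
    dtheta_deg = math.degrees(rm * lam_m ** 2)
    print(rm, round(dtheta_deg, 1))     # ~13 deg and ~74 deg respectively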
Shell structure and extent
At 1667 MHz, the emission in the south and the north-east of the shell extends beyond that at 1612 MHz by nearly 1 arcsec. This is an interesting result since mainline emission (i.e., 1665 and 1667 MHz) is expected to be internal to that of the 1612 MHz satellite line due to the difference in pumping schemes. The 1612 MHz transition is largely pumped by absorption at 35 and 53 µm radiation, whilst the mainlines are pumped by a radiative absorption from the ground state to 2Π3/2 J = 5/2, followed by a collisional de-excitation, which requires a higher density than the 1612 MHz mechanism (Gray 2007). Competitive gain (Field 1985) can affect the balance of the intensities in the mainlines. Observing 1667 MHz emission beyond that of the 1612 MHz emission requires a deviation from the standard model (Collison & Nedoluha 1995). Fong et al. (2002) imaged the circumstellar envelope of OH 26.5+0.6 in the 12CO J = 1 − 0 line. Their observations show a deconvolved source size of 8.8 × 5.5 arcsec². In order to account for the observed flux, they needed to include a second, more tenuous, AGB wind and conclude that up to 80% of the CO flux comes from the unresolved superwind. Chesneau et al. (2005) observed OH 26.5+0.6 at 8.7 µm with the VLTI. Their deconvolved image exhibits asymmetry resulting in an elliptical shape with an axis ratio of 0.75 and a mean position angle (PA) of 95° ± 6°. The authors suggest that the flattened distribution observed in the mid-infrared could be explained by either an equatorial overdensity or a disk close to an edge-on configuration.
Both the ellipticity and the mean position angle inferred by the latter authors, relating to material close to the star, are in agreement with our new findings, albeit at the OH maser location.
All this observational evidence shows that divergence from spherical symmetry is already present and observable at different resolutions in the whole gaseous and dusty envelope of OH 26.5+0.6. This provides us with strong evidence that, in this case, the onset of asymmetry does indeed start as early as the late-AGB phase. The two-step mechanism proposed by Sahai (2002), in which a high-speed collimated outflow (in other words, an anisotropic superwind) would carve an imprint within an intrinsically spherical AGB mass-loss envelope, could be in action.
Role of the magnetic field in the shaping process?
From the magnetic field strength of 3.7 mG measured from Zeeman splitting (cf. Section 3.3), we can infer the corresponding magnetic energy density ǫB at the location of the OH maser emission:
\epsilon_{B} = \frac{B^{2}}{2\mu_{0}} = \frac{10^{-1}}{8\pi}\,B^{2}_{\rm Gauss} = 5.5\times10^{-8}\ {\rm J\,m^{-3}} \qquad (6)
And we can compare it with the thermal and kinetic energy densities ǫThermal and ǫKinetic respectively. According to the model of Goldreich & Scoville (1976), at a distance r ∼ 10^16 cm, the number density of hydrogen in the wind of an OH/IR star is typically nH = 10^5 cm −3 ; this is generally taken to be the typical distance for (mainline) OH maser emission. Nevertheless, we found that the maser extent at 1612 MHz attests to a radius of about 3500 AU, that is r ∼ 5.5 × 10^16 cm = 875 Rstar in the model of Goldreich & Scoville, for which nH drops to 10^4 cm −3 . Adopting nonetheless a conservative value of nH = 10^5 cm −3 , and T = 100 K, leads to the following for the thermal and kinetic energy densities:
\epsilon_{\rm Thermal} = \frac{3}{2}\,n_{\rm H}\,k\,T \sim 2\times10^{-10}\ {\rm J\,m^{-3}} \qquad (7)
and

\epsilon_{\rm Kinetic} = \frac{1}{2}\,\rho\,V_{\rm exp}^{2} \sim 1.8\times10^{-8}\ {\rm J\,m^{-3}}

with Vexp = 15 km s −1 .
This indicates that the magnetic energy density dominates over the thermal energy density and is at least 3 times greater than the kinetic energy density. Figure 16 presents, superimposed on top of the maser spot and polarisation vector distributions: 1) the ellipse that best describes the maser spot distribution observed; 2) the axis separating the radial and tangential vectors of polarisation, and 3) the direction of the magnetic field which would produce such a distribution of the vectors of polarisation.
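The three energy densities compared above can be verified in a few lines (SI units throughout; the conversions of B from mG to Tesla and of nH from cm−3 to m−3 are our bookkeeping, and ρ is taken as nH mH):

import math
mu0 = 4.0e-7 * math.pi       # vacuum permeability, T m A^-1
k_B = 1.38e-23               # Boltzmann constant, J K^-1
m_H = 1.67e-27               # hydrogen mass, kg

B = 3.7e-3 * 1.0e-4          # 3.7 mG in Tesla
n_H = 1.0e5 * 1.0e6          # 10^5 cm^-3 in m^-3
T = 100.0                    # K
v_exp = 15.0e3               # 15 km/s in m/s

eps_B = B ** 2 / (2.0 * mu0)             # ~5.4e-8  J m^-3 (eq. 6)
eps_th = 1.5 * n_H * k_B * T             # ~2.1e-10 J m^-3 (eq. 7)
eps_kin = 0.5 * n_H * m_H * v_exp ** 2   # ~1.9e-8  J m^-3
print(eps_B, eps_th, eps_kin, eps_B / eps_kin)   # ratio ~3, as stated in the text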
As previously mentioned, the position angle determined by Chesneau et al. (2005) for the mid-infrared emission is in agreement with the semi-major axis orientation on the plane of the sky of the 1612 MHz ellipsoid distribution as observed here.
The axis separating the tangential and radial distribution of the polarisation vectors accounting for a magnetic field direction of 40-60 • is aligned with the major axis of the geometrically ellipsoidal maser emission (cf. Fig. 16). This latter projected ellipse, and the velocity distribution of the maser spots as observed in Fig. 15 are indeed expected if the actual shell geometry is a prolate spheroid tilted about 45-65 • to the line of sight (Bowers 1991). These combined results reveal that there is a definite correlation between the magnetic field orientation and the geometrical structure of the circumstellar envelope.
CONCLUSION
The infrared and 18 cm OH maser properties of OH 26.5+0.6 attest to a thick circumstellar envelope, characteristic of a rather evolved star most probably at the tip of the AGB. The 1612 MHz emission reveals an ellipsoidal geometry, while the presumably more central 1665 MHz emission traces a filamentary structure. Both 1612 and 1667 MHz high resolution maps show a lack of maser emission from some parts of the shell, and in the southern and north-eastern part of the shell, 1667 MHz emission extends beyond that at 1612 MHz. All these deviations from the standard spherical model show that the OH/IR stage (i.e., late-AGB phase) is clearly at the stage where asymmetry starts to develop. The presence of acceleration in the shell at the OH maser location may be a secondary factor enhancing the asymmetry observed so far away from the star. The root of this asymmetry is likely to be close to the stellar surface itself, as near infrared results indicate. This latter hypothesis is reinforced by the agreement in orientation of the major axis of the elliptic distribution observed in infrared and at 1612 MHz. Finally, we found that the magnetic field strength, inferred from OH Zeeman splitting, is such that the magnetic field energy density dominates over the thermal and kinetic pressures and that there is a definite correlation between the magnetic field orientation and the main axis of geometrically ellipsoidal maser emission. This suggests that the magnetic field plays a role in the shaping process observed.

Table 1. Stokes parameter flux densities and polarisation properties of the 1612 MHz maser spots.
No, Vel (km s−1), I (Jy b−1), Q (Jy b−1), U (Jy b−1), V (Jy b−1), P (Jy b−1), ∆α (milli arcsec), ∆δ (milli arcsec), mc (%), ml (%), mt (%), χ (°)
Figure 1. 1612 MHz spectrum of OH 26.5+0.6 in Stokes I constructed from the final image.
Figure 2. Same as Fig. 1 at 1667 MHz.
Figure 3. Same as Fig. 1 at 1665 MHz.
Figure 5. Same as Fig. 4 for 1667 MHz. The contour levels shown are 1, 2, 4, 6, 8, 10 and 12 times 0.044 Jy/Beam. This velocity-integrated image takes into account all the velocity channels in which emission was detected, i.e., for the velocity range [11:42] km s −1 .
Figure 6. Same as Fig. 4 for 1665 MHz. The contour levels shown are 1, 2, 4, 6, 8 and 10 times 0.075 Jy/Beam. This velocity-integrated image takes into account all the velocity channels in which emission was detected, i.e., for the velocity range [11.5:41.5] km s −1 .
Figure 7. 1612 MHz maps of OH 26.5+0.6 in Stokes I. Each map is an integration of 7 channels (i.e., leading to a map separation of 1.27 km s −1 ). The contour levels shown are 3, 4, 5, 7, 10, 30, 60, 90, 180, 360, 720 and 1440 times 0.019 Jy/Beam. The choice of the contours has been made so that the relatively faint emission can be seen in the velocity range [30-20] km s −1 .
Figure 8. Same as Fig. 7 for the 1667 MHz emission. The contour levels shown are 3, 4, 5, 7, 10 and 30 times 0.0042 Jy/Beam.
Figure 9. 1612 MHz maser spot distribution of OH 26.5+0.6. The area of a symbol is proportional to the maser spot intensity. The colour scale indicates velocity. Also plotted are the polarisation vectors associated with each maser spot with P > 3σ. The length of a vector is proportional to the percentage of linear polarisation.
Figure 10. Same as Fig. 9 for 1667 MHz.
Figure 11. Same as Fig. 9 for 1665 MHz.
Figure 12. Stokes I and V spectrum for a Zeeman pair of a blue-shifted component at the relative position δRA = −90 mas and δDec = +180 mas. The separation between the 2 pairs provides an estimate of the magnetic field B = −3.7 ± 0.3 mG that is pointing towards the observer.
Figure 14. Velocity distribution of the maser spots versus the radial distance from the star projected on the plane of the sky at a) 1612 MHz, b) 1667 MHz and c) 1665 MHz. The inferred location of the star is RA = 0 mas, Dec = 0 mas, which is the location of the blue-shifted peak in each line as explained in Section 3.2.2.
Figure 15. The radial velocity distribution at 1612 MHz, on which are overlaid, on top of the standard models (thin black lines), a series of schematic models in magenta illustrative of an isotropic outflow at a constant velocity in the prolate case, where the inclination of the spheroid from the line of sight is i = 45° (thick dotted line) and i = 65° (thick continuous line).
Figure 16. a) Distribution of the maser spots at 1612 MHz as shown in Fig. 9, on which is overlaid a model explaining both the geometric and polarimetric structures observed. b) A 3D representation of the model, in which the line of sight is perpendicular to the plane of the page.

with MERLIN, a National Facility operated by the University of Manchester at Jodrell Bank Observatory, on behalf of STFC.
Figure 4. 1612 MHz velocity-integrated image of OH 26.5+0.6 in Stokes I. The contour levels shown are 1, 2, 4, 6, 8 and 10 times 0.40 Jy/Beam. In this velocity-integrated image the strong blue- and red-peak contributions (corresponding to the velocity range [11:14] km s −1 and [37:42] km s −1 ) have been cut off for dynamic range purposes.
Table 2. Stokes parameter flux densities and polarisation properties of the 1667 MHz maser spots.
1
41.921
4.829
0.088
-0.147
0.121
0.171
501.38
-236.10
2.5
3.5
4.3
-29.5
2
41.139
63.344
-0.236
0.551
-0.169
0.600
58.00
-16.10
-0.3
0.9
0.9
56.6
3
40.873
50.169
-0.394
0.416
-0.520
0.573
-19.88
-16.16
-1.0
1.1
1.5
66.7
4
40.676
5.481
0.138
-0.013
-0.340
0.139
869.41
-107.05
-6.2
2.5
6.7
-2.7
5
40.344
1.750
-0.004
-0.020
<-0.012
<0.020
1399.67
-510.60
<-0.7
<1.2
<1.4
/
6
40.123
2.518
0.030
0.003
0.023
0.030
-174.57
694.78
0.9
1.2
1.5
2.9
7
39.941
3.989
0.189
0.011
-0.043
0.189
-606.08
1210.50
-1.1
4.7
4.8
1.7
8
39.917
0.754
0.001
0.010
<-0.017
<0.010
572.82
1722.28
<-2.3
<1.3
<2.6
/
9
39.904
0.956
-0.010
0.006
<0.011
<0.012
-1509.47
577.15
<1.2
<1.2
<1.7
/
10
39.834
2.142
-0.211
-0.086
-0.076
0.228
323.63
-1144.03
-3.5
10.6
11.2
-78.9
11
39.776
1.035
-0.004
0.006
<-0.005
<0.007
1414.57
-970.78
<-0.5
<0.7
<0.9
/
12
39.758
1.307
-0.027
-0.032
0.107
0.042
-298.22
-1138.00
8.2
3.2
8.8
-65.1
13
39.751
0.752
0.003
0.011
-0.048
<0.011
1288.79
554.21
-6.4
<1.5
<6.6
/
14
39.749
3.777
0.020
0.084
-0.032
0.086
382.58
830.36
-0.8
2.3
2.4
38.3
15
39.724
1.822
-0.006
-0.005
-0.100
0.008
1311.45
-341.11
-5.5
0.4
5.5
-70.1
16
39.526
2.611
-0.070
-0.031
0.040
0.077
184.59
-606.88
1.5
2.9
3.3
-78.1
17
39.452
0.857
0.009
0.012
<-0.006
<0.015
-736.63
1652.95
<-0.7
<1.8
<1.9
/
18
39.436
1.113
0.015
0.003
<-0.010
<0.015
-214.94
822.43
<-0.9
<1.4
<1.7
/
19
39.361
3.226
-0.029
-0.028
0.067
0.040
-1092.81
-185.92
2.1
1.2
2.4
-68.0
20
39.290
1.055
-0.006
0.003
-0.026
<0.007
1571.21
-957.57
-2.5
<0.6
<2.6
/
21
39.174
1.158
0.003
0.013
<-0.007
<0.013
1507.58
493.04
<-0.6
<1.2
<1.3
/
22
39.143
0.713
0.014
0.025
<0.008
0.029
1514.56
866.10
<1.1
4.0
<4.1
30.4
23
39.138
1.584
-0.046
-0.042
-0.160
0.062
-228.67
-1077.14
-10.1
3.9
10.8
-68.8
24
39.099
1.090
-0.030
-0.021
-0.044
0.037
464.77
-1222.62
-4.0
3.4
5.2
-72.5
25
39.081
2.295
0.080
-0.022
-0.024
0.083
-957.08
1310.33
-1.0
3.6
3.7
-7.7
26
39.039
0.695
-0.033
-0.032
<0.015
0.046
759.84
-408.79
<2.2
6.6
<7.0 -67.9
27
38.962
1.652
-0.030
0.019
-0.218
0.036
1180.73
-142.93
-13.2
2.1
13.4
73.8
28
38.955
2.049
-0.011
-0.026
<-0.007
0.028
-61.84
-257.47
<-0.3
1.4
<1.4 -56.5
29
38.935
2.121
0.004
0.065
-0.075
0.065
696.02
718.59
-3.5
3.1
4.7
43.2
30
38.798
0.374
-0.006
-0.015
<0.004
<0.016
-2928.66
-587.08
<1.1
<4.3
<4.4
/
31
38.778
2.605
0.141
-0.026
0.035
0.143
-868.11
1331.31
1.3
5.5
5.7
-5.2
32
38.761
0.581
-0.002
0.014
-0.057
<0.014
-1674.73
1176.87
-9.8
<2.4 <10.1
/
33
38.722
2.241
-0.020
-0.017
<-0.014
0.026
-33.80
-667.74
<-0.6
1.2
<1.3 -69.8
34
38.707
1.738
-0.002
-0.051
0.099
0.051
-948.46
-482.63
5.7
2.9
6.4
-46.1
35
38.680
1.286
0.006
0.038
-0.025
0.038
1677.90
723.81
-1.9
3.0
3.6
40.5
36
38.643
2.271
-0.007
0.056
-0.138
0.056
1252.56
9.05
-6.1
2.5
6.6
48.6
37
38.585
0.965
0.013
-0.004
<0.010
<0.014
763.84
1520.21
<1.0
<1.4
<1.7
/
38
38.567
0.873
-0.019
0.030
-0.025
0.036
-1031.10
525.45
-2.9
4.1
5.0
61.2
39
38.518
2.684
0.002
0.048
-0.092
0.048
780.08
675.60
-3.4
1.8
3.8
43.8
40
38.504
0.717
-0.034
-0.064
<0.008
0.072
607.02
-1289.27
<1.1
10.1
<10.2
-59.0
41
38.380
1.609
0.010
-0.009
<0.017
<0.013
-1336.37
1515.50
<1.1
<0.8
<1.4
/
42
38.322
1.337
0.020
-0.009
0.060
<0.022
-1296.84
-129.09
4.5
<1.6
<4.8
/
43
38.249
1.310
-0.029
-0.009
-0.269
0.030
1454.36
-571.17
-20.5
2.3
20.6
-81.4
44
38.230
0.522
0.020
0.014
<-0.008
0.024
2351.69
-1242.69
<-1.5
4.7
<4.9
17.5
45
38.196
0.585
0.049
0.003
<-0.012
0.049
1399.17
1301.85
<-2.1
8.4
<8.7
1.8
46
38.159
1.310
0.015
0.019
-0.027
0.024
1966.54
225.95
-2.1
1.8
2.8
25.9
47
38.102
0.803
-0.018
-0.029
<-0.019
0.034
-748.65
-1102.81
<-2.4
4.3
<4.9 -60.9
48
38.088
1.932
0.010
0.072
-0.044
0.073
1397.50
-0.30
-2.3
3.8
4.4
41.0
49
38.020
1.426
-0.012
-0.028
<-0.003
0.030
58.18
-913.48
<-0.2
2.1
<2.1 -56.6
50
38.012
1.223
0.020
-0.018
-0.029
0.027
-673.31
1966.92
-2.4
2.2
3.3
-21.0
51
37.940
0.825
0.002
0.022
-0.027
<0.022
2810.69
-909.99
-3.3
<2.7
<4.3
/
52
37.899
0.816
0.020
0.004
<-0.022
<0.020
518.78
1407.60
<-2.7
<2.5
<3.7
/
53
37.793
1.043
0.001
-0.025
-0.024
0.025
-915.31
-296.99
-2.3
2.4
3.3
-43.9
54
37.745
4.373
0.070
0.011
0.148
0.071
-1176.85
1372.36
3.4
1.6
3.8
4.5
55
37.557
0.426
0.010
-0.004
<0.001
<0.011
2416.70
678.23
<0.2
<2.5
<2.5
/
56
37.450
0.983
-0.037
-0.026
<-0.008
0.045
-178.83
-1042.53
<-0.8
4.6
<4.7 -72.5
57
37.435
0.783
-0.004
-0.047
-0.049
0.047
674.24
-820.53
-6.3
6.0
8.7
-47.4
58
37.363
0.776
-0.002
-0.012
<0.001
<0.012
-1152.55
-385.19
<0.1
<1.6
<1.6
/
59
37.323
0.509
-0.001
-0.007
<-0.007
<0.007
2223.57
-1226.21
<-1.4
<1.4
<2.0
/
60
37.305
0.544
-0.010
-0.004
0.076
<0.011
-1054.65
90.31
14.0
<2.0 <14.1
/
61
37.305
0.527
-0.003
-0.017
<0.005
<0.017
2091.19
268.01
<0.9
<3.3
<3.4
/
62
37.278
0.454
-0.005
-0.011
-0.031
<0.012
-1306.93
-1348.28
<-6.8
<2.7
<7.3
/
63
37.154
0.684
-0.015
0.025
-0.109
0.029
-1329.35
524.84
-15.9
4.3
16.5
60.5
64
37.145
0.603
0.067
0.004
0.037
0.067
1148.67
1527.21
6.1
11.1
12.7
1.7
Table 1. continued
No, Vel (km s−1), I (Jy b−1), Q (Jy b−1), U (Jy b−1), V (Jy b−1), P (Jy b−1), ∆α (milli arcsec), ∆δ (milli arcsec), mc (%), ml (%), mt (%), χ (°)
65
37.119
0.723
0.009
0.008
<-0.004
<0.012
331.27
1447.99
<-0.6
<1.7
<1.8
/
66
37.012
0.552
-0.002
-0.016
-0.031
<0.016
170.30
-600.27
-5.6
<2.9
<6.3
/
67
37.002
0.446
-0.001
0.000
-0.040
<0.001
1919.21
-28.14
-9.0
<0.2
<9.0
/
68
36.936
0.411
0.005
0.004
<-0.006
< 0.006
62.48
2260.37
<-1.5
<1.6
<2.2
/
69
36.920
0.580
-0.008
-0.050
-0.038
0.051
88.38
-1646.85
-6.6
8.7
10.9
-49.5
70
36.880
0.657
-0.006
-0.009
-0.040
<0.011
1233.32
-790.36
-6.1
<1.6
<6.3
/
71
36.871
3.750
0.038
-0.010
0.044
0.039
-1506.80
1196.46
1.2
1.0
1.6
-7.4
72
36.837
0.896
0.015
-0.008
<-0.012
<0.017
-1137.62
1865.76
<-1.3
<1.9
<2.3
/
73
36.774
1.035
0.002
-0.022
<-0.013
<0.022
-54.59
-1132.63
<-1.3
<2.1
<2.5
/
74
36.737
1.229
0.009
0.004
-0.054
<0.010
-1835.57
1329.67
-4.4
< 0.8
<4.5
/
75
36.713
0.388
0.017
0.016
<0.003
0.023
-41.52
2548.44
<0.8
6.0
<6.1
21.6
76
36.694
0.477
0.021
0.008
<0.019
<0.022
-778.12
459.82
<4.0
<4.7
<6.2
/
77
36.604
0.441
0.002
-0.022
<-0.020
<0.022
-113.12
-674.22
<-4.5
<5.0
<6.7
/
78
36.540
0.710
-0.010
-0.018
-0.024
<0.021
-530.65
-1073.55
-3.4
<2.9
<4.5
/
79
36.521
0.433
0.010
-0.005
<-0.002
<0.011
-1446.48
-1346.15
<-0.5
<2.6
<2.6
/
80
36.396
1.114
-0.009
-0.008
<-0.002
<0.012
-1401.29
-290.51
<-0.2
<1.1
<1.1
/
81
36.259
1.408
0.005
-0.022
-0.039
0.023
357.55
-1112.85
-2.8
1.6
3.2
-38.6
82
36.182
0.346
-0.008
0.007
<0.006
<0.011
-172.89
2598.76
<1.7
<3.1
<3.5
/
83
36.155
0.613
-0.023
-0.026
<0.008
0.035
737.49
-649.11
<1.3
5.7
<5.8
-65.7
84
36.137
0.934
0.011
0.015
-0.092
<0.019
-1371.87
646.00
-9.9
<2.0
<10.1
/
85
36.130
0.424
-0.007
0.003
<0.008
<0.008
1647.20
-885.42
<1.9
<1.8
<2.6
/
86
36.075
1.218
-0.019
-0.023
<-0.010
0.030
1481.46
143.95
<-0.8
2.4
<2.5
-64.8
87
35.937
0.353
-0.004
-0.002
<-0.002
<0.004
792.32
-1665.77
<-0.6
<1.3
<1.4
/
88
35.920
0.495
-0.010
-0.019
<0.005
<0.021
-973.64
-792.60
<1.0
<4.3
<4.4
/
89
35.905
0.872
0.014
0.011
<0.007
<0.018
-1292.59
1875.62
<0.8
<2.0
<2.2
/
90
35.796
0.745
0.009
-0.005
-0.071
<0.010
-1390.81
745.06
-9.5
<1.4
<9.6
/
91
35.747
1.322
-0.009
-0.026
<-0.011
0.028
261.10
-1215.68
<-0.8
2.1
<2.2
-54.5
92
35.706
2.730
0.028
0.013
0.061
0.031
-1659.93
1232.14
2.2
1.1
2.5
12.5
93
35.613
0.411
-0.002
-0.025
<-0.009
0.025
-1436.42
-1182.28
<-2.2
6.1
<6.5
-47.3
94
35.535
0.664
-0.016
-0.017
<0.010
0.023
580.18
-1795.53
<1.5
3.5
<3.8
-66.6
95
35.450
1.068
-0.004
-0.005
<-0.004
<0.006
-1562.65
-247.15
<-0.4
<0.6
<0.7
/
96
35.339
0.679
-0.014
-0.007
-0.024
<0.016
-904.16
-948.06
-3.5
<2.3
<4.2
/
97
35.189
0.793
-0.002
-0.010
-0.046
<0.010
-542.70
-1485.43
-5.8
<1.3
<5.9
/
98
35.159
0.349
0.002
0.004
<-0.006
<0.004
-391.33
-709.66
<-1.7
<1.3
<2.1
/
99
35.113
0.292
0.017
0.005
<-0.012
<0.018
-966.51
1519.74
<-4.1
<6.1
<7.3
/
100
35.027
0.565
-0.009
-0.011
<0.022
<0.014
1529.96
257.08
<3.9
<2.5
<4.6
/
101
35.007
0.626
0.009
0.005
<0.015
<0.010
1168.03
-1110.50
<2.4
<1.6
<2.9
/
102
34.993
0.437
0.009
0.006
<-0.022
<0.011
-1067.96
1576.80
<-5.0
<2.5
<5.6
/
103
34.907
0.608
-0.008
0.013
-0.023
<0.015
-1461.51
732.37
-3.8
<2.5
<4.5
/
104
34.869
0.449
-0.010
-0.012
<-0.017
<0.016
-216.38
-1349.83
<-3.8
<3.5
<5.2
/
105
34.862
0.645
-0.010
0.024
<-0.006
0.026
-1065.42
-927.17
<-0.9
4.0
<4.1
56.3
106
34.811
0.266
0.031
-0.022
<-0.010
0.038
-281.75
2479.64
<-3.8
14.3
<14.8
-17.7
107
34.739
0.359
0.005
-0.014
<-0.005
<0.015
-1197.96
2144.34
<-1.4
<4.1
<4.3
/
108
34.545
0.649
0.005
-0.020
<0.006
<0.021
-696.72
-1468.22
<0.9
<3.2
<3.3
/
109
34.415
0.404
0.011
0.018
0.039
<0.021
-1761.31
-188.84
9.7
<5.2
<11.0
/
110
34.378
0.304
0.011
-0.007
<0.005
<0.013
954.45
-1159.32
<1.6
<4.3
<4.6
/
111
34.223
0.449
0.010
0.019
<0.008
<0.021
-1860.68
1352.62
<1.8
<4.8
<5.1
/
112
34.215
0.332
-0.018
-0.004
<0.007
<0.018
1434.12
-1111.03
<2.1
<5.6
<6.0
/
113
34.177
0.318
-0.010
-0.019
<0.001
<0.021
188.17
-1910.46
<0.3
<6.8
<6.8
/
114
34.120
0.410
0.003
0.009
<-0.003
<0.009
1566.01
208.99
<-0.7
<2.3
<2.4
/
115
34.067
0.291
0.011
0.009
<0.007
<0.014
-1544.23
859.50
<2.4
<4.9
<5.5
/
116
34.067
0.133
0.016
-0.003
<0.003
<0.016
-285.79
2753.86
<2.3
<12.2
<12.4
/
117
33.778
0.445
-0.010
-0.007
<-0.015
<0.012
1311.37
-1147.63
<-3.4
<2.7
<4.3
/
118
33.720
0.250
0.013
-0.008
<-0.022
<0.015
-1947.73
-210.36
<-8.8
<6.1
<10.7
/
119
33.505
0.305
-0.010
-0.013
<-0.021
<0.016
313.43
-1919.98
<-6.9
<5.4
<8.8
/
120
33.471
0.401
0.007
-0.012
<-0.004
<0.014
195.47
-1353.84
<-1.0
<3.5
<3.6
/
121
33.373
0.126
-0.025
-0.004
<0.013
0.025
-2337.64
830.66
<10.3
20.1
<22.6
-85.5
122
33.074
0.262
0.018
0.004
<-0.001
<0.018
901.37
-1877.17
<-0.4
<7.0
<7.0
/
123
33.065
0.166
-0.008
-0.022
<-0.012
0.023
1781.81
-736.69
<-7.2
14.1
<15.8
-55.0
124
33.012
0.337
0.003
-0.026
<-0.019
0.026
-1678.90
-799.77
<-5.6
7.8
<9.6
-41.7
125
32.860
0.158
-0.008
-0.005
<0.004
<0.009
-669.28
-1516.65
<2.5
<6.0
<6.5
/
126
32.801
0.084
0.038
-0.041
<0.003
0.056
-1055.22
-1869.69
<3.6
66.5
<66.6
-23.6
127
32.722
0.156
-0.014
0.010
<-0.005
<0.017
-1341.07
2346.78
<-3.2 <11.0
<11.5
/
128
32.573
0.260
-0.005
-0.016
<-0.019
<0.017
188.56
-1907.28
<-7.3
<6.4
<9.7
/
Table 1. continued
No, Vel (km s−1), I (Jy b−1), Q (Jy b−1), U (Jy b−1), V (Jy b−1), P (Jy b−1), ∆α (milli arcsec), ∆δ (milli arcsec), mc (%), ml (%), mt (%), χ (°)
129
32.526
0.074
0.018
-0.020
<0.006
0.027
-862.02
-2005.87
<8.1
36.4
<37.3 -24.0
130
32.519
0.093
-0.023
-0.003
<0.007
0.023
3315.37
159.03
<7.5
24.9
<26.0 -86.3
131
32.507
0.241
-0.013
-0.005
<-0.016
<0.014
-1346.97
-1375.27
<-6.6
<5.8
<8.8
/
132
32.474
0.101
-0.025
0.005
<0.008
0.025
-2241.52
258.98
<7.9
25.2
<26.4
84.3
133
32.447
0.271
-0.011
-0.026
<-0.011
0.028
-1636.80
-970.94
<-4.1
10.4
<11.2 -56.5
134
32.276
0.085
0.010
-0.006
<-0.010
<0.012
2165.88
-849.78
<-11.8
<13.7
<18.1
/
135
31.899
0.256
0.004
-0.021
-0.052
<0.021
-1491.64
-1092.73
-20.3
<8.4
<22.0
/
136
31.591
0.075
0.003
0.006
<0.005
<0.007
1721.89
1213.72
<6.7
<8.9
<11.1
/
137
31.341
0.146
0.013
-0.013
<0.013
<0.018
535.95
-1239.13
<8.9
<12.6
<15.4
/
138
31.332
0.239
-0.005
-0.012
<0.008
<0.013
-423.54
-1516.96
<3.3
<5.4
<6.3
/
139
31.230
0.393
-0.011
-0.008
<-0.017
<0.014
-1463.49
-1172.58
<-4.3
<3.5
<5.5
/
140
31.197
0.148
0.005
-0.006
<-0.013
<0.008
-2073.78
-303.81
<-8.8
<5.3
<10.3
/
141
31.071
0.330
-0.003
-0.010
<0.019
<0.010
-93.69
-2029.81
<5.8
<3.2
<6.6
/
142
31.008
0.290
0.011
-0.011
<0.005
<0.016
906.37
-1572.29
<1.7
<5.4
<5.7
/
143
30.921
0.341
-0.030
-0.003
0.023
0.030
1363.14
-1333.78
6.7
8.8
11.1
-87.1
144
30.596
0.177
0.024
-0.006
<0.006
0.025
-638.42
-1525.51
<3.4
14.0
<14.4
-7.0
145
30.464
0.113
0.003
-0.008
<0.019
<0.009
393.04
-1371.29
<16.8
<7.6
<18.4
/
146
30.065
0.314
0.004
-0.007
<-0.004
<0.008
-1603.82
-1107.58
<-1.3
<2.6
<2.9
/
147
29.892
0.243
-0.023
0.016
<-0.010
0.028
-2147.57
-265.41
<-4.1
11.5
<12.2
72.6
148
29.422
0.119
0.002
-0.005
< 0.008
<0.005
1167.59
-1584.84
<6.7
<4.5
<8.1
/
149
29.369
0.240
0.008
0.012
<-0.005
<0.014
-1422.82
-1334.00
<-2.1
<6.0
<6.4
/
150
29.258
0.126
0.002
0.008
<-0.012
<0.008
-2246.46
-480.60
<-9.5
<6.5
<11.5
/
151
29.151
0.157
-0.008
-0.005
<-0.012
<0.009
298.68
-2209.26
<-7.6
<6.0
<9.7
/
152
28.905
0.094
0.003
-0.019
<-0.003
<0.019
817.08
-1677.35
<-3.2
<20.5
<20.7
/
153
28.138
0.423
-0.032
0.009
<0.010
0.033
-1621.02
-1223.50
<2.4
7.9
<8.3
82.1
154
28.127
0.235
-0.004
-0.010
<0.002
<0.011
-1268.36
-1584.09
<0.9
<4.6
<4.7
/
155
28.099
0.437
0.011
0.011
-0.024
<0.016
-1982.93
-663.28
-5.5
<3.6
<6.6
/
156
28.036
0.160
0.006
0.002
<0.006
< 0.006
1056.14
-1691.25
<3.8
<4.0
<5.5
/
157
27.925
0.146
-0.006
-0.007
<0.011
<0.009
-956.74
-1311.27
<7.5
<6.3
<9.8
/
158
27.905
0.092
-0.004
-0.012
<-0.004
<0.013
-754.40
-1887.18
<-4.3
<13.7
<14.4
/
159
26.272
0.084
0.010
-0.016
<-0.008
<0.019
-376.40
-1926.35
<-9.5
<22.5
<24.4
/
160
25.750
0.141
-0.004
-0.006
<0.008
<0.007
-1221.28
-1675.94
<5.7
<5.1
<7.6
/
161
25.139
0.297
-0.002
-0.006
<-0.003
<0.006
-1678.68
-1197.86
<-1.0
<2.1
<2.3
/
162
24.829
0.066
0.006
-0.005
<0.015
<0.008
-1157.18
-891.49
<22.7
<11.8
<25.6
/
163
24.589
0.408
-0.016
-0.013
<-0.014
<0.021
-1857.41
-1007.41
<-3.4
<5.1
<6.1
/
164
24.571
0.102
0.004
0.002
<0.009
<0.004
-831.13
-1620.59
<8.8
<4.4
<9.8
/
165
24.442
0.062
0.009
-0.020
<-0.012
<0.022
1271.81
-1897.95
<-19.4
<35.4
<40.4
/
166
24.150
0.125
0.004
0.019
<-0.014
<0.019
1994.61
2004.59
<-11.2
<15.5
<19.1
/
167
24.124
0.198
0.002
0.013
<-0.004
<0.013
-1923.11
-831.54
<-2.0
<6.6
<6.9
/
168
24.019
0.245
0.010
-0.011
<-0.005
<0.015
-1384.83
-1670.73
<-2.0
<6.1
<6.4
/
169
23.835
0.097
0.018
0.021
<-0.011
0.028
1048.40
2278.84
<-11.3
28.5
<30.7
24.7
170
23.524
0.184
0.013
0.003
<0.006
<0.013
1330.09
-1809.05
<3.3
<7.3
<8.0
/
171
23.030
0.095
0.013
-0.017
<-0.003
<0.021
-740.42
-1599.07
<-3.2
<22.5
<22.7
/
172
22.763
0.116
-0.010
-0.012
<0.011
<0.016
1941.04
-243.95
<9.5
<13.5
<16.5
/
173
22.747
0.115
0.002
-0.004
<0.015
<0.004
-93.70
-1935.23
<13.0
<3.9
<13.6
/
174
22.709
0.245
0.010
-0.021
-0.023
0.023
-1727.83
-1098.38
-9.4
9.5
13.4
-32.3
175
22.672
0.239
-0.010
-0.010
0.023
<0.014
-2943.53
331.37
9.6
<5.9
<11.3
/
176
22.471
0.121
0.002
0.002
<0.007
<0.003
-358.74
-1564.01
<5.8
<2.3
<6.2
/
177
22.400
0.145
-0.007
-0.003
-0.029
<0.008
2285.83
-667.93
-20.0
<5.3
<20.7
/
178
22.328
0.074
0.001
-0.022
<0.022
<0.022
1849.37
-2061.00
<29.7
<29.8
<42.1
/
179
22.286
0.040
-0.002
-0.002
<-0.012
<0.003
1929.18
-122.15
<-30.0
<7.1
<30.8
/
180
22.248
0.185
0.010
-0.007
<-0.010
<0.012
2842.62
326.88
<-5.4
<6.6
<8.5
/
181
22.121
0.170
-0.024
-0.007
<-0.016
0.025
-1361.61
-1671.20
<-9.4
14.7
<17.4 -81.9
182
21.942
0.151
-0.001
-0.011
<-0.017
<0.011
-374.94
-1912.30
<-11.3
<7.3
<13.5
/
183
21.901
0.101
-0.010
-0.019
<-0.008
<0.021
76.45
-2184.84
<-7.9
<21.3
<22.7
/
184
21.847
0.111
0.005
0.010
<-0.006
<0.011
840.73
2761.49
<-5.4
<10.1
<11.5
/
185
21.555
0.330
0.025
0.013
<0.011
0.028
-2867.32
118.45
<3.3
8.5
<9.1
13.7
186
21.504
0.286
-0.015
-0.013
0.037
0.020
-2640.83
-367.48
12.9
6.9
14.6
-69.5
187
21.498
0.309
-0.014
-0.007
<0.005
<0.016
2248.26
-377.63
<1.6
<5.1
<5.3
/
188
21.377
0.119
-0.005
-0.013
<0.010
<0.014
1511.46
-457.19
<8.4
<11.7
<14.4
/
189
21.051
0.128
0.009
-0.003
<-0.009
<0.009
1490.96
-1215.88
<-7.0
<7.4
<10.2
/
190
21.016
0.339
-0.003
-0.003
<-0.003
<0.004
2269.72
-321.95
<-0.9
<1.3
<1.6
/
191
20.732
0.499
-0.011
-0.014
<0.020
<0.018
1286.10
-1803.97
<4.0
<3.6
<5.4
/
192
20.612
0.109
0.000
-0.005
<0.002
<0.005
-1770.70
-1532.36
<1.8
<4.6
<4.9
/
Table 1. continued
No, Vel (km s−1), I (Jy b−1), Q (Jy b−1), U (Jy b−1), V (Jy b−1), P (Jy b−1), ∆α (milli arcsec), ∆δ (milli arcsec), mc (%), ml (%), mt (%), χ (°)
193
20.595
0.208
-0.009
0.005
<-0.003
<0.010
1647.09
-444.01
<-1.4
<4.9
<5.1
/
194
20.470
0.670
0.005
0.004
<-0.004
<0.006
2183.36
-398.83
<-0.6
<1.0
<1.2
/
195
20.458
0.203
0.008
-0.032
0.025
0.033
-61.33
-1966.64
12.3
16.2
20.3
-38.0
196
20.456
0.222
0.001
-0.004
<0.004
<0.004
-321.62
-1416.98
<1.8
<1.9
<2.6
/
197
20.440
0.138
0.000
-0.004
<-0.018
<0.004
-2462.98
961.17
<-13.0
<2.9 <13.3
/
198
20.424
0.575
0.006
-0.005
< 0.022
<0.008
1933.90
-927.86
<3.8
<1.4
<4.0
/
199
19.868
1.308
0.005
-0.013
-0.082
<0.014
-2489.87
-61.88
-6.3
<1.1
<6.4
/
200
19.773
0.754
-0.014
-0.011
-0.028
<0.018
1232.07
-1770.65
-3.7
<2.4
<4.4
/
201
19.708
0.292
-0.002
-0.011
<0.007
< 0.011
-120.90
-1399.99
<2.4
<3.8
<4.5
/
202
19.594
1.750
-0.004
-0.046
-0.072
0.046
-2803.13
497.16
-4.1
2.6
4.9
-47.5
203
19.548
0.325
0.010
-0.029
<0.013
0.031
-972.46
-1000.92
<4.0
9.4
<10.2
-35.5
204
19.140
0.394
-0.010
-0.008
<-0.007
<0.013
-2095.09
768.95
<-1.8
<3.3
<3.8
/
205
19.005
0.323
-0.007
-0.004
<-0.011
<0.008
1493.92
2217.22
<-3.4
<2.5
<4.2
/
206
19.000
1.653
-0.010
0.009
-0.176
0.013
-2498.62
-686.71
-10.6
0.8
10.6
69.0
207
18.976
0.340
0.001
-0.034
<-0.011
0.034
1530.95
-991.75
<-3.2
10.0
<10.5
-44.2
208
18.964
0.887
-0.013
-0.021
-0.190
0.025
-2220.02
-1177.25
-21.4
2.8
21.6
-60.9
209
18.907
0.779
0.026
-0.044
-0.065
0.051
-2778.28
855.26
-8.3
6.6
10.6
-29.7
210
18.774
0.980
-0.053
-0.006
-0.046
0.053
1848.80
-192.05
-4.7
5.4
7.2
-86.8
211
18.649
0.667
0.004
-0.010
-0.036
<0.011
-2129.14
-82.04
-5.4
<1.6
<5.6
/
212
18.646
0.646
-0.006
-0.006
<-0.015
<0.008
944.55
-989.66
<-2.3
<1.3
<2.6
/
213
18.640
1.159
-0.011
-0.009
<-0.005
<0.014
1318.96
-1633.61
<-0.4
<1.2
<1.3
/
214
18.635
0.379
-0.012
-0.026
-0.150
0.029
-2215.78
-1294.05
-39.6
7.6
40.3
-57.4
215
18.627
0.373
0.004
0.009
-0.074
<0.010
-2636.20
-699.91
-19.8
<2.6 <20.0
/
216
18.437
0.387
0.001
0.002
<-0.004
<0.002
-2231.28
1344.35
<-1.0
<0.6
<1.2
/
217
18.313
1.326
-0.012
-0.020
<-0.008
0.023
-2565.01
740.75
<-0.6
1.8
<1.9 -60.5
218
18.090
0.635
-0.016
0.022
-0.044
0.027
2054.21
-806.03
-6.9
4.3
8.1
63.0
219
18.086
1.115
-0.019
0.011
<0.004
<0.022
1262.53
-1553.57
<0.4
<2.0
<2.0
/
220
18.028
0.568
-0.020
-0.027
-0.107
0.034
-2093.88
-1273.90
-18.8
5.9
19.7
-63.3
221
17.993
1.152
-0.011
0.018
-0.034
<0.021
-2378.00
-753.58
-3.0
<1.8
<3.5
/
222
17.966
0.342
-0.021
-0.015
<-0.017
0.026
-2156.48
1353.26
<-5.0
7.5
<9.0 -72.2
223
17.916
0.498
0.013
-0.030
-0.029
0.033
-1963.99
-199.81
-5.8
6.6
8.8
-33.3
224
17.907
0.728
-0.023
-0.004
<-0.013
0.023
1998.20
-284.12
<-1.8
3.2
<3.7 -85.1
225
17.873
0.948
0.026
-0.006
<0.020
0.027
417.77
2679.38
<2.1
2.8
<3.5
-6.5
226
17.846
0.737
0.004
-0.011
-0.025
<0.012
695.85
-1757.36
-3.4
<1.6
<3.8
/
227
17.718
0.464
-0.009
0.001
<0.021
<0.009
485.34
-1334.25
<4.5
<2.0
<4.9
/
228
17.587
0.738
-0.026
0.006
<0.003
0.027
-2475.18
-730.73
<0.4
3.6
<3.6
83.5
229
17.581
0.265
0.001
-0.008
<-0.004
<0.008
-886.49
-740.05
<-1.5
<3.0
<3.4
/
230
17.466
0.419
-0.003
-0.009
<0.007
<0.009
-1381.09
-1289.53
<1.7
<2.3
<2.9
/
231
17.463
0.261
0.001
-0.014
<0.003
<0.014
-298.33
-1438.59
<1.1
<5.4
<5.5
/
232
17.447
0.188
-0.008
-0.008
<-0.019
<0.011
3186.43
752.88
<-10.1
<6.0 <11.7
/
233
17.426
0.792
-0.041
0.013
-0.042
0.043
1501.73
-208.09
-5.3
5.4
7.6
81.2
234
17.381
0.255
-0.016
-0.004
<-0.005
<0.016
-803.08
-2264.06
<-2.0
<6.5
<6.8
/
235
17.258
0.862
-0.013
-0.006
-0.059
<0.014
1195.05
-1388.75
-6.8
<1.7
<7.0
/
236
17.238
0.463
0.026
0.007
<-0.021
0.027
1575.27
752.84
<-4.5
5.8
<7.3
7.5
237
17.209
0.440
-0.003
0.017
<-0.010
<0.017
2038.68
746.90
<-2.3
<3.9
<4.5
/
238
17.193
1.020
-0.015
-0.003
<-0.020
<0.015
1769.77
-634.81
<-2.0
<1.5
<2.5
/
239
17.092
0.239
-0.007
-0.017
<0.006
<0.018
1127.44
-2133.95
<2.5
<7.7
<8.1
/
240
17.036
0.272
-0.007
-0.021
<-0.003
<0.022
-754.32
-2352.03
<-1.1
<8.1
<8.2
/
241
17.020
0.357
0.008
-0.008
<-0.008
<0.011
-1258.30
-335.07
<-2.2
<3.2
<3.9
/
242
17.020
0.289
0.011
0.012
<-0.007
<0.016
-456.22
2632.04
<-2.4
<5.6
<6.1
/
243
16.960
0.742
-0.005
-0.008
-0.024
<0.009
-1695.47
-484.01
-3.2
<1.3
<3.5
/
244
16.882
0.189
-0.005
-0.003
<-0.003
<0.006
-747.02
-586.42
<-1.6
<3.1
<3.5
/
245
16.852
1.406
-0.035
-0.005
-0.059
0.035
737.08
-819.55
-4.2
2.5
4.9
-85.9
246
16.825
0.251
-0.005
-0.025
-0.023
0.025
-1081.93
-1554.08
-9.2
10.2
13.7
-50.7
247
16.807
0.166
-0.011
-0.007
<0.005
<0.013
-349.15
-1182.91
<3.0
<7.9
<8.5
/
248
16.802
0.947
-0.001
0.016
<0.012
<0.016
-2395.34
-634.11
<1.3
<1.7
<2.1
/
249
16.752
0.527
0.002
-0.014
<0.022
<0.014
-2170.51
1382.92
<4.2
<2.7
<5.0
/
250
16.423
0.739
-0.008
0.006
<-0.012
<0.010
1400.29
-1134.44
<-1.6
<1.4
<2.1
/
251
16.337
0.290
-0.006
0.012
<-0.014
<0.013
1720.36
359.42
<-4.8
<4.6
<6.6
/
252
16.229
1.194
-0.023
-0.011
<-0.007
0.025
-2302.23
-670.01
<-0.6
2.1
<2.2 -77.2
253
16.128
0.348
-0.010
0.032
<-0.009
0.034
1884.80
-1100.48
<-2.6
9.6
<9.9
53.7
254
16.113
0.966
-0.021
-0.009
-0.064
0.023
1057.26
-1259.56
-6.6
2.4
7.0
-78.4
255
16.076
1.174
-0.006
-0.011
<-0.013
<0.013
-1445.23
-527.52
<-1.1
<1.1
<1.6
/
256
16.074
0.652
0.005
0.008
-0.035
<0.009
977.27
-187.73
-5.4
<1.4
<5.6
/
Table 1. continued
No
Vel
I
Q
U
V
P
∆α
∆δ
mc
m l
mt
χ
((milli arcsec)
(%)
(%)
(%)
( • )
257
16.029
0.466
-0.018
-0.018
<0.006
0.025
-1919.01
-1176.00
<1.3
5.5
<5.7
-67.5
258
15.968
0.799
-0.009
0.010
-0.039
<0.013
1652.97
-350.53
-4.9 <1.7
<5.2
/
259
15.948
0.450
-0.019
-0.004
-0.041
<0.019
1082.06
-2015.52
-9.1 <4.3
<10.1
/
260
15.805
1.069
-0.060
-0.011
0.027
0.061
516.80
-790.25
2.5
5.7
6.2
-84.8
261
15.751
0.514
0.015
0.006
0.028
<0.016
-46.10
2614.48
5.4
<3.1
<6.2
/
262
15.685
0.770
-0.009
-0.006
<-0.019
<0.011
-2255.16
-584.81
<-2.5 <1.4
<2.9
/
263
15.548
0.451
0.007
0.005
<0.003
<0.009
292.72
1952.60
<0.7 <1.9
<2.0
/
264
15.411
0.426
-0.020
0.018
<-0.010
0.027
1910.47
525.17
<-2.3
6.3
<6.7
69.0
265
15.006
0.376
0.040
0.012
<-0.005
0.042
-288.46
1249.65
<-1.3
11.1
<11.2
8.3
266
14.993
0.746
0.010
0.018
<0.017
<0.021
1508.17
595.07
<2.3 <2.8
<3.6
/
267
14.917
0.697
-0.010
-0.009
<0.010
<0.013
-597.90
-87.00
<1.4 <1.9
<2.4
/
268
14.789
1.099
-0.007
-0.019
<0.016
<0.020
-133.34
-331.37
<1.5 <1.8
<2.3
/
269
14.750
3.300
0.077
0.018
<-0.007
0.079
356.17
1045.77
<-0.2
2.4
<2.4
6.6
270
14.665
1.209
-0.010
-0.016
<0.010
<0.019
346.38
-1349.94
<0.8 <1.6
<1.8
/
271
14.499
1.273
-0.030
-0.009
<0.007
0.031
-377.18
-1576.57
<0.5
2.5
<2.5
-81.7
272
14.488
2.424
-0.002
-0.023
<0.008
0.023
-1043.39
-173.15
<0.3
1.0
<1.0
-47.5
273
13.538
2.024
-0.011
0.006
0.031
<0.013
-1142.40
440.06
1.5
<0.6
<1.6
/
274
13.457
10.834
-0.130
-0.054
-0.083
0.141
169.85
-567.88
-0.8
1.3
1.5
-78.7
275
13.270
3.837
0.013
0.105
<-0.019
0.106
831.83
416.70
<-0.5
2.8
<2.8
41.5
276
13.172
7.910
0.037
0.018
0.083
0.041
163.45
529.32
1.0
0.5
1.1
13.0
277
12.417
2.233
-0.070
-0.022
<0.009
0.073
-728.59
19.19
<0.4
3.3
<3.3
-81.3
Table 3. Stokes parameter flux densities and polarisation properties of the 1665 MHz maser spots.
No | Vel (km s−1) | I (Jy b−1) | Q (Jy b−1) | U (Jy b−1) | V (Jy b−1) | P (Jy b−1) | ∆α (milli arcsec) | ∆δ (milli arcsec) | mc (%) | ml (%) | mt (%) | χ (°)
ACKNOWLEDGEMENTS
The authors would like to thank L.E. Davis, who contributed to the primary data reduction during her stay as a visiting student at Jodrell Bank Observatory. We thank the referee for useful comments, some of which prompted changes that improved the clarity of the paper. We would also like to thank Malcolm D. Gray for his careful reading of the manuscript and for constructive discussions. Finally, we are grateful to Aris Noustos for the calculation of the RM applicable to OH 26.5+0.6. The work presented here is based on observations obtained
| []
|
[
"Exploring Privacy Preservation in Outsourced K-Nearest Neighbors with Multiple Data Owners",
"Exploring Privacy Preservation in Outsourced K-Nearest Neighbors with Multiple Data Owners"
]
| [
"Frank Li [email protected] ",
"Richard Shin [email protected] ",
"Vern Paxson \nInternational Computer Science Institute\n\n",
"\nUniversity of California\nBerkeley\n"
]
| [
"International Computer Science Institute\n",
"University of California\nBerkeley"
]
| []
| The k-nearest neighbors (k-NN) algorithm is a popular and effective classification algorithm. Due to its large storage and computational requirements, it is suitable for cloud outsourcing. However, k-NN is often run on sensitive data such as medical records, user images, or personal information. It is important to protect the privacy of data in an outsourced k-NN system.Prior works have all assumed the data owners (who submit data to the outsourced k-NN system) are a single trusted party. However, we observe that in many practical scenarios, there may be multiple mutually distrusting data owners. In this work, we present the first framing and exploration of privacy preservation in an outsourced k-NN system with multiple data owners. We consider the various threat models introduced by this modification. We discover that under a particularly practical threat model that covers numerous scenarios, there exists a set of adaptive attacks that breach the data privacy of any exact k-NN system. The vulnerability is a result of the mathematical properties of k-NN and its output. Thus, we propose a privacy-preserving alternative system supporting kernel density estimation using a Gaussian kernel, a classification algorithm from the same family as k-NN. In many applications, this similar algorithm serves as a good substitute for k-NN. We additionally investigate solutions for other threat models, often through extensions on prior single data owner systems. | 10.1145/2808425.2808430 | [
"https://arxiv.org/pdf/1507.08309v1.pdf"
]
| 460,434 | 1507.08309 | 5d1185b5954b15249b559ba0df02253c19593e13 |
Exploring Privacy Preservation in Outsourced K-Nearest Neighbors with Multiple Data Owners
Frank Li [email protected]
Richard Shin [email protected]
Vern Paxson
International Computer Science Institute
University of California
Berkeley
Exploring Privacy Preservation in Outsourced K-Nearest Neighbors with Multiple Data Owners
The k-nearest neighbors (k-NN) algorithm is a popular and effective classification algorithm. Due to its large storage and computational requirements, it is suitable for cloud outsourcing. However, k-NN is often run on sensitive data such as medical records, user images, or personal information. It is important to protect the privacy of data in an outsourced k-NN system.Prior works have all assumed the data owners (who submit data to the outsourced k-NN system) are a single trusted party. However, we observe that in many practical scenarios, there may be multiple mutually distrusting data owners. In this work, we present the first framing and exploration of privacy preservation in an outsourced k-NN system with multiple data owners. We consider the various threat models introduced by this modification. We discover that under a particularly practical threat model that covers numerous scenarios, there exists a set of adaptive attacks that breach the data privacy of any exact k-NN system. The vulnerability is a result of the mathematical properties of k-NN and its output. Thus, we propose a privacy-preserving alternative system supporting kernel density estimation using a Gaussian kernel, a classification algorithm from the same family as k-NN. In many applications, this similar algorithm serves as a good substitute for k-NN. We additionally investigate solutions for other threat models, often through extensions on prior single data owner systems.
INTRODUCTION
The k-nearest neighbors (k-NN) classification algorithm has been widely and effectively used for machine learning applications. Wu et al. categorized it as one of the top 10 most influential data mining algorithms [25]. K-NN identifies the k points nearest to a query point in a given data set, and classifies the query based on the classifications of the neighboring points. The intuition is that nearby points are of similar classes. K-NN classification benefits from running over large data sets and its computation can be expensive.
These characteristics make it suitable to outsource the classification computation to the cloud. However, k-NN data is often sensitive in nature. For example, k-NN can be applied to medical patient records, census information, and facial images. It is critical to protect the privacy of such data when outsourcing computation.
The outsourced k-NN model focuses on providing classification, rather than training. Hence, it is assumed that the algorithm parameters have been adequately chosen through initial investigation. Prior work on preserving privacy in such a model uses computation over encrypted data [24,32,9,26]. These works have all assumed a single trusted data owner who encrypts her data before sending it to a cloud party. Then queriers can submit encrypted queries to the system for k-NN classification. In this setting, we would like to keep data private from the cloud party and queriers, and keep queries private from the cloud party and the data owner. In some existing works, the queriers share the secret (often symmetric) data encryption keys. In others, the querier interacts with the data owner to derive the encryption for a query without revealing it.
From these existing models, we make a simple observation with non-trivial consequences: the data owner may not be completely trusted. In all previous models, trust in the data owner is natural since only the owner's data privacy is at risk. However, this assumption does not hold in some practical scenarios:
• Multiple mutually-distrusting parties wish to aggregate their data for k-NN outsourcing, without revealing data to each other. Since k-NN can perform significantly better on more data, all parties can benefit from improved accuracy through sharing. As an example, hospitals might contribute medical data for a k-NN disease classification study or a service available to doctors. However, they do not want to release their patient data in the clear to each other or a cloud service provider. In some cases, these data owners may act adversarially if that allows them to learn the other owners' data.
• The k-NN outsourced system allows anyone to be a data owner and/or querier. This is a generalization of the previous scenario, and a privacy-preserving solution allows data owner individuals to contribute or participate in a system directly without trusting any other parties with plaintext data. Some potential applications include sensor information, personal images, and location data.
These scenarios involve data aggregated from multiple owners, hence we term this the multi-data owner outsourced model. In this paper, we provide the first framing and exploration of privacy preservation under this practical model. Since these multi-data owner systems have not yet been used in practice (perhaps because no prior works address privacy preservation), we enumerate and investigate variants of a privacy threat model. However, we focus on one variant that we argue is particularly practical and covers a number of interesting cases. Under this model, we discover a set of adaptive privacy-breaching attacks based purely on the nature of how k-NN works, regardless of any system protocol and encryption designs.
To counter these privacy attacks, we propose using kernel density estimation with a Gaussian kernel instead of k-NN. This is an alternative algorithm from the same class as k-NN (which we can consider as kernel density estimation with a uniform kernel). While this algorithm and k-NN are not equivalent, we demonstrate that in many applications the Gaussian kernel should provide similar accuracy. We construct a privacy-preserving scheme to support such an algorithm using partially homomorphic encryption and garbled circuits, although at a computational and network cost linear in the size of the data set. Note that exact k-NN is itself linear in computation. While this system may not yet be practical for large data sets, it is both a proof of the existence of a theoretically secure alternative and a first step to guide future improvements.
In summary, our contributions are:
• We provide the first framework for the k-NN multidata owner outsourced model.
• We explore privacy preservation of k-NN under various threat models. However, we focus on one threat model we argue is both practical and covers several realistic scenarios. We discover a set of privacy-breaching attacks under such a model.
• We propose using kernel density estimation with a Gaussian kernel in place of k-NN. We describe a privacypreserving construction of such a system, using partially homomorphic encryption and garbled circuits.
BACKGROUND
2.1 Nearest Neighbors
K-nearest neighbors (k-NN) is a simple yet powerful nonparametric classification algorithm that operates on the intuition that neighboring data points are similar and likely share the same classification. It runs on a training set of data points with known classifications: $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, where $x_i$ is the $i$-th data point and the label $y_i \in \{1, \ldots, C\}$ indicates which of the $C$ classes $x_i$ belongs to. For a query point $x$, k-NN determines its classification label $y$ as the most common label amongst the $k$ closest points to $x$ in $D$. Closeness is defined under some distance metric (such as Manhattan or Euclidean distance for real-valued vectors). Since all of k-NN's computation is at classification time, instead of training time, k-NN is an example of a lazy learning method.
In the case of 1-NN (k = 1), as the number of training data points approaches infinity, the classification error of k-NN becomes bounded above by twice the Bayes error (which is the theoretical lower bound on classification error for any ideal classification algorithm). In general, k-NN benefits greatly from execution over large data sets. Given a single data owner will have a limited amount of data, aggregation of data from multiple parties is desirable for increased accuracy in k-NN applications. Also, since k-NN classification is computationally expensive, it is practical to outsource computation. These characteristics motivate our interest in multi-data owner outsourced k-NN classification.
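For concreteness, the following minimal sketch (our own illustration, not code from any of the systems discussed here) classifies a query by majority vote over its k nearest neighbors under Euclidean distance; the toy arrays are invented for the example.

```python
import numpy as np
from collections import Counter

def knn_classify(X, y, query, k):
    """Return the majority label among the k nearest training points."""
    # Euclidean distances from the query to every training point
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest points
    return Counter(y[nearest]).most_common(1)[0][0]

# toy data: two 2-D clusters labeled 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.2, 0.1]), k=3))  # -> 0
```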
Kernel density estimation and regression
K-NN is an example of an instance-based learning method as it involves making a prediction directly from existing instances in the training data (as opposed to using a model of the training data, for example). K-NN uses only the k nearest instances to a query and allows each to equally influence the query classification. However, there are other possible rules to decide how much influence each instance should have on the final prediction, known as kernels. 1 Formally, a kernel K(x) is a function which satisfies
$$(1)\ \int_{-\infty}^{\infty} K(x)\,dx = 1 \qquad (2)\ K(x) = K(-x).$$
In other words, it integrates to 1 and is symmetric across 0.
Given n samples {x1, · · · , xn} of a random variable x, one can estimate the probability density function p(x) with a kernel K in the following way, called kernel density estimation:
$$\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} K(\| x - x_i \|)$$
where $\|\cdot\|$ is a norm. Given this estimate, classification can be determined as the most probable class:
$$D_C = \{x_i \mid (x_i, y_i) \in D,\ y_i = C\}$$
$$p(y = C \mid x, D) \propto \sum_{x_i \in D_C} K(\| x - x_i \|)$$
$$\arg\max_C\, p(y = C \mid x, D) = \arg\max_C \sum_{x_i \in D_C} K(\| x - x_i \|)$$
See Appendix A for a more detailed exposition.
Therefore, to classify a particular point x, we can sum the kernel values of the points which belong to each class and determine which class has the largest sum. The following uniform kernel equates this derivation with k-NN:
$$d_{k,x,D} := \text{distance of the } k\text{th nearest point from } x \text{ in } D$$
$$K(\| x - x_i \|; k, D) = \begin{cases} \frac{1}{2 d_{k,x,D}} & \text{if } \| x - x_i \| \le d_{k,x,D} \\ 0 & \text{otherwise} \end{cases}$$
$d_{k,x,D}$ is the width of this kernel, as $K(t) > 0$ for $\|t - x\| \le d_{k,x,D}$ and $K(t) = 0$ for $\|t - x\| > d_{k,x,D}$. Thus, k-NN classification is kernel density estimation with a uniform finite-width kernel.
One can substitute in a different kernel to obtain a classifier which behaves similarly. One example is the Gaussian kernel:
$$K(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{x^2}{2\sigma^2}}$$
where σ is the standard deviation parameter. Note that this kernel has infinite width, meaning all points have some influence on the classification.
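To illustrate the substitution, the sketch below (again our own toy, with invented data) performs kernel density classification with the Gaussian kernel: every training point contributes to its class's score with a weight that decays with distance, and the class with the largest score wins.

```python
import numpy as np

def gaussian_kernel_classify(X, y, query, sigma):
    """Kernel density classification with a Gaussian kernel."""
    sq_dists = np.sum((X - query) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    classes = np.unique(y)
    # sum the kernel values of the points belonging to each class
    scores = np.array([weights[y == c].sum() for c in classes])
    return classes[np.argmax(scores)]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(gaussian_kernel_classify(X, y, np.array([0.2, 0.1]), sigma=0.5))  # -> 0
```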
Paillier Cryptosystem
The Paillier cryptosystem [16] is a public-key semantically secure cryptographic scheme with partially homomorphic properties. These homomorphic properties allow certain mathematical operations to be conducted over encrypted data. Let the public key $pk$ be $(N, g)$, where $N$ is a product of two large primes and $g$ is a generator in $\mathbb{Z}^*_{N^2}$. The secret decryption key is $sk$. Also let $E_{pk}$ be the Paillier encryption function, and $E^{-1}_{sk}$ be the decryption function. Given $a, b \in \mathbb{Z}_N$, the Paillier cryptosystem has the following properties:
$$E^{-1}_{sk}(E_{pk}(a) \cdot E_{pk}(b) \bmod N^2) = a + b \bmod N \quad \text{(Add)}$$
$$E^{-1}_{sk}(E_{pk}(a)^{b} \bmod N^2) = a \cdot b \bmod N \quad \text{(Mult)}$$
In other words, multiplying the ciphertexts of two values results in the ciphertext for the sum of those values, and computing the c-th power of a value's ciphertext results in the ciphertext for the value multiplied by c.
The Paillier ciphertext is twice the size of a plaintext; if N is 1024 bits, the ciphertext is 2048 bits. A micro-benchmark of Paillier in [18] shows practical performance runtimes: encryption of a 32-bit integer takes 9.7 ms, decryption takes 0.7 ms, and the homomorphic addition operation takes 0.005 ms.
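The Add and Mult properties can be checked directly with a toy implementation. The sketch below uses deliberately tiny hard-coded primes and the common choice g = N + 1, so it only illustrates the arithmetic; it is not a secure or efficient Paillier implementation.

```python
import math, random

# toy parameters: two small primes (insecure, for illustration only)
p, q = 47, 59
N, N2 = p * q, (p * q) ** 2
g = N + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
L = lambda u: (u - 1) // N
mu = pow(L(pow(g, lam, N2)), -1, N)                  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return (L(pow(c, lam, N2)) * mu) % N

a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)
print(decrypt((ca * cb) % N2))   # 42  = a + b  (Add property)
print(decrypt(pow(ca, b, N2)))   # 425 = a * b  (Mult property)
```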
Yao's Garbled Circuits
Yao's garbled circuit protocol [29,14] allows a two-party evaluation of a function $f(i_1, i_2)$ run over inputs from both parties, without revealing the input values when assuming a semi-honest adversary model. If Alice and Bob have private input values $i_A$ and $i_B$, respectively, the protocol is run between them (the input owners) and outputs $f(i_A, i_B)$ without revealing the inputs to any party.
In the protocol, the first party is called the generator and the second party is the evaluator. The generator takes a Boolean circuit for computing $f$, and generates a "garbled" version $GF$ (intuitively, it is cryptographically obfuscating the circuit logic). Any input $i$ for function $f$ has a mapping to a garbled input for $GF$, which we will denote as $GI(i)$.
Figure 1: The general model of existing outsourced k-NN systems. The data owner is trusted and outsources encrypted data to the cloud party. Queriers request k-NN computation from the cloud party, and are sometimes trusted depending on the prior work. The cloud party executes k-NN classification and is semi-honest.
The generator gives the garbled circuit $GF$ to the evaluator, as well as the generator's garbled inputs $GI(i_1)$. Since the generator created the garbled circuit, only the generator knows the valid garbled inputs. The evaluator then engages in a 1-out-of-2 oblivious transfer protocol [20,10] with the generator to obliviously obtain the garbled input values for her own private inputs $i_2$. The evaluator can now evaluate $GF(GI(i_1), GI(i_2))$ to obtain a garbled output, which maps back to the output of $f(i_1, i_2)$.
For a more concrete understanding of the garbled circuit itself, consider a single binary gate $g$ (e.g., an AND gate) with inputs $i$ and $j$, and output $k$. For each input and the output, the generator generates two random cryptographic keys $K_x^0$ and $K_x^1$ that correspond to the bit value of $x$ being 0 and 1, respectively. The generator then computes the following four ciphertexts using a symmetric encryption algorithm Enc (which must be IND-CPA, or indistinguishable under chosen-plaintext attacks):
$$\mathrm{Enc}_{(K_i^{b_i},\, K_j^{b_j})}\big(K_k^{g(b_i, b_j)}\big) \quad \text{for } b_i, b_j \in \{0, 1\}$$
A random ordering of these four ciphertexts represents the garbled gate. Knowing $(K_i^{b_i}, K_j^{b_j})$ allows the decryption of only one of these ciphertexts to yield the value of $K_k^{g(b_i, b_j)}$ and none of the other outputs. Similarly, valid outputs cannot be obtained without the keys associated with the gate inputs. A garbled circuit is the garbling of all the gates in the Boolean circuit for $f$, and can be evaluated gate-by-gate without leaking information about intermediate computations. There exist efficient implementations of garbled circuits [15,6,2,23], although naturally the garbled circuit's size and evaluation runtime increase with the complexity of the evaluated function's circuit.
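The four-ciphertext construction can be mimicked in a few lines. The following toy (our own sketch, using a hash-based one-time pad as the symmetric encryption and a naive membership test in place of point-and-permute) garbles a single AND gate and shows that an evaluator holding one label per input wire recovers exactly one output label.

```python
import os, random, hashlib

def H(k1, k2):
    return hashlib.sha256(k1 + k2).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# two random 32-byte labels per wire, for bit values 0 and 1
labels = {w: (os.urandom(32), os.urandom(32)) for w in "ijk"}

def garble_and():
    table = []
    for bi in (0, 1):
        for bj in (0, 1):
            out = labels["k"][bi & bj]   # output label for g(bi, bj) = bi AND bj
            table.append(xor(H(labels["i"][bi], labels["j"][bj]), out))
    random.shuffle(table)                # hide which row encodes which input pair
    return table

def evaluate(table, li, lj):
    pad = H(li, lj)
    # toy validity check: real schemes tag rows (point-and-permute) instead
    return next(xor(ct, pad) for ct in table if xor(ct, pad) in labels["k"])

table = garble_and()
out = evaluate(table, labels["i"][1], labels["j"][1])
print(out == labels["k"][1])   # True: only the label for AND(1,1) is recovered
```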
RELATED WORK
Prior works have focused on two general approaches to achieving privacy in k-NN: distributed k-NN and outsourced k-NN computed on encrypted data.
In distributed k-NN, multiple parties each maintain their own data set. Distributed computation involves interactions between these parties to jointly compute k-NN without revealing other data values [5,31,27,13,19,22,28,30]. A general framework for how these systems operate is that they iteratively reveal the next closest neighbor until k neighbors have been determined. While allowing multiple parties to include their data sets, these works do not provide a solution for outsourced k-NN because they require data owners to store and compute on their data. Furthermore, the data owners must remain online for all queries.

Table 1: A summary of the trust models from prior private outsourced k-NN works. DO is the data owner, Q is the querier, and Cloud represents any cloud parties. Semi-honest parties are assumed to follow protocols but may attempt to discover query or data values.
Prior Work          | DO      | Q           | Cloud
Wong et al. [24]    | Trusted | Trusted     | Semi-honest
Zhu et al. [32]     | Trusted | Semi-honest | Semi-honest
Elmehdwi et al. [9] | Trusted | Semi-honest | Semi-honest
Xiao et al. [26]    | Trusted | Trusted     | Semi-honest
The other line of prior privacy-preserving k-NN work relies on computing k-NN over encrypted data. These systems are designed for outsourcing, as depicted in Figure 1. Table 1 summarizes the trust model each work assumes. An important observation is that all prior works assume a trusted data owner.
Wong et al. [24] provide secure k-NN outsourcing by developing an asymmetric scalar-product-preserving encryption (ASPE) scheme. ASPE transforms data tuples and queries with secret matrices that are inverses of each other. Multiplying encrypted tuples and queries cancels the transformations and outputs scalar products, which are used for distance comparisons. Hence, a data owner and queriers can upload encrypted data tuples and queries to a cloud party, who can compute k-NN using scalar products. It is worth noting this encryption scheme is deterministic, so identical data tuples have the same ciphertext, and likewise for queries. The secret matrices are symmetric keys for the encryption scheme, and must be shared with both the data owner and queriers. This approach assumes a trusted data owner and queriers, and a semi-honest cloud party who follows the protocols but attempts to learn query or data values.
In the outsourced k-NN system from [32], data encryption is again conducted by a trusted data owner, using a symmetric scheme with a secret matrix transformation as a key. However, queriers do not share this key. Instead, they interact with the data owner to derive a query encryption without revealing the query. Note this requires the data owner to always remain online for all queries. Also, data tuple encryption (being a matrix transformation) is deterministic, but query encryption is not, due to randomness introduced during the query encryption protocol. The encryption scheme is designed similarly to ASPE and preserves distance, so a cloud party can execute k-NN using distances computed from encrypted data tuples and queries. In this system's trust model, the data owner is trusted while the queriers and the cloud party are semi-honest.
The system in [9] is designed to protect data and query privacy with two cloud parties. One cloud party is the data host, who stores all uploaded (encrypted) data tuples. The other cloud party is called the key party, since it generates the keys for a public-key encryption scheme. Data tuples and queries are encrypted with the key party's public key.
In this system, Paillier encryption is used for its partially homomorphic properties. For each k-NN execution, the system computes encrypted distances through interactions between the key party and the data host. The key party orders the distances and provides the data host with the indices of the k nearest neighbor data tuples to return to a querier. This work assumes a trusted data owner, and semi-honest queriers and cloud parties.
Instead of finding exact k-NN, [26] allows a cloud party to approximate it using encrypted data structures uploaded by the data owner that represent Voronoi boundaries. The Voronoi boundary surrounding a data point is the boundary equidistant from that data point to neighboring points. The region enclosed in a boundary is the region within which queries will return the same 1-nearest neighbor. Because the Voronoi boundaries change whenever a new data point is added, this approach prevents continuous data uploading without redoing the entire outsourcing process. The encryption scheme is symmetric, and both the data owner and queriers share the secret key. This work's trust model is identical to [24]'s, where the cloud party is semi-honest and the data owner and queriers are fully trusted.
Another contribution from [26] is a reduction-based impossibility proof for privacy preservation in outsourced exact k-NN under a single cloud party model, where the cloud party has access to an encryption key. Note that [24] and [32] assume the cloud party does not have the encryption key, hence avoiding this impossibility argument. The proof is a reduction to order-preserving encryption (OPE), and leverages prior work that shows OPE is impossible under certain cryptographic assumptions [4]. Fully homomorphic encryption does actually allow OPE, but it is still impractical [11]. Let B be an algorithm, without access to a decryption oracle, that finds the nearest neighbor of an encrypted query E(q) in the encrypted data E(D). The impossibility proof shows that B can be used to construct an OPE function E(), hence B cannot exist. However, their argument does not apply to a system model with multiple cloud parties. Their proof, which relies on B's lack of access to a decryption oracle, can be circumvented by providing access to the decryption function at another cloud party, such as in [9]. Furthermore, OPE has been realized in an interactive two-party setting [17]. As later discussed in Sections 5 and 8, we consider it reasonable that a cloud party has encryption capabilities in a multi-data owner model. Thus further exploration of private k-NN is needed for scenarios not within the scope of this impossibility result.
One important observation about these prior works is that they all exhibit linear complexity in computation (and network bandwidth in the case of [9]). Intuitively, this is because the protocols are distance based. Since the distances from a query to all data tuples vary for each query, all distances must be recomputed per query. While there are techniques [3,12] for reducing the computational complexity of k-NN, these techniques may not be privacy preserving. For example, these algorithms necessarily compute on only a portion of the data tuples near the query. Observing which data tuple subsets are used can leak the similarity of subsequent queries. However, future work may yield more efficient privacy-preserving k-NN constructions.
SYSTEM AND THREAT MODELS FOR MULTIPLE DATA OWNERS
All existing private outsourced k-NN systems assume a single trusted data owner entity. In this paper, we are the first to consider multiple mutually distrusting data owner entities. This is a simple and practical extension of existing models, yet has important implications. We explore the various threat models that can arise in such a scenario, but we focus the most attention on one threat model we find particularly realistic. The remaining threat models can be more easily dealt with, for example by extending existing single data owner systems, and will be discussed in detail in Section 8. In this section, we first discuss our model of a multi-data owner system. We then provide a framework for modeling threats in the system.
K-NN System Model
The privacy-preserving outsourced k-NN model has three types of parties: the data owners who outsource their data, the cloud party(s) who host the k-NN computation, and the queriers requesting k-NN classification. For emphasis, the key difference between our system model and prior models is the existence of multiple data owner entities. An immediate consequence of this modification is seen in the structure of the cloud parties. As discussed in Section 3, results from [26] indicate that privacy-preserving outsourcing of exact k-NN cannot be achieved by a single cloud party without fully homomorphic encryption, which is impractical. Instead, at least two cloud parties should exist: one which stores encrypted data, and one with access to the associated decryption function that acts as a decryption oracle. Hence our system model includes an additional cloud party we call the Cryptographic Service Provider (CSP), who generates a key pair for a public-key encryption scheme and distributes the public key to data owners and queriers to use for encryption.
The cloud party storing the encrypted data, termed the data host, is able to compute on encrypted data via interaction with the CSP. As depicted in Figure 2, our system model involves multiple data owners and queriers, the data host, and the CSP. Note that encrypted queries and data submissions must be sent to the data host, not the CSP, since the CSP can simply decrypt any received values.
Threat Model Abstractions
We consider any party as potentially adversarial. Like prior works, we will consider semi-honest adversaries that aim to learn data or query values, possibly through collusion, while following protocols. We do not consider attacks where parties attempt to disrupt or taint the system outputs. Additionally, we must assume that the CSP and the data host do not directly or indirectly collude, since the CSP maintains the secret key to decrypt all data tuples on the data host. This is a reasonable assumption, for example, if the two cloud parties are hosted by different competing companies incentivized to protect their customers' data privacy.
To describe our threat models, we will take a slightly unorthodox approach. Instead of providing a model for each party's malicious actions, we model the malicious behavior of a party based on roles it possesses. The logic behind this approach is that different parties in our system model can pose the same threats because they possess the same roles. Enumerating and investigating all combinations of roles and parties is redundant. Hence, just using roles provides a cleaner abstraction from which to analyze threats. Below we describe the four possible roles that arise in our system model.
• Data Owner Role: A party with this role may submit encrypted data into the system. We focus on misbehavior to compromise data privacy, and do not deal with spam or random submissions aimed at tainting system results. Note that the data owner role cannot compromise data privacy by itself since it does not allow observation of system outputs.
• Querier Role: This role allows submission of encrypted queries to receive k-NN classification results.
We note that the querying ability can allow the discovery of the Voronoi cell surrounding a data point. The Voronoi cell is the boundary surrounding a point p that is equidistant between p and its neighbors. In the 1-NN case, queries within p's Voronoi cell will return p's classification. A query outside of the cell will return the classification of a neighboring point. Hence, changes in 1-NN outputs can signal a crossing over of a boundary. However, we deem this inherent leakage minimal since discovering the Voronoi cell would require numerous queries to reveal each boundary edge. The Voronoi cell also simply bounds the value of the data point, and does not directly reveal it. Furthermore, neighboring cells of the same class will appear merged in such an attack, since the output signal will not change between cells. K-NN with a large k parameter makes analysis even less accurate.
• Data Host Role: The data host role can only be possessed by a cloud party. It allows storage and obser-vation of incoming encrypted data tuples and queries, and the data host computation that is conducted. A holder of the role also observes any interactions with other parties.
• CSP Role: Only a cloud party may possess this role. A CSP possesses both an encryption and decryption key, and can decrypt any data it observes. It interacts with a data host role to serve as a decryption oracle, and can observe any interaction with other parties, as well as the CSP's own computation.
In Section 5, we focus on the primary threat model in this paper, where any single party can possess both the data owner and querier role. In Section 8, the remaining threat models are explored. These are threats from one of the cloud parties possessing either the data owner role or the querier role, but not both. Also, we consider the case where the cloud parties possess neither roles.
ATTACKS IN THE DATA OWNER-QUERIER ROLES THREAT MODEL
In this section, we consider the threat model where an adversarial party possesses both the data owner and querier roles (termed the DO-Q threat model). Since we consider all parties as potentially adversarial, any scenario where both roles may belong to a single party falls under this threat model. We argue this is a realistic threat model, and discover a set of conceptually simple attacks on any system regardless of system design or encryption scheme. The attacks work based purely on the mathematical nature of the k-NN output, rather than implementation specifics. Hence, we conclude that multi-data owner outsourced k-NN cannot be secured under such a threat model.
DO-Q Threat Model
The DO-Q threat model covers any situation where a single party can possess both the data owner and querier roles. We consider this a practical threat model because it can arise in numerous scenarios. Hence, we will focus on it in this section as well as Section 6. The following outlines the possible scenarios where a single party may possess both roles:
• It is reasonable to expect that data owners, contributing their sensitive data, are allowed access to querying.
If not, there is less incentive for data owners to share data. For examples, hospitals might want to pool their data to create a disease classification system. The doctors at any participating hospital should be able to query the system when diagnosing their patients.
• The data owners may not be explicitly given querying permissions, say if the data owners and queriers are separate institutions. However, miscreants in each party may collude together, providing a joint alliance with both roles.
• If the data host can encrypt data tuples and queries, it can act as its own data owner and querier. It can insert its own encrypted data tuples into the system's data set, and compute k-NN using its own encrypted queries. A data host with access to the encryption keys is not unreasonable. In [9], the data host needs to encrypt values to carry out operations on encrypted data tuples. In addition, the queries are encrypted under the same key as data tuples to allow homomorphic operations.
• If the data host lacked the data owner and/or querier roles (e.g., if an encryption key was kept secret from it), it may still collude with a data owner and/or querier to obtain the roles. This collusion can also occur between the CSP with a data owner and querier.
• If the system is public to data submissions, queries, or both, then any party can supplement their current roles with the public roles. For example, a fully public system allows anyone to obtain both the data owner and querier role.
Distance-Learning Attacks in the DO-Q Model
We now present a set of attacks that reveal distances between a query and encrypted data tuples, allowing triangulation of the plaintext data values. We begin by presenting attacks under the simpler 1-NN, and gradually progress towards k-NN. We assume that k-NN does not output the neighboring tuples directly, which would allow arbitrary privacy breaches through querying, but rather just the query's predicted classification.
We note that prior work [7] has developed algorithms for cloning the Voronoi diagram of 1-NN using only queries, if the query responses contain the exact location of the nearest neighbor or the distance and label of the nearest neighbor. If only the nearest neighbor label is returned, then the algorithm provides an approximate cloning. Our attacks differ in that they are structurally very different, we look at k-NN beyond 1-NN, our attacks reveal the exact data value of a target tuple rather than the Voronoi diagram, and our attacks leverage data insertion as well as querying. Also, we focus on a system model where query responses do not contain the distance (which we consider an already broken construction). The algorithms in [7] require at least the distance in the query response to conduct exact cloning.
Attack using Plaintext Distance
We begin by considering a broken construction of outsourced k-NN. The k-NN system must calculate the distances from all data tuples to the query point. If these distances are ever visible in plaintext to a party with just the querier role, then it is simple to discover the true value of any encrypted data tuple.
Knowing a query q, the adversary can observe the distance l from q to the nearest neighbor. This forms a hypersphere of radius l around q. If the data is d-dimensional, d + 1 different query points with the same nearest neighbor construct d + 1 hyperspheres, which will intersect at one unique point, the nearest neighbor's plaintext value. Hence, we can uncover data even though it is encrypted in the data set. Note that in this case, the adversary needs both a querier role as well as the cloud party role that observes plaintext distances. This is different from the threat model we are considering, but we discuss it as it provides insight for the following attacks.
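The triangulation step itself is elementary: subtracting the sphere equation of one query from the others yields a linear system in the unknown point. A minimal NumPy sketch with invented coordinates:

```python
import numpy as np

def triangulate(queries, dists):
    """Recover a point from its distances to d+1 query points (d-dimensional data)."""
    q0, l0 = queries[0], dists[0]
    A = 2 * (queries[1:] - q0)
    b = (np.sum(queries[1:] ** 2, axis=1) - np.sum(q0 ** 2)
         - dists[1:] ** 2 + l0 ** 2)
    return np.linalg.solve(A, b)

hidden = np.array([3.0, -1.5])                              # the hidden tuple's true value
queries = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # d + 1 = 3 queries
dists = np.linalg.norm(queries - hidden, axis=1)            # leaked distances
print(triangulate(queries, dists))                          # -> [ 3.  -1.5]
```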
Attack on 1-NN
Now consider the 1-NN scenario where the system does not foolishly reveal the distance in plaintext. An adversary with the data owner and querier roles can still breach the privacy of the system. The querier role allows the adversary to submit a query q and observe the classification C of its nearest neighbor p. The attacker, using its data owner role, can then insert an encrypted "guess" data tuple E(g) with any classification C′ ≠ C. If the new nearest neighbor classification returns C′, we know p is farther away from q than g. If not, then p is closer. Using this changing 1-NN output signal, the adversary can conduct binary search using additional guess tuples to discover the distance from q to p. Hence, the distance is revealed and triangulation can be conducted as in the insecure previous case. This takes O((d + 1) log D) guesses in total to compromise a data tuple, where d is the data dimensionality and D is the distance from q to the initial guess insertion.
Note the above attack appears to require tuple deletions or updates, without which guess tuples that are closer than p will disallow continued binary search. Deletions or updates will certainly improve attack performance, and are reasonable to allow in many scenarios (e.g., location data, which is constantly changing). However, this is not required. First, the attacker could conduct a linear search, rather than binary, starting with a guess at distance D and linearly decreasing the guess distance until it equals the distance between q and p. Alternatively, a too-near guess still narrows down the range in which p's value can be located, and the adversary can restart a search with a new query in that range.
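As an illustration of the search, the sketch below simulates the 1-NN system as a local oracle (the helper names, the single hidden tuple, and the assumption that guesses can be removed between probes are all ours) and recovers the victim's distance from the query to arbitrary precision.

```python
import numpy as np

# simulated outsourced 1-NN: the attacker only sees the returned class label
data = [(np.array([4.0, 2.0]), "A")]            # hidden victim tuple, class A

def one_nn_label(query, extra=None):
    pts = data + ([extra] if extra is not None else [])
    return min(pts, key=lambda t: np.linalg.norm(t[0] - query))[1]

def learn_distance(query, hi=100.0, tol=1e-6):
    """Binary-search the victim's distance using guess insertions of another class."""
    lo = 0.0
    direction = np.array([1.0, 0.0])            # place guesses along an arbitrary direction
    while hi - lo > tol:
        mid = (lo + hi) / 2
        guess = (query + mid * direction, "B")  # insert, query, then (conceptually) delete
        if one_nn_label(query, extra=guess) == "B":
            hi = mid                            # guess is closer than the victim
        else:
            lo = mid                            # victim is still the nearest neighbor
    return (lo + hi) / 2

q = np.array([1.0, 2.0])
print(learn_distance(q))                        # ~3.0 = true distance to the victim
```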
Attack on k-NN
Consider now attacks on k-NN, instead of 1-NN. In this scenario, the k-NN system returns the classes of all k nearest neighbors. Let p of class C be the nearest neighbor to query q. In fact, k-NN can be reduced to 1-NN by inserting k − 1 data tuples of class C′ ≠ C adjacent to q. Then, p is the only unknown data tuple whose classification is included in the k-NN output set. The attacker with data owner and querier roles can test guess data insertions of class C′ until C is no longer included in the k-NN output, indicating the guess tuple is closer than p. This signal can be used as before to conduct a distance search.
Attack on Majority-Rule k-NN
The final scenario for a k-NN system is one that operates on majority rule. Rather than outputting all classifications from the k nearest neighbors, only the most common classification is returned. Again, data owner and querier privileges allow a distance-learning attack by reducing to 1-NN. Let p of class C be the nearest neighbor. The attacker can insert k − 1 data tuples adjacent to a query q, split evenly over all classes such that the output of k-NN solely depends on the classification of p. For example, if we have binary classification (0 and 1) and 3-NN, the attacker can insert a 0-class tuple and a 1-class tuple adjacent to q. The original nearest neighbor (p) now solely determines the output of 3-NN, and the adversary may again use a change in k-NN output as a signal for a distance search.
Fundamental Vulnerability
The vulnerability exposed by these attacks is fundamental to exact k-NN. For any given query, k-NN outputs a function evaluated on the subset of tuples within a distance d, where d is the distance of the k-th nearest neighbor. Representing k-NN as kernel density estimation using a uniform kernel, d is the kernel's width. The vulnerability above fundamentally relies on the fact that the kernel's width changes depending on which data points are near the query. An adversary can learn distances through guess insertions and an observation of k-NN output change. Hence, in any scenario where a party possesses both the data owner and querier roles, exact k-NN cannot be secure, regardless of implementation details. An alternate or approximate solution must be used. In Section 6, we propose a privacy-preserving scheme using a similar algorithm from the same family as k-NN.
PRIVACY-PRESERVING KERNEL DENSITY ESTIMATION
In Section 5, we demonstrated that data privacy can be breached in any multi-data owner exact k-NN system under the DO-Q threat model. Given the practicality of this scenario, particularly since the data host typically will have those roles, we seek to provide a secure alternative to exact k-NN. In this section, we propose a privacy-preserving system using an algorithm from the same family as k-NN.
In particular, k-NN is a specific case of kernel density estimation (as discussed in Section 2.2). Intuitively, the kernel measures the amount of influence a neighbor should have on the query's final classification. For a given query, kernel density estimation computes the sum of the kernel values for all data points of each class. The classification is the class with the highest sum. K-NN uses a kernel that is uniform for the k nearest neighbors, and zero otherwise. We propose substituting this kernel with another common kernel, the Gaussian kernel:
$$K(\| q - x_i \|) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}\| q - x_i \|^2}$$
In this equation, $q$ is the query tuple, $x_i$ is the $i$-th data tuple, $\sigma$ is a parameter (chosen to maximize accuracy using standard hyperparameter optimization techniques, such as cross-validation with grid search), and $\|\cdot\|$ is the L2 norm.
As the distance between q and xi increases, the kernel value (and influence) of xi decreases. K-NN allows the k nearest neighbors to all provide equal influence on the query point's classification. When using a Gaussian kernel, all points have influence, but points farther away will influence less than points nearby. While this approach is not equivalent to exact k-NN due to the non-uniformity of neighbor influences, it is similar and from the same family of algorithms. Later, we show experimentally that it can serve as an appropriate substitute in order to provide privacy preservation.
Why the Gaussian Kernel?
The Gaussian kernel for kernel density estimation is specifically chosen to provide defense against the adaptive k-NN distance-learning attacks under the DO-Q threat model (Section 5). Here, we briefly provide the intuition for why this is true, with the formal proof in Section 6.3.
In k-NN, the width of the kernel depends on the distance of the kth nearest point from the query, so that only the k nearest neighbors influence the classification. An adversary with the data owner role can change this set of neighbors by inserting data tuples within the kernel width, and manipulate whether a particular point influences the classification. Distances can be learned by using changes in this subset as a signal. When using the Gaussian kernel, which has an infinite kernel width, any data the adversary inserts will not change the influence of other points, and the outcome will be affected by exactly an amount the adversary already can compute. For example, for a given query $q$, an adversary inserting a point $y$ knows the influence for $y$'s class will increase by exactly $K(\|q - y\|)$. Hence, the adversary learns nothing new from possessing the data owner role and querying.
It is important to realize that this security arises from the Gaussian kernel's unvarying and infinite width. The choice of an alternative kernel is therefore non-trivial and must be made carefully. For example, another viable substitute is the logistic kernel $K(u) = \frac{1}{e^{u} + 2 + e^{-u}}$, since it too has an unvarying and infinite width. Our decision to use the Gaussian kernel was based on the ease of developing a scheme using partially homomorphic cryptographic primitives.
A Privacy-Preserving Design
In this section, we will step-by-step describe the construction of a privacy-preserving classification system using kernel density estimation with a Gaussian kernel. As we will demonstrate, our protocols provide classification without leaking data or queries. Our system will follow the same system model as described in Section 4.1.
Setup
Recall that the system model outsources data storage and computation to a data host and a cryptographic service provider. The cryptographic service provider generates a public/private key pair for the Paillier cryptosystem and distributes the public key to the data owners and queriers, who use it to encrypt the data they submit to the cloud, as well as to the data host. Data owners submit their data in encrypted form to the data host. Note that while we use the Paillier cryptosystem, other similarly additively homomorphic cryptosystems may be appropriate as well.
Computing Squared Distances
Since a kernel is a function of distance, our system must be able to compute distance using encrypted data without revealing it. Algorithm 1 describes SquaredDist(E(a), E(b)), which allows the data host to compute the encrypted squared distance between a and b given only their ciphertexts. The protocol does not leak the distance to either cloud party, to prevent a distance-learning attack. Only squared distances are required, as they are what the Gaussian kernel calculation uses.
Assume our tuples are m-dimensional, and $a_i$ is the $i$-th feature of tuple $a$. $E$ and $E^{-1}$ are the Paillier encryption and decryption functions, respectively, using the previously chosen public/private key pair. Note all Paillier operations are done modulo the Paillier parameter $N$, but for simplicity we elide the modulus. Also, we abbreviate the data host as DH, and the cryptographic service provider as CSP.
Algorithm 1 SquaredDist(E(a), E(b)): Output $E(\|a - b\|^2)$
1. DH: for $1 \le i \le m$ do:
   (a) $x_i \leftarrow E(a_i) \cdot E(b_i)^{-1}$. Thus, $x_i = E(a_i - b_i)$.
   (b) Choose random $\mu_i \in \mathbb{Z}_N$.
   (c) $y_i \leftarrow x_i \cdot E(\mu_i)$, such that $y_i = E(a_i - b_i + \mu_i)$.
   (d) Send $y_i$ to CSP.
2. CSP: for $1 \le i \le m$ do:
   (a) $w_i \leftarrow E((E^{-1}(y_i))^2)$.
   (b) Send $w_i$ to DH.
3. DH: for $1 \le i \le m$ do:
   (a) $z_i \leftarrow w_i \cdot x_i^{-2\mu_i} \cdot E(-\mu_i^2)$.
4. DH: Encrypted squared distance $E(\|a - b\|^2) = \prod_{i=1}^{m} z_i$.
Step 1 computes the encryption of $d_i = (a_i - b_i)$ and additively masks each with a random secret $\mu_i$, such that $E^{-1}(y_i) = d_i + \mu_i \bmod N$. This prevents the CSP from learning $d_i$ in step 2. The CSP decrypts $y_i$, squares it, and sends the encryption of the square back to the DH.
Note $w_i = E((E^{-1}(y_i))^2) = E(d_i^2 + 2\mu_i d_i + \mu_i^2)$. The DH can encrypt $\mu_i^2$ and compute $E(2\mu_i d_i)$ as $x_i^{2\mu_i}$, allowing it to recover $E(d_i^2)$ in step 3. Finally, the DH can compute $E(d^2) = E(\sum_{i=1}^{m} d_i^2) = \prod_{i=1}^{m} E(d_i^2)$, as in step 4. This provides the DH with encrypted squared distances without revealing any information to either cloud party.
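A sketch of Algorithm 1 follows, with both cloud parties simulated as local functions. It assumes the third-party python-paillier (phe) package for the homomorphic operations; the package and its API are not part of the paper's construction.

```python
import random
from phe import paillier   # assumed third-party package (python-paillier)

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

def csp_square(y):
    """CSP side: decrypt the masked value, square it, re-encrypt."""
    return pub.encrypt(priv.decrypt(y) ** 2)

def squared_dist(enc_a, enc_b):
    """Data-host side of Algorithm 1: returns E(||a - b||^2)."""
    total = pub.encrypt(0)
    for ea, eb in zip(enc_a, enc_b):
        x = ea - eb                            # E(a_i - b_i)
        mu = random.randrange(1, 2 ** 32)
        y = x + mu                             # masked difference sent to the CSP
        w = csp_square(y)                      # E((a_i - b_i + mu)^2)
        z = w + x * (-2 * mu) + (-mu * mu)     # strip the mask: E((a_i - b_i)^2)
        total = total + z
    return total

a, b = [3, 7, 1], [0, 5, 4]
enc_a = [pub.encrypt(v) for v in a]
enc_b = [pub.encrypt(v) for v in b]
print(priv.decrypt(squared_dist(enc_a, enc_b)))   # 22 = 9 + 4 + 9
```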
Computing Kernel Values
Now that we can compute squared distances, we must compute the Gaussian kernel values in a secure fashion. Algorithm 2 does so such that the CSP obtains a masked kernel value for each data tuple.
Algorithm 2 KernelValue(q): Compute Gaussian kernel values for data tuples $t$ in data set $D$ given query $q$
1. DH: for $1 \le i \le |D|$ do:
   (a) $s_i \leftarrow \text{SquaredDist}(t_i, q) = E(\|t_i - q\|^2)$
   (b) Choose random $\mu_i \in \mathbb{Z}_N$
   (c) $e_i \leftarrow s_i \cdot E(\mu_i)$. Thus, $e_i = E(\|t_i - q\|^2 + \mu_i)$.
   (d) Send $e_i$ to CSP.
2. CSP: for $1 \le i \le |D|$ do:
   (a) $g_i \leftarrow \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2\sigma^2} E^{-1}(e_i)}$
In step 1, the DH additively masks each computed encrypted squared distance by adding a different random mask. These values are sent to the CSP, who in step 2 decrypts them and computes the kernel values from the masked squared distances. Note that since each distance is masked with a random value, the CSP does not learn any distances, even if the CSP knows some data values in the data set. The CSP will not be able to determine which masked values correspond to its known values. Again, no cloud party learns any distances; each observes only encrypted or masked values. The query is used only in this algorithm and is never decrypted. Hence query privacy is achieved.
Computing Classification
The final step is to conduct kernel density estimation for classification by determining which class is associated with the largest sum of kernel values. Our scheme represents classes not as a single value, but rather as a vector. If there are c classes, we require a set of c orthonormal unit-length bases, where each basis is associated with a particular class. When uploading a data tuple, a data owner uploads the encrypted basis associated with the tuple's class.
We can compute classification as described in Algorithm 3. This algorithm is run after Algorithm 2, such that the CSP knows the masked Gaussian kernel values for all data tuples. Let $c_i$ represent the classification vector for the $i$-th data tuple.
Algorithm 3 (concluding steps):
7. (d) CSP: Send $g_{out}$ to DH.
8. DH: Map $g_{out}$ to its non-garbled value $out$ and return it to the querier; $out$ is the number representing the predicted class.
In step 1, the encrypted classification vectors are masked through matrix multiplication with a random invertible matrix $B_i$, which can be efficiently constructed [21]. Note this computation can be conducted on encrypted data because the DH knows the values of $B_i$, and the matrix multiplication involves only multiplying by plaintext constants and adding ciphertexts. The masked classification vectors are sent to the CSP in step 2, which scales them by the kernel values (from Algorithm 2). In step 3, these values are returned to the DH. The DH undoes the transformation by multiplying by $B_i^{-1}$. The distance masks in the Gaussian kernel from Algorithm 2 are removed by multiplying by $e^{\frac{1}{2\sigma^2}\mu_i}$, where $\mu_i$ is the mask for the $i$-th tuple. Now, $w_i$ is the encrypted basis for the $i$-th tuple's class, scaled by the kernel value. In step 4, summing these scaled encrypted bases sums the kernel values for each class in the direction of the classes' bases, forming the vector $A$. Note that $A$ is still a vector of encrypted values, though.
At this point, we need to determine the class with the highest kernel value summation. Having the CSP decrypt $A$ and return the kernel value sums for the classes may appear straightforward, but in fact leads to an attack on future inserted data tuples if an adversary can view the kernel value sums and can continuously query $q$. When the next data tuple $t$ is inserted, the adversary will observe an increase in the kernel value summation for $t$'s class by $K(\|t - q\|)$. Knowing $q$ and $K(\|t - q\|)$, the adversary can compute $\|t - q\|$. Assuming the data has $m$ features, the attacker can determine $t$ using a set of $(m + 1)$ queries both before and after insertion to compute $(m + 1)$ distances. Note that our threat model allows for one of the cloud parties to possess the querying role, so whichever party can observe the kernel value sums can conduct this attack. Thus, the protocol must determine the classification without revealing the kernel value sums to any party.
We accomplish this using a garbled circuit for the function $f(in_1, \ldots, in_c, \mu_1, \ldots, \mu_c) = \arg\max_k (in_k - \mu_k \bmod N)$. If there are $c$ classes, this function accepts $c$ masked inputs $in$ and $c$ additive masks $\mu$, unmasks each masked input with its associated mask, and returns the index corresponding to the largest unmasked value. If the masked inputs are the masked kernel value summations for each class, $f$ returns the class index with the largest kernel value summation. In step 5, the DH additively masks the (encrypted) kernel value summation for each class. Then in step 6, the DH generates the garbled circuit $GF_f$ for $f$ and garbles its own inputs, which are the masks for each class's kernel value summation. Note that $f$ is not that complex of a function (for example, compared to a decryption function), and its garbled circuit can be practically implemented using existing systems [15]. The DH sends $GF_f$, its garbled inputs, and the masked (still encrypted) $A$ to the CSP. The CSP decrypts the masked $A$ vector in step 7(a). By conducting oblivious transfer in step 7(b) with the DH to obtain the garbled input associated with each $A_i$, the CSP does not reveal the masked kernel value sums. With all of the inputs to $GF_f$ now available, the CSP evaluates the garbled circuit and returns the garbled output to the DH. Because the DH created the circuit, it can map the garbled output back to its non-garbled value, which is the class index returned by $f$ with the largest kernel value summation. This classification is exactly the classification output of kernel density estimation using a Gaussian kernel.
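The unmasking step relies on the identity $e^{-\frac{1}{2\sigma^2}(d^2+\mu)} \cdot e^{\frac{1}{2\sigma^2}\mu} = e^{-\frac{1}{2\sigma^2}d^2}$. The toy below checks that identity and the final per-class argmax in the clear, using small real-valued masks for illustration (the protocol itself masks modulo N and keeps everything encrypted or garbled):

```python
import numpy as np

sigma, rng = 0.5, np.random.default_rng(0)
kernel = lambda d2: np.exp(-d2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

sq_dists = np.array([0.10, 1.30, 0.05, 2.00])      # ||t_i - q||^2 for four tuples
classes  = np.array([0, 1, 0, 1])
masks = rng.uniform(0.0, 5.0, size=sq_dists.size)  # DH's additive masks mu_i

masked_kernels = kernel(sq_dists + masks)          # what the CSP computes (Algorithm 2)
unmasked = masked_kernels * np.exp(masks / (2 * sigma ** 2))   # DH removes the masks
print(np.allclose(unmasked, kernel(sq_dists)))     # True

scores = [unmasked[classes == c].sum() for c in (0, 1)]
print(int(np.argmax(scores)))                      # predicted class index
```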
Security Analysis
First we must show that use of the Gaussian kernel by our scheme is resistant to the adaptive distance-learning attacks detailed in Section 5. While Section 6.1 provides an intuitive argument for this, here we present a formal proof.
Theorem 6.1. Under the Gaussian kernel in kernel density estimation, the adversary learns nothing from the distance-learning attacks of Section 5. More specifically, the adversary does not learn the distance from a query output by adding to the data set (or removing her own data tuple).
Proof. Let D = {d0, ..., dn} be the current data set, q be the adversary's desired query point, and dA be any data tuple the adversary could insert. Also define GKj(D, q) to be the kernel value sums for the j-th class given the query q and data set D, and C(di) to be the i-th data tuple's classification. Recall that if the system allows for c classes, the query output is arg max j∈{1,...,c} (GKj (D, q)).
Before tuple insertion, for each class j in {1, ..., c}:
GK_j(D, q) = \sum_{i=1}^{|D|} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}\|d_i - q\|^2}\, \mathbb{1}_{C(d_i)=j}
Here, \mathbb{1}_{C(d_i)=j} is an indicator variable for whether the i-th data tuple d_i is of class j. After inserting tuple d_A, for each class j in {1, ..., c}:
GK_j(D \cup \{d_A\}, q) = \sum_{i=1}^{|D|} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}\|d_i - q\|^2}\, \mathbb{1}_{C(d_i)=j} + \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}\|d_A - q\|^2}\, \mathbb{1}_{C(d_A)=j} = GK_j(D, q) + \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}\|d_A - q\|^2}\, \mathbb{1}_{C(d_A)=j}
The adversary, through using its data owner role, can cause a change of \delta = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}\|d_A - q\|^2}\, \mathbb{1}_{C(d_A)=j}. However, the
Gaussian kernel values have not changed for any other data tuples. The change in a query output is directly and completely a result of δ, which the adversary knows independent of querying since it knows dA, q, and σ (which can be public). Given the query output is based only on hidden Gaussian kernel value sums, the output after insertion does not leak any more information on an individual tuple's kernel value beyond what the query output leaks inherently (without insertion). Because the distance is only used in the Gaussian kernel, and the adversary learns nothing about other tuples' Gaussian kernel values through insertion, it does not learn the distance from insertion.
If the adversary deleted one of her tuples, the outcome is exactly the same except δ is negative. With a semi-honest adversary, we can assume the adversary only deletes her own tuples.
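A quick numerical sanity check of this argument (a self-contained numpy sketch with made-up data, not code from the paper): inserting a tuple d_A shifts only its own class's kernel sum, and by exactly the δ the adversary already knows.

import numpy as np

def gauss_kernel_sums(D, labels, q, sigma, n_classes):
    k = np.exp(-np.sum((D - q)**2, axis=1) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.array([k[labels == j].sum() for j in range(n_classes)])

rng = np.random.default_rng(1)
D, labels = rng.random((50, 4)), rng.integers(0, 2, 50)
q, sigma = rng.random(4), 0.5

before = gauss_kernel_sums(D, labels, q, sigma, 2)

d_A, c_A = rng.random(4), 1                       # the adversary's own tuple and its class
after = gauss_kernel_sums(np.vstack([D, d_A]), np.append(labels, c_A), q, sigma, 2)

delta = np.exp(-np.sum((d_A - q)**2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
assert np.allclose(after - before, np.array([0.0, delta]))   # only class c_A shifts, by exactly delta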
The next analysis is to show our scheme leaks no further data beyond its output. More formally, our scheme leaks no additional information compared to an ideal scheme where the data is submitted to a trusted third party, who computes the same classification output. In our initial discussion of the protocol design in Section 6.2, we justified the security along this front for each step of the construction. See Appendix B for a more formal proof.
A final analysis that could be conducted is to formally characterize what the kernel density estimation output itself reveals about the underlying data. One way is to use the framework of differential privacy [8], which can tell us how much randomness we need to add to the output in order to achieve a specific quantifiable level of privacy as defined by the framework.
We say that a randomized algorithm A provides ε-differential privacy if for all data sets D_1 and D_2 differing by at most one element, and all subsets S of the range of f,
\Pr(A(D_1) \in S) \le \exp(\varepsilon) \cdot \Pr(A(D_2) \in S).
We also define the sensitivity of a function f : D → R^m as
\Delta f = \max_{D_1, D_2} \|f(D_1) - f(D_2)\|_1.
Then [8] shows that, if we add Laplacian noise with standard deviation λ, as in
\Pr(A_f(D) = k) \propto \exp(-\|f(D) - k\|/\lambda),
then A_f gives (\Delta f/\lambda)-differential privacy.
In our case, we can view f as performing a kernel density estimation query with a particular query point on a set of data points, to receive a score for each class. A_f would be a randomized version of f, which takes these scores from f and adds Laplacian noise to each one. The sensitivity is \Delta f = \frac{1}{\sigma\sqrt{2\pi}}, as follows from the proof of Theorem 6.1 when \|d_A - q\| = 0, and so using A_f would grant us \frac{1}{\lambda\sigma\sqrt{2\pi}}-differential privacy. We can see that a larger σ in the Gaussian kernel grants us greater differential privacy (i.e., leads to a smaller value of ε) when the magnitude of the noise remains fixed. This follows the intuition that a larger σ in the kernel increases the influence of far-away points in the data set, and so the result is not as affected by any particular point.
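As a sketch of how such output perturbation could look, the snippet below adds Laplace noise calibrated to the sensitivity 1/(σ√(2π)) before taking the argmax; the function name and the parameter values are illustrative assumptions rather than part of the protocol.

import numpy as np

def noisy_classify(kernel_sums, sigma, eps, rng):
    """Add Laplace noise calibrated to sensitivity Δf = 1/(σ√(2π)) so the
    released scores satisfy eps-differential privacy, then take the argmax."""
    sensitivity = 1.0 / (sigma * np.sqrt(2 * np.pi))
    lam = sensitivity / eps                      # Laplace scale λ = Δf / ε
    noisy = kernel_sums + rng.laplace(scale=lam, size=len(kernel_sums))
    return int(np.argmax(noisy))

rng = np.random.default_rng(2)
print(noisy_classify(np.array([3.2, 2.9]), sigma=0.5, eps=0.1, rng=rng))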
While we proved that the Gaussian kernel output does not leak data tuple values through the distance-learning attacks of Section 5, we have not yet developed a framework to analyze leakage channels other than those covered by the preliminary analysis above. Such a problem is challenging and we leave it to future work. However, even in the face of potential leakage, data perturbation provides some security, although in exchange for output accuracy.
Performance Considerations
Our kernel density solution requires O(N) computation and communication overhead, where N is the number of data tuples in the data set. On large data sets, we acknowledge this cost might be impractical. Future work remains to find privacy-preserving constructions with optimizations. Our goals with the presented construction are to both demonstrate that there exists a theoretically viable privacy-preserving alternative to k-NN, and lay the groundwork for future improvements. Note that existing private k-NN schemes also have linear performance costs, as discussed in Section 3.
One solution for improving performance is parallelism, since our scheme operates on each data tuple independently of other tuples. Ideally, multiple machines would run in parallel at both cloud parties, providing parallelism not only in computation but also in network communication. Each machine at the data host would compute a subcomponent of the kernel value sums, and one master node can aggregate these subcomponents and execute Yao's garbled circuit protocol to produce the classification. Our protocol was intentionally designed so that the function circuit that is garbled is of limited complexity, without the need for complex sub-routines such as decryption. Thus, Yao's protocol can be implemented efficiently [15].
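A minimal sketch of this parallel aggregation on plaintext data (ignoring the encryption layer) might look as follows; the chunking scheme, worker count, and data are our own illustrative choices.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_sums(chunk, labels, q, sigma, n_classes):
    """One worker's contribution to the per-class kernel value sums."""
    k = np.exp(-np.sum((chunk - q)**2, axis=1) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.array([k[labels == j].sum() for j in range(n_classes)])

def classify_parallel(D, labels, q, sigma, n_classes, n_workers=4):
    idx_chunks = np.array_split(np.arange(len(D)), n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(partial_sums, D[idx], labels[idx], q, sigma, n_classes)
                   for idx in idx_chunks]
        total = sum(f.result() for f in futures)     # master node aggregates the partial sums
    return int(np.argmax(total))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    D, labels = rng.random((10000, 8)), rng.integers(0, 2, 10000)
    print(classify_parallel(D, labels, rng.random(8), sigma=0.5, n_classes=2))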
k-NN suffers from this linear-growth complexity as well; however, approximation algorithms significantly improve performance by reducing the number of points considered during neighbor search. For example, k-d trees [3] divide the search space into smaller regions, while locality hashing [12] hashes nearby points to the same bin. Hence, the search for a query's neighbors involves computation over the data points in the same region or bin, rather than the entire data set. Unfortunately, it is not obvious how to execute approximate k-NN in a privacy-preserving fashion. Similarly, future work is needed on similar optimizations and approximations for kernel density estimation.
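For illustration, the following snippet contrasts a brute-force neighbor search with a k-d tree query on plaintext data using SciPy; it shows the plaintext speed-up structure that a privacy-preserving variant would have to reproduce, and it is not part of the protocol.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
D = rng.random((100000, 8))
q = rng.random(8)

# Brute force: distances to every point (what the private protocols effectively pay for).
brute = np.argsort(np.linalg.norm(D - q, axis=1))[:5]

# k-d tree: the search prunes far-away regions of the space instead of touching every point.
tree = cKDTree(D)
_, tree_hits = tree.query(q, k=5)

assert set(brute) == set(tree_hits)   # exact here; approximate variants trade accuracy for speed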
COMPARING K-NN AND GAUSSIAN KERNEL DENSITY CLASSIFICATION
In this section, we examine the differences between classification with k-NN and with Gaussian kernel density estimation. We empirically evaluate the performance of both algorithms to show that they can give similar results on real data. Appendix C expands on this evaluation, discussing a number of additional considerations when using Gaussian kernel density estimation: there we construct hypothetical data sets where we can expect the two algorithms to behave differently, discuss the underlying causes for the discrepancy, and address some practical issues specific to using a Gaussian kernel and how to solve them.
We summarize the results in Table 2. The medical data comes from the UCI Machine Learning Repository [1], a collection of real data sets used for empirical evaluations of machine learning algorithms. "Cancer 1" and "Cancer 2" contain measurements from breast cancer patients, collected at the University of Wisconsin. "Cancer 1" contains 9 features for each patient; "Cancer 2" came several years after "Cancer 1" and contains 30 features which are more fine-grained. "Diabetes" contains measurements from Pima Indian diabetes patients (8 features per patient). We also use the MNIST handwritten digit recognition data set, which consists of 28 × 28 pixel grayscale images, each containing a single digit between 0 and 9. Using these data sets, we evaluate each algorithm's classification accuracy as well as the degree of agreement between the two algorithms' outputs.
We took 20% of each data set for testing. Table 2 contains the prediction accuracy and classification agreement for both algorithms on these test sets when using the remaining 80% as training data.
Except for the Pima Indian diabetes data set, Gaussian kernel density classification exhibits an accuracy no more than 1% lower than k-NN's, and the two algorithms agree on over 99% of classifications. In the diabetes case, both algorithms perform poorly because the data is not well separated, with regions scattered with both classes. We argue such a data set is not well suited for k-NN in the first place. Prior investigation must determine whether a particular algorithm is suitable for a given data set.
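A small plaintext reproduction of this kind of comparison can be sketched with scikit-learn's bundled Wisconsin breast cancer data (a stand-in for the UCI sets used above); the kernel width and k are arbitrary choices, and the exact numbers of Table 2 will not be reproduced.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

def kde_classify(X_train, y_train, X_test, sigma):
    """Predict the class with the largest summed Gaussian kernel values."""
    preds = []
    for x in X_test:
        k = np.exp(-np.sum((X_train - x)**2, axis=1) / (2 * sigma**2))
        preds.append(np.argmax([k[y_train == c].sum() for c in np.unique(y_train)]))
    return np.array(preds)

X, y = load_breast_cancer(return_X_y=True)            # 30-feature Wisconsin data
X = MinMaxScaler().fit_transform(X)                    # rescale features to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

knn_pred = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te)
kde_pred = kde_classify(X_tr, y_tr, X_te, sigma=0.5)

print("k-NN accuracy:", (knn_pred == y_te).mean())
print("KDE accuracy: ", (kde_pred == y_te).mean())
print("agreement:    ", (knn_pred == kde_pred).mean())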
ALTERNATIVE THREAT MODELS
Sections 5 and 6 focused on the DO-Q threat model since it accounts for a number of realistic scenarios, including various collusion relationships. Without any party (directly or through collusion) possessing both the data owner and querier roles, there are several remaining threat situations.
In this section, we explore these threat models. The one pervasive assumption remains: the data host and the cryptographic service provider (CSP) do not collude. As mentioned in Section 4.2, a data owner or querier role itself does not allow privacy compromise of the system. Hence, the remaining alternate threat models are outlined below:
• Each party possesses only its original role (e.g., a data host only possesses a data host role).
• The data host additionally obtains a data owner role.
• The data host additionally obtains a querier role.
• The CSP additionally obtains a data owner role.
• The CSP additionally obtains a querier role.
Note that a given system may be under more than one of the above threats. For example, the CSP and data host may both obtain an additional role. However, since we assume no collusion directly or indirectly between the two cloud parties, we can analyze each threat independently.
These threat models, while each covering fewer scenarios, are still realistic. There are settings where the cloud parties do not collude with certain other parties. For example, the cloud parties may have reputations to maintain, and may avoid collusion or misbehavior for fear of legal, financial, and reputational backlash from discovery through security audits, internal and external monitoring, whistleblowers, or investigations. Note that in all cases, our kernel density estimation solution using the Gaussian kernel can still be used as an appropriate alternative, since the DO-Q threat model allows for a strictly more powerful (in terms of roles) adversary. Our discussion that follows will look at other solutions though, under each threat model.
Original Roles Threat Model
In this threat model, each party only possesses or utilizes the role it was originally designated. This is the simplest of threat models because data owners do not collude with any other parties. Since data owners themselves do not receive output from the k-NN system, they cannot compromise privacy. Hence, we can treat them as trusted, allowing us to revert to single data owner designs [9,24], except using multiple data owners. These existing systems already provide privacy against cloud and querier adversaries. This scenario could realistically arise if data owners and querier are separate institutions. For example, hospitals could contribute data to a research study conducted at a university, and only the researchers at that university can query the system.
Data Host Threat Models
We consider now the threat models where the data host obtains the role of a data owner or a querier, but not both. This can be through collusion, or a design giving the data host the ability to insert data (e.g., encrypt data tuples) or initiate queries.
An extension of the system in [9] can protect against a data host with the data owner role. This system uses computation over Paillier ciphertexts, similar in flavor to our kernel density estimation approach. Their system also consists of a CSP, which they call the key party, and a data host. Their scheme is privacy-preserving in the single data owner model. The data host is allowed to encrypt, and hence possesses the role of a data owner, but all data that flows through the data host is encrypted, including that received from the key party. Given that the data host does not know query values, the data host cannot learn anything that it did not already have prior knowledge about (e.g., data submitted using the data owner role). Hence, we argue this scheme can be simply extended to multiple data owners in this threat model. Do note that this scheme is not secure under the DO-Q threat model since it still computes exact k-NN.
The data host with the querier role appears more problematic due to information leakage. Intuitively, the ability to observe plaintext query outputs can leak the classification of data tuples. In particular, the data host can observe what tuples are used in the output construction, and the query output associates classifications to those tuples. It is challenging to defend against this breach because in outsourced exact k-NN, some cloud party typically must determine the subset of encrypted data tuples associated with the system output. If that party colludes with a querier (or possesses the querying role), the group will be able to associate those tuples with query-returned classifications. In this situation, our Gaussian kernel density algorithm can again provide privacy, since the query output is dependent on the classification over all encrypted tuples. Hence, the query output cannot be associated with any particular set of data tuples.
Key Party Threat Models
Finally, we consider a key party either colluding with or possessing the role of a data owner or a querier, but not both. Again we will consider extending the Paillier-based scheme in [9]. A key party with either role should not be able to compromise this system even with multiple data owners. k-NN computation is done in relation to distances from a query point. Without knowing the query point, a key party that knows values in the data set (through the data owner role) will not be able to associate observed distance-based values with known data. Furthermore, the values it observes should either be masked (as in our scheme), or encrypted (as in [9]). A key party with querying ability will only observe masked or encrypted values, hence it will not be able to learn anything from the computation.
CONCLUSION
In this paper, we have presented the first exploration of privacy preservation in an outsourced k-NN system with multiple data owners. Under certain threat conditions, we can extend existing single data owner solutions to secure the multi-data owner scenario. However, under a particularly practical threat model, exact k-NN cannot be secured due to a set of adaptive distance-learning attacks. This highlights the need for investigation into the inherent leakage from outputs of machine learning algorithms. As an alternative solution, we propose use of a Gaussian kernel in kernel density estimation, which is the family of algorithms k-NN belongs to. We present a privacy preserving system that supports this similar algorithm, as well as evidence of its similarity to k-NN. Admittedly, the scheme may not be practical for large data sets, given that the computational and communication complexities scale linearly. Improving performance is a remaining challenge, and we hope this first step lays the groundwork for future optimized privacy-preserving solutions.
ACKNOWLEDGEMENTS
This work is partially supported by a National Science Foundation grant (CNS-1237265). The first author is supported by the National Science Foundation Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
APPENDIX A. KERNEL DENSITY ESTIMATION AND REGRESSION
Given n samples {x1, · · · , xn} of a random variable x, one can estimate the probability density function p(x) with a kernel K in the following way, called kernel density estimation:
p(x) = \frac{1}{n} \sum_{i=1}^{n} K(\|x - x_i\|)
where \|\cdot\| is a norm. Given this estimate, classification can be determined as the most probable class:
D_C = \{ x_i \mid (x_i, y_i) \in D,\ y_i = C \}
p(x \mid y = C, D) = \frac{1}{|D_C|} \sum_{x_i \in D_C} K(\|x - x_i\|)
p(y = C \mid D) = \frac{|D_C|}{|D|}
p(y = C \mid x, D) = \frac{p(x \mid y = C, D)\, p(y = C \mid D)}{p(x \mid D)} = \frac{p(x \mid y = C, D)\, p(y = C \mid D)}{\sum_{C'} p(x \mid y = C', D)\, p(y = C' \mid D)}
 = \frac{\frac{1}{|D_C|} \sum_{x_i \in D_C} K(\|x - x_i\|) \cdot \frac{|D_C|}{|D|}}{\sum_{C'} \frac{1}{|D_{C'}|} \sum_{x_i \in D_{C'}} K(\|x - x_i\|) \cdot \frac{|D_{C'}|}{|D|}} = \frac{\sum_{x_i \in D_C} K(\|x - x_i\|)}{\sum_{x_i \in D} K(\|x - x_i\|)} \propto \sum_{x_i \in D_C} K(\|x - x_i\|)
\arg\max_C\, p(y = C \mid x, D) = \arg\max_C \sum_{x_i \in D_C} K(\|x - x_i\|)
Therefore, to classify a particular point x, we can sum the kernel values of the points which belong to each class and determine which class has the largest sum.
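The resulting classification rule is short; a minimal numpy sketch, with an assumed Gaussian kernel and toy data of our own, is:

import numpy as np

def kernel_density_classify(X, y, query, kernel):
    """Return the class whose members' summed kernel values are largest."""
    k = kernel(np.linalg.norm(X - query, axis=1))
    classes = np.unique(y)
    sums = np.array([k[y == c].sum() for c in classes])
    return classes[np.argmax(sums)]

gauss = lambda d, sigma=0.5: np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(1, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(kernel_density_classify(X, y, np.array([0.9, 1.1]), gauss))   # expected: 1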
B. ADDITIONAL SECURITY ANALYSIS
In this section, we additionally analyze our scheme to show that it leaks no further data beyond what the classification output leaks, under our semi-honest adversary model. More formally, our scheme leaks no further data than an ideal scheme where the data is submitted to a trusted third party, who computes the Gaussian kernel value sums and produces the same classification output. In the proofs below, we abbreviate the data host as the DH and the crypto-service provider as the CSP. Also, our initial discussion of the protocol design in Section 6.2 already justified the security provided by each step of the construction. We will avoid repeating these details in the proofs and analyze leakage by the algorithm as a whole.
Theorem B.1. Algorithm 1 leaks nothing about data tuples or queries.
Proof. As we see in step 1 of Algorithm 2, the inputs to Algorithm 1 are one encrypted data tuple and one encrypted query. In steps 1 and 3 of the algorithm, only encrypted data is visible to the DH. In step 2, the CSP does see plaintext. However, each plaintext has had a random additive mask applied to the original true value, done by the DH in step 1(c). Thus, the CSP cannot determine the true value from the plaintext. Under the security of the Paillier cryptosystem, the DH and the CSP learn nothing about data tuples or queries.
Theorem B.2. Algorithm 2 leaks nothing about the data tuples or queries.
Proof. In step 1, the DH only operates on encrypted values. Theorem B.1 proves that the SquareDist algorithm (Algorithm 1) leaks nothing about the data tuples or queries, so the DH learns nothing from running SquareDist.
In step 2, the CSP again views plaintext, but a random additive mask has been applied (step 1(b)). Thus, the CSP cannot determine the original unmasked value, and does not learn anything about the data tuples or queries.
Theorem B.3. Algorithm 3 leaks nothing about the data tuples or queries, except what may inherently leak from the classification output of kernel density estimation using a Gaussian kernel.
Proof.
Step 0 leaks no information based on Theorem B.2. Again, the DH only operates on encrypted values in step 1. In step 2, the CSP has access to the Gaussian kernel value for a masked value (from step 2(a) of Algorithm 2), and a randomly masked classification vector. Since all values are masked, the CSP cannot determine the true value or classification of any data tuple or query. Note the CSP does not return a decrypted classification vector to the DH, resulting in the DH still only operating on encrypted values in steps 3 through 6. Steps 6 through 8 are simply Yao's garbled circuit protocol for the defined function f. Through this construction, the garbled circuit's inputs, which are masked values and their associated additive masks, are not revealed to any party that does not possess them. Only the DH possesses the masks, and only the CSP can obtain the masked kernel value sums through decryption. Since these CSP-owned values are masked, the CSP learns nothing from the plaintext. By the security of Yao's protocol, nothing else is leaked by the garbled circuit protocol except the circuit output, which is the classification output of our algorithm.
Thus, no information is revealed about the data tuples or queries from Algorithm 3 except what may inherently leak from the classification output of kernel density estimation with a Gaussian kernel. Our protocol is as secure as an ideal model, where a single trusted cloud party receives all data tuples and computes kernel density estimation in the clear.
C. ADDITIONAL CONSIDERATIONS WITH GAUSSIAN KERNEL DENSITY CLASSIFICATION
Here we discuss additional considerations with using Gaussian kernel density estimation in place of k-NN. We construct hypothetical data sets where we can expect them to behave differently, and discuss the underlying causes for the discrepancy. We also discuss some practical issues specific to using a Gaussian kernel and how to solve them.
C.1 Causes of Divergence
Given a set of training data and parameters for the Gaussian kernel, assume query q is classified as class A. If sufficiently many points of class B are added to the training data, the classification of q will change to B no matter how far away these extra points are. In contrast, the classification under k-NN would remain unaffected if the new points are farther away than any of the existing k nearest neighbors. Figure 3 illustrates a similar situation, where a point of one class is surrounded by many points of a different class.
To see why, recall from Appendix A:
p(y = C \mid D) = \frac{|D_C|}{|D|},
which means that the prior probability of a class is equal to its proportion in the training data. If the data contains significantly more of one class than the other, then Gaussian kernel density classification will tend to predict the more frequent class, which arguably is preferable if the query points also satisfy this assumption. If it is important to detect some less commonly occurring classes even at the cost of increased false positives, the less common classes should be weighted higher while computing the summed influences for each class. This can be done using a scaled basis vector for that class.
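One simple way to realize such weighting is to multiply each class's kernel sum by a chosen weight, as in the sketch below; the weights and data are illustrative assumptions, not values from the paper.

import numpy as np

def weighted_kde_classify(X, y, query, sigma, class_weights):
    """Scale each class's summed kernel values by a per-class weight, e.g. to
    boost a rare class; class_weights maps class label -> weight."""
    k = np.exp(-np.sum((X - query)**2, axis=1) / (2 * sigma**2))
    classes = np.unique(y)
    sums = np.array([class_weights[c] * k[y == c].sum() for c in classes])
    return classes[np.argmax(sums)]

# With weight 1 for both classes the frequent class wins near the rare point;
# weighting the rare class up can flip that decision.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.5, (200, 2)), [[2.0, 2.0]]])
y = np.array([0] * 200 + [1])
q = np.array([1.8, 1.8])
print(weighted_kde_classify(X, y, q, 1.0, {0: 1.0, 1: 1.0}))
print(weighted_kde_classify(X, y, q, 1.0, {0: 1.0, 1: 50.0}))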
C.2 Importance of Distance Metric
A significant difference between classification with k-NN and with Gaussian kernel density estimation arises from the usage of the distances in computing the decision. With k-NN, only the ordering of the distances to the labeled data points matters, and changes in the actual values of the distances have no effect if the order remains constant. However, the Gaussian kernel is a function of distance. While the Gaussian kernel is monotonic (it decreases in value as distance increases), the sum of the values for each class can dramatically change even if the ordering of the distances between points does not. In particular, as the Gaussian kernel is not linear, the classification of a query might change if distances are all scaled by some factor.
The limited precision of numbers used in calculation creates a more practical concern that the kernel value for a large distance will round down to 0, effectively imposing a finite width on the Gaussian kernel. If all distances between pairs of points exceed this width, then kernel density estimation fails to provide any information about a query point; if a large fraction does, then the accuracy will correspondingly suffer.
To solve these issues, we recommend re-scaling all features to fit in [0, 1]. Since the data owners already need to agree on the number of features and a common meaning for each feature, they can also find a consistent way to scale each feature. In fact, if all features contain roughly equal information, then there is no particular reason some features should span a wider range and have greater influence on the distance metric. Other domain-specific methods for ensuring that the distance between two points is never too large would also suffice.
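A minimal sketch of this rescaling, together with an example of the underflow problem it avoids, is shown below; the agreed bounds lo/hi are an assumption about how data owners would coordinate, and the numbers are contrived.

import numpy as np

def to_unit_interval(X, lo=None, hi=None):
    """Rescale every feature to [0, 1]; lo/hi can be agreed bounds shared by all
    data owners so that each party scales its features consistently."""
    lo = X.min(axis=0) if lo is None else lo
    hi = X.max(axis=0) if hi is None else hi
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

# Without rescaling, a wide-range feature dominates the distance, and large
# distances make exp(-d^2 / (2 sigma^2)) underflow to exactly 0.
X = np.array([[1.0, 5000.0], [0.5, 100.0], [0.0, 2500.0]])
print(np.exp(-np.sum((X[0] - X[1])**2) / (2 * 0.5**2)))   # underflows to 0.0
print(to_unit_interval(X))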
C.3 Selecting Parameters
Both k-NN and the Gaussian kernel contain parameters we need to select: k, the number of neighbors to consider, and σ, the standard deviation for the Gaussian distribution, respectively. Choosing them inappropriately can lead to poor classification performance. For example, if points of one class are scattered amongst points of another class, then a large k will prevent us from classifying the scattered class correctly. If σ is too large, then the differences in the summed influence for each class will depend more heavily on the number of data points in each class rather than the distance to the points, and the system will tend to classify queries as the more frequent class. Parameters should be selected using standard techniques such as cross-validation and grid search.
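As an example of such selection, the sketch below grid-searches k with scikit-learn's cross-validation and scans a small grid of σ values for the kernel width; the candidate grids and data set are arbitrary choices of ours.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# k for k-NN via grid search with 5-fold cross-validation.
grid = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 9, 15]}, cv=5)
grid.fit(X, y)
print("best k:", grid.best_params_["n_neighbors"])

# sigma for the Gaussian kernel via the same kind of cross-validated grid search.
def kde_predict(X_tr, y_tr, X_te, sigma):
    out = []
    for x in X_te:
        k = np.exp(-np.sum((X_tr - x)**2, axis=1) / (2 * sigma**2))
        out.append(np.argmax([k[y_tr == c].sum() for c in np.unique(y_tr)]))
    return np.array(out)

scores = {}
for sigma in [0.05, 0.1, 0.2, 0.5, 1.0]:
    accs = []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        accs.append((kde_predict(X[tr], y[tr], X[te], sigma) == y[te]).mean())
    scores[sigma] = np.mean(accs)
print("best sigma:", max(scores, key=scores.get))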
Figure 1: The general model of existing outsourced k-NN systems. The data owner is trusted and outsources encrypted data to the cloud party. Queriers request k-NN computation from the cloud party, and are sometimes trusted depending on the prior work. The cloud party executes k-NN classification and is semi-honest.
Figure 2: The model of an outsourced k-NN system containing four parties. The cloud parties are the data host and the crypto-service provider (CSP).
Algorithm 3 Classify(q): Output a classification prediction for query q where there are c classes
0. DH: Run KernelValue(q).
1. DH: for 1 ≤ i ≤ |D| do:
   (a) Generate a random invertible matrix B_i of size c × c.
   (b) v_i = B_i × E(c_i).
   (d) Send v_i to CSP.
2. CSP: for 1 ≤ i ≤ |D| do:
   (a) w_i = v_i^{g_i}, where g_i are kernel density values.
   (b) Send w_i to DH.
3. DH: for 1 ≤ i ≤ |D| do: undo the masking, where μ_i is the random distance masking for the i-th tuple (see Algorithm 2 step 1(c)).
4. DH: A ← Σ_{i=1}^{|D|} w_i. Note that this is a c × 1 vector.
5. DH: for 1 ≤ i ≤ c:
   (a) Choose new μ_i ∈ Z_N randomly.
   (b) A_i ← A_i + μ_i, where A_i is the i-th component of A.
6. DH: Let f(in_1, ..., in_c, μ_1, ..., μ_c) = argmax_k (in_k − μ_k mod N).
   (a) Generate the garbled circuit GF_f of f and the garbled inputs GI(μ_i) ∀ i ∈ {1, ..., c}.
   (b) Send GF_f, GI(μ_i) and A_i ∀ i ∈ {1, ..., c} to CSP.
7. CSP:
   (a) A'_i ← E^{-1}(A_i) ∀ i ∈ {1, ..., c}.
   (b) Conduct oblivious transfers with DH to obtain GI(A'_i) ∀ i ∈ {1, ..., c}.
   (c) g_out ← GF_f(GI(A'_1), ..., GI(A'_c), GI(μ_1), ..., GI(μ_c)).
Figure 3: A visualization of unbalanced data. The filled points and unfilled points represent separate classes. Depending on the distance metric and the parameter of the Gaussian kernel, it is possible for classification with kernel density estimation to predict all possible points in the space as the unfilled class, whereas 1-NN classification would not.
Table 2: Empirical results for k-NN versus kernel density estimation (KDE) with a Gaussian kernel.
To disambiguate between the many distinct meanings of "kernel" in machine learning, statistics, and computer science, the kind of kernels in this paper are also called "smoothing kernels".
[1] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[2] M. Bellare, V. T. Hoang, S. Keelveedhi, and P. Rogaway. Efficient garbling from a fixed-key blockcipher. In Proceedings of the IEEE Symposium on Security and Privacy, SP'13.
[3] J. L. Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, Sept. 1975.
[4] A. Boldyreva, N. Chenette, and A. O'Neill. Order-preserving encryption revisited: Improved security analysis and alternative solutions. In Proceedings of the Annual Conference on Advances in Cryptology, CRYPTO'11.
[5] M. Burkhart and X. Dimitropoulos. Fast privacy-preserving top-k queries using secret sharing. In Proceedings of the International Conference on Computer Communications and Networks, ICCCN'10.
[6] D. Demmler, T. Schneider, and M. Zohner. ABY - a framework for efficient mixed-protocol secure two-party computation. In Proceedings of the Network and Distributed System Security Symposium, NDSS'15.
[7] M. Dickerson, D. Eppstein, and M. Goodrich. Cloning Voronoi diagrams via retroactive data structures. In European Symposium on Algorithms, Lecture Notes in Computer Science, 2010.
[8] C. Dwork. Differential privacy. In Proceedings of the International Colloquium on Automata, Languages and Programming, ICALP'06.
[9] Y. Elmehdwi, B. K. Samanthula, and W. Jiang. Secure k-nearest neighbor query over encrypted data in outsourced environments. In Proceedings of the IEEE International Conference on Data Engineering, ICDE'14.
[10] S. Even, O. Goldreich, and A. Lempel. A randomized protocol for signing contracts. Commun. ACM, June 1985.
[11] C. Gentry. Fully homomorphic encryption using ideal lattices. In Proceedings of the ACM Symposium on Theory of Computing, STOC'09.
[12] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the International Conference on Very Large Data Bases, VLDB'99.
[13] M. Kantarcioglu and C. Clifton. Privately computing a distributed k-nn classifier. In Proceedings of the European Conference on Principles and Practice of Knowledge Discovery in Databases, PKDD'04.
[14] Y. Lindell and B. Pinkas. A proof of security of Yao's protocol for two-party computation. Journal of Cryptology, April 2009.
[15] D. Malkhi, N. Nisan, B. Pinkas, and Y. Sella. Fairplay: a secure two-party computation system. In Proceedings of the USENIX Security Symposium, USENIX'04.
[16] P. Paillier. Public-key cryptosystems based on composite degree residuosity classes. In Proceedings of the International Conference on Theory and Application of Cryptographic Techniques, EUROCRYPT'99.
[17] R. A. Popa, F. H. Li, and N. Zeldovich. An ideal-security protocol for order-preserving encoding. In Proceedings of the IEEE Symposium on Security and Privacy, SP'13.
[18] R. A. Popa, C. M. S. Redfield, N. Zeldovich, and H. Balakrishnan. CryptDB: Protecting confidentiality with encrypted query processing. In Proceedings of the ACM Symposium on Operating Systems Principles, SOSP'11.
[19] Y. Qi and M. J. Atallah. Efficient privacy-preserving k-nearest neighbor search. In Proceedings of the International Conference on Distributed Computing Systems, ICDCS'08.
[20] M. Rabin. How to exchange secrets by oblivious transfer. Technical report, Boston, MA, USA, 1981.
[21] D. Randall. Efficient generation of random nonsingular matrices. Technical report, Berkeley, CA, USA, 1991.
[22] M. Shaneck, Y. Kim, and V. Kumar. Privacy preserving nearest neighbor search. In Proceedings of the IEEE International Conference on Data Mining Workshops, ICDM Workshops'06.
[23] E. Songhori, S. Hussain, A.-R. Sadeghi, T. Schneider, and F. Koushanfar. TinyGarble: Highly compressed and scalable sequential garbled circuits. In Proceedings of the IEEE Symposium on Security and Privacy, SP'15.
[24] W. K. Wong, D. W.-l. Cheung, B. Kao, and N. Mamoulis. Secure kNN computation on encrypted databases. In Proceedings of the International Conference on Management of Data, SIGMOD'09.
[25] X. Wu, V. Kumar, J. Ross Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan, A. Ng, B. Liu, P. S. Yu, Z.-H. Zhou, M. Steinbach, D. J. Hand, and D. Steinberg. Top 10 algorithms in data mining. Knowl. Inf. Syst., Dec. 2007.
[26] X. Xiao, F. Li, and B. Yao. Secure nearest neighbor revisited. In Proceedings of the IEEE International Conference on Data Engineering, ICDE'13.
[27] L. Xiong, S. Chitti, and L. Liu. K nearest neighbor classification across multiple private databases. In Proceedings of the ACM International Conference on Information and Knowledge Management, CIKM'06.
[28] L. Xiong, S. Chitti, and L. Liu. Preserving data privacy in outsourcing data aggregation services. ACM Trans. Internet Technol., Aug. 2007.
[29] A. Yao. How to generate and exchange secrets. In Proceedings of the IEEE Annual Symposium on Foundations of Computer Science, FOCS'86.
[30] J. Zhan and S. Matwin. A crypto-based approach to privacy-preserving collaborative data mining. In Proceedings of the IEEE International Conference on Data Mining Workshops, ICDM Workshops'13.
[31] F. Zhang, G. Zhao, and T. Xing. Privacy-preserving distributed k-nearest neighbor mining on horizontally partitioned multi-party data. In Advanced Data Mining and Applications, Lecture Notes in Computer Science, 2009.
[32] Y. Zhu, R. Xu, and T. Takagi. Secure k-NN computation on encrypted cloud data without sharing key with query users. In Proceedings of the International Workshop on Security in Cloud Computing, Cloud Computing'13.
| []
|
[
"Implications of LHCb Data for Lepton Flavour Universality Violation E-mails",
"Implications of LHCb Data for Lepton Flavour Universality Violation E-mails"
]
| [
"T Hurth ",
"F Mahmoudi ",
"D Martínez Santos ",
"S Neshatpour [email protected] ",
"[email protected] Hurth@cern Ch ",
"Diego Martinez ",
"Santos@cern ",
"Ch ",
"\nPRISMA+ Cluster of Excellence and Institute for Physics (THEP)\nJohannes Gutenberg University\nD-55099MainzGermany\n",
"\nInstitut de Physique\nUniversité de Lyon\nUniversité Claude Bernard Lyon 1\nCNRS/IN2P3\n\n",
"\nTheoretical Physics Department\nInfinis de Lyon\nUMR 5822\nCERN\nF-69622, CH-1211Villeurbanne, Geneva 23France, Switzerland\n",
"\nInstituto Galego de Física de Altas Enerxías\nUniversidade de Santiago de Compostela\nSpain\n",
"\nINFN-Sezione di Napoli\nVia Cintia80126NapoliItalia\n"
]
| [
"PRISMA+ Cluster of Excellence and Institute for Physics (THEP)\nJohannes Gutenberg University\nD-55099MainzGermany",
"Institut de Physique\nUniversité de Lyon\nUniversité Claude Bernard Lyon 1\nCNRS/IN2P3\n",
"Theoretical Physics Department\nInfinis de Lyon\nUMR 5822\nCERN\nF-69622, CH-1211Villeurbanne, Geneva 23France, Switzerland",
"Instituto Galego de Física de Altas Enerxías\nUniversidade de Santiago de Compostela\nSpain",
"INFN-Sezione di Napoli\nVia Cintia80126NapoliItalia"
]
| []
| We analyse the new physics implications of theoretically clean → observables in a modelindependent approach and compare their coherence with the implications of other rare -decays. A statistical comparison is done between the New Physics explanation and hadronic contributions as the source of the anomalies in angular observables of the → * decay. We make projections for future measurements that indicate that LHCb will be in the position to discover lepton non-universality via a single observable using the Run 3 data. The global fit of rare -decays is given within a multidimensional fit involving all the 20 relevant Wilson coefficients. | 10.22323/1.398.0564 | [
"https://arxiv.org/pdf/2112.08343v1.pdf"
]
| 245,144,927 | 2112.08343 | 6b7dfbbef8d97e71a0d2d1516622b76f5e90b9c0 |
Implications of LHCb Data for Lepton Flavour Universality Violation E-mails
15 Dec 2021
T Hurth
F Mahmoudi
D Martínez Santos
S Neshatpour [email protected]
[email protected] Hurth@cern Ch
Diego Martinez
Santos@cern
Ch
PRISMA+ Cluster of Excellence and Institute for Physics (THEP)
Johannes Gutenberg University
D-55099MainzGermany
Institut de Physique
Université de Lyon
Université Claude Bernard Lyon 1
CNRS/IN2P3
Theoretical Physics Department
Infinis de Lyon
UMR 5822
CERN
F-69622, CH-1211Villeurbanne, Geneva 23France, Switzerland
Instituto Galego de Física de Altas Enerxías
Universidade de Santiago de Compostela
Spain
INFN-Sezione di Napoli
Via Cintia80126NapoliItalia
Implications of LHCb Data for Lepton Flavour Universality Violation E-mails
15 Dec 2021
We analyse the new physics implications of theoretically clean → observables in a modelindependent approach and compare their coherence with the implications of other rare -decays. A statistical comparison is done between the New Physics explanation and hadronic contributions as the source of the anomalies in angular observables of the → * decay. We make projections for future measurements that indicate that LHCb will be in the position to discover lepton non-universality via a single observable using the Run 3 data. The global fit of rare -decays is given within a multidimensional fit involving all the 20 relevant Wilson coefficients.
Theoretically clean vs the rest of the observables
Recent LHCb measurements have indicated tensions with the Standard Model (SM) predictions in a number of b → sℓ⁺ℓ⁻ decays. There are tensions in the angular observables of the B⁰ → K*⁰μ⁺μ⁻ decay, with the most significant tension in the P₅′ observable [1]. Similar tensions have also been measured in the B⁺ → K*⁺μ⁺μ⁻ decay [2]. Furthermore, the branching ratios of several b-decays such as B → Kμ⁺μ⁻, B_s → φμ⁺μ⁻ and Λ_b → Λμ⁺μ⁻ have been measured to be below the SM prediction [3][4][5]. The recent LHCb measurement of the lepton flavour universality violating (LFUV) observable R_K has confirmed the tension with the SM with 3.1σ significance [6]. LHCb has measured similar deviations in R_{K*} in the two low-q² bins with 2.3σ and 2.5σ significance [7]. To study the New Physics (NP) implication of these measurements, all the relevant b-decay observables should be considered. However, the precision of the theoretical predictions is not the same for all these observables. Due to the cancellation of hadronic uncertainties in the numerator and the denominator, the LFUV observables R_{K^{(*)}} = BR(B → K^{(*)}μ⁺μ⁻)/BR(B → K^{(*)}e⁺e⁻) are predicted very precisely in the SM, with theoretical uncertainty less than 1 (3)% for the q² ∈ [1.1, 6] ([0.045, 1.1]) GeV² bin. Another clean observable with small theoretical uncertainty (less than 5%) is the branching ratio of the B_s → μ⁺μ⁻ decay. On the other hand, the rest of the b → sℓℓ observables in general suffer from larger theoretical uncertainties due to hadronic contributions. Although with an appropriate choice of angular observables less sensitivity to local form factor uncertainties is achievable, there are still contributions from power corrections of non-local hadronic effects which are not well known within QCD factorisation and are usually "guesstimated" (for a study of the impact of the local and non-local hadronic uncertainties on NP fits see Ref. [8]). Therefore, we separate the theoretically "clean observables" from the rest of the b → sℓℓ observables and compare the NP implications and coherence of NP fits to these two data sets. For the analysis we have used the SuperIso public program [9]. From Table 1, we see that in the one-dimensional fits to the clean observables there are several NP scenarios explaining the data more than 4σ better than the SM [10]. For the one-dimensional fit to all b → sℓℓ observables except the clean ones (right panel of Table 1), the most favoured scenario is NP in C_9^{(μ)} with a significance of 6.5σ. However, this significance depends on the choice of form factors as well as the guesstimated size of the non-factorisable power corrections (here assumed to be 10% compared to the leading-order QCD factorisation contributions). Comparing the NP fit to the clean observables with the NP fit to the rest of the observables, there are favoured scenarios, such as NP in C_9^μ, resulting in coherent best-fit values for both sets of observables. This is also the most favoured scenario in the global fit where the clean observables and the rest of the b → sℓℓ observables are considered together [10] (see Refs. [11][12][13] for other recent global fits).
NP or hadronic contributions in B → K*μ⁺μ⁻ observables
The impact of the guesstimated size of the power corrections on the significance of NP in C_9 can be clearly seen by describing the B → K*μ⁺μ⁻ decay in terms of helicity amplitudes, with NP effects in C_9 (and C_7) and power corrections h_λ both contributing to the vectorial helicity amplitude [14]
H_V(\lambda) = -i\, N' \Big\{ C_9^{\rm eff}\, \tilde{V}_{\lambda} - C_9'\, \tilde{V}_{-\lambda} + \frac{m_B^2}{q^2} \Big[ \frac{2\,\hat{m}_b}{m_B} \big( C_7^{\rm eff}\, \tilde{T}_{\lambda} - C_7'\, \tilde{T}_{-\lambda} \big) - 16\pi^2 \big( h_{\lambda}^{\rm LO\,QCDf} + h_{\lambda} \big) \Big] \Big\}. \qquad (1)
Instead of making assumptions about the size of the power corrections, these contributions can be parameterised by a number of free parameters and fitted directly to the data. A general description of the power corrections involves several free parameters [15,16], which, with the current experimental data, results in fitted parameters that are loosely constrained [17]. A minimalistic description of the hadronic effect is given by [17,18]
h_\lambda(q^2) = -\frac{\tilde{V}_\lambda(q^2)}{16\pi^2}\,\frac{q^2}{m_B^2}\,\Delta C_9^{\lambda,\rm PC}, \qquad (2)
which involves only three real free parameters, one corresponding to each helicity λ = 0, ± (six if assumed complex). This description with fewer degrees of freedom (dof) in principle has a better chance of giving a constrained fit and can be considered as a null test for NP; if the three fitted hadronic parameters (one free parameter corresponding to each helicity) differ from each other, NP in C_9^{NP} can be ruled out. Although it is possible that the fitted power corrections for each helicity are very similar and mimic NP in C_9^{NP}, it is highly improbable; furthermore, there are theoretical arguments that the positive helicity amplitude should be suppressed compared to the two other helicities [19].
For the fit to data, we consider only the experimental measurements of B → K*μ⁺μ⁻ observables in the q² ≤ 8 GeV² bins, since the power corrections for the low- and high-q² regions are not necessarily the same.
Table 2: On the left, fit of hadronic power corrections for the three helicities (λ = 0, ±) with real ΔC_9^{λ,PC}, using the data on B → K*μ⁺μ⁻/e⁺e⁻ observables with q² bins below 8 GeV². On the right, the significance of the improved description of the hadronic fit as well as the NP fit compared to the SM and to each other.
Future projections of clean observables
We consider three benchmark points for the planned LHCb upgrades and make predictions for the clean observables. For the benchmarks, we consider the two LHCb upgrades with 50 and 300 fb −1 integrated luminosity as well as an intermediate stage with 18 fb −1 of data. Assuming that in future measurements, the current experimental central values remain the same, with the future reduced experimental uncertainties (see [10] for details) it is not possible to get acceptable fits. Assigning the global 9 as a nuisance parameter to take into account unknown power corrections -as done for example in Ref. [20] -is inappropriate as there is no theory indication that the three helicities would be described by a common hadronic effect. Even considering the weak sensitivity on the positive helicity, at least two independent free parameters would be necessary to describe the power corrections.
Instead, we make an equally strong assumption; we presume that future data correspond to projecting the observables with the current fitted values of each of the three most favoured scenarios of the left panel in Table 1. As given in Table 3, already with 18 fb −1 data, the NP significance will be more than 6 in all three scenarios. However, the significance is quite dependent on the presumed reduction in statistical uncertainties, as can be seen in Fig. 1 where Pull SM is shown for each of the individual LFUV observables when assuming the current central value of 9 ( 10 ) from the clean observables remains unchanged. The lower [upper] limit in each band is when assuming current systematic uncertainties do not improve [having ultimate systematic uncertainty of 1% for the LFUV observables and 5% for BR( → + − )]. For the 9 ( 10 ) scenario, alone can reach 5 significance with ∼ 15 (20) fb −1 integrated luminosity. In Table. 4 we present the 20-dimensional global fit where we obtain Pull SM = 5.5 . However, considering that two of the Wilson coefficients are degenerate and taking into account the criterion presented in Refs. [21,22], the effective degrees of freedom are 19 resulting in Pull SM = 5.6 .
Multidimensional global fit and look-elsewhere effect
Conclusions
The and * ratios measured by the LHCb collaboration suggest lepton flavour universality violating new physics. This implication is enforced by considering the rest of the → observables.
Figure 1: Significance of Pull_SM for each of the projected LFUV observables, individually.
NP does not necessarily present itself in only one or two operator structures, and in principle all of the 20 relevant Wilson coefficients can receive NP contributions. Furthermore, while a look-elsewhere effect (LEE) can be introduced when focusing on a subset of observables, this can also happen when choosing a posteriori one and/or two operators. However, in the case where the fit includes all relevant observables and the maximum number of Wilson coefficients are set to be free, LEE is avoided as there are no a posteriori decisions and the p-values take into account the number of degrees of freedom; finally, insensitive parameters and flat directions can be eliminated based on profile likelihoods and correlations of the fit.
Table 1: Comparison of one-operator NP fits to clean observables on the left and to the rest of the b → sℓℓ observables on the right (assuming 10% error for the power corrections). Only
( * ) , , → + −
( 2
SM = 28.19)
b.f. value
2
min
Pull SM
9
−1.00 ± 6.00 28.1
0.2
9
0.80 ± 0.21 11.2
4.1
9
−0.77 ± 0.21 11.9
4.0
10
0.43 ± 0.24 24.6
1.9
10
−0.78 ± 0.20
9.5
4.3
10
0.64 ± 0.15
7.3
4.6
LL
0.41 ± 0.11 10.3
4.2
LL
−0.38 ± 0.09
7.1
4.6
All obs. except
( * ) , , → + − ( 2
SM = 200.1)
b.f. value
2
min
Pull SM
9
−1.01 ± 0.13 158.2
6.5
9
0.70 ± 0.60 198.8
1.1
9
−1.03 ± 0.13 156.0
6.6
10
0.34 ± 0.23 197.7
1.5
10
−0.50 ± 0.50 199.0
1.0
10
0.41 ± 0.23 196.5
1.9
LL
0.33 ± 0.29 198.9
1.1
LL
−0.75 ± 0.13 167.9
5.7
Table 3: Predictions of Pull_SM for the LHCb upgrade scenarios with 18, 50 and 300 fb⁻¹ luminosity collected,
Table 4: 20-dimensional global fit to the b → sℓℓ data, assuming 10% error for the power corrections.
However, some of the latter observables might suffer from underestimated non-local hadronic uncertainties. We suggested a minimal description of these contributions which can work as a null test for new physics. Nonetheless, with the current data no conclusive judgment is possible. Moreover, we showed that, assuming any of the favoured new physics scenarios remains, future LHCb measurements of lepton flavour universality violating observables can establish beyond the Standard Model physics with more than 5σ significance already with 18 fb⁻¹ of data. Furthermore, for an unbiased determination of the new physics structure, we also considered a 20-dimensional fit, still finding a large significance for the new physics description of the b → sℓℓ data.
[1] R. Aaij et al. [LHCb], Phys. Rev. Lett. 125 (2020) no.1, 011802 [arXiv:2003.04831].
[2] R. Aaij et al. [LHCb], Phys. Rev. Lett. 126 (2021) no.16, 161802 [arXiv:2012.13241].
[3] R. Aaij et al. [LHCb], JHEP 06 (2014), 133 [arXiv:1403.8044].
[4] R. Aaij et al. [LHCb], JHEP 06 (2015), 115 [erratum: JHEP 09 (2018), 145] [arXiv:1503.07138].
[5] R. Aaij et al. [LHCb], [arXiv:2105.14007].
[6] R. Aaij et al. [LHCb], [arXiv:2103.11769].
[7] R. Aaij et al. [LHCb], JHEP 08 (2017), 055 [arXiv:1705.05802].
[8] T. Hurth, F. Mahmoudi and S. Neshatpour, Nucl. Phys. B 909 (2016), 737-777 [arXiv:1603.00865].
[9] F. Mahmoudi, Comput. Phys. Commun. 178 (2008) 745 [arXiv:0710.2067]; F. Mahmoudi, Comput. Phys. Commun. 180 (2009) 1579 [arXiv:0808.3144]; F. Mahmoudi, Comput. Phys. Commun. 180 (2009) 1718; S. Neshatpour and F. Mahmoudi, PoS TOOLS2020 (2021) 036 [arXiv:2105.03428].
[10] T. Hurth, F. Mahmoudi, D. M. Santos and S. Neshatpour, [arXiv:2104.10058].
[11] L. S. Geng, B. Grinstein, S. Jäger, S. Y. Li, J. Martin Camalich and R. X. Shi, [arXiv:2103.12738].
[12] W. Altmannshofer and P. Stangl, [arXiv:2103.13370].
[13] M. Algueró, B. Capdevila, S. Descotes-Genon, J. Matias and M. Novoa-Brunet, [arXiv:2104.08921].
[14] S. Jäger and J. Martin Camalich, JHEP 05 (2013), 043 [arXiv:1212.2263].
[15] M. Ciuchini, M. Fedele, E. Franco, S. Mishima, A. Paul, L. Silvestrini and M. Valli, JHEP 06 (2016), 116 [arXiv:1512.07157].
[16] V. G. Chobanova, T. Hurth, F. Mahmoudi, D. Martinez Santos and S. Neshatpour, JHEP 07 (2017), 025 [arXiv:1702.02234].
[17] T. Hurth, F. Mahmoudi and S. Neshatpour, Phys. Rev. D 102 (2020) no.5, 055001 [arXiv:2006.04213].
[18] S. Neshatpour, V. G. Chobanova, T. Hurth, F. Mahmoudi and D. Martinez Santos, Proceedings of the 52nd Rencontres de Moriond on QCD and High Energy Interactions, pp. 87-90, 2017 [arXiv:1705.10730].
[19] S. Jäger and J. Martin Camalich, Phys. Rev. D 93 (2016) no.1, 014028 [arXiv:1412.3183].
[20] G. Isidori, D. Lancierini, P. Owen and N. Serra, Phys. Lett. B 822 (2021), 136644 [arXiv:2104.05631].
[21] A. Arbey, T. Hurth, F. Mahmoudi and S. Neshatpour, Phys. Rev. D 98 (2018) no.9, 095027 [arXiv:1806.02791].
[22] T. Hurth, A. Arbey, F. Mahmoudi and S. Neshatpour, Nucl. Part. Phys. Proc. 303-305 (2018), 2-7 [arXiv:1812.07602].
|
[
"Interferometry in dense nonlinear media and interaction-induced loss of contrast in microfabricated atom interferometers",
"Interferometry in dense nonlinear media and interaction-induced loss of contrast in microfabricated atom interferometers"
]
| [
"Maxim Olshanii \nPermanent Address: Department of Physics & Astronomy\nUniversity of Southern California\n90089Los AngelesCA\n\nITAMP\nHarvard\n02138CambridgeMassachusetts\n",
"Vanja Dunjko \nPermanent Address: Department of Physics & Astronomy\nUniversity of Southern California\n90089Los AngelesCA\n\nITAMP\nHarvard\n02138CambridgeMassachusetts\n",
"Ying-Ju Wang ",
"Dana Z Anderson ",
"Victor M Bright ",
"Eric A Cornell ",
"Quentin Diot ",
"Tetsuo Kishimoto ",
"Mara Prentiss ",
"R A Saravanan ",
"Stephen R Segal ",
"Saijun Wu "
]
| [
"Permanent Address: Department of Physics & Astronomy\nUniversity of Southern California\n90089Los AngelesCA",
"ITAMP\nHarvard\n02138CambridgeMassachusetts",
"Permanent Address: Department of Physics & Astronomy\nUniversity of Southern California\n90089Los AngelesCA",
"ITAMP\nHarvard\n02138CambridgeMassachusetts"
]
| []
| In this paper we update the existing schemes for computation of atom-interferometric signal in single-atom interferometers to interferometry with dense Bose-condensed atomic samples. Using the theory developed we explain the fringe contrast degradation observed, for longer duration of interferometric cycle, in the Michelson interferometer on a chip recently realized at JILA (Phys. Rev. Lett. 94, 090405 (2005)). We further suggest several recipes for suppression of the interaction-related contrast degradation. | null | [
"https://arxiv.org/pdf/cond-mat/0505358v2.pdf"
]
| 118,597,504 | cond-mat/0505358 | 7309ddc636a06fa9e28f1c359de97b2732369335 |
Interferometry in dense nonlinear media and interaction-induced loss of contrast in microfabricated atom interferometers
6 Jun 2005
Maxim Olshanii
Permanent Address: Department of Physics & Astronomy
University of Southern California
90089Los AngelesCA
ITAMP
Harvard
02138CambridgeMassachusetts
Vanja Dunjko
Permanent Address: Department of Physics & Astronomy
University of Southern California
90089Los AngelesCA
ITAMP
Harvard
02138CambridgeMassachusetts
Ying-Ju Wang
Dana Z Anderson
Victor M Bright
Eric A Cornell
Quentin Diot
Tetsuo Kishimoto
Mara Prentiss
R A Saravanan
Stephen R Segal
Saijun Wu
Interferometry in dense nonlinear media and interaction-induced loss of contrast in microfabricated atom interferometers
6 Jun 2005(Dated: June 30, 2021)numbers: 0375Dg0375Gg0375Kk
In this paper we update the existing schemes for computation of atom-interferometric signal in single-atom interferometers to interferometry with dense Bose-condensed atomic samples. Using the theory developed we explain the fringe contrast degradation observed, for longer duration of interferometric cycle, in the Michelson interferometer on a chip recently realized at JILA (Phys. Rev. Lett. 94, 090405 (2005)). We further suggest several recipes for suppression of the interaction-related contrast degradation.
In this paper we update the existing schemes for computation of atom-interferometric signal in single-atom interferometers to interferometry with dense Bose-condensed atomic samples. Using the theory developed we explain the fringe contrast degradation observed, for longer duration of interferometric cycle, in the Michelson interferometer on a chip recently realized at JILA (Ying-Ju Wang, Dana Z. Anderson Introduction.-Atom interferometers [1,2,3,4] offer an unprecedented precision in inertial measurements. Supplemented with a highly coherent input source provided by Bose-condensed atoms [5,6,7,8,9,10], atom interferometers may potentially supersede the conventional laser-based devices. In a recent experiment [11] a miniature Michelson-type interferometer was realized on an atom chip [12,13,14,15,16,17], thus further approaching practical implementations of the device. Generally, miniaturization of atomic devices leads to an increased role of interatomic interactions, due to higher densities and density gradients. Indeed, a strong suppression of contrast was observed in [11] for longer durations of the interferometric cycle. The goal of our paper is to explain this effect and suggest recipes for suppressing the interaction-related fringe degradation.
The role of interatomic interactions in interferometric processes has been studied by several authors [18,19,20,21], with the main emphasis on the potential loss of firstorder coherence. In our paper we focus on a different effect: distortion of the interferometric path due to the mean-field pressure.
Interferometric scheme.-In this paper we consider the Michelson interferometric scheme (see Fig. 1(a)). Atoms are supposed to be confined transversally by a monomode atom guide, with no transverse excitations allowed. The initial state of atoms is a perfectly coherent state
ψ(z, t = 0−) = χ(z) ,(1)
normalized for convenience to the total number of atoms N :
+∞ −∞ dz |χ(z)| 2 = N . We further assume that at every stage of the process the wave function can be approximately decomposed into a sum of three spatial harmonics,
ψ = 1 √ 2 n=−1, 0, +1 Φ n (z, t) e inQz ,(2)
where Φ n (z, t) are slow functions of coordinate. (Note that even though higher harmonics can be generated dur-ing the interferometric process, we show below that they are (a) small under typical conditions, and (b) if necessary can be taken into account a posteriori.) Splitting, reflection, and recombining pulses perform the following instant transformations of the vector (Φ −1 , Φ 0 , Φ +1 ):
A split. =Â rec. = 1 2 1 √ 2 −1 √ 2 0 √ 2 −1 √ 2 1 A refl. = 0 0 1 0 0 0 1 0 0 .(3)
Such ideal interferometric elements were proposed in [22] and successfully experimentally realized in [11]. The splitting, reflection, and recombination pulses are applied in succession, separated by an equal time interval T . Immediately after the recombination pulse the population of atoms in the central peak is detected; this constitutes the interferometric signal. Between the splitting and recombination pulses the wave function can be approximately decomposed as
ψ(z, t) ≈ 1 √ 2 e iφ+(z, t) e i(mv+(t)z−ǭ+(t)t)/h χ(z −z + (t)) +e iφ−(z, t) e i(mv−(t)z−ǭ−(t)t)/h χ(z −z − (t)) ,(4)
where the phases φ ± will be shown to be approximately real. Herē
z ± (t) = ±V Q t for 0 < t < T ±V Q T ∓ V Q (t − T ) for T < t < 2T(5)
are the classical trajectories, originating at z = 0, corresponding to the right (+) and left (-) arms of the interferometer; are the corresponding classical velocities; finally,ǭ + (t) = ǫ − (t) = E Q ≡ mV 2 Q /2 are the corresponding kinetic energies. The velocity V Q is given by V Q ≡hQ/m, where m is the atomic mass.
v ± (t) =ż ± (t) = ±V Q for 0 < t < T ∓V Q for T < t < 2T(6)
The resulting differential phase shift
∆φ(z) ≡ φ + (z, 2T ) − φ − (z, 2T )(7)
consists of two parts: ∆φ(z) = ∆φ signal + ∆φ distortion (z). The first, spatially independent part is the useful signal, related to the effect the interferometer measures. The second part is the result of the distortion caused by unaccounted-for external fields and, in our case, meanfield interactions. The distortion phase shift leads to two effects. The first is a correction to the signal phase shift. This effect can in principle be accounted for, if the nature of the distortion is known. The second effect is a degradation of contrast in the interferometric signal. This degradation can not be eliminated easily if ∆φ distortion (z) changes substantially over the length of the atomic cloud. If both effects are taken into account, then the interferometric signal
S(∆φ signal ) ≡ N −1 × +∞ −∞
dz cos 2 ((∆φ signal + ∆φ distortion (z))/2)|χ(z)| 2
can be shown to be
S(∆φ signal ) = 1 2 + M 2 cos(∆φ signal − δ) ,
where the fringe contrast M and fringe shift δ are given by
M = A 2 + B 2 (8) δ = arg(A, B) ,(9)
and
A = N −1 +∞ −∞ dz cos(∆φ distortion (z))|χ(z)| 2 B = N −1 +∞ −∞ dz sin(∆φ distortion (z))|χ(z)| 2 .
The goal of this paper is to calculate the fringe contrast degradation caused by mean-field interactions, compare the results with the experimentally observed values, and suggest methods for eliminating this degradation.
Analysis of the mean-field effects.-We describe the evolution of the atomic cloud using the nonlinear Schrödinger equation
ih ∂ ∂t ψ +h 2 2m ∂ 2 ∂z 2 ψ = U (z, t)ψ + g 1D |ψ| 2 ψ ,(10)
where U (z, t) is an external field comprising the object of the measurement and any other auxiliary fields
present; g 1D = −h 2 /µ a 1D is the one-dimensional cou- pling constant; a 1D = (−a 2 ⊥ /2a) [1 − C(a/a ⊥ )]
is the onedimensional scattering length; µ = m/2 is the reduced mass; a ⊥ = h/µ ω ⊥ is the size of the transverse ground state of the guide; C = 1.4603 . . . (see [23]); lastly, a is the three-dimensional s-wave scattering length. We further decompose the wave function into a quasi-Fourier series
ψ = 1 √ 2 n=±1, ±3,... ψ n ,(11)
where each term obeys
ih ∂ ∂t +h 2 2m ∂ 2 ∂z 2 ψ n = (12) U (z, t)ψ n + g 1D 2 n2, n3 ψ ⋆ n2+n3−n ψ n2 ψ n3 .
The initial condition for the system (12) is given by the result of applying the beamsplitter (3) to the initial condition (1). We get
ψ ±1 (z, t = 0+) = χ(z)e ±Qz ψ n =±1 (z, t = 0+) = 0 .
Let us assume that the ψ ±1 components of the wave function remain dominant, as they are initially, and then verify this assumption for self-consistency. Here and throughout the paper we will suppose that the beamsplitter recoil energy E Q is much greater than the mean-field energy g 1D |χ| 2 : Under these assumptions the strongest higher harmonics generated are
ε nl ∼ g 1D |χ| 2 E Q ≪ 1 .(13)ψ ±3 = − 1 16 g 1D ψ ⋆ ∓1 ψ ±1 E Q ψ ±1 ,(14)
and it is indeed small provided the parameter (13) is small. Another assumption used to derive (14) was that the beamsplitter recoil energy E Q dominates another energy scale associated with the deviation of the momentum distribution of the dominant harmonics ψ ±1 from that of strict δ-peaks around p = ±hQ. As we will see from the following, in a realistic experiment this energy scale satisfies an even stricter requirement of being dominated by the "time-of-flight" energyh/T . In general, according to our findings the dynamics of higher harmonics is simply reduced to an adiabatic following of the dominant ψ ±1 ones. This observation greatly simplifies further analysis. In short, one may safely exclude the couplings to higher harmonics from the equations of evolution for ψ ±1 , obtain results, and add the correction (14) a posteriori.
The equations for the dominant harmonics ψ ±1 now become
ih ∂ ∂t +h 2 2m ∂ 2 ∂z 2 ψ ±1 = U ± (z, t)ψ ±1 ,(15)
where the effective potentials for the right (+) and left (-) arms of the interferometer read
U ± (z, t) = U (z, t) + g 1D 2 |ψ ±1 | 2 + g 1D |ψ ∓1 | 2 . (16)
The first part of the mean-field potential is the contribution of the part of the cloud to which the given interferometer arm belongs. The second, twice-as-strong part is the influence of the opposite arm. The factor of two difference between the two contributions (illustrated in Fig. 2) can be traced to the difference between the Hartreetype interaction among atoms in the condensate, and the Hartree-Fock-type interaction between atoms in the condensate and those out of the condensate.
In what follows we study the effect of the mean-field interaction (16) on the differential phase shift (7) and fringe contrast (8).
Differential phase shift.
-From now on we will be using the representation (4) for the wave function, where the phase factors φ ± (z, t) (generally complex) are still unknown. The equation (15) under the representation (4) leads to the following equations for the phase factors
∂ ∂τ ± φ ± = −U ± (z, t)/h + D ± (z, t) .(17)
Here
∂ ∂τ ± ≡ ∂ ∂t +v ± (t) ∂ ∂z ;(18)
the classical velocityv ± is given in (6); the potentials (16) are
U ± (z, t) = U (z, t)(19)+ g 2 |χ(z −z ± (t))| 2 + g|χ(z −z ∓ (t))| 2 ;
the classical coordinatez ∓ (t) is represented by (5). Finally, the correction term D is
D ± = D J, ± + D dyn., ± + D dyn.−init., ± + D init., ± ,(20)
where
D J, ± = ih 2 2m ∂ 2 ∂z 2 φ ± ,(21)D dyn., ± = −h 2 2m ∂ ∂z φ ± 2 ,(22)D dyn.−init., ± ,(23)
= ih m ∂ ∂z φ ± ∂ ∂z ln(χ(z −z ± (t))) , and
D init., ± =h 2 2m ∂ 2 ∂z 2 χ(z −z ± (t)) χ(z −z ± (t)) .(24)
Now we decompose the phase shift
φ ± =φ ± + δφ ±(25)
into a sum of the principal part obeying
∂ ∂τ ±φ ± = −U (z, t) ± /h ,(26)
and the correction originating from the D term in (17). We assume that the correction δφ ± is small,
δφ ± ≪ 1 ,(27)
which we will justify later.
Finally the differential phase shift reads
∆φ(z) = −h −1 2T 0 dt ′ × (28) {U + (z −z + (2T ) +z + (t ′ ), t ′ ) −U − (z −z − (2T ) +z − (t ′ ), t ′ )}
In what follows we will assume that the initial atomic wave function has the Thomas-Fermi profile
χ(z) = mω 2 BEC 2 (R 2 − z 2 )θ(R − |z|)(29)
Having in mind certain applications we will also add an external harmonic oscillator potential
U (z, t) = mω 2 0 2 z 2(30)
of an arbitrary frequency ω 0 . We will further classify both as distortion effects and set the useful signal to zero.
To the order of approximations we made, the caused by distortion fringe shift and fringe contrast degradation analyzed below are independent of the useful signal, and thus the latter can indeed be omitted. The distortion differential phase shift now becomes equal to the total one: ∆φ distortion (z) = ∆φ(z).
Interaction-induced loss of contrast: small interferometers.-First consider the case of small interferometers, where the interferometric arms are substantially overlapped during the whole interferometric cycle, V Q T ≪ R. In this case we can easily compute the combined mean-field and harmonic-oscillator contribution to the differential phase shift ∆φ (see (28)
HereK = mω 2 T 2 V Q /h(32)
has the meaning of the half of the differential momentum acquired by atoms in the mean field and in the harmonic oscillator field (see Fig. 1(b)),
ω 2 = 2ω 2 BEC − ω 2 0 ,(34)
and f (ξ) = 3(sin(ξ) − ξ cos(ξ))/ξ 3 .
Here and below ω BEC is the frequency of the harmonic trap for which the state (29) would be the ground state, and R is the Thomas-Fermi radius. Note that for the configuration considered the fringes are suppressed, but either not shifted at all or shifted by π (see (9)). Notice also that for a particular case of ω 0 = √ 2ω BEC the differential phase shift vanishes and thus the fringe contrast is strictly 100%. We will discuss the implications of this phenomenon below.
The expression (32) for the fringe contrast in small interferometers is the first principal result of this paper.
Remedies for the contrast degradation: small interferometers.-As one can see from (31), the distortion of the differential phase shift in small interferometers disappears completely if the frequency of the external harmonic trap is chosen to be √ 2 higher than the frequency of the mean-field potential:
ω 0 = √ 2 ω BEC ⇒ ∆φ distortion = 0 .(36)
This can be realized in two ways: (a) The first scheme is applicable if the longitudinal frequency ω can be controlled independently of the transverse frequency ω ⊥ . In this case, one should start by preparing the condensate in the ground state of the longitudinal trap. Then, just before the splitting pulse, one should increase the longitudinal frequency by a factor of √ 2 in a short ramp ( Fig. 3(a)).
(b) The second scheme assumes that both longitudinal and transverse frequencies are controlled by the same source, and thus the ratio between them is always the same. Then the change in the longitudinal frequency will affect the nonlinear coupling constant g (to which the square of the condensate frequency ω 2 BEC is linearly proportional) due to the simultaneous change in the transverse frequency. Note that g is linearly proportional to ω ⊥ . In this case one satisfies the condition (36) by increasing, prior to the splitting pulse, both the longitudinal and the transverse frequencies by a factor of 2 ( Fig. 3(b)).
Note that both schemes are completely insensitive to possible fluctuations in the number of particles in the condensate.
Interaction-induced loss of contrast: large interferometers.-Consider now the the case of large interferometers where for some period of time during the cycle the arms are totally spatially separated: V Q T ≥ R. The differential phase shift in this case reads
∆φ distortion (z) = 2K 0 z − z 3 /l 3 ,(37)
where
K 0 = −mω 2 0 T 2 V Q /h (38) l = (mω 2 BEC /3hV Q ) 1 3 .(39)
Notice that unlike in the small interferometer case, for no choice of parameters the differential phase shift can be completely eliminated, and that the contribution from the harmonic potential grows with the time duration of the cycle while the one from the mean-field is stationary. This makes us to believe that for large interferometers the most promising implementation will be the free-space one with no longitudinal confinement present. In the case of ω 0 = 0 the fringe contrast assumes the following compact expression:
M = F (η) ,(40)
(see Fig. 4) where
η = (2mω 2 BEC R 3 /3hV Q ) 1 3 = (g 1D N/hV Q ) 1 3 ,(41)
is the parameter governing the fringe contrast, F (η) = 3 2 1 F 2 (1/6; 1/2, 7/6; −η 6 /16)
− sin(η 3 /2)/η 3 ,(42)
and n F m (a 1 , . . . , a n ; b 1 , . . . , b m ; ξ) is the generalized hypergeometric function. Notice that the parameter η depends on neither the shape of the atomic cloud nor on the interferometric cycle duration. For a given set of atomic and waveguide parameters the large contrast requirement η ≪ 1 defines a universal limit for the number of atoms:
N ≪hV Q /g 1D ,(43)
The expression (40) is the second principal result of our paper.
Limits of the validity of our computational scheme.-In order for our conclusions be valid the correction (27) originating from the neglected terms in the kinetic energy must be small. We have performed a thorough investigation aimed at understanding the physical meaning of the neglected corrections and estimating their value. The results are as follows.
The correction δφ ± can be decomposed into a sum of four terms δφ ± = δφ J, ± + δφ dyn., ± + δφ dyn.−init., ± + δφ init., ± with the following interpretation:
e δφJ, ± ≈ |J U± | − 1 2 (45)
e δφ dyn., ± +δφ dyn.−init., ± +δφinit., ± (46)
≈ χ(z −z ± (t))|e −i t 0 dt (pU ± +p)/2mh |χ(z −z ± (t)) .
The first correction is related to the expansion factor (Jacobian) of the bunch of trajectories of classical particles moving in the field U ± . The second, third, and forth corrections originate from the neglected kinetic energy, both the initial kinetic energy (coming from the momentum distribution of χ) and that acquired in the field U ± . The above corrections (together with the nonlinear correction (13)) lead to the following requirements for the validity of the approximations used:
ε J = (ωT ) 2 ≪ 1 (47) ε dyn. = mω 4z2 T 3 /h ≪ 1 (48) ε init. =hT /mR 2 ≪ 1 (49) ε nl = (ωR/V Q ) 2 ≪ 1 ,(50)
wherez is the typical atomic coordinate, given byz ∼ R (z ∼ V Q T ) for small (large) interferometers. For a typical set of parameters of the JILA experiment with ω = 2π × 3.2Hz and T = 10 −3 s (corresponding to the small interferometer case), the values of these parameters are indeed small, validating our approximation: ε J = ε dyn.−init. = 4.1 × 10 −4 , ε dyn. = 4.7 × 10 −4 , ε init. = 3.6 × 10 −4 , and ε nl = 3.6 × 10 −3 .
Comparison with the JILA experiment.-The parameters of the JILA's experiment on Michelson interferometer on a magnetic chip [11] lie in the range intermediate between the small and large interferometer regimes and requires no assumption on the distance between the arms V Q vis a vis the cloud size R. The equation for the fringe contrast in this case must be integrated numerically. In [11] a time-and space-localized pulse of magnetic field was used as the phase signal. A stationary harmonic potential of frequency ω 0 = 2π × 5 Hz was present in each realization. The contrast was traced as a function of the duration of the interferometric cycle 2T , as depicted in Fig. 5. Other parameters read ω ⊥ = 2π × 100 Hz, Q = 4π/λ, where λ = 780 nm is the wavelength of light used to produce the interferometric elements, R = 45 µm, and a = 100.4 a B . In the experiment the number of atoms varied from one value of the cycle duration to another; these numbers are shown in Fig. 5(a). The value of ω BEC was extracted from ω 2 BEC = (4/9)g 1D N/mR 3 . The results of the comparison are shown at the Fig. 5(a).
One can see from the Fig. 5 that for the parameters chosen the role of interatomic interactions in contrast degradation is relatively small and the main source of the effect is the stationary harmonic trap. This is entirely unexpected since the strength of the interactions was very close to the strength of the trap, typically ω 2 BEC = .25÷.4ω 2 0 , and moreover in the small interferometer regime the strength interactions become multiplied by a factor of 2 of Fock origin (see (34). One can show further that the relatively weak role of interactions in the JILA experiment is not related to any small parameter, but is solely an interplay of numerical prefactors. To illustrate this point we show at the Fig. 5(b) a theoretical prediction for N = 4.5×10 4 atoms exhibiting a dominant role of the interatomic interactions.
Summary and outlook.-(1) We have developed a simple computational scheme that allows to include the effect of interatomic interactions in calculation of fringe shift and fringe degradation in waveguide based atom interferometers.
(2) In two cases we have found simple analytic expression for fringe contrast. These cases are: small interferometers where the spatial separation between the arms is much smaller than the atomic cloud size, V Q T ≪ R; and large interferometers where at at least one instance the arms are fully separated, V Q T ≥ R.
(3) In the case of small interferometers the analytic expression for the contrast degradation allowed us to suggest a simple recipe canceling the destructive effect of interactions completely.
(4) In the case of large interferometers the effect of interactions can not, to our knowledge, be canceled entirely. Furthermore, the interatomic interactions set an universal limit on the number of atoms involved in the interferometric process: N ≪hV Q /g 1D . Notice that this bound depends on the characteristics of the atom and the waveguide only (where the "beam-splitter velocity" V Q is supposed to be linked to the atomic transition frequency), while neither the timing of cycle nor the size of the atomic cloud enter. dation. This finding is can not be traced to any small parameter, but is a mere interplay of numerical prefactors. A moderate ten-fold increase (to N = 4.5 × 10 4 ) in the number of atoms will reduce, for large interferometer case, the fringe visibility to 50%, even without an additional longitudinal trap present, reflecting the universal limit outlined above.
(6) Michelson interferometer is a closed-loop whitelight scheme by design; it is supposed to produce clear fringes even for input sources with a short coherence length. As one can see from the Fig. 2(b), the mean-field pressure leads to two distinct effects. The first effect is the change in the relative momentum of the interferometer arms; this is what we addressed in the present work. The second effect is the distortion of the interferometric path, as a result of which the path becomes open, and thus the interferometer becomes sensitive to the longitudinal coherence. For zero-temperature condensates this does not lead to any loss of contrast. At finite temperature the degradation due to the broken interferometer loop becomes relevant, and we are going to study this effect in the nearest future.
FIG. 1 :
1Interferometric loop of the Michelson interferometer with single atoms (a) and with Bose condensates (b).
FIG. 2 :
2An artistic view of the mean-field effects in a Michelson interferometer. Notice that the mean-field potential is different for the right arm (a) and the left arm (b).
), as well as the resulting fringe contrast M (see 8). They read ∆φ distortion (z) = 2Kz (31) and M = |f (2KR)| .
FIG. 3 :
3Two schemes for the preparation of the initial wave packet, designed to eliminate the interatomic-interactioninduced degradation of the fringe contrast in small interferometers. (a) Situation when the longitudinal confinement can be controlled independently from the transverse one. (b) Situation when the longitudinal and transverse confinements are linearly linked one to another.
FIG. 4 :
4Fringe contrast ratio vs. the universal parameter η for large interferometers.
( 5 )FIG. 5 :
55Using the method developed we have analyzed the results of the recent JILA experiment on Michelson interferometer on atom chip. In spite of comparable strength of interatomic interactions and stationary longitudinal trap present in the experiment, our results indicate a relatively weak role of interactions in fringe contrast degra-Fringe contrast ratio vs. duration of the interferometric cycle corresponding to the parameters of the JILA experiment with Michelson interferometer on a chip, with magnetic gradient as the phase element. (a) Curves correspond to N = 3. × 10 3 atoms, the as same 4 ms point in the experiment. The theoretical predictions for the actual experimental numbers of atoms at every run are also shown. (b) The same as (a), but for N = 4.5 × 10 4 atoms, where a reduction of the fringe contrast to 50% is expected.
, Victor M. Bright, Eric A. Cornell, Quentin Diot, Tetsuo Kishimoto, Mara Prentiss, R. A. Saravanan, Stephen R. Segal, Saijun Wu, Phys. Rev. Lett. 94, 090405(2005)). We further suggest several recipes for suppression of the interaction-related contrast degradation.PACS numbers: 03.75Dg,03.75Gg,03.75.Kk
AcknowledgmentsWe are grateful to Ying-Ju Wang and Dana Z. Anderson for providing us with the recent experimental data and for enlightening discussions on the subject. This work was supported by a grant from Office of Naval Research N00014-03-1-0427, and through the National Science Foundation grant for the Institute for Theoretical Atomic and Molecular Physics at Harvard University and Smithsonian Astrophysical Observatory.
Atom Interferometry. P. R. BermanNew YorkAcademicAtom Interferometry, edited by P. R. Berman (Academic, New York, 1997).
. O Carnal, J Mlynek, Phys. Rev. Lett. 662689O. Carnal and J. Mlynek, Phys. Rev. Lett. 66, 2689 (1991).
. D W Keith, C R Ekstrom, Q A Turchette, D E Pritchard, Phys. Rev. Lett. 662693D. W. Keith, C. R. Ekstrom, Q. A. Turchette, and D. E. Pritchard, Phys. Rev. Lett. 66, 2693 (1991).
. M Kasevich, S Chu, Phys. Rev. Lett. 67181M. Kasevich and S. Chu, Phys. Rev. Lett. 67, 181 (1991).
. J E Simsarian, J Denschlag, M Edwards, C W Clark, L Deng, E W Hagley, K Helmerson, S L Rolston, W D Phillips, Phys. Rev. Lett. 852040J. E. Simsarian, J. Denschlag, M. Edwards, C. W. Clark, L. Deng, E. W. Hagley, K. Helmerson, S. L. Rolston, and W. D. Phillips, Phys. Rev. Lett. 85, 2040 (2000).
. Y Torii, Y Suzuki, M Kozuma, T Sugiura, T Kuga, L Deng, E W Hagley, Phys. Rev. 6141602Y. Torii, Y. Suzuki, M. Kozuma, T. Sugiura, T. Kuga, L. Deng, and E. W. Hagley, Phys. Rev. A61, 041602 (2000).
. B P Anderson, M A Kasevich, Science. 2821686B. P. Anderson, M. A. Kasevich, Science 282 1686 (1998).
. D Hellweg, L Cacciapuoti, M Kottke, T Schulte, K Sengstock, W Ertmer, J J Arlt, Phys. Rev. Lett. 9110406D. Hellweg, L. Cacciapuoti, M. Kottke, T. Schulte, K. Sengstock, W. Ertmer, J. J. Arlt, Phys. Rev. Lett. 91, 010406 (2003).
. Artur Widera, Olaf Mandel, Markus Greiner, Susanne Kreim, Theodor W Hnsch, Immanuel Bloch, Phys. Rev. Lett. 92160406Artur Widera, Olaf Mandel, Markus Greiner, Susanne Kreim, Theodor W. Hnsch, and Immanuel Bloch, Phys. Rev. Lett. 92, 160406 (2004).
. Y Shin, M Saba, T A Pasquini, W Ketterle, D E Pritchard, A E Leanhardt, Phys. Rev. Lett. 9250405Y. Shin, M. Saba, T. A. Pasquini, W. Ketterle, D. E. Pritchard, A. E. Leanhardt, Phys. Rev. Lett. 92, 050405 (2004).
. Ying-Ju Wang, Dana Z Anderson, Victor M Bright, Eric A Cornell, Quentin Diot, Tetsuo Kishimoto, Mara Prentiss, R A Saravanan, Stephen R Segal, Saijun Wu, Phys. Rev. Lett. 9490405Ying-Ju Wang, Dana Z. Anderson, Victor M. Bright, Eric A. Cornell, Quentin Diot, Tetsuo Kishimoto, Mara Prentiss, R. A. Saravanan, Stephen R. Segal, Saijun Wu, Phys. Rev. Lett. 94, 090405 (2005).
. P K Rekdal, S Scheel, P L Knight, E A Hinds, Phys. Rev. A. 7013811P. K. Rekdal, S. Scheel, P. L. Knight, E. A. Hinds, Phys. Rev. A 70, 013811 (2004).
. C J Vale, B Upcroft, M J Davis, N R Heckenberg, H Rubinsztein-Dunlop, J.Phys. B. 372959C. J. Vale, B. Upcroft, M. J. Davis, N. R. Heckenberg, H. Rubinsztein-Dunlop, J.Phys. B 37 2959 (2004).
. S Schneider, A Kasper, Ch Hagen, M Bartenstein, B Engeser, T Schumm, I Bar-Joseph, R Folman, L Feenstra, J Schmiedmayer, Phys. Rev. A. 6723612S. Schneider, A. Kasper, Ch. vom Hagen, M. Bartenstein, B. Engeser, T. Schumm, I. Bar-Joseph, R. Folman, L. Feenstra, and J. Schmiedmayer, Phys. Rev. A 67, 023612 (2003).
. H Ott, J Fortagh, G Schlotterbeck, A Grossmann, C Zimmermann, Phys. Rev. Lett. 87230401H. Ott, J. Fortagh, G. Schlotterbeck, A. Grossmann, and C. Zimmermann, Phys. Rev. Lett. 87, 230401 (2001).
. W Hnsel, P Hommelhoff, T W Hnsch, J Reichel, Nature. 413498W. Hnsel, P. Hommelhoff, T. W. Hnsch, and J. Reichel, Nature (London) 413, 498 (2001).
. M Vengalattore, W Rooijakkers, M Prentiss, Phys. Rev. A. 6653403M. Vengalattore, W. Rooijakkers, and M. Prentiss, Phys. Rev. A 66, 053403 (2002).
. A Rohrl, M Naraschewski, A Schenzle, H Wallis, Phys. Rev. Lett. 784143A. Rohrl, M. Naraschewski, A. Schenzle, H. Wallis, Phys. Rev. Lett. 78, 4143 (1997).
. M D Girardeau, K K Das, E M Wright, Phys. Rev. 6623604M. D. Girardeau, K. K. Das, E. M. Wright, Phys. Rev. A66, 023604 (2002).
. S Chen, R Egger, Phys. Rev. 6863605S. Chen and R. Egger, Phys. Rev. A68, 063605 (2003).
. J A Stickney, A A Zozulya, Phys. Rev. A. 6653601J. A. Stickney and A. A. Zozulya, Phys. Rev. A 66, 053601 (2002).
. Saijun Wu, Yingju Wang, Quentin Diot, physics/0408011Mara PrentissSaijun Wu, Yingju Wang, Quentin Diot, Mara Prentiss, e-print physics/0408011.
. M Olshanii, Phys. Rev. Lett. 81938M. Olshanii, Phys. Rev. Lett. 81, 938 (1998).
| []
|
[
"THE EFFECT OF STRUCTURAL DIVERSITY OF AN ENSEMBLE OF CLASSIFIERS ON CLASSIFICATION ACCURACY",
"THE EFFECT OF STRUCTURAL DIVERSITY OF AN ENSEMBLE OF CLASSIFIERS ON CLASSIFICATION ACCURACY"
]
| [
"Lesedi Masisi \nSchool of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa\n",
"Fulufhelo V Nelwamondo \nSchool of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa\n",
"Tshilidzi Marwala \nSchool of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa\n"
]
| [
"School of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa",
"School of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa",
"School of Electrical and Information Engineering\nUniversity of the Witwatersrand Private\nBag 3 Wits 2050 South Africa"
]
| []
| This paper aims to showcase the measure of structural diversity of an ensemble of 9 classifiers and then map a relationship between this structural diversity and accuracy. The structural diversity was induced by having different architectures or structures of the classifiers The Genetical Algorithms (GA) were used to derive the relationship between diversity and the classification accuracy by evolving the classifiers and then picking 9 classifiers out on an ensemble of 60 classifiers. It was found that as the ensemble became diverse the accuracy improved. However at a certain diversity measure the accuracy began to drop. The Kohavi-Wolpert variance method is used to measure the diversity of the ensemble. A method of voting is used to aggregate the results from each classifier. The lowest error was observed at a diversity measure of 0.16 with a mean square error of 0.274, when taking 0.2024 as maximum diversity measured. The parameters that were varied were: the number of hidden nodes, learning rate and the activation function. | null | [
"https://arxiv.org/pdf/0804.4741v1.pdf"
]
| 6,010,542 | 0804.4741 | 765f0635f7a2b9e2ef7121c8218bb31bbc77f630 |
THE EFFECT OF STRUCTURAL DIVERSITY OF AN ENSEMBLE OF CLASSIFIERS ON CLASSIFICATION ACCURACY
Lesedi Masisi
School of Electrical and Information Engineering
University of the Witwatersrand Private
Bag 3 Wits 2050 South Africa
Fulufhelo V Nelwamondo
School of Electrical and Information Engineering
University of the Witwatersrand Private
Bag 3 Wits 2050 South Africa
Tshilidzi Marwala
School of Electrical and Information Engineering
University of the Witwatersrand Private
Bag 3 Wits 2050 South Africa
THE EFFECT OF STRUCTURAL DIVERSITY OF AN ENSEMBLE OF CLASSIFIERS ON CLASSIFICATION ACCURACY
Genetical Algorithms (GA)EnssembleclassificationStructural diversityIdentity Structure (IDS)multi layered perceptron (MLP)
This paper aims to showcase the measure of structural diversity of an ensemble of 9 classifiers and then map a relationship between this structural diversity and accuracy. The structural diversity was induced by having different architectures or structures of the classifiers The Genetical Algorithms (GA) were used to derive the relationship between diversity and the classification accuracy by evolving the classifiers and then picking 9 classifiers out on an ensemble of 60 classifiers. It was found that as the ensemble became diverse the accuracy improved. However at a certain diversity measure the accuracy began to drop. The Kohavi-Wolpert variance method is used to measure the diversity of the ensemble. A method of voting is used to aggregate the results from each classifier. The lowest error was observed at a diversity measure of 0.16 with a mean square error of 0.274, when taking 0.2024 as maximum diversity measured. The parameters that were varied were: the number of hidden nodes, learning rate and the activation function.
Introduction
Developing an efficient way for classification has been a popular topic. It has been found that as opposed to using one classifier an ensemble of classifiers is more efficient [1]- [3]. The reason is that a committee of classifiers in making decision is better than one classifier. The individual classifiers which form this committee have created large interest when compared to accuracy of the ensemble [4]- [6]. Large research has been done in optimizing the diversity of the ensemble and the aggregation methods for the decision made by the ensemble [7]. This has led to developments in diversity measures and a relationship between these measures with the ensemble accuracy. Current methods use the outcomes of the individual classifiers of the ensemble to measure diversity [8]- [13]. These methods are applicable due to the way diversity was defined [14].
This study focuses on structural diversity. This means that, the individual parameters of the classifiers are used to measure structural diversity as opposed to viewing the outcome of the individual classifiers. This is in agreement with Sharkey [15], who stated that diversity can be induced by varying the architecture of the classifiers. It also further implies that diversity will not be induced by using different learning schemes such as bagging and boosting in sampling the data for training, this is done so that only the architectural parameters of the classifiers would induce diversity. Same data will be used to train the ensemble of classifiers. This will lead to knowledge on whether structural diversity has the potential to pose improvements on the classification.
There are a number of aggregation schemes such as minimum, maximum, product, average, simple majority, weighted majority, Naïve Bayes and decision templates to name a few, see [14], [16]. However for this study the majority vote scheme was used to aggregate the individual classifiers for a final solution. This report includes a section on the Identity structure (IDS), Kohavi-Wolpert Variance Method (KW), The neural network parameters, Genetical Algorithms (GA), The model, Implementation, Results and then lastly the conclusion and discussion.
Identity Structure (IDS)
The Identity Structure (IDS) is derived from taking into account the parameters that make up a Neural Network (NN). These parameters include the activation functions, number of hidden nodes and the learning rate. There are other types of the Neural Networks (NN) that can be used to form the IDS. A number of artificial machines can therefore be used for a hybrid ensemble. However this is beyond the scope of this study. For this study a Multi Layered Perceptron (MLP) was used. The parameters of concern were the number of hidden nodes, activation function and the learning rate. These parameters make up the Identity Structure of the classifiers (IDS). The IDS can be viewed as:
ܵܦܫ ൌ ൦ ݄݁݊݅ܿܽܯ ݁ݕݐ ݎܾ݁݉ݑܰ ݂ ݄݅݀݀݁݊ ݏ݁݀݊ ݊݅ݐܽݒ݅ݐܿܣ ݊݅ݐܿ݊ݑ݂ ݃݊݅݊ݎܽ݁ܮ ݁ݐܽݎ ൪
The IDS was decrypted into a binary format, that contained 12 bits. That means a one would indicate that the parameter of the classifiers is active and a zero would mean the opposite. The first bit represented the machine type used, the five following bits represented the number of hidden nodes, the following three bits the activation functions and then lastly the last three bits represented the learning rates used by the classifier. Only three learnig rates were considered (0.01, 0.02, 0.03 and 0.04). That means a binary string of "0 0 1" would represent a 0.01 learning rate. Three activation functions were considered hence the three bits. These activation functions include the: Linear, Logistic and the Softmax. The first, second and third bit of the three bits represented the linear, Logistic and the Softmax respectively.
A one for the first bit of the decrypted IDS represented the MLP as the machine type used for that classifier. The number of hidden nodes is set not to exceed 30, hence five bits, see the five bold bits on the decrypted IDS below. This conversion makes the IDS less complex and would reduce the computational cost on the calculations for diversity. Suppose that the classifier was an MLP and had 5 hidden nodes and used a linear activation function and a learning rate of 0.02, then the IDS would be: Each of the parameters of the IDS will have to be evaluated for measuring differences between the identities of the classifiers. The methods used to measure diversity are as follows: the Yule's Q-static for two classifiers, correlation coefficient (ρ), Kohavi-Wolpert variance (kw), Entropy measure (Ent), measure of difficulty (θ) and Coincident Failure Diversity (CFD) [17], to name a few. These methods are mainly applied at the outcome of the classifiers and not at the building blocks (structure) of the classifiers [7]. However the Kohavi-Wolpert variance (kw) method can be applied to measure the structural diversity, which was derived from the variance formulation [17]. This is because diversity in this study is defined as the variance among the architectures of the individual classifiers.
IDS
Kohavi-Wolpert Variance Method (KW)
This method is applied in measuring the variance of the outputs of the classifiers in the ensemble. It falls under the family of Non-pairwise measures [7]. As mentioned above this equation is used to evaluate the outcomes of the classifiers. However, for this study it will be used to measure the variance of the different identities of the classifiers by evaluating the differences of the individual IDS of the classifiers. That means for this study:
݈൫ܸ ൯ ൌ ∑ ܦ , ୀଵ(1)
ܸ , is a vector of the classifiers, L is the total number of classifiers. ܸ can be viewed as, ܸ ൌ ܦܫൣ ଵ, ் , … , ܦܫ , ் ൧. Equation (2) defines the overall variance calculation of the ensemble.
ݓ݇ ൌ ଵ ே మ ∑ ݈൫ܸ ൯ ቀL െ ݈൫ܸ ൯ቁ ே ୀଵ (2) j = 1,…,N,
where N is the number of the identity parameters (classifier type, complexity, activation function and the learning rate). This will result in the variance of the ensemble.
The Neural Network Parameters
The structural diversity is based on the parameters of the neural network. See Figure 1 for a MLP neural network. The MLP is composed of the input layer, hidden layer and the output layer, hence it is multi layered, see Figure 1. An MLP is built with different parameters, such as the activation functions, hidden nodes, biases, weights, etc. For this study diverse MLPs were created in a sense that they had different learning rates, activation functions and the number of hidden nodes. This would be considered as a diverse ensemble as compared to having the same MLPs with the same number of hidden nodes, activation function and the learning rate. This can clearly be seen from the IDS defined above.
It is clear that the diversity is not induced on the training of the neural network which is quiet a popular practice.
But diversity is derived from some of the building blocks of the individual classifiers. See equation (3) that describes the output of the neural network [18].
ݕ ൌ ݂ ௨௧ ቌ ݓ ሺଶሻ ெ ୀଵ ݂ ൭ ݓ ሺଵሻ ݔ ௗ ୀଵ ݓ ሺଵሻ ൱ ݓ ሺଶሻ ൱ ሺ3ሻ
Where ݂ ௨௧ and ݂ are the activation functions at the output layer and at the hidden layers respectively, M is the number of the hidden units, d is the number of input units, ݓ ሺଵሻ and ݓ ሺଶሻ are the weights in the first and second layer respectively moving from input i to hidden unit j, and ݓ ሺଵሻ indicate the biases for the unit j.
It was the outer activation functions which were varied to induce diversity. It can also be observed from equation (3) that varying the number of the hidden nodes will affect the generalization ability of the neural network.
Diversity and Genetical Algorithms (GA)
GA are evolutionary algorithms that aim to find a global solution to a given problem by applying the principles of evolutionary biology, such as mutation, crossover, reproduction and natural selection [20]. The GA have high capabilities to search large spaces for an optimal solution. The search process of the GA includes:
1. Generation of a population of offspring, normally taken as chromosomes
2. An evaluation function, that evaluates the fittest chromosome, if not fit genetic operations take over, such as: mutation and crossover. The mutation induces diversity in the search space.
3. This process continues until the fittest chromosome is attained.
However in this study the evaluation function is the diversity measure, the GA tries to meet a certain diversity (KW) among the ensemble of 9 classifiers, see Figure 2.
The chromosomes are the indexes for the vector that contains 60 classifiers. The GA will then evolve the classifiers for a specified diversity value.
The GA faced difficulties in attaining the specified diversity. This was because the diversity measure specified could not be attained from the current ensemble of 60 classifiers. To prevent this problem from occurring one would need to:
• Build the ensemble of 60 classifiers with known KW values for any possible combination of the 9 ensembles.
• Initially run the GA for any KW values and then use the set of KW values that the GA can approximate. As the target values in the next run.
The second option seems much feasible than the first option because on the first option it would mean that there would be no need for the GA. The first option further implies that the GA would be synchronized with the KW measure. The GA was empirically optimized for an initial population of 20 chromosomes, 28 Generations with a crossover rate of 0.08.
The Model
The model describes the basic flow of the algorithm for developing an ensemble of 9 classifiers from the 90 classifiers. The method of voting was then applied on the 9 chosen classifiers for generating the classification accuracy of the ensemble.
Implementation
A vector of classifiers was created which was composed of 60 classifiers. This was because the more the classifiers there was, the better the search space for the GA for an optimal solution. All the classifiers in the vector were trained. The GA only looked for a solution for an ensemble of 9 classifiers. This means only 15% of the classifiers out of the ensemble of 60 classifiers was used at a time. This was to ensurereasonable diversity values in the search space. An odd number for the ensemble was chosen solemnly to avoid a tie when the method of voting was used.
The evaluation function was composed of two variables, the diversity measure and the targeted diversity (ܶ ௪ ). See equation (5) for the evaluation function.
݂ ீ ൌ െሺ݇ݓ െ ܶ ௪ ሻ ଶ(5)
Where: ݂ ீ is the evaluation function, ݓ݇ is the diversity measured of the 9 classifiers and ܶ ௪ is the targeted diversity.
The targeted diversity is the diversity the GA is searching for in the ensemble of 60 classifiers. That means the GA was searching for a group of 9 classifiers that would meet the targeted diversity. The structural diversity of the 60 ensemble was first calculated and was found to be 0.2024. Hence the targeted diversity value was ranged below this value (0.2024).
The GA tries to optimize the evaluation function by finding its maximum. Equation (5) will reach its maximum when the measured diversity is equal to the targeted diversity. GA was then optimized by first searching the KW values which the GA could nearly reach and then they were used in the second run as the target diversity values. This was to avoid the GA searching for the target values that did not exist from any combination of 9 classifiers from the 60 ensemble of classifiers.
Vector of classifiers
The classifiers were created via the normal distribution by creating them at random, the activation functions, hidden nodes, and the learning rate were chosen at random. This was so that the vector contained an ensemble of classifiers which were not biased. However a precaution was taken so that weak classifiers were not created, all the classifiers had the number of hidden nodes larger than the number of input features.
The vector also had classifiers that had a classification mean square error of less than 0.45 on the validation data set. For it was difficult to attain low mean square errors with the data used. The ensemble of 90 vectors was optimized by using an ensemble that produced a greater diversity measure. This diversity measure is 0.2024. This would be able to provide the GA with better classifiers that could generate the required diversity (KW).
The Nine Ensemble of Classifiers
The validation data set was used to select the nine classifiers from the vector of 60 classifiers. The classifiers were decrypted into a set of binary numbers as stated before. This binary number represented the IDS of the individual classifier. See table 1 for one of the ensemble of 9 classifiers. C1 C2 C3 C4 C5 C6 C7 C8 C9 1 1 1 1 1 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 1 0 1 1 0 1 1 1 0 0 0 1 0 1 0 0 1 0 0 1 1 1 0 0 0 0 1 1 0 1
1 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 1 0 1 1 1 1 1 1 0 0 1
The maximum diversity given buy the ensemble of 60 classifiers was 0.2024, hence also the GA could not find any KW value beyond this point. This further limited the number of points that could be used to map the relationship between structural diversity and accuracy.
The Data
The interstate conflict data was used for this study. There are 7 features and one output, see table 1 for the data input features. [19]. This shows clearly that the data is complicated for training a neural network. However for this study it was just used to show how structural diversity relates with the ensemble accuracy.
A data sample of 1006 was used for training, 317 samples for validation and 552 for testing. The total data used was therefore 1875. This data has seven feature inputs as mentioned, however the data was normalized between 0 and 1, to have equal importance of all the features, by using equation (4):
ܺ ൌ ݔ െ ݔ ݔ ௫ െ ݔ ሺ4ሻ
Where ݔ and ݔ ௫ are the minimum and maximum values of the features of the data samples observed, respectively. Figure 3 shows the results from the GA with the first run of the GA with arbitrary target values. However Figure 4 shows the graph of error Vs structural diversity with the optimized target values. The figures were obtained from using the validation data set. The ensemble of 9 classifiers chosen by the GA was then tested on the testing data set so as to bring more sense to the results, see table 2. The testing data was applied on the ensemble that produced 0.16 and 0.11 diversity values. It can be seen that the results from Figure 3 and Figure 4 follow the expected trend. The error decreased with increasing diversity. However there is a point where the degree of diversity becomes unfavorable. The error began to increase with an increase in diversity. This is in alignment with [7], who stated that diversity can either profit the system or it could bring about poor performance on the classification. It can also be observed from the graphs that the data points of interest are not to scale. The occurrence of a change is not consistent. This is could be attributed to:
Results
• The fact that there was a lot of rounding off values in the software package (Matlab),
• The other factor is that the ensemble of 60 classifiers was not designed with a linear or with consistent increments of diversity values.
• The targeted diversity values might not have been possible to be extracted from the ensemble and due to that the GA will provided its local solution.
Mean square error was used in all instances to calculate the classification accuracy. However it was just used as reference so as to observe the behavior of the ensemble with the changing diversity measured.
Conclusion and discussion
This paper presented a measure of structural diversity as defined in this paper and then a relationship between structural diversity and classification accuracy were mapped. As diversity increases the generalization ability of the ensemble improved, this was seen by the classification error decreasing. However there was a point where diversity made the ensemble weaker to classify. This study has also shown that diversity of an ensemble can be induced by having an ensemble that is composed of classifiers that have different parameters such as activation functions, number of hidden nodes and the learning rate. This is in alignment with Sharkey [15]. The methods used were computationally expensive since they made use of the GA and the training of 60 classifiers. This study agrees with most literatures that diversity does improve the accuracy of the ensemble [7]. This was observed by using the testing data set on the ensemble that had a low classification error. This study was limited by the bank of classifiers (60 classifiers) that were created at random. This ensemble had 0.2024 diversity measures which meant that only small samples could be used to verify the relationship between diversity and accuracy. All the errors on the testing data set showed that diversity can be used to measure the potential for improvement on the ensemble of classifiers.
Figure 1 :
1The MLP structure showing the inputs, the layers and the activation function
Figure 2 :
2The mapping process of diversity and accuracy
Figure 3 :Figure 4 :
34GA Optimised GA on the same 60 ensemble.
Table 1 :
1The IDS of the 9 classifiers
Table 1 :
1The interstate conflict data
inputs
values
Allies
0-1
Contingency
0-1
Distance
Log10(Km)
Major Power
1-0
Capability
Log10
Democracy
-10-10
Dependency
continuous
The output is a binary number, a zero represented no
conflict where else a one represented conflict. There are a
total of 27,737 cases in the cold war population. The
26,846 are the peaceful dyads year and 875 conflict dyads
year
Table 2 :
2Classification error on the testing data setKw
Error (Initial Kw) Errors (Optimized Kw)
0.11
0.3128
0.2821
0.16
0.2821
0.2749
. J Kittler, M Hatef, R Matas, & J Duin, On combining classifiers. Intell. 203J. Kittler, M. Hatef, R. Matas & J. Duin, On combining classifiers. Intell. 20 (3), 1998, 226-239.
Combining predictors in. L Breiman, Combining Artificial Neural Nets. A.J.C SharkeyLondonSpringerL. Breiman, Combining predictors in: A.J.C Sharkey (Ed.), Combining Artificial Neural Nets, Springer, London, 1999.
H Drucker, Boosting using using neural networks. A.J.C. SharkeyLondonSpringerCombining Artificial Neural NetsH. Drucker, Boosting using using neural networks, in: A.J.C. Sharkey (Ed.), Combining Artificial Neural Nets. Springer, London, 1999.
Hybrid methods in pattern recognition. F Giacint, & G Roli, World Scientific PresSingaporeF. Giacint & G. Roli, Hybrid methods in pattern recognition, World Scientific Pres, Singapore, 2002.
Diversity in multiple classifier ensembles based on binary feature quantisation with application to face recognition. K Sirlantzis, S C Hoque & M, Fairhurst, Department of Electronics, University of Kent, United KindomK. Sirlantzis, S. Hoque & M.C. Fairhurst, Diversity in multiple classifier ensembles based on binary feature quantisation with application to face recognition, Department of Electronics, University of Kent, United Kindom, 2008, 437-445.
| []
|
[
"Normalized Wasserstein for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation",
"Normalized Wasserstein for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation"
]
| [
"Yogesh Balaji [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n\n",
"Rama Chellappa \nDepartment of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n\n",
"Soheil Feizi [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n\n"
]
| [
"Department of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n",
"Department of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n",
"Department of Computer Science\nDepartment of Computer Science\nUniversity of Maryland\nUMIACS University of Maryland\nUniversity of Maryland\n"
]
| []
| Understanding proper distance measures between distributions is at the core of several learning tasks such as generative models, domain adaptation, clustering, etc. In this work, we focus on mixture distributions that arise naturally in several application domains where the data contains different sub-populations. For mixture distributions, established distance measures such as the Wasserstein distance do not take into account imbalanced mixture proportions. Thus, even if two mixture distributions have identical mixture components but different mixture proportions, the Wasserstein distance between them will be large. This often leads to undesired results in distance-based learning methods for mixture distributions. In this paper, we resolve this issue by introducing the Normalized Wasserstein measure.The key idea is to introduce mixture proportions as optimization variables, effectively normalizing mixture proportions in the Wasserstein formulation. Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance. We demonstrate the effectiveness of the proposed measure in GANs, domain adaptation and adversarial clustering in several benchmark datasets. | null | [
"https://export.arxiv.org/pdf/1902.00415v2.pdf"
]
| 59,553,627 | 1902.00415 | 7774b754cc460a33fbb26b9f70f4ece6434c096d |
Normalized Wasserstein for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation
Yogesh Balaji [email protected]
Department of Computer Science
Department of Computer Science
University of Maryland
UMIACS University of Maryland
University of Maryland
Rama Chellappa
Department of Computer Science
Department of Computer Science
University of Maryland
UMIACS University of Maryland
University of Maryland
Soheil Feizi [email protected]
Department of Computer Science
Department of Computer Science
University of Maryland
UMIACS University of Maryland
University of Maryland
Normalized Wasserstein for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation
Understanding proper distance measures between distributions is at the core of several learning tasks such as generative models, domain adaptation, clustering, etc. In this work, we focus on mixture distributions that arise naturally in several application domains where the data contains different sub-populations. For mixture distributions, established distance measures such as the Wasserstein distance do not take into account imbalanced mixture proportions. Thus, even if two mixture distributions have identical mixture components but different mixture proportions, the Wasserstein distance between them will be large. This often leads to undesired results in distance-based learning methods for mixture distributions. In this paper, we resolve this issue by introducing the Normalized Wasserstein measure.The key idea is to introduce mixture proportions as optimization variables, effectively normalizing mixture proportions in the Wasserstein formulation. Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance. We demonstrate the effectiveness of the proposed measure in GANs, domain adaptation and adversarial clustering in several benchmark datasets.
Introduction
Quantifying distances between probability distributions is a fundamental problem in machine learning and statistics with several applications in generative models, domain adaptation, clustering, etc. Popular probability distance measures include optimal transport measures such as the Wasserstein distance [22] and divergence measures such as the Kullback-Leibler (KL) divergence [4].
Classical distance measures, however, can lead to some issues for mixture distributions. A mixture distribution is the probability distribution of a random variable X where X = X i with probability π i for 1 ≤ i ≤ k. k is the number of mixture components and π = [π 1 , ..., π k ] T is the vector of mixture (or mode) proportions. The probability distribution of each X i is referred to as a mixture component (or, a mode). Mixture distributions arise naturally in different applications where the data contains two or more sub-populations. For example, image datasets with different labels can be viewed as a mixture (or, multi-modal) distribution where samples with the same label characterize a specific mixture component.
If two mixture distributions have exactly the same mixture components (i.e. the same X_i's) but different mixture proportions (i.e. different π's), classical distance measures between the two will be large. This can lead to undesired results in several distance-based machine learning methods. To illustrate this issue, consider the Wasserstein distance between two distributions P_X and P_Y, defined as [22]

W(P_X, P_Y) := \min_{P_{X,Y}} \mathbb{E}\left[ \| X - Y \| \right],    (1)
subject to \text{marginal}_X(P_{X,Y}) = P_X, \; \text{marginal}_Y(P_{X,Y}) = P_Y,

where P_{X,Y} is the joint distribution (or coupling) whose marginal distributions are equal to P_X and P_Y. When no confusion arises and to simplify notation, in some equations we use the W(X, Y) notation instead of W(P_X, P_Y).
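To make the marginal constraints in Eq. (1) concrete, the sketch below (our own illustration, not code from the paper) computes the Wasserstein distance between two small one-dimensional empirical distributions by solving the underlying linear program over the coupling matrix with scipy; the row-sum and column-sum constraints play exactly the role of the marginal constraints above.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(x, y, a=None, b=None):
    """W1 between empirical distributions sum_i a_i d_{x_i} and sum_j b_j d_{y_j}."""
    n, m = len(x), len(y)
    a = np.full(n, 1.0 / n) if a is None else a          # source marginal
    b = np.full(m, 1.0 / m) if b is None else b          # target marginal
    C = np.abs(x[:, None] - y[None, :])                  # ground cost |x_i - y_j|
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0                 # row sums of coupling = a (marginal_X)
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                          # column sums of coupling = b (marginal_Y)
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
    return res.fun                                       # optimal transport cost

print(wasserstein_lp(np.array([0.0, 0.1, 5.0]), np.array([0.0, 5.0, 5.1])))
```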
The Wasserstein distance optimization is over all joint distributions (couplings) P X,Y whose marginal distributions match exactly with input distributions P X and P Y . This requirement can cause issues when P X and P Y are mixture distributions with different mixture proportions. In this case, due to the marginal constraints, samples belonging to very different mixture components will have to be coupled together in P X,Y (e.g. Figure 1(a)). Thus, using this distance measure can then lead to undesirable outcomes in problems such as domain adaptation. This motivates the need for developing a new distance measure to take into account mode imbalances in mixture distributions.
In this paper, we propose a new distance measure that resolves the issue of imbalanced mixture proportions for multi-modal distributions. Our developments focus on a class of optimal transport measures, namely the Wasserstein distance Eq (1). However, our ideas can be extended naturally to other distance measures (eg. adversarial distances [6]) as well.
Let G be an array of generator functions with k components defined as G := [G 1 , ..., G k ]. Let P G,π be a mixture probability distribution for a random variable X where X = G i (Z) with probability π i for 1 ≤ i ≤ k. Throughout the paper, we assume that Z has a normal distribution.
By relaxing the marginal constraints of the classical Wasserstein distance (1), we introduce the Normalized Wasserstein measure (NW measure) as follows:
W_N(P_X, P_Y) := \min_{G, \pi^{(1)}, \pi^{(2)}} W(P_X, P_{G,\pi^{(1)}}) + W(P_Y, P_{G,\pi^{(2)}}).
There are two key ideas in this definition that help resolve mode imbalance issues for mixture distributions. First, instead of directly measuring the Wasserstein distance between P X and P Y , we construct two intermediate (and potentially mixture) distributions, namely P G,π (1) and P G,π (2) . These two distributions have the same mixture components (i.e. same G) but can have different mixture proportions (i.e. π (1) and π (2) can be different). Second, mixture proportions, π (1) and π (2) , are considered as optimization variables. This effectively normalizes mixture proportions before Wasserstein distance computations. See an example in Figure 1 (b, c) for a visualization of P G,π (1) and P G,π (2) , and the re-normalization step.
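The effect of optimizing the mixture proportions can be seen on a toy one-dimensional example. In the sketch below (an illustration under the assumption that the candidate components G_1, G_2 are already placed at the true component locations), two mixtures share the same components at 0 and 5 but have proportions (0.8, 0.2) vs. (0.2, 0.8); the vanilla Wasserstein distance is large, while a brute-force search over the intermediate mixture proportion drives the NW measure close to zero.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def sample_mixture(n, p):
    # component at 0 with probability p, component at 5 otherwise
    at_zero = rng.random(n) < p
    return np.where(at_zero, rng.normal(0.0, 0.1, n), rng.normal(5.0, 0.1, n))

X = sample_mixture(2000, 0.8)
Y = sample_mixture(2000, 0.2)
print("W(X, Y) =", wasserstein_distance(X, Y))           # large, ~3, purely from proportion mismatch

# Candidate component samples for the intermediate distributions P_{G, pi}.
G = np.concatenate([rng.normal(0.0, 0.1, 2000), rng.normal(5.0, 0.1, 2000)])

def best_fit(data):
    # grid search over the mixture proportion of the intermediate distribution
    best = np.inf
    for p in np.linspace(0.0, 1.0, 101):
        w = np.concatenate([np.full(2000, p), np.full(2000, 1.0 - p)])
        best = min(best, wasserstein_distance(G, data, u_weights=w))
    return best

print("NW(X, Y) ~=", best_fit(X) + best_fit(Y))          # close to zero
```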
In this paper, we show the effectiveness of the proposed Normalized Wasserstein measure in three application domains. In each case, the performance of our proposed method significantly improves against baselines when input datasets are mixture distributions with imbalanced mixture proportions. Below, we briefly highlight these results:
Domain Adaptation: In Section 4, we formulate the problem of domain adaptation as minimizing the normalized Wasserstein measure between source and target feature distributions. On classification tasks with imbalanced datasets, our method significantly outperforms baselines (e.g. ∼ 20% gain in synthetic to real adaptation on VISDA-3 dataset).
GANs: In Section 5, we use the normalized Wasserstein measure in GAN's formulation to train mixture models with varying mode proportions. We show that such a generative model can help capture rare modes, decrease the complexity of the generator, and re-normalize an imbalanced dataset.
Adversarial Clustering: In Section 6, we formulate the clustering problem as an adversarial learning task using Normalized Wasserstein measure.
Normalized Wasserstein Measure
In this section, we introduce the normalized Wasserstein measure and discuss its properties. Recall that G is an array of generator functions defined as G := [G 1 , ..., G k ] where G i : R r → R d . Let G be the set of all possible G function arrays. Let π be a discrete probability mass function with k elements, i.e. π = [π 1 , π 2 , · · · , π k ] where π i ≥ 0 and i π i = 1. Let Π be the set of all possible π's. Let P G,π be a mixture distribution, i.e. it is the probability distribution of a random variable X such that X = G i (Z) with probability π i for 1 ≤ i ≤ k. We assume that Z has a normal density, i.e. Z ∼ N (0, I). We refer to G and π as mixture components and proportions, respectively.
The set of all such mixture distributions is defined as:
P_{G,k} := \{ P_{G,\pi} : G \in \mathcal{G}, \pi \in \Pi \}    (2)
where k is the number of mixture components. Given two distributions P X and P Y belonging to the family of mixture distributions P G,k , we are interested in defining a distance measure agnostic to differences in mode proportions, but sensitive to shifts in mode components, i.e., the distance function should have high values only when mode components of P X and P Y differ. If P X and P Y have the same mode components but differ only in mode proportions, the distance should be low. The main idea is to introduce mixture proportions as optimization variables in the Wasserstein distance formulation (1). This leads to the following distance measure which we refer to as the Normalized Wasserstein measure (NW measure), W N (P X , P Y ), defined as:
\min_{G, \pi^{(1)}, \pi^{(2)}} W(P_X, P_{G,\pi^{(1)}}) + W(P_Y, P_{G,\pi^{(2)}})    (3)
subject to \sum_{j=1}^{k} \pi^{(i)}_j = 1, \; i = 1, 2; \qquad \pi^{(i)}_j \geq 0, \; 1 \leq j \leq k, \; i = 1, 2.
Since the normalized Wasserstein's optimization (3) includes mixture proportions π^{(1)} and π^{(2)} as optimization variables, if two mixture distributions have similar mixture components with different mixture proportions (i.e. P_X = P_{G,\pi^{(1)}} and P_Y = P_{G,\pi^{(2)}}), although the Wasserstein distance between the two can be large, the introduced normalized Wasserstein measure between the two will be zero. Note that W_N is defined with respect to a set of generator functions G = [G_1, ..., G_k]. However, to simplify the notation, we make this dependency implicit. We would like to point out that our proposed NW measure is a semi-distance measure (and not a distance) since it does not satisfy all properties of a distance measure. Please refer to the Appendix for more details.
To compute the NW measure, we use an alternating gradient descent approach similar to the dual computation of the Wasserstein distance [1]. Moreover, we impose the π constraints using a soft-max function. Please refer to Appendix C for more details.
To illustrate how the NW measure is agnostic to mode imbalances between distributions, consider an unsupervised domain adaptation problem with MNIST-2 (i.e. a dataset with two classes: digits 1 and 2 from MNIST) as the source dataset, and noisy MNIST-2 (i.e. a noisy version of it) as the target dataset (details of this example are presented in Section 4.2). The source dataset has 4/5 digits one and 1/5 digits two, while the target dataset has 1/5 noisy digits one and 4/5 noisy digits two. The couplings produced by estimating the Wasserstein distance between the two distributions are shown as yellow lines in Figure 1-a. We observe that there are many couplings between samples from incorrect mixture components. The normalized Wasserstein measure, on the other hand, constructs intermediate mode-normalized distributions P_1 and P_2, which get coupled to the correct modes of the source and target distributions, respectively (see panels (b) and (c) in Figure 1).
Theoretical Results
For NW measure to work effectively, the number of modes k in NW formulation (Eq. (3)) must be chosen appropriately. For instance, given two mixture distributions with k components each, Normalized Wasserstein measure with 2k modes would always give 0 value. In this section, we provide some theoretical conditions under which the number of modes can be estimated accurately. We begin by making the following assumptions for two mixture distributions X and Y whose NW distance we wish to compute:
• (A1) If mode i in distribution X and mode j in distribution Y belong to the same mixture component, then their Wasserstein distance is at most ε, i.e., if X_i and Y_j correspond to the same component, W(P_{X_i}, P_{Y_j}) < ε.
• (A2) The minimum Wasserstein distance between any two modes of one mixture distribution is at least δ i.e., W (P Xi , P Xj ) > δ and W (P Yi , P Yj ) > δ ∀i = j. Also, non-overlapping modes between X and Y are separated by δ i.e., for non-overlapping modes X i and Y j , W (P Xi , P Yj ) > δ. This ensures that modes are well-separated.
• (A3) We assume that each mode X i and Y i have density at least η i.e., P Xi ≥ η ∀i, P Yi ≥ η ∀i. This ensures that every mode proportion is at least η.
• (A4) Each generator G i is powerful enough to capture exactly one mode of distribution P X or P Y .
Theorem 1 Let P_X and P_Y be two mixture distributions satisfying (A1)-(A4) with n_1 and n_2 mixture components, respectively, where r of them are overlapping. Let k* = n_1 + n_2 − r. Then, k* is the smallest k for which NW(k) is small (O(ε)) and NW(k−1) − NW(k) is relatively large (of order O(δη)).
The proof is presented in Appendix A. All assumptions made are reasonable and hold in most practical situations: (A1)-(A3) enforce that non-overlapping modes in mixture distributions are separated, and overlapping modes are close in Wasserstein distance. To enforce (A4), we need to prevent multi-mode generation in one mode of G. This can be satisfied by using the regularizer in Eq. (11). Note that in the above theorem, k* is the optimal k that should be used in the Normalized Wasserstein formulation. The theorem presents a way to estimate k*. Please refer to Section 7 for experimental results. In many applications like domain adaptation, however, the number of components k is known beforehand, and this step can be skipped.
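The selection rule of Theorem 1 can be illustrated with a simplified numerical sketch. The version below is not the paper's GAN-based procedure: it assumes point-mass components placed at k-means centers of the pooled data, for which each NW(k) term reduces to an average nearest-center distance, and then picks the smallest k after which the curve stops dropping sharply.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def nearest_center_cost(data, centers):
    # With point-mass components and free mixture weights, the optimal transport
    # sends each sample to its nearest center (simplifying assumption of this sketch).
    return np.abs(data[:, None] - centers[None, :]).min(axis=1).mean()

def nw_curve(X, Y, k_max):
    pooled = np.concatenate([X, Y]).reshape(-1, 1)
    curve = []
    for k in range(1, k_max + 1):
        centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pooled).cluster_centers_.ravel()
        curve.append(nearest_center_cost(X, centers) + nearest_center_cost(Y, centers))
    return np.array(curve)

# X has modes {0, 5}, Y has modes {5, 10}; one mode overlaps, so k* = 2 + 2 - 1 = 3.
X = np.concatenate([rng.normal(0, 0.1, 800), rng.normal(5, 0.1, 200)])
Y = np.concatenate([rng.normal(5, 0.1, 600), rng.normal(10, 0.1, 400)])
curve = nw_curve(X, Y, 6)
drops = curve[:-1] - curve[1:]            # drop when moving from k to k+1
k_star = int(np.argmax(drops)) + 2        # smallest k with a large drop from k-1
print(curve, k_star)                      # the curve flattens out after k = 3
```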
Normalized Wasserstein for Domain Adaptation under covariate and label shift
In this section, we demonstrate the effectiveness of the NW measure in Unsupervised Domain Adaptation (UDA) both for supervised (e.g. classification) and unsupervised (e.g. denoising) tasks. Note that the term unsupervised in UDA means that the label information in the target domain is unknown while unsupervised tasks mean that the label information in the source domain is unknown.
First, we consider domain adaptation for a classification task. Let (X s , Y s ) represent the source domain while (X t , Y t ) denote the target domain. Since we deal with the classification setup, we have Y s , Y t ∈ {1, 2, ..., k}. A common formulation for the domain adaptation problem is to transform X s and X t to a feature space where the distance between the source and target feature distributions is sufficiently small, while a good classifier can be computed for the source domain in that space [6]. In this case, one solves the following optimization:
\min_{f \in \mathcal{F}} \; L_{cl}(f(X_s), Y_s) + \lambda \, dist\big(f(X_s), f(X_t)\big)    (4)
where λ is an adaptation parameter and L cl is the empirical classification loss function (e.g. the cross-entropy loss). The distance function between distributions can be adversarial distances [6,21], the Wasserstein distance [20], or MMD-based distances [14,15].
When X s and X t are mixture distributions (which is often the case as each label corresponds to one mixture component) with different mixture proportions, the use of these classical distance measures can lead to the computation of inappropriate transformation and classification functions. In this case, we propose to use the NW measure as the distance function. Computing the NW measure requires training mixture components G and mode proportions π (1) , π (2) . To simplify the computation, we make use of the fact that labels for the source domain (i.e. Y s ) are known, thus source mixture components can be identified using these labels. Using this information, we can avoid the need for computing G directly and use the conditional source feature distributions as a proxy for the mixture components as follows:
G_i(Z) \stackrel{dist}{=} f(X_s^{(i)}), \quad X_s^{(i)} = \{ X_s \mid Y_s = i \}, \; \forall \, 1 \leq i \leq k,    (5)
where \stackrel{dist}{=} means matching distributions. Using (5), the formulation for domain adaptation can be written as

\min_{f \in \mathcal{F}} \min_{\pi} \; L_{cl}(X_s, Y_s) + \lambda \, W\Big( \sum_i \pi^{(i)} f(X_s^{(i)}), \; f(X_t) \Big).    (6)
The above formulation can be seen as a version of instance weighting, as source samples in X_s^{(i)} are weighted by π_i. Instance weighting mechanisms have been well studied for domain adaptation [23, 24]. However, different from these approaches, we train the mode proportion vector π in an end-to-end fashion using neural networks and integrate the instance weighting in a Wasserstein optimization. Of more relevance to our work is the method proposed in [3], where the instance weighting is trained end-to-end in a neural network. However, in [3], instance weights are maximized with respect to the Wasserstein loss, while we show that the mixture proportions need to be minimized to normalize mode mismatches. Moreover, our NW measure formulation can handle the case when mode assignments for source embeddings are unknown (as we discuss in Section 4.2). This case cannot be handled by the approach presented in [3].
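A possible PyTorch-style rendering of the objective in Eq. (6) is sketched below. The network sizes, input dimension and the critic are placeholder choices for illustration; in training, the critic is updated in alternation to maximize the dual Wasserstein estimate (with a Lipschitz constraint, e.g. weight clipping), while the feature network, classifier and mode proportions π minimize the overall loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

k, in_dim, feat_dim = 3, 10, 2                     # illustrative sizes, not the paper's settings
feature_net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
classifier  = nn.Linear(feat_dim, k)
critic      = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
pi_logits   = nn.Parameter(torch.zeros(k))         # softmax(pi_logits) = learned mode proportions

def adaptation_loss(xs, ys, xt, lam=1.0):
    fs, ft = feature_net(xs), feature_net(xt)
    cls_loss = F.cross_entropy(classifier(fs), ys)
    pi = torch.softmax(pi_logits, dim=0)
    # Dual critic term for the reweighted source mixture: sum_i pi_i * E[D(f(X_s^(i)))]
    src_term = sum(pi[i] * critic(fs[ys == i]).mean()
                   for i in range(k) if (ys == i).any())
    w_est = src_term - critic(ft).mean()            # Kantorovich-style estimate of W in Eq. (6)
    return cls_loss + lam * w_est

# Training alternates: the critic maximizes w_est (weight-clipped for the Lipschitz
# constraint), while feature_net, classifier and pi_logits minimize adaptation_loss.
```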
For unsupervised tasks when mode assignments for source samples are unknown, we cannot use the simplified formulation of (5). In that case, we use a domain adaptation method solving the following optimization:
\min_{f \in \mathcal{F}} \; L_{unsup}(X_s) + \lambda \, W_N\big( f(X_s), f(X_t) \big),    (7)
where L unsup (X s ) is the loss corresponding to the desired unsupervised learning task on the source domain data.
UDA for supervised tasks
4.1.1 MNIST → MNIST-M
In the first set of experiments 1, we consider adaptation between MNIST→MNIST-M datasets. We consider three settings with imbalanced class proportions in source and target datasets: 3 modes, 5 modes, and 10 modes. More details can be found in Table 9 of the Appendix. We use the same architecture as [6] for the feature network and discriminator. We compare our method with the following approaches: (1) Source-only, a baseline model trained only on the source domain with no domain adaptation performed, (2) DANN [6], a method where the adversarial distance between source and target distributions is minimized, and (3) Wasserstein [20], where the Wasserstein distance between source and target distributions is minimized. Table 1 summarizes the results of this experiment. We observe that performing domain adaptation using the adversarial distance and the Wasserstein distance leads to a decrease in performance compared to the baseline model. This is an outcome of not accounting for mode imbalances, resulting in negative transfer, i.e., samples belonging to incorrect classes are coupled and pushed to be close in the embedding space. Our proposed NW measure, however, accounts for mode imbalances and leads to a significant boost in performance in all three settings. In the experiment of Section 4.1.1 on the digits dataset, models have been trained from scratch. However, a common practice in domain adaptation is to transfer knowledge from a pretrained network (e.g. models trained on ImageNet) and fine-tune on the desired task. To evaluate the performance of our approach in such settings, we consider adaptation on the VISDA dataset [18]; a recently proposed benchmark for adapting from synthetic to real images. We consider a subset of the entire VISDA dataset containing the following three classes: aeroplane, horse and truck. The source domain contains (0.55, 0.33, 0.12) fraction of samples per class, while that of the target domain is (0.12, 0.33, 0.55). We use a Resnet-18 model pre-trained on ImageNet as our feature network. As shown in Table 2, our approach significantly improves the domain adaptation performance over the baseline and other compared methods.
Mode balanced datasets
The previous two experiments demonstrated the effectiveness of our method when datasets are imbalanced. In this section, we study the case where source and target domains have mode-balanced datasets - the standard setting considered in most domain adaptation methods. We perform an experiment on MNIST→MNIST-M adaptation using the entire dataset. Table 3 reports the results obtained. We observe that our approach performs on par with standard Wasserstein distance minimization.
UDA for unsupervised tasks
For unsupervised tasks on mixture datasets, we use the formulation of Eq (7) to perform domain adaptation. To empirically validate this formulation, we consider the image denoising problem. The source domain consists of digits {1, 2} from the MNIST dataset as shown in Fig 2(a). Note that the color of digit 2 is inverted. The target domain is a noisy version of the source, i.e. source images are perturbed with random i.i.d. Gaussian noise N(0.4, 0.7) to obtain target images. Our dataset contains 5,000 samples of digit 1 and 1,000 samples of digit 2 in the source domain, and 1,000 samples of noisy digit 1 and 5,000 samples of noisy digit 2 in the target. The task is to perform image denoising by dimensionality reduction, i.e., given a target domain image, we need to reconstruct the corresponding clean image that looks like the source. We assume that no (source, target) correspondence is available in the dataset.
To perform denoising when the (source, target) correspondence is unavailable, a natural choice would be to minimize the reconstruction loss in source while minimizing the distance between source and target embedding distributions. We use the NW measure as our choice of distance measure. This results in the following optimization:
\min_{f,g} \; \mathbb{E}_{x \sim X_s} \| g(f(x)) - x \|_2^2 + \lambda \, W_N\big( f(X_s), f(X_t) \big)
where f(.) is the encoder and g(.) is the decoder. As our baseline, we consider a model trained only on the source using a quadratic reconstruction loss. Fig 2(b) shows source and target embeddings produced by this baseline. In this case, the source and the target embeddings are distant from each other. However, as shown in Fig 2(c), using the NW formulation, the distributions of source and target embeddings match closely (with estimated mode proportions). We measure the L_2 reconstruction loss of the target domain, err_{recons,tgt} = \mathbb{E}_{x \sim X_t} \| g(f(x)) - x \|_2^2, as a quantitative evaluation measure. This value for different approaches is shown in Table 4. We observe that our method outperforms the compared approaches.
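For the denoising setup, the combined objective can be sketched as below. The encoder/decoder sizes are placeholders, and nw_measure stands in for the adversarial NW estimate between embedding distributions (e.g. the dual formulation sketched in Appendix C); this is our own illustration of the loss, not the exact model used in the experiments.

```python
import torch
import torch.nn as nn

# Placeholder architectures for 28x28 images flattened to 784-dim vectors.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))

def denoising_objective(x_src, x_tgt, nw_measure, lam=1.0):
    # nw_measure is a stand-in callable returning the NW estimate between two embedding batches.
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    recon = ((decoder(z_src) - x_src) ** 2).mean()     # reconstruction on the clean source only
    return recon + lam * nw_measure(z_src, z_tgt)      # align source/target embedding distributions
```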
Normalized Wasserstein GAN
Learning a probability model from data is a fundamental problem in statistics and machine learning. Building on the success of deep learning, a recent approach to this problem is using Generative Adversarial Networks (GANs) [8]. GANs view this problem as a game between a generator whose goal is to generate fake samples that are close to the real data training samples, and a discriminator whose goal is to distinguish between the real and fake samples.
Most GAN frameworks can be viewed as methods that minimize a distance between the observed probability distribution, P X , and the generative probability distribution, P Y , where Y = G(Z). G is referred to as the generator function. In several GAN formulations, the distance between P X and P Y is formulated as another optimization which characterizes the discriminator. Several GAN architectures have been proposed in the last couple of years. A summarized list includes GANs based on optimal transport measures (e.g. Wasserstein GAN+Weight Clipping If the observed distribution P X is a mixture one, the proposed normalized Wasserstein measure (3) can be used to compute a generative model. Instead of estimating a single generator G as done in standard GANs, we estimate a mixture distribution P G,π using the proposed NW measure. We refer to this GAN as the Normalized Wasserstein GAN (or NWGAN) formulated as the following optimization:
\min_{G, \pi} \; W_N(P_X, P_{G,\pi}).    (8)
In this case, the NW distance simplifies as
\min_{G,\pi} W_N(P_X, P_{G,\pi}) = \min_{G,\pi} \; \min_{G', \pi^{(1)}, \pi^{(2)}} W(P_X, P_{G',\pi^{(1)}}) + W(P_{G,\pi}, P_{G',\pi^{(2)}}) = \min_{G,\pi} W(P_X, P_{G,\pi}).    (9)
There are a couple of differences between the proposed NWGAN and existing GAN architectures. The generator in the proposed NWGAN is a mixture of k models, each producing a π_i fraction of generated samples. We select k a priori based on the application domain while π is computed within the NW distance optimization. Modeling the generator as a mixture of k neural networks has also been investigated in some recent works [10, 7]. However, these methods assume that the mixture proportions π are known beforehand, and are held fixed during training. In contrast, our approach is more general as the mixture proportions are also optimized. Estimating mode proportions has several important advantages: (1) we can estimate rare modes, (2) an imbalanced dataset can be re-normalized, and (3) by allowing each G_i to focus only on one part of the distribution, the quality of the generative model can be improved while the complexity of the generator can be reduced. In the following, we highlight these properties of NWGAN on different datasets.
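A minimal sketch of such a mixture generator is given below (our own illustration; the per-mode architectures and dimensions are placeholders). During training, π enters the dual objective (Eq. (14)) as differentiable weights on per-mode critic expectations; the multinomial sampling shown in sample is only used to draw points from the learned mixture.

```python
import torch
import torch.nn as nn

class MixtureGenerator(nn.Module):
    """k generator networks G_1..G_k plus learnable mixture proportions pi."""
    def __init__(self, k, z_dim=2, out_dim=2):
        super().__init__()
        self.gens = nn.ModuleList(
            nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
            for _ in range(k))
        self.pi_logits = nn.Parameter(torch.zeros(k))     # softmax -> mode proportions
        self.z_dim = z_dim

    def proportions(self):
        return torch.softmax(self.pi_logits, dim=0)

    @torch.no_grad()
    def sample(self, n):
        pi = self.proportions()
        modes = torch.multinomial(pi, n, replacement=True)   # mode index per generated point
        z = torch.randn(n, self.z_dim)
        return torch.stack([self.gens[int(m)](z[i]) for i, m in enumerate(modes)])
```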
Mixture of Gaussians
First, we present the results of training the NWGAN on a two-dimensional mixture of Gaussians. The input data is a mixture of 9 Gaussians, each centered at a vertex of a 3 × 3 grid as shown in Figure 3. The mean and the covariance matrix for each mode are randomly chosen. The mode proportion for mode i is chosen as π_i = i/45 for 1 ≤ i ≤ 9. Generations produced by NWGAN using k = 9 affine generator models on this dataset are shown in Figure 3. We also compare our method with WGAN [1] and MGAN [10]. Since MGAN does not optimize over π, we assume uniform mode proportions (π_i = 1/9 for all i). To train WGAN, a non-linear generator function is used since a single affine function cannot model a mixture of Gaussian distribution.
To evaluate the generative models, we report the following quantitative scores: (1) the average mean error which is the mean-squared error (MSE) between the mean vectors of real and generated samples per mode averaged over all modes, (2) the average covariance error which is the MSE between the covariance matrices of real and generated samples per mode averaged over all modes, and (3) the π estimation error which is the normalized MSE between the π vector of real and generated samples. Note that computing these metrics require mode assignments for generated samples. This is done based on the closeness of generative samples to the ground-truth means.
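The three scores can be computed roughly as in the sketch below (our own implementation of the described procedure; it assumes every ground-truth mode receives at least a couple of samples so that per-mode means and covariances are defined).

```python
import numpy as np

def mog_scores(real, fake, true_means):
    """Per-mode statistics after assigning each sample to its nearest ground-truth mean."""
    def per_mode_stats(samples):
        labels = np.argmin(np.linalg.norm(samples[:, None, :] - true_means[None], axis=-1), axis=1)
        mus  = np.stack([samples[labels == i].mean(axis=0) for i in range(len(true_means))])
        covs = np.stack([np.cov(samples[labels == i].T)    for i in range(len(true_means))])
        pis  = np.bincount(labels, minlength=len(true_means)) / len(samples)
        return mus, covs, pis

    (mr, cr, pr), (mf, cf, pf) = per_mode_stats(real), per_mode_stats(fake)
    avg_mean_err = np.mean((mr - mf) ** 2)                    # average mean error
    avg_cov_err  = np.mean((cr - cf) ** 2)                    # average covariance error
    pi_err       = np.mean((pr - pf) ** 2) / np.mean(pr ** 2) # normalized MSE on proportions
    return avg_mean_err, avg_cov_err, pi_err
```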
We report these error terms for different GANs in Table 5. We observe that the proposed NWGAN achieves the best scores compared to the other two approaches. Also, from Figure 3, we observe that the generative model trained by MGAN misses some of the rare modes in the data. This is because of the error induced by assuming fixed mixture proportions when the ground-truth π is non-uniform. Since the proposed NWGAN estimates π in the optimization, even rare modes in the data are not missed. This shows the importance of estimating mixture proportions, especially when the input dataset has imbalanced modes.
A Mixture of CIFAR-10 and CelebA
One application of learning mixture generative models is to disentangle the data distribution into multiple components where each component represents one mode of the input distribution. Such disentanglement is useful in many tasks such as clustering (Section 6). To test the effectiveness of NWGAN in performing such disentanglement, we consider a mixture of 50, 000 images from CIFAR-10 and 100, 000 images from CelebA [12] datasets as our input distribution. All images are reshaped to be 32 × 32.
To highlight the importance of optimizing the mixture proportion to produce disentangled generative models, we compare the performance of NWGAN with a variation of NWGAN where the mode proportion π is held fixed as π_i = 1/k (the uniform distribution). Sample generations produced by both models are shown in Figure 4. When π is held fixed, the model does not produce disentangled representations (in the second mode, we observe a mix of CIFAR and CelebA generative images). However, when we optimize π, each generator produces distinct modes.
Adversarial Clustering
In this section, we use the proposed NW measure to formulate an adversarial clustering approach. More specifically, let the input data distribution have k underlying modes (each representing a cluster), which we intend to recover. The use of deep generative models for performing clustering has been explored in [25] (using GANs) and [13](using VAEs). Different from these, our approach makes use of the proposed NWGAN for clustering, and thus explicitly handles data with imbalanced modes.
Let P X be observed empirical distribution. Let G * and π * be optimal solutions of NWGAN optimization (9). For a given point x i ∼ P X , the clustering assignment is computed using the closest distance to a mode i.e.,
C(x_i) = \arg\min_{1 \leq j \leq k} \; \min_{Z} \| x_i - G_j(Z) \|_2.    (10)
To perform an effective clustering, we require each mode G j to capture one mode of the data distribution. Without enforcing any regularization and using rich generator functions, one model can capture multiple modes of the data distribution. To prevent this, we introduce a regularization term that maximizes the weighted average Wasserstein distances between different generated modes. That is,
R = \sum_{(i,j) \,|\, i > j} \pi_i \pi_j \, W\big( G_i(Z), G_j(Z) \big).    (11)
This term encourages diversity among generative modes. With this regularization term, the optimization objective of a regularized NWGAN becomes \min_{G,\pi} W(P_X, P_{G,\pi}) - \lambda_{reg} R, where \lambda_{reg} is the regularization parameter. We test the proposed adversarial clustering method on an imbalanced MNIST dataset with 3 digits, containing 3,000 samples of digit 2, 1,500 samples of digit 4 and 6,000 samples of digit 6. We compare our approach with k-means clustering and a Gaussian Mixture Model (GMM) in Table 6. Cluster purity, NMI and ARI scores are used as quantitative metrics (refer to Appendix E.3 for more details). We observe that our clustering technique achieves good performance compared to the other approaches.
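The cluster assignment of Eq. (10) can be approximated as sketched below (our own approximation: the inner minimization over Z is replaced by sampling a batch of latent codes per mode; gradient descent on z would give a tighter minimum).

```python
import torch

@torch.no_grad()
def cluster_assign(x, generators, n_z=512, z_dim=2):
    """Assign each point in x (shape (n, d)) to the generator that best reconstructs it."""
    dists = []
    for G in generators:
        z = torch.randn(n_z, z_dim)
        gen = G(z)                                             # (n_z, d) samples from this mode
        dists.append(torch.cdist(x, gen).min(dim=1).values)    # distance to the closest generated sample
    return torch.stack(dists, dim=1).argmin(dim=1)             # Eq. (10): nearest mode per input point
```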
Choosing the number of modes
As discussed in Section 3, choosing the number of modes (k) is crucial for computing the NW measure. While this information is available for tasks such as domain adaptation, it is unknown for others like generative modeling. In this section, we experimentally validate our theoretically justified algorithm for estimating k. Consider the mixture of Gaussians dataset with k = 9 modes presented in Section 5.1. On this dataset, the NWGAN model (with the same architecture as that used in Section 5.1) was trained with a varying number of modes k. For each setting, the NW measure between the generated and real data distribution is computed and plotted in Fig 5. We observe that k = 9 satisfies the condition discussed in Theorem 1: the optimal k* is the smallest k for which NW(k) is small, NW(k−1) − NW(k) is large, and NW(k) saturates after k*.
Conclusion
In this paper, we showed that the Wasserstein distance, due to its marginal constraints, can lead to undesired results when applied to imbalanced mixture distributions. To resolve this issue, we proposed a new distance measure called the Normalized Wasserstein. The key idea is to optimize mixture proportions in the distance computation, effectively normalizing mixture imbalance. We demonstrated the usefulness of the NW measure in three machine learning tasks: GANs, domain adaptation and adversarial clustering. Strong empirical results on all three problems highlight the effectiveness of the proposed distance measure.
Acknowledgements
Balaji and Chellappa were supported by MURI program from the Army Research Office (ARO) under the grant W911NF17-1-0304. Feizi was supported by the US National Science Foundation (NSF) under the grant CDS&E:1854532, and Capital One Services LLC.
[3] Qingchao Chen, Yang Liu, Zhaowen Wang, Ian Wassell, and Kevin Chetty.

A. Proof of Theorem 1

For the NW measure to normalize mode proportions appropriately, we need a good estimate of the number of modes. Theorem 1 provides conditions under which this number can provably be estimated.
Let P_X and P_Y be two mixture distributions whose NW measure we wish to compute. Let P_X and P_Y have n_1 and n_2 modes respectively, with r modes overlapping. Let k* = n_1 + n_2 − r. We make the following assumptions: • (A1) If mode i in distribution X and mode j in distribution Y belong to the same mixture component, then their Wasserstein distance is at most ε, i.e., if X_i and Y_j correspond to the same component, W(P_{X_i}, P_{Y_j}) < ε.
• (A2) The minimum Wasserstein distance between any two modes of one mixture distribution is at least δ i.e., W (P Xi , P Xj ) > δ and W (P Yi , P Yj ) > δ ∀i = j. Also, non-overlapping modes between X and Y are separated by δ i.e., for non-overlapping modes X i and Y j , W (P Xi , P Yj ) > δ. This ensures that modes are well-separated.
• (A3) We assume that each mode X i and Y i have density at least η i.e., P Xi ≥ η ∀i, P Yi ≥ η ∀i. This ensures that every mode proportion is at least η.
• (A4) Each generator G i is powerful enough to capture exactly one mode of distribution P X or P Y .
Lemma 1 N W (k) is a monotonically decreasing function with respect to k.
This is because in NW(k + 1), we add one additional mode compared to NW(k). If we set π^{(1)}, π^{(2)} for this new mode to 0 and give the same assignments as in NW(k) to the rest of the modes, then NW(k + 1) = NW(k). Since computing NW(k) contains a minimization over mode assignments, NW(k + 1) ≤ NW(k) for all k. Hence, it is monotonically decreasing.
Lemma 2 NW(k*) ≤ ε
This is because at k = k * , we can make the following mode assignments.
• Assign n 1 +n 2 −r modes of NW to each of n 1 +n 2 −r non-overlapping modes in P X and P Y with the same mixture .
• Assign the remaining r modes of NW to the overlapping modes of either P X or P Y . WLOG, let us assume we assign them to r overlapping modes of P X .
• Choose π (1) to be same as π for P X , with 0 to nonoverlapping components of P Y
• Choose π (2) to be same as π for P Y , with 0 to nonoverlapping components of P X Let us denote N Ov(X) to be non-overlapping modes of X, Ov(X) to be overlapping modes of X, N Ov(Y ) to be non-overlapping modes of Y , and Ov(Y ) to be overlapping modes of Y . Then, under the mode assignments given above, N W (k * ) can be evaluated as,
W_N(P_X, P_Y) := \min_{G,\pi^{(1)},\pi^{(2)}} W(P_X, P_{G,\pi^{(1)}}) + W(P_Y, P_{G,\pi^{(2)}})
= \sum_{i \in NOv(X)} \pi^X_i W(P_{X_i}, P_{X_i}) + \sum_{i \in Ov(X)} \pi^X_i W(P_{X_i}, P_{X_i}) + \sum_{i \in NOv(Y)} \pi^Y_i W(P_{Y_i}, P_{Y_i}) + \sum_{i \in Ov(Y)} \pi^Y_i W(P_{Y_i}, P_{X_i})
= 0 + 0 + 0 + \sum_{i \in Ov(Y)} \pi^Y_i W(P_{Y_i}, P_{X_i})
≤ ε
The last step follows from (A1), i.e., overlapping modes are within a Wasserstein distance of ε.
Lemma 3 NW(k* − 1) ≥ (δ/2) η

By assumption (A2), we know that any two modes have a separation of at least δ. In the distribution P_X + P_Y, there are n_1 + n_2 − r unique cluster centers, each pair of clusters at least δ apart in Wasserstein distance. In NW(k* − 1), the generators have n_1 + n_2 − r − 1 modes, which is 1 less than the number of modes in P_X + P_Y. Now, let us assume that NW(k* − 1) < (δ/2) η. Then, W(P_X, P_{G,\pi^{(1)}}) + W(P_Y, P_{G,\pi^{(2)}}) < (δ/2) η
Since each mode of P X and P Y has density at least η (by (A3)), the above condition can be satisfied only if
\forall i \in [n_1], \; \exists j \in [k^* - 1] \; s.t. \; W(P_{X_i}, P_{G_j}) < \delta/2    (12)
\forall i \in [n_2], \; \exists j \in [k^* - 1] \; s.t. \; W(P_{Y_i}, P_{G_j}) < \delta/2    (13)
Accounting for r mode overlap between X and Y , there will be n 1 + n 2 − r unique constraints in Eq. (12) and Eq. (13).
Since, G has only k * − 1 modes, by Pigeonhole principle, there should be at least one pair (i, j) that is matched to the same G j . WLOG, let us consider both i and j to belong to P X , although each can either belong to P X or P Y . Then,
W(P_{X_i}, G_k) < \delta/2, \qquad W(P_{X_j}, G_k) < \delta/2
Then, by the triangle inequality, W(P_{X_i}, P_{X_j}) < δ. This contradicts assumption (A2). Hence NW(k* − 1) ≥ (δ/2) η.

Theorem 1 Let P_X and P_Y be two mixture distributions satisfying (A1)-(A4) with n_1 and n_2 mixture components, respectively, where r of them are overlapping. Let k* = n_1 + n_2 − r. Then, k* is the smallest k for which NW(k) is small (O(ε)) and NW(k−1) − NW(k) is relatively large (of order O(δη)).

Proof: From Lemma 2 and Lemma 1, we know that NW(k) ≤ ε for all k ≥ k*. Similarly, from Lemma 3 and Lemma 1, we have NW(k) ≥ (δ/2) η for all k < k*. Hence, k* is the smallest k for which NW(k) is small (O(ε)) and NW(k−1) − NW(k) is relatively large (of order O(δη)). Hence, proved.
B. Properties of Normalized Wasserstein measure
The defined NW measure is not a distance because it does not satisfy the properties of a distance measure.
• In general, W_N(P_X, P_X) ≠ 0.
However, if P_X ∈ P_{G,k}, then W_N(P_X, P_X) = 0. Moreover, if ∃ G, π s.t. W_N(P_{G,\pi}, P_X) < ε (i.e., P_{G,\pi} approximates P_X within an ε factor), then W_N(P_X, P_X) ≤ 2ε. This follows from the definition of the NW measure. So, when the class of generators is powerful enough, this property is satisfied within a 2ε approximation.
• Normalized Wasserstein measure is symmetric.
W N (P X , P Y ) = W N (P Y , P X )
• Normalized Wasserstein measure does not satisfy triangle inequality.
C. Optimizing Normalized Wasserstein using duality

The NW measure between two distributions P_X and P_Y is defined as

\min_{G,\pi^{(1)},\pi^{(2)}} W(P_X, P_{G,\pi^{(1)}}) + W(P_Y, P_{G,\pi^{(2)}}).

Similar to [1], using the dual of the Wasserstein distance, we can write the above optimization as

\min_{G,\pi^{(1)},\pi^{(2)}} \; \max_{D_1 \in 1\text{-Lip}} \mathbb{E}[D_1(X)] - \mathbb{E}\Big[ \sum_i \pi^{(1)}_i D_1(G_i(Z)) \Big] + \max_{D_2 \in 1\text{-Lip}} \mathbb{E}[D_2(Y)] - \mathbb{E}\Big[ \sum_i \pi^{(2)}_i D_2(G_i(Z)) \Big]    (14)
Here, D_1 and D_2 are 1-Lipschitz functions, and π^{(1)} and π^{(2)} are k-dimensional vectors lying in a simplex, i.e.,
\sum_i \pi^{(1)}_i = 1, \qquad \sum_i \pi^{(2)}_i = 1.
To enforce these constraints, we use the softmax function as follows.
π^{(1)} = softmax(\tilde{π}^{(1)}), \qquad π^{(2)} = softmax(\tilde{π}^{(2)})

The new variables \tilde{π}^{(1)} and \tilde{π}^{(2)} become the optimization variables. The softmax function ensures that the mixture probabilities π^{(1)} and π^{(2)} lie in a simplex.
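A compact end-to-end sketch of optimizing Eq. (14) with alternating updates is given below; it is our own minimal rendering for 2-D data, and the architectures, clipping value and learning rates are illustrative rather than the settings used in the paper.

```python
import torch
import torch.nn as nn

k, z_dim = 3, 2
G  = nn.ModuleList(nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 2)) for _ in range(k))
D1 = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
D2 = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
pi1_logits = nn.Parameter(torch.zeros(k))          # tilde-pi^(1)
pi2_logits = nn.Parameter(torch.zeros(k))          # tilde-pi^(2)

opt_D  = torch.optim.RMSprop(list(D1.parameters()) + list(D2.parameters()), lr=5e-5)
opt_G  = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_pi = torch.optim.RMSprop([pi1_logits, pi2_logits], lr=1e-2)

def nw_objective(x, y, batch=256):
    z = torch.randn(batch, z_dim)
    pi1, pi2 = torch.softmax(pi1_logits, 0), torch.softmax(pi2_logits, 0)
    fake1 = sum(pi1[i] * D1(G[i](z)).mean() for i in range(k))
    fake2 = sum(pi2[i] * D2(G[i](z)).mean() for i in range(k))
    return (D1(x).mean() - fake1) + (D2(y).mean() - fake2)      # the objective in Eq. (14)

def train_step(x, y, n_critic=10, clip=0.01):
    for _ in range(n_critic):                        # maximize over the (weight-clipped) critics
        opt_D.zero_grad(); (-nw_objective(x, y)).backward(); opt_D.step()
        for p in list(D1.parameters()) + list(D2.parameters()):
            p.data.clamp_(-clip, clip)
    opt_pi.zero_grad(); nw_objective(x, y).backward(); opt_pi.step()   # minimize over proportions
    opt_G.zero_grad();  nw_objective(x, y).backward(); opt_G.step()    # minimize over generators
```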
The above equations are optimized using alternating gradient descent, given by the following algorithm:

4: Sample minibatch x ∼ P_X, y ∼ P_Y
5: Sample minibatch z ∼ N(0, 1)
6: Compute the Normalized Wasserstein objective as
   NW = E[D_1(x)] − E[\sum_i π^{(1)}_i D_1(G_i(z))] + E[D_2(y)] − E[\sum_i π^{(2)}_i D_2(G_i(z))]
   Minimize NW w.r.t. \tilde{π}^{(1)} and \tilde{π}^{(2)}
10: end for
11: Minimize NW w.r.t. G
12: end for

D. Comparative analysis of mixture distributions
In this section, we propose a test using a combination of the Wasserstein distance and the NW measure to identify whether two mixture distributions differ in mode components or in mode proportions. Such a test can provide a better understanding while comparing mixture distributions. Suppose P_X and P_Y are two mixture distributions with the same mixture components but different mode proportions, i.e., P_X and P_Y both belong to P_{G,k}. In this case, depending on the difference between π^{(1)} and π^{(2)}, the Wasserstein distance between the two distributions can be arbitrarily large. Thus, using the Wasserstein distance, we can only conclude that the two distributions are different. In some applications, it can be informative to have a test that determines if two distributions differ only in mode proportions. We propose a test based on a combination of the Wasserstein and the NW measure for this task. This procedure is shown in Table 7. We note that the computation of p-values for the proposed test is beyond the scope of this paper.
We demonstrate this test on 2D Mixture of Gaussians. We perform experiments on two settings, each involving two datasets D 1 and D 2 , which are mixtures of 8 Gaussians:
Setting 1: Both D_1 and D_2 have the same mode components, with the i-th mode located at (r cos(2πi/8), r sin(2πi/8)). Setting 2: D_1 and D_2 have shifted mode components.
The i-th mode of D_1 is located at (r cos(2πi/8), r sin(2πi/8)), while the i-th mode of D_2 is located at (r cos((2πi+π)/8), r sin((2πi+π)/8)). In both settings, the mode fraction of D_1 is π_i = (i+2)/52, and that of D_2 is π_i = (11−i)/52. We use 2,000 data points from D_1 and D_2 to compute the Wasserstein distance and the NW measure in primal form by solving a linear program. The computed distance values are reported in Table 8. In setting 1, we observe that the Wasserstein distance is large while the NW measure is small. Thus, one can conclude that the two distributions differ only in mode proportions. In setting 2, both the Wasserstein and NW measures are large. Thus, in this case, the distributions differ in mixture components as well.
E. Additional results
E.1. CIFAR-10
We present the results of training NWGAN on CIFAR-10 dataset. We use WGAN-GP [9] with Resnet-based generator and discriminator models as our baseline method. The proposed NWGAN was trained with k = 4 modes using the same network architectures as the baseline. Sample generations produced by each mode of the NWGAN is shown in Figure 6. We observe that each generator model captures distinct variations of the entire dataset, thereby approximately disentangling different modes in input images. For quantitative evaluation, we compute inception scores for the baseline and the proposed NWGAN. The inception score for the baseline model is 7.56, whereas our model achieved an improved score of 7.89.
E.2. Domain adaptation under uniform mode proportions
In this section, we present results on domain adaptation on mode-balanced VISDA dataset -source and target domains contain 3 classes -aeroplane, horse and truck with uniform mode proportion. The results of performing adaptation using NW measure in comparison with classical distance measures are reported in Table 10. We observe that NW measure performs on-par with the compared methods on this dataset. This experiment demonstrates the effectiveness of NW measure on a range of settings -when the source and target datasets are balanced in mode proportions,
E.3. Adversarial clustering: Quantitative metrics
• Cluster purity: Cluster purity measures the extent to which clusters are consistent, i.e., whether each cluster contains similar points or not. To compute the cluster purity, the cardinality of the majority class is computed for each cluster, summed over all clusters, and divided by the total number of samples (a sketch computing these scores follows this list).
• ARI - Adjusted Rand Index: The Rand index computes a similarity measure between two clusterings by considering all pairs of samples and counting the pairs that receive the same assignment in both the ground-truth and the predicted clusterings. The adjusted Rand index corrects this score for chance, so that perfect agreement scores 1.
• NMI -Normalized Mutual Information: NMI is the normalized version of the mutual information between the predicted and the ground truth cluster assignments.
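These metrics can be computed with a few lines of numpy/scikit-learn, as in the sketch below (purity is implemented manually; NMI and ARI come from scikit-learn).

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def cluster_purity(y_true, y_pred):
    # For each predicted cluster, count its most frequent ground-truth label.
    total = sum(np.bincount(y_true[y_pred == c]).max() for c in np.unique(y_pred))
    return total / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 0, 2])
print(cluster_purity(y_true, y_pred),
      normalized_mutual_info_score(y_true, y_pred),
      adjusted_rand_score(y_true, y_pred))
```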
E.4. Adversarial clustering of CIFAR+CelebA
In this section, we show the results of performing adversarial clustering on a mixture of the CIFAR-10 and CelebA datasets. The same dataset presented in Section 3.2 of the main paper is used in this experiment, i.e., the dataset contains CIFAR-10 and CelebA samples in 1 : 2 mode proportion. NWGAN was trained with 2 modes - each employing Resnet-based generator-discriminator architectures (same architectures and hyper-parameters used in Section 3.2 of the main paper). A quantitative evaluation of our approach in comparison with k-means is given in Table 11. We observe that our approach outperforms k-means clustering. However, the clustering quality is poorer than the one obtained on the imbalanced MNIST dataset. This is because the samples generated on the MNIST dataset had much better quality than the ones produced on CIFAR-10. So, as long as the underlying GAN model produces good generations, our adversarial clustering algorithm performs well. As discussed in Section 3.1 of the main paper, the input dataset is a mixture of 8 Gaussians with varying mode proportion. Normalized Wasserstein GAN was trained with linear generator and non-linear discriminator models using
F. Architecture and hyper-parameters
F.2.3 Domain adaptation for Image denoising
The architectures and hyper-parameters used in our method for image denoising experiment (Section 4.2 of the main paper) are presented in Table 16. To perform adaptation using Normalized Wasserstein measure, we need to train the intermediate distributions P G,π (1) and P G,π (2) (as discussed in Section 2, 4.2 of the main paper). We denote the generator and discriminator models corresponding to P G,π (1) and P G,π (2) as Generator (RW) and Discriminator (RW) respectively. In practice, we noticed that the Generator (RW) and Discriminator (RW) models need to be trained for a certain number of iterations first (which we call initial iterations) before performing adaptation. So, for these initial iterations, we set the adaptation parameter λ as 0. Note that the encoder, decoder, generator (RW) and discriminator (RW) models are trained during this phase, but the adaptation is not performed. After these initial iterations, we turn the adaptation term on. The hyperparameters and model architectures are given in Table 16. The same architectures are used for Source only and Wasserstein.
F.3. Adversarial clustering
For adversarial clustering in imbalanced MNIST dataset (Section 5 of the main paper), the architectures and hyperparameters used are given in Table 17.
F.4. Hypothesis testing
For the hypothesis testing experiment (Section 6 of the main paper), the same model architectures and hyper-parameters as in the MOG experiment (Table 12) were used.
Figure 1. An illustration of the effectiveness of the proposed Normalized Wasserstein measure in domain adaptation. The source domain (shown in red) and the target domain (shown in blue) have two modes with different mode proportions. (a) The couplings computed by estimating the Wasserstein distance between source and target distributions (shown in yellow lines) match several samples from incorrect and distant mode components. (b,c) Our proposed normalized Wasserstein measure (3) constructs intermediate mixture distributions P1 and P2 (shown in green) with similar mixture components to source and target distributions, respectively, but with optimized mixture proportions. This significantly reduces the number of couplings between samples from incorrect modes and leads to a 42% decrease in target loss in domain adaptation compared to the baseline.
Figure 2. Domain adaptation for image denoising. (a) Samples from source and target domains. (b) Source and target embeddings learnt by the baseline model. (c) Source and target embeddings learnt by minimizing the proposed NW measure. In (b) and (c), red and green points indicate source and target samples, respectively.
[1], WGAN+Gradient Penalty [9]), GANs based on divergence measures (e.g. the original GAN's formulation [8], DCGAN [19], f -GAN [17]), GANs based on momentmatching (e.g. MMD-GAN [5, 11]), and other formulations (e.g. Least-Squares GAN [16], BigGAN [2], etc.)
Figure 3. Mixture of Gaussians experiments. In all figures, red points indicate samples from the real data distribution while blue points indicate samples from the generated distribution. NWGAN is able to capture rare modes in the data and produces a significantly better generative model than other methods.
Figure 4. Sample generations of NWGAN with k = 2 on a mixture of CIFAR-10 and CelebA datasets for fixed and optimized π's. When π is fixed, one of the generators produces a mix of CIFAR and CelebA generative images (boxes in red highlight some of the CelebA generations in the model producing CIFAR+CelebA). However, when π is optimized, the model produces disentangled representations.
Figure 5. Choosing k: plot of the NW measure vs. the number of modes.
1: iterations = N_iter, 2: Critic iterations = N_critic, 3: for t = 1 : N_iter do
Figure 6. Sample generations produced by the proposed NWGAN trained on CIFAR-10 with k = 4 generator modes.
Table 1. Mean classification accuracies (in %) averaged over 5 runs on imbalanced MNIST→MNIST-M adaptation

Method         3 modes   5 modes   10 modes
Source only    66.63     67.44     63.17
DANN           62.34     57.56     59.31
Wasserstein    61.75     60.56     58.22
NW             75.06     76.16     68.57

4.1.2 VISDA
Table 2. Mean classification accuracies (in %) averaged over 5 runs on synthetic to real adaptation on the VISDA dataset (3 classes)

Method         Accuracy (in %)
Source only    53.19
DANN           68.06
Wasserstein    64.84
NW             73.23
Table 3. Domain adaptation on mode-balanced datasets: MNIST→MNIST-M. Average classification accuracies averaged over 5 runs are reported

Method         Accuracy (in %)
Source only    60.22
DANN           85.24
Wasserstein    83.47
NW             84.16
Table 4. err_{recons,tgt} for an image denoising task

Method                        err_{recons,tgt}
Source only                   0.31
Wasserstein                   0.52
NW                            0.18
Training on target (Oracle)   0.08
Table 5. Quantitative evaluation on mixture of Gaussians

Method    Avg. µ error   Avg. Σ error   π error
WGAN      0.007          0.0003         0.0036
MGAN      0.007          0.0002         0.7157
NWGAN     0.002          0.0001         0.0001
Table 6. Clustering results on the imbalanced MNIST dataset

Method    Cluster Purity   NMI    ARI
k-means   0.82             0.49   0.43
GMM       0.75             0.28   0.33
NW        0.98             0.94   0.97
Table 7. Comparative analysis of two mixture distributions

Wasserstein distance   NW measure   Conclusion
High                   High         Distributions differ in mode components
High                   Low          Distributions have the same components, but differ in mode proportions
Low                    Low          Distributions are the same
Table 8. Hypothesis test between two MoG - D1 and D2

Setting     Wasserstein Distance   NW measure
Setting 1   1.51                   0.06
Setting 2   1.56                   0.44
NW becomes equivalent to the Wasserstein distance and minimizing it is no worse than minimizing the classical distance measures. On the other hand, when mode proportions of source and target domains differ, the NW measure renormalizes the mode proportions and effectively performs domain adaptation. This illustrates the usefulness of the NW measure in domain adaptation problems.
Table 9. MNIST → MNIST-M settings

Config     Classes           Proportion of source samples                                   Proportion of target samples
3 modes    {1, 4, 8}         {0.63, 0.31, 0.06}                                             {0.06, 0.31, 0.63}
5 modes    {0, 2, 4, 6, 8}   {0.33, 0.26, 0.2, 0.13, 0.06}                                  {0.06, 0.13, 0.2, 0.26, 0.33}
10 modes   {0, 1, ..., 9}    {0.15, 0.15, 0.15, 0.12, 0.12, 0.11, 0.05, 0.05, 0.05, 0.05}   {0.05, 0.05, 0.05, 0.05, 0.11, 0.12, 0.12, 0.15, 0.15, 0.15}

the architectures and hyper-parameters as presented in Table 12. The architecture used for training the vanilla WGAN is provided in Table 13. The same architecture is used for MGAN; however, we do not use the ReLU non-linearities in the generator function (to make the generator affine so that the model is comparable to ours). For WGAN and MGAN, we use the hyper-parameter details as provided in the respective papers - [9] and [10].

F.1.2 CIFAR-10 + CelebA

To train models on the CIFAR-10 + CelebA dataset (Section 3.2 of the main paper), we used the Resnet architectures of WGAN-GP [9] with the same hyper-parameter configuration for the generator and the discriminator networks. In Normalized WGAN, the learning rate of the mode proportion π was 5 times the learning rate of the discriminator.

F.2. Domain adaptation for mixture distributions

F.2.1 Digit classification

For the MNIST→MNIST-M experiments (Section 4.1.1 of the main paper), following [6], a modified Lenet architecture was used for the feature network, and an MLP network was used for the domain classifier. The architectures and hyper-parameters used in our method are given in Table 14. The same architectures are used for the compared approaches - Source only, DANN and Wasserstein. For the experiments on the VISDA dataset with three classes (Section 4.1.2 of the main paper), the architectures and hyper-parameters used in our method are given in Table 15. The same architectures are used for the compared approaches: source only, Wasserstein and DANN.

Table 10. Domain adaptation on mode-balanced datasets: VISDA. Average classification accuracies averaged over 5 runs are reported
Method                   Classification accuracy (in %)
Source only              63.24
DANN                     84.71
Wasserstein              90.08
Normalized Wasserstein   90.72
Table 11. Performance of clustering algorithms on the CIFAR+CelebA dataset

Method                   Cluster Purity   NMI     ARI
k-means                  0.667            0.038   0.049
Normalized Wasserstein   0.870            0.505   0.547
F.2.2 VISDA
Table 12 .
12Architectures and hyper-parameters: Mixture of Gaussians with Normalized Wasserstein GAN Table 13. Architectures: Mixture of Gaussians with vanilla WGAN model Generator Discriminator Linear(2 → 512) + ReLU Linear(2 → 512) + ReLU Linear(512 → 512) + ReLU Linear(512 → 512) + ReLU Linear(512 → 512) + ReLU Linear(512 → 512) + ReLUGenerator
Discriminator
Linear(2 → 64)
Linear(2 → 128)
Linear(64 → 64)
LeakyReLU(0.2)
Linear(64 → 64)
Linear(128 → 128)
Linear(64 → 2)
LeakyReLU(0.2)
Linear(128 → 2)
Hyperparameters
Discriminator learning rate
0.00005
Generator learning rate
0.00005
π learning rate
0.01
Batch size
1024
Optimizer
RMSProp
Number of critic iters
10
Weight clip
[−0.003, 0.003]
Linear(512 → 2)
Linear(512 → 2)
Table 14 .
14Architectures and hyper-parameters: Domain adaptation for MNIST→MNIST-M experiments Feature network Conv(3 → 32, 5 × 5 kernel) + ReLU + MaxPool(2) Conv(32 → 48, 5 × 5 kernel) + ReLU + MaxPool(2) Domain discriminator Classifier Linear(768 → 100) + ReLU Linear(768 → 100) + ReLU Linear(100 → 1) Linear(100 → 100) + ReLU Linear(100 → 10)Hyperparameters
Feature network learning rate
0.0002
Discriminator learning rate
0.0002
Classifier learning rate
0.0002
π learning rate
0.0005
Batch size
128
Optimizer
Adam
Number of critic iters
10
Weight clipping value
[−0.01, 0.01]
λ
1
Table 15. Architectures and hyper-parameters: Domain adaptation on the VISDA dataset

Feature network
Resnet-18 model pretrained on ImageNet, up to the penultimate layer

Domain discriminator                  Classifier
Linear(512 → 512) + LeakyReLU(0.2)    Linear(512 → 3)
Linear(512 → 512) + LeakyReLU(0.2)
Linear(512 → 512) + LeakyReLU(0.2)
Linear(512 → 1)

Hyperparameters
Feature network learning rate   0.000001
Discriminator learning rate     0.00001
Classifier learning rate        0.00001
π learning rate                 0.0001
Batch size                      128
Optimizer                       Adam
Number of critic iters          10
Weight clipping value           [−0.01, 0.01]
λ                               1

Table 16. Architectures and hyper-parameters: Domain adaptation for the image denoising experiment

Encoder
Conv(3 → 64, 3 × 3 kernel) + ReLU + MaxPool(2)
Conv(64 → 128, 3 × 3 kernel) + ReLU + MaxPool(2)
Conv(128 → 128, 3 × 3 kernel) + ReLU + MaxPool(2)
Conv(128 → 128, 3 × 3 kernel)
Linear(128 → 2)

Decoder
Linear(2 → 128)
Conv(128 → 64, 3 × 3 kernel) + ReLU + Upsample(2)
Conv(64 → 64, 4 × 4 kernel) + ReLU + Upsample(4)
Conv(64 → 3, 3 × 3 kernel)

Domain discriminator
Linear(2 → 64) + ReLU
Linear(64 → 64) + ReLU
Linear(64 → 1)

Generator (RW)        Discriminator (RW)
Linear(2 → 128)       Linear(2 → 128) + ReLU
Linear(128 → 128)     Linear(128 → 128) + ReLU
Linear(128 → 2)
Table 17. Architectures and hyper-parameters: Mixture models on the imbalanced-MNIST3 dataset

Generator
ConvTranspose(100 → 256, 4 × 4 kernel, stride 1) + Batchnorm + ReLU
ConvTranspose(256 → 128, 4 × 4 kernel, stride 2) + Batchnorm + ReLU
ConvTranspose(128 → 64, 4 × 4 kernel, stride 2) + Batchnorm + ReLU
ConvTranspose(64 → 1, 4 × 4 kernel, stride 2) + Tanh()

Discriminator
Spectralnorm(Conv(1 → 64, 4 × 4 kernel, stride 2)) + LeakyReLU(0.2)
Spectralnorm(Conv(64 → 128, 4 × 4 kernel, stride 2)) + LeakyReLU(0.2)
Spectralnorm(Conv(128 → 256, 4 × 4 kernel, stride 2)) + LeakyReLU(0.2)
Spectralnorm(Conv(256 → 1, 4 × 4 kernel, stride 1))

Hyperparameters
Discriminator learning rate   0.00005
Generator learning rate       0.0001
π learning rate               0.001
Batch size                    64
Optimizer                     RMSProp
Number of critic iters        5
Weight clip                   [−0.01, 0.01]
λ reg                         0.01
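For completeness, a PyTorch transcription of the Table 17 networks is given below as an illustration only; the table does not specify padding or output padding, so the exact spatial sizes of the original implementation may differ from this sketch.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Generator from Table 17: 100-D noise (shaped N x 100 x 1 x 1) to a 1-channel image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, kernel_size=4, stride=1),
    nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2),
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2),
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2),
    nn.Tanh(),
)

# Discriminator from Table 17, with spectral normalization on every convolution.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(1, 64, kernel_size=4, stride=2)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(128, 256, kernel_size=4, stride=2)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(256, 1, kernel_size=4, stride=1)),
)
```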
Code available at https://github.com/yogeshbalaji/Normalized-Wasserstein
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018.
| [
"https://github.com/yogeshbalaji/"
]
|
[
"Saving RNN Computations with a Neuron-Level Fuzzy Memoization Scheme",
"Saving RNN Computations with a Neuron-Level Fuzzy Memoization Scheme"
]
| [
"Franyell Silfa [email protected] \nComputer Architecture Deparment\nUniversitat Politecnica de Catalunya\n\n",
"Jose-Maria Arnau [email protected] \nComputer Architecture Deparment\nUniversitat Politecnica de Catalunya\n\n",
"Antonio Gonzàlez [email protected] \nComputer Architecture Deparment\nUniversitat Politecnica de Catalunya\n\n"
]
| [
"Computer Architecture Deparment\nUniversitat Politecnica de Catalunya\n",
"Computer Architecture Deparment\nUniversitat Politecnica de Catalunya\n",
"Computer Architecture Deparment\nUniversitat Politecnica de Catalunya\n"
]
| []
| Recurrent Neural Networks (RNNs) are a key technology for applications such as automatic speech recognition or machine translation. Unlike conventional feedforward DNNs, RNNs remember past information to improve the accuracy of future predictions and, therefore, they are very effective for sequence processing problems.For each application run, recurrent layers are executed many times for processing a potentially large sequence of inputs (words, images, audio frames, etc.). In this paper, we observe that the output of a neuron exhibits small changes in consecutive invocations. We exploit this property to build a neuron-level fuzzy memoization scheme, which dynamically caches each neuron's output and reuses it whenever it is predicted that the current output will be similar to a previously computed result, avoiding in this way the output computations.The main challenge in this scheme is determining whether the new neuron's output for the current input in the sequence will be similar to a recently computed result. To this end, we extend the recurrent layer with a much simpler Bitwise Neural Network (BNN), and show that the BNN and RNN outputs are highly correlated: if two BNN outputs are very similar, the corresponding outputs in the original RNN layer are likely to exhibit negligible changes. The BNN provides a low-cost and effective mechanism for deciding when fuzzy memoization can be applied with a small impact on accuracy.We evaluate our memoization scheme on top of a state-of-the-art accelerator for RNNs, for a variety of different neural networks from multiple application domains. We show that our technique avoids more than 26.7% of computations, resulting in 21% energy savings and 1.4x speedup on average. | null | [
"https://arxiv.org/pdf/2202.06563v1.pdf"
]
| 246,822,554 | 2202.06563 | 3d99cdfc6358a73f0e4955cc20f189a5bed7a98b |
Saving RNN Computations with a Neuron-Level Fuzzy Memoization Scheme
Franyell Silfa [email protected]
Computer Architecture Department
Universitat Politecnica de Catalunya
Jose-Maria Arnau [email protected]
Computer Architecture Department
Universitat Politecnica de Catalunya
Antonio Gonzàlez [email protected]
Computer Architecture Department
Universitat Politecnica de Catalunya
Saving RNN Computations with a Neuron-Level Fuzzy Memoization Scheme
Recurrent Neural Networks (RNNs) are a key technology for applications such as automatic speech recognition or machine translation. Unlike conventional feedforward DNNs, RNNs remember past information to improve the accuracy of future predictions and, therefore, they are very effective for sequence processing problems.For each application run, recurrent layers are executed many times for processing a potentially large sequence of inputs (words, images, audio frames, etc.). In this paper, we observe that the output of a neuron exhibits small changes in consecutive invocations. We exploit this property to build a neuron-level fuzzy memoization scheme, which dynamically caches each neuron's output and reuses it whenever it is predicted that the current output will be similar to a previously computed result, avoiding in this way the output computations.The main challenge in this scheme is determining whether the new neuron's output for the current input in the sequence will be similar to a recently computed result. To this end, we extend the recurrent layer with a much simpler Bitwise Neural Network (BNN), and show that the BNN and RNN outputs are highly correlated: if two BNN outputs are very similar, the corresponding outputs in the original RNN layer are likely to exhibit negligible changes. The BNN provides a low-cost and effective mechanism for deciding when fuzzy memoization can be applied with a small impact on accuracy.We evaluate our memoization scheme on top of a state-of-the-art accelerator for RNNs, for a variety of different neural networks from multiple application domains. We show that our technique avoids more than 26.7% of computations, resulting in 21% energy savings and 1.4x speedup on average.
INTRODUCTION
Recurrent Neural Networks (RNNs) represent the state-of-the-art solution for many sequence processing problems such as speech recognition [15], machine translation [34] or automatic caption generation [32]. Not surprisingly, data recently published in [20] show that around 30% of machine learning workloads in Google's datacenters are RNNs, whereas Convolutional Neural Networks (CNNs) only represent 5% of the applications. Unlike CNNs, RNNs use information from previously processed inputs to improve the accuracy of the output, and they can process variable-length input/output sequences.
Although RNN training can be performed efficiently on GPUs [7], RNN inference is more challenging. The small batch size (just one input sequence per batch) and the data dependencies in recurrent layers severely constrain the amount of parallelism. Hardware acceleration is key for achieving high-performance and energy-efficient RNN inference and, to this end, several RNN accelerators have been recently proposed [18,22,17,23].
Neurons in an RNN are recurrently executed for processing the elements in an input sequence. An analysis of the output results reveals that many neurons produce very similar outputs for consecutive elements in the input sequence. On average, the relative difference between the current and previous output of a neuron is smaller than 23% in our set of RNNs, whereas previous work in [28] has reported similar results. Since RNNs are inherently error tolerant [36], we propose to exploit the aforementioned property to save computations by using a neuron-level fuzzy memoization scheme. With this approach, the outputs of a neuron are dynamically cached in a local memoization buffer. When the next output is predicted to be extremely similar to the previously computed result, the neuron's output is read from the memoization buffer rather than recalculating it, avoiding all the corresponding computations and memory accesses. Figure 1 shows the potential benefits of this memoization scheme by using an oracle that accurately predicts the relative difference between the next output of the neuron and the previous output stored in the memoization buffer. The memoized value is used when this difference is smaller than a given threshold, shown in the x-axis of Figure 1. As it can be seen, the RNNs can tolerate relative errors in the outputs of a neuron in the range of 30-50% with a negligible impact on accuracy. With these thresholds, a memoization scheme with an oracle predictor can save more than 30% of the computations. If the difference between the previous and current output predicted is smaller than the threshold, the memoized output is employed instead of calculating the new one.
A key challenge for our memoization scheme is how to predict the difference between the current output and the previous output stored in the memoization buffer, without performing all the corresponding neuron computations. To this end, we propose to extend each recurrent layer with a Bitwise Neural Network (BNN) [21]. We do this by reducing each input and weight to one bit that represents the sign, as described in [11]. We found that BNN outputs are highly correlated with the outputs of the original recurrent layer, i.e. similar BNN outputs indicate a high likelihood of similar RNN outputs (although BNN outputs are numerically very different from RNN outputs). The BNN is extremely small, hardware-friendly and very effective at predicting when memoization can be safely applied.
Note that simply looking at the inputs, i.e. predicting that similar inputs will produce similar outputs, might not be accurate. Small changes in an input that is multiplied by a large weight will introduce a significant change in the output of the neuron. Our BNN approach takes into account both the inputs and the weights.
In short, we propose a neuron-level hardware-based fuzzy memoization scheme that works as follows. The output of a neuron in the last execution is dynamically cached in a memoization table, together with the output of the corresponding BNN. For every new input in the sequence, the BNN is first computed and the result is compared with the BNN output stored in the memoization table. If the difference between the new BNN output and the cached output is smaller than a threshold, the neuron's cached output is used as the current output, avoiding all the associated computations and memory accesses in the RNN. Otherwise, the neuron is evaluated and the memoization table is updated.
Note that only using the BNN would result in a large accuracy loss as reported elsewhere [27]. In this paper, we take a completely different approach and use the BNN to predict when memoization can be safely applied with negligible impact on accuracy. The inexpensive BNN is computed for every element of the sequence and every neuron, whereas the large RNN is evaluated on demand as indicated by the BNN. By doing so, we maintain high accuracy while saving more than 26.7% of RNN computations.
In this paper, we make the following contributions:
• We provide an evaluation of the outputs of neurons in recurrent layers, and show that they exhibit small changes in consecutive executions.
• We propose a fuzzy memoization scheme that avoids more than 26.7% of neuron evaluations by reusing previously computed results stored in a memoization buffer.
• We propose the use of a BNN to determine when memoization can be applied with small impact on accuracy. We show that BNN and RNN outputs are highly correlated.
• We show that the BNN predictor's accuracy improves significantly when it is also included during the training.
• We implement our neuron-level memoization scheme on top of a state-of-the-art RNN accelerator. The required hardware introduces a negligible area overhead, while it provides 1.4x speedup and 21% energy savings on average for several RNNs.
BACKGROUND
Recurrent Neural Networks
A Recurrent Neural Network (RNN) is a state-ofthe-art machine learning approach that has achieved tremendous success in applications such as machine translation or video description. The key characteristic of RNNs is that they include loops, a.k.a. recurrent connections, that allow the information to persist from one time-step of execution to the next ones and, hence, they have the potential to use unbounded context information (i.e. past or future) to make predictions. Another important feature is that RNNs are recurrently executed for every element of the input sequence and, thus, they are able to handle input and output with variable length. Because of these characteristics, RNNs provide an effective framework for sequence-to-sequence applications (e.g. machine translation), where they outperform feed forward Deep Neural Networks (DNNs) [16,29].
Basic RNN architectures can capture and exploit short-term dependencies in the input sequence. However, capturing long-term dependencies is challenging since useful information tends to dilute over time. In order to exploit long-term dependencies, Long Short Term Memory (LSTM) [19] and Gated Recurrent Unit (GRU) [10] networks were proposed. These types of RNNs represent the most successful and widely used RNN architectures. They have achieved tremendous success for a variety of applications such as speech recognition [24,5], machine translation [9] and video description [32]. The next subsections provide further details on the structure and behavior of these networks.
Deep RNNs
RNNs are composed of multiple layers that are stacked together to create deep RNNs. Each of these layers consists of an LSTM or a GRU cell. In addition, these layers can be unidirectional or bidirectional. Unidirectional layers only use past information to make predictions, whereas bidirectional LSTM or GRU networks use both past and future context.
The input sequence (X) is composed of N elements, i.e. X = [x_1, x_2, ..., x_N], which are processed by an LSTM or GRU cell in the forward direction, i.e. from x_1 to x_N. For backward layers in bidirectional RNNs, the input sequence is evaluated in the backward direction, i.e. from x_N to x_1.
LSTM Cell
Figure 2 shows the structure of an LSTM cell. The key component is the cell state (c_t), which is stored in the cell memory. The cell state is updated by using three fully connected single-layer neural networks, a.k.a. gates. The input gate (i_t, whose computations are shown in Equation 1) decides how much of the input information, x_t, will be added to the cell state. The forget gate (f_t, shown in Equation 2) determines how much information will be erased from the cell state (c_{t−1}). The updater gate (g_t, Equation 3) controls the amount of input information that is considered a candidate to update the cell state (c_t). Once these three gates are executed, the cell state is updated according to Equation 4. Finally, the output gate (o_t, Equation 5) decides the amount of information that will be emitted from the cell to create the output (h_t). Figure 4 shows the computations carried out by an LSTM cell. As it can be seen, a neuron in each gate has two types of connections: forward connections that operate on x_t and recurrent connections that take as input h_{t−1}. The evaluation of a neuron in one of these gates requires a dot product between the weights of the forward connections and x_t, and another dot product between the weights of the recurrent connections and h_{t−1}. Next, a peephole connection [13] and a bias are also applied, followed by the computation of an activation function, typically a sigmoid or hyperbolic tangent.

i_t = σ(W_{ix} x_t + W_{ih} h_{t−1} + b_i)    (1)
f_t = σ(W_{fx} x_t + W_{fh} h_{t−1} + b_f)    (2)
g_t = φ(W_{gx} x_t + W_{gh} h_{t−1} + b_g)    (3)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t    (4)
o_t = σ(W_{ox} x_t + W_{oh} h_{t−1} + b_o)    (5)
h_t = o_t ⊙ φ(c_t)    (6)

GRU Cell
Analogous to an LSTM cell, a GRU cell includes gates to control the flow of information inside the cell. However, GRU cells do not have an independent memory cell (i.e. cell state). As it can be seen in Figure 3, in a GRU cell the update gate (z_t) controls how much of the candidate information (g_t) is used to update the cell activation. On the other hand, the reset gate (r_t) modulates the amount of information that is removed from the previously computed state. Note that GRUs do not include an output gate and, hence, the whole state of the cell is exposed at each time-step. The computations carried out by each gate in a GRU cell are very similar to those in Equations 1, 2 and 3. We omit them for the sake of brevity; the exact details are provided in [10]. For the rest of the paper, we use the term RNN cell to refer to both LSTM and GRU cells.
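To make the data flow of Equations (1)-(6) concrete, the following NumPy sketch evaluates one LSTM time-step; the peephole connections mentioned above are omitted and the weight matrices are random placeholders rather than trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time-step following Equations (1)-(6), peepholes omitted."""
    i = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])   # input gate,  Eq. (1)
    f = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])   # forget gate, Eq. (2)
    g = np.tanh(W['gx'] @ x_t + W['gh'] @ h_prev + b['g'])   # updater,     Eq. (3)
    c = f * c_prev + i * g                                   # cell state,  Eq. (4)
    o = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])   # output gate, Eq. (5)
    h = o * np.tanh(c)                                       # output,      Eq. (6)
    return h, c

# Toy dimensions: 4 inputs, 3 hidden units; placeholder random weights.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 4) if k.endswith('x') else (3, 3))
     for k in ['ix', 'ih', 'fx', 'fh', 'gx', 'gh', 'ox', 'oh']}
b = {k: np.zeros(3) for k in ['i', 'f', 'g', 'o']}
h, c = lstm_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, b)
```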
Binarized Neural Networks
State-of-the-art DNNs typically consist of millions of parameters (a.k.a. weights) represented as floating point numbers using 32 or 16 bits and, hence, their storage requirements are quite large. Linear quantization may be used to reduce memory footprint and improve performance [34,20]. In addition, real-time evaluation of DNNs requires a high energy cost. As an attempt to improve the energy-efficiency of DNNs, Binarized Neural Networks (BNNs) [11] or Bitwise Neural Networks [21] are a promising alternative to conventional DNNs. BNNs use one-bit weights and inputs that are constrained to +1 or -1. Typically, the binarization is done using the following function:
x^b = +1 if x ≥ 0, and x^b = −1 otherwise    (7)

where x is either a weight or an input and x^b is the binarized value, which is stored as 0 or 1. Regarding the output of a given neuron, its computation is analogous to conventional DNNs, but employing the binarized version of weights and inputs, as shown in Equation 8:

y^b_t = w^b · x^b_t    (8)
where w b and x b t are the binarized weight and input vectors respectively. Note that evaluating the neuron output (y b t ) only involves multiplications and additions that, with binarized operands, can be computed with XNORs and integer adders. BNN evaluation is orders of magnitude more efficient, in terms of both performance and energy, than conventional DNNs [11]. Nonetheless, DNNs and RNNs still deliver significantly higher accuracy than BNNs [27].
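As a small illustration of Equations (7) and (8), the sketch below binarizes a weight and an input vector and computes the binary dot product; with a {0,1} bit encoding the same computation reduces to an XNOR followed by a population count. The random vectors are placeholders only.

```python
import numpy as np

def binarize(v):
    """Equation (7): map each element to +1 if it is >= 0, else -1."""
    return np.where(v >= 0, 1.0, -1.0)

def binary_dot(w, x):
    """Equation (8): dot product of the binarized weight and input vectors.
    With a {0,1} encoding this is an XNOR followed by a popcount; the +/-1
    form is kept here for clarity."""
    return float(np.dot(binarize(w), binarize(x)))

rng = np.random.default_rng(1)
w = rng.standard_normal(256)           # full-precision weights of one neuron
x = rng.standard_normal(256)           # full-precision input vector
print(np.dot(w, x), binary_dot(w, x))  # full-precision vs binarized output
```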
Fuzzy Memoization
Memoization is a well-known optimization technique used to improve performance and energy consumption that has been used both in software [2] and hardware [14]. In some applications, a given function is executed many times, but the inputs of different executions are not always different. Memoization exploits this fact to avoid these redundant computations by reusing the result of a previous evaluation. In general, the first time an input is evaluated, the result is cached in a memoization table. Subsequent evaluations probe the memoization table and reuse previously cached results if the current input matches a previous execution.
In a classical memoization scheme, a memoized value is only reused when it is known to be equal to the real output of the computation. However, for some applications such as multimedia [4], graphics [8], and neural networks [36], this scheme can be extended to tolerate a small loss in accuracy with negligible impact in the quality of the results, and is normally referred to as fuzzy memoization.
NEURON LEVEL MEMOIZATION
In this section, we propose a novel memoization scheme to reduce computations and memory accesses in RNNs. First, we discuss the main performance and energy bottlenecks of state-of-the-art hardware accelerators for RNN inference. Next, we introduce the key idea of our neuron-level fuzzy memoization technique. Finally, we describe the hardware implementation of our technique.
Motivation
As shown in Figure 4, RNN inference involves the evaluation of multiple single-layer feed-forward neural networks, or gates, that, from a computational point of view, consist of multiplying a weight matrix by an input vector (x_t for forward connections and h_{t−1} for recurrent connections). Typically, the number of elements in the weight matrices ranges from a few thousand to millions and, thus, fetching them from on-chip buffers or main memory is one of the major sources of energy consumption. Not surprisingly, it accounts for up to 80% of the total energy consumption in state-of-the-art accelerators [30]. For this reason, a very effective way of saving energy in RNNs is to avoid fetching the synaptic weights. In addition, avoiding the corresponding computations further increases the energy savings. In this work, we leverage fuzzy memoization to selectively avoid neuron evaluations and, hence, their corresponding memory accesses and computations. For fuzzy memoization to be effective, applications must be tolerant to small errors and the hardware implementation must be simple. In the next sections, we show that RNNs are resilient to small errors in the outputs of the neurons, and we provide an efficient implementation of the memoization scheme that requires simple hardware support.

Figure 6: Neuron-level memoization with an Oracle predictor. y_t is the neuron output, y_m corresponds to the memoized value and y^o_t is the output of the Oracle predictor. δ and θ are the relative error and the maximum allowed output error, respectively.

δ = (y^o_t − y_m) / y^o_t    (9)
y_t = y_m if δ ≤ θ, and y_t = y^o_t otherwise    (10)
y_m = y^o_t if δ > θ, and is left unchanged otherwise    (11)
RNNs Redundancy
Memoization schemes rely on a high degree of redundancy in the computations. For RNNs, a key observation is that the output of a given neuron tends to change lightly between consecutive input elements. Note that RNNs are used in sequence processing problems such as speech recognition or video processing, where RNN inputs in consecutive time-steps tend to be extremely similar. Prior work in [28] reports high similarity across consecutive frames of audio or video. Not surprisingly, our numbers for our set of RNNs also support this claim. Figure 5 shows the relative difference between consecutive outputs of a neuron in our set of RNNs. As it can be seen, a neuron's output exhibits minor changes (less than 10%) for 25% of consecutive input elements.
On average, consecutive outputs change by 23%. Furthermore, RNNs can tolerate small errors in the neuron output [36]. This observation is supported by data shown in Figure 1, where the accuracy curve shows the accuracy loss when the output of a neuron is reused using fuzzy memoization, for different thresholds (x-axis) that control the aggressiveness of the memoization scheme. For this study, the relative error (δ) between a predicted neuron output (y^p_t) and a previously cached neuron output (y_m) is used as the discriminating factor to decide whether the previous output is reused, as shown in Figure 6. To evaluate the potential benefits of a memoization scheme, the predicted value is provided by an Oracle predictor, which is 100% accurate, i.e., its prediction is always equal to the neuron output (y^p_t = y_t). As shown in Figure 1, neurons can tolerate a relative output error between 0.3 and 0.5 without significantly affecting the overall network accuracy (i.e., accuracy loss smaller than 1%). On the other hand, the reuse curve shows the percentage of neuron computations that could be avoided through this memoization with an Oracle predictor. Note that by allowing neurons to have an output error between 0.3 and 0.5, at least 30% of the total network computations could be avoided.
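The oracle study of Equations (9)-(11) can be reproduced on any recorded trace of neuron outputs. The sketch below assumes a hypothetical (time-steps × neurons) array of full-precision outputs and simply counts how often the cached value could be reused for a given θ; it is an analysis aid, not part of the accelerator.

```python
import numpy as np

def oracle_reuse_fraction(outputs, theta, eps=1e-8):
    """Fuzzy memoization with an oracle predictor (Equations 9-11).

    outputs: array of shape (T, N) with the full-precision output of each
             neuron at every time-step.  Returns the fraction of neuron
             evaluations that could be replaced by the memoized value.
    """
    T, N = outputs.shape
    memo = outputs[0].copy()          # y_m, initialised with the first time-step
    reused = 0
    for t in range(1, T):
        delta = np.abs(outputs[t] - memo) / (np.abs(outputs[t]) + eps)  # Eq. (9)
        reuse = delta <= theta                                          # Eq. (10)
        reused += int(reuse.sum())
        memo[~reuse] = outputs[t][~reuse]                               # Eq. (11)
    return reused / ((T - 1) * N)

# Toy trace: slowly varying outputs, so many consecutive values are similar.
rng = np.random.default_rng(2)
trace = np.cumsum(0.05 * rng.standard_normal((200, 64)), axis=0) + 1.0
print(oracle_reuse_fraction(trace, theta=0.3))
```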
The memoization scheme must add a small overhead to the system to achieve significant savings. Therefore, the critical challenge is approximating the Oracle predictor's behavior with simple hardware to decide when memoization can be safely applied with a negligible impact on the overall RNN accuracy. We describe an effective solution in the next section.
Binary Network Correlation
A key challenge for an effective fuzzy memoization scheme is to identify when the next neuron output will be similar to a previously computed (and cached) output. Note that having similar inputs does not necessarily result in similar outputs, as inputs with small changes might be multiplied by large weights. Our proposed approach is based on a Bitwise Neural Network (BNN). In particular, each fully-connected neural network (NN) is extended to an equivalent BNN, as described in Section 3.2. We use BNNs for two reasons. First, the outputs of a BNN and its corresponding original NN are highly correlated [6], i.e., a small change in a BNN output indicates that the neuron's output in the original NN is likely to be similar. Second, BNNs can be implemented with extremely low hardware cost.
Regarding the correlation between BNN and RNN, Anderson et al. [6] show that the binarization approximately preserves the dot-products that a neural network performs for computations. Therefore, there should be a high correlation between the outputs of the fullprecision neuron and the outputs of the corresponding binarized neuron. We have empirically validated the dot product preservation property for our set of RNNs. Figure 7 shows the linear correlation between RNN outputs and the corresponding BNN outputs for EESEN network. Although the range of the outputs of the full-precision (RNN) and binarized (BNN) dot products are significantly different, their values exhibit a strong linear correlation (correlation coefficient of 0.96). On the other hand, Figure 8 shows the histogram of the correlation coefficients for the neurons in four different RNNs. As it can be seen, correlation between binarized and full-precision neurons tend to be high for all the RNNs. More specifically, for the networks EESEN, IMDB SENTIMENT, and DEEPSPEECH, 85% of the neurons have a linear correlation factor greater than 0.8 and for the Machine Translation network most of them have a correlation factor greater than 0.5. These results indicate that if the output of a binarized neuron shows very small changes with respect to a previously computed output, it is very likely that the full-precision neuron will also show small changes and, hence, memoization can be safely applied.
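The per-neuron correlation shown in Figures 7 and 8 can be estimated offline from a trace of pre-activations. The sketch below uses random weights and inputs purely as stand-ins for a trained gate, so the resulting coefficient is only indicative of the procedure, not of the values reported above.

```python
import numpy as np

bsign = lambda v: np.where(v >= 0, 1.0, -1.0)   # binarization of Equation (7)

rng = np.random.default_rng(3)
inputs = rng.standard_normal((1000, 128))       # stand-in for [x_t, h_{t-1}] vectors
weights = rng.standard_normal((64, 128))        # stand-in for one gate's weight matrix

full = inputs @ weights.T                       # full-precision dot products
binary = bsign(inputs) @ bsign(weights).T       # binarized dot products, Eq. (8)

# Linear correlation between the two, computed independently for each neuron.
corr = np.array([np.corrcoef(full[:, n], binary[:, n])[0, 1]
                 for n in range(weights.shape[0])])
print(corr.mean())
```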
As shown in Equation 8, the output of a given neuron in a BNN can be computed with an N-bit XNOR operation for the bit multiplications and an integer adder to sum the resulting bits. These two operations are orders of magnitude cheaper than those required by the traditional data representation (i.e., FP16). Therefore, a BNN represents a low-overhead and accurate way to infer when the output of a neuron is likely to exhibit significant changes with respect to its recently computed outputs.
Overview
The target of our memoization scheme is to reuse a recently computed neuron output, y m , as the output for the current time-step, y t , provided that they are very similar. Reusing the cached neuron output avoids performing all the corresponding computations and memory accesses. To determine whether y t will be similar to y m , we use a BNN as a predictor.
In our memoization scheme, we extend the RNN with a much simpler BNN. The BNN model is created by mirroring the full-precision trained model of an LSTM or GRU gate, as illustrated in Figure 9. More specifically, each neuron is binarized by applying the binarization function shown in Equation 7 to its corresponding set of weights. Therefore, in a gate, every neuron n with weight vector w is mirrored to a neuron n^b with weight vector w^b corresponding to the element-wise binarization of w.
Our scheme stores recently computed outputs for the binary neuron n^b and its associated full-precision neuron n. We refer to these memoized values as y^b_m and y_m, respectively. On every time-step t, the binarized version of the neuron, n^b, is evaluated first, obtaining y^b_t. Next, we compute the relative difference, ε^b_t, between y^b_t and y^b_m, i.e. the current and memoized outputs of the BNN, as shown in Equation 12.

Figure 10: Neuron-level fuzzy memoization with the binary network as predictor. y_t and y_m are the current and memoized outputs computed by the LSTM network. y^b_t and y^b_m are the current and memoized outputs computed by the binary network. ε^b_t is the relative difference between BNN outputs. δ^b_t is the summation of relative differences in successive time-steps.

ε^b_t = (y^b_t − y^b_m) / y^b_t    (12)
δ^b_t = Σ_{i=m}^{t} ε^b_i    (13)
y_t = y_m if δ^b_t ≤ θ, and the neuron is evaluated otherwise    (14)
y_m = y_t if δ^b_t > θ, and is left unchanged otherwise    (15)
y^b_m = y^b_t if δ^b_t > θ, and is left unchanged otherwise    (16)
δ^b_t = 0.0 if δ^b_t > θ, and is left unchanged otherwise    (17)

If ε^b_t is small, i.e., if the BNN outputs are similar, it means that the outputs of the full-precision neuron are likely to be similar. As we discuss in Section 3.1.2, there is a high correlation between BNN and RNN outputs. In this case, we can reuse the memoized output y_m as the output of neuron n for the current time-step, avoiding all the corresponding computations. If the relative difference ε^b_t is significant, we compute the full-precision neuron output, y_t, and update our memoization buffer, as shown in Equations 15, 16 and 17, so that these values can be reused in subsequent time-steps.
We have observed that applying memoization to the same neuron in a large number of successive time-steps may negatively impact accuracy, even though the relative difference ε^b_t in each individual time-step is small. We found that a simple throttling mechanism can avoid this problem. More specifically, we accumulate the relative differences over successive time-steps where memoization is applied, as shown in Equation 13. We use the summation of relative differences, δ^b_t, to decide whether the memoized value is reused. As illustrated in Equation 14, the memoized value is only reused when δ^b_t is smaller than or equal to a threshold θ. Otherwise, the full-precision neuron is computed. This throttling mechanism avoids long sequences of time-steps where memoization is applied to the same neuron, since δ^b_t includes the differences accumulated in the entire sequence of reuses. Figure 11 shows that the throttling mechanism provides higher computation reuse for the same accuracy loss.

Figure 11: Computation reuse achieved by our BNN-based memoization scheme with and without the throttling mechanism, for accuracy losses of 1% and 2%. The throttling mechanism provides an extra 5% computation reuse on average for the same accuracy.

Figure 12 summarizes the overall memoization scheme, which is applied to the gates in an RNN cell as follows. For the first input element (x_0), i.e. the first time-step, the output values y^b_0 (binarized version) and y_0 (in full precision) are computed for each neuron and stored in a memoization buffer. δ^b_0 is set to zero. In the next time-step, with input x_1, the value y^b_1 is computed first by the BNN. Then, the relative error (ε^b_1) between y^b_1 and the previously cached value, y^b_0, is computed and added to δ^b_0 to obtain δ^b_1. Then, δ^b_1 is compared with a threshold θ. If δ^b_1 is smaller than θ, the cached value y_0 is reused, i.e. y_1 is assumed to be equal to y_0, and δ^b_1 is stored in the memoization buffer. On the contrary, if δ^b_1 is larger than θ, the full-precision neuron output y_1 is computed and its value is cached in the memoization buffer. In addition, y^b_1 is also cached and δ^b_1 is set to zero. This process is repeated for the remaining time-steps and for all the neurons in each gate.
As discussed later in Section 5, the percentage of computation reuse achieved by the BNN predictor is smaller than the oracle's percentage. Aiming to improve the BNN predictor's accuracy, we include the memoization scheme described in Section 3.2 during the training. The intuition is that by allowing the network to reuse similar weights (i.e., less than θ) during the training, we could transfer the obtained knowledge to the inference phase. We show in Section 5 that by doing this, the accuracy of the BNN predictor increases.
To include our memoization scheme into the training, we modified the forward pass as follows. First, at timestep (t 0 ), for a given neuron (i.e., n k ), its floating-point (y 0 ) and binarized output values y b 0 are computed and cached. Second, in the next time-step (t 1 ), to set the output value of n k , we first evaluate n k using its current weights and inputs. Then, we compare its binarized output value y 1 with its binarized output in the previous time-step y b 0 . If the similarity between these two values is below a threshold (i.e., theta), the previous output Figure 12: Fuzzy memoization scheme. W x and W h are the weights for the forward (x t ) and recurrent connections (h t−1 ) respectively. y t , y m correspond to the current and cached neuron output computed in full precision. y b t , y b m are the current and cached output computed by the Binary Network. δ b t is the summation of relative differences in successive time-steps.
(y 0 ) is reused. Otherwise, the output value y 1 is cached and set as output. Finally, this process is repeated for all the time-steps and neurons in the model. Hence, our neuron-level memoization scheme is included in the inference pass during training, whereas the backward pass and update of weights are performed as usual.
Regarding the training hyper-parameters, we use the same values as the baseline implementation of the models (i.e., the model without memoization). However, we train each model for several values of θ and choose the model with the highest amount of computation reuse and an accuracy equal to the baseline model.
Finding the threshold value
The threshold θ is one of the key parameters in our scheme; to find its value for a target accuracy loss and a given RNN model, we sweep over a range of candidate values. Each RNN model is evaluated using the training set during this process, and the accuracy and degree of computation reuse for each threshold value are obtained. Then, for each RNN model, we select the value of θ that achieves the highest computation reuse for the target accuracy loss (i.e., less than 1%). Note that this is done only once for each RNN model. Also, once θ is determined, it is used for inference on the test dataset.
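This exploration amounts to a simple sweep. The sketch below assumes a hypothetical helper evaluate(theta) that returns the training-set accuracy and computation reuse for a given threshold, since the actual evaluation harness depends on the framework used.

```python
def select_threshold(evaluate, thetas, base_accuracy, max_loss=0.01):
    """Pick the theta with the highest computation reuse whose accuracy loss
    on the training set stays within max_loss of the baseline.
    evaluate(theta) is a hypothetical helper returning (accuracy, reuse)."""
    best_theta, best_reuse = None, -1.0
    for theta in thetas:
        accuracy, reuse = evaluate(theta)
        if base_accuracy - accuracy <= max_loss and reuse > best_reuse:
            best_theta, best_reuse = theta, reuse
    return best_theta, best_reuse

# Example sweep over a coarse grid of thresholds (evaluate() must be supplied):
# theta, reuse = select_threshold(evaluate, [0.1, 0.2, 0.3, 0.4, 0.5], base_accuracy=0.865)
```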
Hardware Implementation
We implement the proposed memoization scheme on top of E-PUR, a state-of-the-art RNN accelerator for low-power mobile applications [30]. Figure 13 shows a high-level block diagram of this accelerator. E-PUR is composed of four computation units tailored to the evaluation of each gate in an RNN cell and a dedicated on-chip memory used to store intermediate results. In the following subsections, we outline the E-PUR architecture's main components and detail the necessary hardware modifications required to support our fuzzy memoization scheme.
Hardware Baseline
In E-PUR, each of the Computation Units (CUs), shown in Figure 14, is composed of a dot product unit (DPU), a Multi-functional Unit (MU) and buffers to store the weights and inputs. The DPU is used to evaluate the matrix-vector multiplications between the weights and inputs (i.e. x_t and h_{t−1}), whereas the MU is used to compute activation functions and scalar operations. Note that in E-PUR computations can be performed using 32-bit or 16-bit floating-point operations.
In E-PUR, while evaluating an RNN cell, all the gates are computed in parallel for each input element. On the contrary, the neurons in each gate are evaluated in a sequential manner for the forward and recurrent connections. The following steps are executed in order to compute the output value (y t ) for a given neuron (i.e. n i ). First, the input and weight vectors formed by the recurrent and forward connections (i.e, x t and h t−1 ) are split into K sub-vectors of size N. Then, two sub-vectors of size N are loaded from the input and weight buffer respectively and the dot product between them is computed by the DPU, which also accumulates the result. Next, the steps are repeated for the next k th sub-vector and its result is added to the previously accumulated dot products. This process is repeated until all K sub-vectors are evaluated and added together. Once the output value y t is computed, the DPU sends it to the MU where bias and peephole calculations are performed. Finally, the MU computes the activation function and stores the result in the on-chip memory for intermediate results. Note that once the DPU sends a value to the MU, it will continue with the evaluation of the next neuron output, hence, overlapping the computations executed by the MU and DPU since they are independent. Finally, these steps are repeated until all the neurons in the gate (for all cells) are evaluated for the current input element.
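Functionally, the per-neuron evaluation described above is a tiled dot product. The short sketch below makes the accumulation order explicit, using the 16-operation DPU width of Table 2 as the tile size; it is a software model of the data flow, not of the hardware pipeline.

```python
import numpy as np

def dpu_dot_product(weights, inputs, n=16):
    """Tiled dot product as performed by the DPU: the weight and input vectors
    are split into K sub-vectors of size n and accumulated one tile at a time."""
    assert len(weights) == len(inputs)
    acc = 0.0
    for start in range(0, len(weights), n):
        acc += float(np.dot(weights[start:start + n], inputs[start:start + n]))
    return acc

rng = np.random.default_rng(5)
w, x = rng.standard_normal(320), rng.standard_normal(320)
assert np.isclose(dpu_dot_product(w, x), np.dot(w, x))
```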
Support for Fuzzy Memoization
In order to perform fuzzy memoization through a BNN, two modifications are done to each CU in E-PUR. First, the weight buffer is split into two buffers: one buffer is used to store the weight signs (sign buffer) and the other is used to store the remaining bits of the weights. Note that the sign buffer is always accessed to compute the output of the binary network (y b t ) whereas the remaining bits are only accessed if the memoized value (y m ) is not reused. The binarized weights are stored in a small memory which has low energy cost but, as a consequence of splitting the weight buffer, its area increases a bit (less than one percent).
The second modification to the CUs is the addition of the fuzzy memoization unit (FMU) which is used to evaluate the binary network and to perform fuzzy memoization. This unit takes as input two size-T vectors (i.e., number of neurons in an RNN cell). The first vector is a weight vector loaded from the sign buffer whereas the other is created as the concatenation of the forward (x t ) and the recurrent connections (h t−1 ).
As shown in Figure 15, the main components of the FMU are the BDPU that computes the binary dot product and the comparison unit (CMP) which decides when to reuse a memoized value. In addition, the FMU includes a buffer (memoization buffer) which stores the δ b t for every neuron and the latest evaluation of the neu-rons by the full precision and binary networks. BNN neurons (i.e, binary dot product) are evaluated using a bitwise XNOR operation and an adder reduction tree to gather the resulting bit vector. In the CMP unit, the relative error (δ b t ) is computed using integer and fixed-point arithmetic.
The steps to evaluate the RNN cell, described in Section 3.3.1, are executed in a slightly different manner to include the fuzzy memoization scheme. First, the binarized input and weight vectors for a given neuron in a gate are loaded into an FMU from the input and sign buffers, respectively. Next, the BDPU computes the dot product and sends the result (y^b_t) to the comparison unit (CMP). Then, the CMP loads the previously cached values y^b_m and δ^b_{t−1} from the memoization buffer and uses them to compute the relative error (ε^b_t) and δ^b_t. Once δ^b_t is computed, it is compared with the threshold (θ) to determine whether the full-precision neuron needs to be evaluated or the previously cached value is reused instead. In the case that δ^b_t is greater than θ, an evaluation in full precision is triggered. In that regard, the DPU is signaled to start the full-precision evaluation, which is done following the steps described in Section 3.3.1. After the full-precision evaluation, the values y_t, y^b_t, and 0.0 are cached in the memoization table, corresponding to y_m, y^b_m, and δ^b_t respectively. On the other hand, if memoization can be applied (i.e. δ^b_t is smaller than the maximum allowed error), δ^b_t is updated in the memoization table and the memoized value (y_m) is sent directly to the MU (bypassing the DPU), so the full-precision evaluation of the neuron is avoided. Finally, these steps are repeated until all the neurons in a gate are evaluated for the current input element. Since LSTM or GRU gates are processed by independent CUs, the above process is executed concurrently by all gates.
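The BDPU's binary dot product can be emulated in software by comparing sign bits and counting agreements. The following sketch is a functional model of that datapath only (an XNOR followed by a popcount), not a description of the RTL.

```python
import numpy as np

def bdpu_dot(signs_w, signs_x):
    """Binary dot product as an XNOR plus popcount over sign bits.

    signs_w, signs_x: boolean arrays where True encodes a non-negative value.
    Each agreeing position contributes +1 and each disagreeing one -1, which
    equals the dot product of the corresponding +/-1 vectors."""
    agree = np.count_nonzero(~np.logical_xor(signs_w, signs_x))  # XNOR + popcount
    return 2 * agree - signs_w.size

rng = np.random.default_rng(6)
w, x = rng.standard_normal(2048), rng.standard_normal(2048)
ref = np.dot(np.where(w >= 0, 1, -1), np.where(x >= 0, 1, -1))
assert bdpu_dot(w >= 0, x >= 0) == ref
```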
EVALUATION METHODOLOGY
We use a cycle-level simulator of E-PUR customized to model our scheme as described in Section 3.3.2. This simulator estimates the total energy consumption (static and dynamic) and execution time of the LSTM networks. The different pipeline components were implemented in Verilog and we synthesized them using the Synopsys Design Compiler to obtain their delay and energy consumption. Furthermore, we used a typical process corner with voltage of 0.78V. We employed CACTI [26] to estimate the delay and energy consumption (static and dynamic) of on-chip memories. Finally, to estimate timing and energy consumption of main memory we used MICRON's memory model [25]. We model 4 GB of LPDDR4 DRAM.
In order to set the clock frequency, the delays reported by Synopsys Design Compiler and CACTI are used. We set a clock frequency that allows most hardware structures to operate at one clock cycle. In addition, we evaluated alternative frequency values in order to minimize energy consumption.
Regarding the memoization unit, the configuration parameters are shown in Table 2. Since E-PUR supports large LSTM networks, the memoization unit is designed to match the largest models supported by E-PUR. This unit has a latency of 5 clock cycles for the largest supported LSTM networks. In this unit, integer and fixed-point operations are used to perform most computations. The memoization buffer is modeled as an 8 KiB scratch-pad eDRAM.
The remaining configuration parameters of the accelerator used in our experiments are shown in Table 2. We strive to select an energy-efficient configuration for all the neural networks in Table 1. Because the baseline accelerator is designed to accommodate large LSTM networks, some of its on-chip storage and functional units might be oversized for some of our RNNs. In this case, unused on-chip memories and functional units are power gated when not needed.
As for benchmarks, we use four modern LSTM networks which are described in Table 1. Our selection includes RNNs for popular application such as speech recognition, machine translation and image description. These networks have different number of internal layers and neurons. We include both bidirectional (EESEN) and unidirectional networks (the other three). On the other hand, the length of the input sequence is also different for each RNN and it ranges from 20 to a few thousand input elements.
The software implementation of the networks was done in Tensorflow [1]. We used the network models and the test set provided in [12,33,24,9] for each RNN. The original accuracy for each RNN is listed in Table 1, and the accuracy loss is later reported as the absolute loss with respect to this baseline accuracy.
EXPERIMENTAL RESULTS
This section presents the evaluation of the proposed fuzzy memoization technique for RNNs, implemented on top of E-PUR [30]. We refer to it as E-PUR+BM. First, we present the percentage of computation reuse and the accuracy achieved. Second, we show the performance and energy improvements, followed by an analysis of the area overheads of our technique. Figure 16 shows the percentage of computation reuse achieved by the BNN and the Oracle predictors. The percentage of computation reuse indicates the percentage of neuron evaluations avoided due to fuzzy memoization. For accuracy losses smaller than 2%, the BNN obtains a percentage of computation reuse extremely similar to the Oracle. The networks EESEN and IMDB are highly tolerant to errors in neuron's outputs, thus, for these networks, our memoization scheme achieves reuse percentages of up to 40% while having an accuracy loss smaller than 3%. Note that, for classification problems, BNNs achieve an accuracy close to the stateof-the-art [27] and, hence, it is not surprising that the BNN predictor is highly accurate for approximating the neuron output. In the case of the networks DeepSpeech (speech recognition) and NNMT (machine translation), the BNN predictor is also included in the training as discussed in Section 3.2.1. For DeepSpeech, the reuse percentage is up to 24% for accuracy losses smaller than 2%. In this network, the input sequence tends to be large (i.e, 900 elements on average). As the reuse is increased, the error introduced to the output sequence of a neuron persists for a larger number of elements. Therefore, the introduced error will have a bigger impact both in the evaluation of the current layer, due to the recurrent connections, and the following layers. As a result, the overall accuracy of the network decreases faster. For MNMT, the BNN predictor and the oracle achieve similar reuse versus accuracy trade-off for up to 32% of computation reuse. Note that, for this network, the linear correlation between the BNN and the full precision neuron output is typically lower than for the other networks in the benchmark set. Figure 17 shows the energy savings and computation reuse achieved by our scheme, for different thresholds of accuracy loss. For a conservative loss of 2%, the average energy saving is 27.3%, whereas the reuse percentage is 33%. In this case, the networks DeepSpeech and MNMT have energy savings of 19.5% and 27.6%, respectively. In contrast, IMDB and EESEN are more tolerant of errors in the neuron output; thus, they exhibit the most considerable savings, 34.2% and 30%, respectively. For a highly conservative 1% of accuracy loss, the computation reuse and energy saving are 26.82% and 21% on average, respectively. EESEN and DeepSpeech achieve 25.3% and 14% energy savings, respectively, for a 1% accuracy loss. Regarding the MNMT and IMDB networks, the energy savings for 1% accuracy loss are 22.2% and 25%, respectively. Regarding the sources of energy savings, Figure 18 reports the energy breakdown, including static and dynamic energy, for the baseline accelerator and E-PUR+BM, for an accuracy loss of 1%. The sources of energy consumption are grouped into on-chip memories ("scratchpad" memories), pipeline components ("operations", i.e. multipliers), main memory (LPDDR4) and the energy consumed by our FMU component. Note that most of the energy consumption is due to the scratch-pad memories and the pipeline components, and, as it can be seen, both are reduced when using our memoization scheme. 
In E-PUR+BM, each time a value from the memoization buffer is reused, we avoid accessing all the neuron's weights and the input buffers, achieving significant energy savings. Besides, since the extra buffers used by E-PUR+BM are fairly small (i.e., 8 KB), the energy overhead due to the memoization scheme is negligible. The energy consumption due to the operations is also reduced, as the memoization scheme avoids neuron computations. Furthermore, the leakage of the scratch-pad memories and operations is also reduced, since the memoization scheme decreases the execution time. Finally, the energy consumption due to accessing the main memory is not affected by our technique, since both E-PUR and E-PUR+BM must access the main memory to load all the weights once for each input sequence. Figure 19 shows the performance improvements for the different RNNs. On average, a speedup of 1.4x is obtained for a 1% accuracy loss, whereas accuracy losses of 2% and 3% achieve improvements of 1.5x and 1.7x, respectively. The performance improvement comes from avoiding the dot product computations for the memoized neurons. Therefore, the larger the degree of computation reuse, the more significant the performance improvement. Note that the memoization scheme introduces an overhead of 5 cycles per neuron (see Table 2), which is mainly due to the evaluation of the binarized neuron. If the full-precision neuron computation is avoided, our scheme saves between 16 and 80 cycles depending on the RNN. Therefore, configurations with a low degree of computation reuse, like DeepSpeech at 1% accuracy loss, exhibit smaller speedups due to the memoization scheme's overhead. On the other hand, RNNs that show higher computation reuse, such as EESEN at 2% accuracy loss, achieve a speedup of 1.55x.
Figure 20 shows the accuracy and computation reuse for the oracle predictor and our memoization scheme using two different configurations. The configuration BNN refers to the evaluation of a trained model without memoization, whereas the configuration BNN+T includes our memoization scheme in the training phase, as explained in Section 3.2.1. As shown in Figure 20, for the DeepSpeech model, the computation reuse is 13.9% for an accuracy loss of 1%. Note that the percentage of reuse increases by around 4% compared to the implementation that does not include the memoization scheme during training. For the MNMT model, the reuse percentage also increases by 4% when adding our scheme to the training.
Figure 21 shows the area breakdown of E-PUR and E-PUR+BM. Regarding the area, E-PUR has an area of 64.6 mm², whereas E-PUR+BM requires 66.8 mm² (4% area overhead). As shown in Figure 21, the area for the on-chip memories that store the weights is 69% and 72% for E-PUR and E-PUR+BM, respectively. E-PUR+BM requires an extra 3% since the on-chip memories for the weights are split into two separate banks: one storing the BNN weights and the other storing the full-precision weights. The computations' area requirements are 2% and 3% for E-PUR and E-PUR+BM, respectively. The overhead due to computations comes from the extra logic added to implement the memoization unit.
RELATED WORK
Increasing the energy-efficiency and performance of LSTM networks has attracted the attention of the architectural community in recent years [18,23,17,22]. Most of these works employ pruning and compression techniques to improve performance and reduce energy consumption. Furthermore, linear quantization is employed to decrease the memory footprint. On the contrary, our technique improves energy-efficiency by relying solely on computation reuse at the neuron level. To the best of our knowledge, this is the first work using a BNN as a predictor for a fuzzy memoization scheme. BNNs have been used previously [11,27,21] as standalone networks, whereas we employ BNNs in conjunction with the LSTM network to evaluate neurons on demand.
Fuzzy memoization has been extensively researched in the past and has been implemented both in hardware and software. Hardware schemes to reuse instructions have been proposed in [31,3,14,8]. Alvarez et al. [4] presented a fuzzy memoization scheme to improve the performance of floating-point operations in multimedia applications. In their scheme, floating-point operations are memoized using a hash of the source operands, whereas in our technique a whole function (neuron inference) is memoized based on the values predicted by a BNN. Finally, software schemes to memoize entire functions have been presented in the past [35,2]. These schemes are tailored to general-purpose programs, whereas our scheme focuses solely on LSTM networks, since it exploits their intrinsic error tolerance.
CONCLUSIONS
This paper has shown that 25% of neurons in an LSTM network change their output value by less than 10%, which motivated us to propose a fuzzy memoization scheme to save energy and time. A significant challenge to perform neuron-level fuzzy memoization is to predict accurately, in a simple manner, whether the output of a given neuron will be similar to a previously computed and cached value. To this end, we propose to use a Binarized Neural Network (BNN) as a predictor, based on the observation that the full-precision output of a neuron is highly correlated with the output of the corresponding BNN. We show that a BNN predictor achieves 26.7% computation reuse on average, which is very similar to the results obtained with an Oracle predictor. Moreover, we have shown that including our technique during the training phase further improves the BNN predictor's accuracy by 4% or more. We have implemented our technique on top of E-PUR, a stateof-the-art accelerator for LSTM networks. Results show that our memoization scheme achieves significant time and energy savings with minimal impact on the accuracy of the RNNs. When compared with the E-PUR accelerator, our system achieves 21% energy savings on average, while providing 1.4x speedup at the expense of a minor accuracy loss.
Figure 1: Accuracy loss of different RNNs versus the relative output error threshold using an oracle predictor.
Figure 2: Structure of an LSTM cell. ⊙ denotes an element-wise multiplication of two vectors. φ denotes the hyperbolic tangent.
Figure 3: Structure of a GRU cell.
Figure 4: Computations of an LSTM cell. ⊙, φ, and σ denote element-wise multiplication, hyperbolic tangent and sigmoid function, respectively.
Figure 5: Relative change in neuron output between consecutive input elements.
Figure 7: Outputs of the binarized neurons (y-axis) versus outputs of the full-precision neurons (x-axis) in EESEN, an RNN for speech recognition. BNN and RNN outputs are highly correlated, showing a correlation coefficient of 0.96.
Figure 8: Correlation factor between the neuron output computed using full precision and the output computed with a BNN.
Figure 9: Illustration of how a binary neuron is created from a full-precision neuron in the RNN. Bin is the binarization function shown in Equation 7. Peepholes, bias and activation functions are omitted for simplicity.
Figure 13: Overview of the E-PUR architecture, which consists of 4 Computation Units (CU) and an on-chip memory (OM).
Figure 14: Structure of a Computation Unit (CU).
Figure 15: Structure of the Fuzzy Memoization Unit (FMU).
Figure 16: Percentage of computations that could be reused versus accuracy loss, using fuzzy neuron-level memoization with an Oracle and a Binary Network as predictors, for several LSTM networks.
Figure 17: Energy savings and computation reuse of E-PUR+BM with respect to the baseline.
Figure 18: Energy breakdown for E-PUR and E-PUR+BM. FMU energy is the overhead due to the memoization scheme.
Figure 19: Speedup of E-PUR+BM over the baseline (E-PUR).
Figure 20: Computation reuse achieved by our BNN-based memoization scheme. BNN+T and BNN refer to our scheme on a model trained with and without memoization, respectively.
Figure 21: Area breakdown for E-PUR and E-PUR+BM.
Table 1: RNN networks used for the experiments.

Network               App Domain                 Cell Type   Layers   Neurons   Base Accuracy   Reuse    Dataset
IMDB Sentiment [12]   Sentiment Classification   LSTM        1        128       86.5%           36.2%    IMDB dataset
DeepSpeech2 [5]       Speech Recognition         GRU         5        800       10.24 WER       16.4%    LibriSpeech
EESEN [24]            Speech Recognition         BiLSTM      10       320       23.8 WER        30.5%    Tedlium V1
MNMT [9]              Machine Translation        LSTM        8        1024      29.8 Bleu       19.0%    WMT'15 En → Ge
Table 2: Configuration parameters.

E-PUR
Parameter             Value
Technology            28 nm
Frequency             500 MHz
Intermediate Memory   6 MiB
Weight Buffer         2 MiB per CU
Input Buffer          8 KiB per CU
DPU Width             16 operations

Memoization Unit
BDPU Width            2048 bits
Latency               5 cycles
Integer Width         2 bytes
Memoization Buffer    8 KiB
Acknowledgments
This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant No 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00, and the ICREA Academia program.
Tensorflow: Large-scale machine learning on heterogeneous distributed systems. M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, G S Corrado, A Davis, J Dean, M Devin, S Ghemawat, I J Goodfellow, A Harp, G Irving, M Isard, Y Jia, R Józefowicz, L Kaiser, M Kudlur, J Levenberg, D Mané, R Monga, S Moore, D G Murray, C Olah, M Schuster, J Shlens, B Steiner, I Sutskever, K Talwar, P A Tucker, V Vanhoucke, V Vasudevan, F B Viégas, O Vinyals, P Warden, M Wattenberg, M Wicke, Y Yu, X Zheng, abs/1603.04467CoRR. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "Tensorflow: Large-scale machine learning on heterogeneous distributed systems," CoRR, vol. abs/1603.04467, 2016. [Online]. Available: http://arxiv.org/abs/1603.04467
Selective memoization. U A Acar, G E Blelloch, R Harper, http:/doi.acm.org/10.1145/640128.604133SIGPLAN Not. 381U. A. Acar, G. E. Blelloch, and R. Harper, "Selective memoization," SIGPLAN Not., vol. 38, no. 1, pp. 14-25, Jan. 2003. [Online]. Available: http://doi.acm.org/10.1145/640128.604133
On the potential of tolerant region reuse for multimedia applications. C. Álvarez, J. Corbal, E. Salamí, and M. Valero, "On the potential of tolerant region reuse for multimedia applications," ser. ICS '01, 2001, pp. 218-228. [Online]. Available: http://doi.acm.org/10.1145/377792.377835
Fuzzy memoization for floating-point multimedia applications. C. Alvarez, J. Corbal, and M. Valero, "Fuzzy memoization for floating-point multimedia applications," IEEE Trans. Comput., vol. 54, no. 7, pp. 922-927, Jul. 2005. [Online]. Available: http://dx.doi.org/10.1109/TC.2005.119
Deep speech 2: End-to-end speech recognition in english and mandarin. D Amodei, R Anubhai, E Battenberg, C Case, J Casper, B Catanzaro, J Chen, M Chrzanowski, A Coates, G Diamos, E Elsen, J Engel, L Fan, C Fougner, T Han, A Y Hannun, B Jun, P Legresley, L Lin, S Narang, A Y Ng, S Ozair, R Prenger, J Raiman, S Satheesh, D Seetapun, S Sengupta, Y Wang, Z Wang, C Wang, B Xiao, D Yogatama, J Zhan, Z Zhu, abs/1512.02595CoRR. D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, E. Elsen, J. Engel, L. Fan, C. Fougner, T. Han, A. Y. Hannun, B. Jun, P. LeGresley, L. Lin, S. Narang, A. Y. Ng, S. Ozair, R. Prenger, J. Raiman, S. Satheesh, D. Seetapun, S. Sengupta, Y. Wang, Z. Wang, C. Wang, B. Xiao, D. Yogatama, J. Zhan, and Z. Zhu, "Deep speech 2: End-to-end speech recognition in english and mandarin," CoRR, vol. abs/1512.02595, 2015. [Online]. Available: http://arxiv.org/abs/1512.02595
The high-dimensional geometry of binary neural networks. A G Anderson, C P Berg, abs/1705.07199CoRR. A. G. Anderson and C. P. Berg, "The high-dimensional geometry of binary neural networks," CoRR, vol. abs/1705.07199, 2017. [Online]. Available: http://arxiv.org/abs/1705.07199
Optimizing performance of recurrent neural networks on gpus. J Appleyard, T Kocisky, P Blunsom, arXiv:1604.01946arXiv preprintJ. Appleyard, T. Kocisky, and P. Blunsom, "Optimizing performance of recurrent neural networks on gpus," arXiv preprint arXiv:1604.01946, 2016.
Eliminating redundant fragment shader executions on a mobile gpu via hardware memoization. J.-M Arnau, J.-M Parcerisa, P Xekalakis, 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA). J.-M. Arnau, J.-M. Parcerisa, and P. Xekalakis, "Eliminating redundant fragment shader executions on a mobile gpu via hardware memoization," 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA), pp. 529-540, 2014.
Massive exploration of neural machine translation architectures. D Britz, A Goldie, M Luong, Q V Le, CoRR. D. Britz, A. Goldie, M. Luong, and Q. V. Le, "Massive exploration of neural machine translation architectures," CoRR, vol. abs/1703.03906, 2017. [Online]. Available: http://arxiv.org/abs/1703.03906
Learning phrase representations using RNN encoder-decoder for statistical machine translation. K Cho, B Van Merrienboer, Ç Gülçehre, F Bougares, H Schwenk, Y Bengio, abs/1406.1078CoRR. K. Cho, B. van Merrienboer,Ç. Gülçehre, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," CoRR, vol. abs/1406.1078, 2014. [Online]. Available: http://arxiv.org/abs/1406.1078
M Courbariaux, I Hubara, D Soudry, R El-Yaniv, Y Bengio, arXiv:1602.02830Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprintM. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1," arXiv preprint arXiv:1602.02830, 2016.
Semi-supervised sequence learning. A M Dai, Q V Le, abs/1511.01432CoRR. A. M. Dai and Q. V. Le, "Semi-supervised sequence learning," CoRR, vol. abs/1511.01432, 2015. [Online].
Recurrent nets that time and count. F A Gers, J Schmidhuber, Proceedings of the IEEE-INNS-ENNS International Joint Conference on. the IEEE-INNS-ENNS International Joint Conference onIEEE3Neural NetworksF. A. Gers and J. Schmidhuber, "Recurrent nets that time and count," in Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, vol. 3. IEEE, 2000, pp. 189-194.
Trace-Level Reuse. A González, J Tubella, C Molina-Clemente, ICPP. 30A. González, J. Tubella, and C. Molina-Clemente, "Trace-Level Reuse," in ICPP, 1999, pp. 30-.
Speech recognition with deep recurrent neural networks. A Graves, A R Mohamed, G Hinton, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. A. Graves, A. r. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 6645-6649.
Lstm: A search space odyssey. K Greff, R K Srivastava, J Koutník, B R Steunebrink, J Schmidhuber, IEEE transactions on neural networks and learning systems. K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "Lstm: A search space odyssey," IEEE transactions on neural networks and learning systems, 2016.
Fpga-based accelerator for long short-term memory recurrent neural networks. Y Guan, Z Yuan, G Sun, J Cong, Design Automation Conference (ASP-DAC. IEEEY. Guan, Z. Yuan, G. Sun, and J. Cong, "Fpga-based accelerator for long short-term memory recurrent neural networks," in Design Automation Conference (ASP-DAC), 2017 22nd Asia and South Pacific. IEEE, 2017, pp. 629-634.
Ese: Efficient speech recognition engine with sparse lstm on fpga. S Han, J Kang, H Mao, Y Hu, X Li, Y Li, D Xie, H Luo, S Yao, Y Wang, H Yang, W B J Dally, http:/doi.acm.org/10.1145/3020078.3021745Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA '17. the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA '17New York, NY, USAACMS. Han, J. Kang, H. Mao, Y. Hu, X. Li, Y. Li, D. Xie, H. Luo, S. Yao, Y. Wang, H. Yang, and W. B. J. Dally, "Ese: Efficient speech recognition engine with sparse lstm on fpga," in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA '17. New York, NY, USA: ACM, 2017, pp. 75-84. [Online]. Available: http://doi.acm.org/10.1145/3020078.3021745
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997.
In-datacenter performance analysis of a tensor processing unit. N P Jouppi, C Young, N Patil, D Patterson, G Agrawal, R Bajwa, S Bates, S Bhatia, N Boden, A Borchers, R Boyle, P Cantin, C Chao, C Clark, J Coriell, M Daley, M Dau, J Dean, B Gelb, T V Ghaemmaghami, R Gottipati, W Gulland, R Hagmann, C R Ho, D Hogberg, J Hu, R Hundt, D Hurt, J Ibarz, A Jaffey, A Jaworski, A Kaplan, H Khaitan, D Killebrew, A Koch, N Kumar, S Lacy, J Laudon, J Law, D Le, C Leary, Z Liu, K Lucke, A Lundin, G Mackean, A Maggiore, M Mahony, K Miller, R Nagarajan, R Narayanaswami, R Ni, K Nix, T Norrie, M Omernick, N Penukonda, A Phelps, J Ross, M Ross, A Salek, E Samadiani, C Severn, G Sizikov, M Snelham, J Souter, D Steinberg, A Swing, M Tan, G Thorson, B Tian, H Toma, E Tuttle, V Vasudevan, R Walter, W Wang, E Wilcox, D H Yoon, http:/doi.acm.org/10.1145/3079856.3080246Proceedings of the 44th Annual International Symposium on Computer Architecture, ser. ISCA '17. the 44th Annual International Symposium on Computer Architecture, ser. ISCA '17New York, NY, USAACMN. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, "In-datacenter performance analysis of a tensor processing unit," in Proceedings of the 44th Annual International Symposium on Computer Architecture, ser. ISCA '17. New York, NY, USA: ACM, 2017, pp. 1-12. [Online]. Available: http://doi.acm.org/10.1145/3079856.3080246
Bitwise neural networks. M Kim, P Smaragdis, arXiv:1601.06071arXiv preprintM. Kim and P. Smaragdis, "Bitwise neural networks," arXiv preprint arXiv:1601.06071, 2016.
Fpga-based low-power speech recognition with recurrent neural networks. M Lee, K Hwang, J Park, S Choi, S Shin, W Sung, Signal Processing Systems (SiPS). IEEEM. Lee, K. Hwang, J. Park, S. Choi, S. Shin, and W. Sung, "Fpga-based low-power speech recognition with recurrent neural networks," in Signal Processing Systems (SiPS), 2016 IEEE International Workshop on. IEEE, 2016, pp. 230-235.
Fpga acceleration of recurrent neural network based language model. S Li, C Wu, H Li, B Li, Y Wang, Q Qiu, Field-Programmable Custom Computing Machines (FCCM). IEEEIEEE 23rd Annual International Symposium onS. Li, C. Wu, H. Li, B. Li, Y. Wang, and Q. Qiu, "Fpga acceleration of recurrent neural network based language model," in Field-Programmable Custom Computing Machines (FCCM), 2015 IEEE 23rd Annual International Symposium on. IEEE, 2015, pp. 111-118.
Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. Y Miao, M Gowayyed, F Metze, Automatic Speech Recognition and Understanding (ASRU). Y. Miao, M. Gowayyed, and F. Metze, "Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding," in Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on. IEEE, 2015, pp. 167-174.
TN-53-01: LPDDR4 System Power Calculator. Micron Inc, Micron Inc., "TN-53-01: LPDDR4 System Power Calculator," https://www.micron.com/support/tools-and- utilities/power-calc.
Cacti 6.0: A tool to model large caches. N Muralimanohar, R Balasubramonian, N P Jouppi, HP Laboratories. N. Muralimanohar, R. Balasubramonian, and N. P. Jouppi, "Cacti 6.0: A tool to model large caches," HP Laboratories, pp. 22-31, 2009.
Xnor-net: Imagenet classification using binary convolutional neural networks. M Rastegari, V Ordonez, J Redmon, A Farhadi, European Conference on Computer Vision. SpringerM. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "Xnor-net: Imagenet classification using binary convolutional neural networks," in European Conference on Computer Vision. Springer, 2016, pp. 525-542.
Computation reuse in dnns by exploiting input similarity. M Riera, J.-M Arnau, A González, Proceedings of the 45th Annual International Symposium on Computer Architecture. the 45th Annual International Symposium on Computer ArchitectureIEEE PressM. Riera, J.-M. Arnau, and A. González, "Computation reuse in dnns by exploiting input similarity," in Proceedings of the 45th Annual International Symposium on Computer Architecture. IEEE Press, 2018, pp. 57-68.
Bidirectional recurrent neural networks. M Schuster, K K Paliwal, IEEE Transactions on Signal Processing. 4511M. Schuster and K. K. Paliwal, "Bidirectional recurrent neural networks," IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673-2681, 1997.
E-pur: An energy-efficient processing unit for recurrent neural networks. F Silfa, G Dot, J.-M Arnau, A Gonzàlez, http:/doi.acm.org/10.1145/3243176.32431841-18:12Proceedings of the 27th International Conference on Parallel Architectures and Compilation Techniques, ser. PACT '18. the 27th International Conference on Parallel Architectures and Compilation Techniques, ser. PACT '18New York, NY, USAACM18F. Silfa, G. Dot, J.-M. Arnau, and A. Gonzàlez, "E-pur: An energy-efficient processing unit for recurrent neural networks," in Proceedings of the 27th International Conference on Parallel Architectures and Compilation Techniques, ser. PACT '18. New York, NY, USA: ACM, 2018, pp. 18:1-18:12. [Online]. Available: http://doi.acm.org/10.1145/3243176.3243184
Dynamic instruction reuse. A Sodani, G S Sohi, http:/doi.acm.org/10.1145/264107.264200ser. ISCA '97. A. Sodani and G. S. Sohi, "Dynamic instruction reuse," ser. ISCA '97, 1997, pp. 194-205. [Online]. Available: http://doi.acm.org/10.1145/264107.264200
Show and tell: A neural image caption generator. O Vinyals, A Toshev, S Bengio, D Erhan, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: A neural image caption generator," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge. O Vinyals, A Toshev, S Bengio, D Erhan, abs/1609.06647CoRR. O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge," CoRR, vol. abs/1609.06647, 2016. [Online]. Available: http://arxiv.org/abs/1609.06647
Google's neural machine translation system: Bridging the gap between human and machine translation. Y Wu, M Schuster, Z Chen, Q V Le, M Norouzi, W Macherey, M Krikun, Y Cao, Q Gao, K Macherey, arXiv:1609.08144arXiv preprintY. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey et al., "Google's neural machine translation system: Bridging the gap between human and machine translation," arXiv preprint arXiv:1609.08144, 2016.
Dynamic purity analysis for java programs. H Xu, C J F Pickett, C Verbrugge, http:/doi.acm.org/10.1145/1251535.1251548ser. PASTE '07H. Xu, C. J. F. Pickett, and C. Verbrugge, "Dynamic purity analysis for java programs," ser. PASTE '07, 2007, pp. 75-82. [Online]. Available: http://doi.acm.org/10.1145/1251535.1251548
Approxann: An approximate computing framework for artificial neural network. Q. Zhang, T. Wang, Y. Tian, F. Yuan, and Q. Xu, in Proceedings of the 2015 Design, Automation & Test in Europe Conference & Exhibition, ser. DATE '15. San Jose, CA, USA: EDA Consortium, 2015, pp. 701-706. [Online]. Available: http://dl.acm.org/citation.cfm?id=2755753.2755913
| []
|
[
"StonkBERT: Can Language Models Predict Medium-Run Stock Price Movements? 1",
"StonkBERT: Can Language Models Predict Medium-Run Stock Price Movements? 1"
]
| [
"Stefan Pasch ",
"Daniel Ehnes "
]
| []
| []
| To answer this question, we fine-tune transformer-based language models, including BERT, on different sources of company-related text data for a classification task to predict the one-year stock price performance. We use three different types of text data: News articles, blogs, and annual reports. This allows us to analyze to what extent the performance of language models is dependent on the type of the underlying document. StonkBERT, our transformer-based stock performance classifier, shows substantial improvement in predictive accuracy compared to traditional language models. The highest performance was achieved with news articles as text source. Performance simulations indicate that these improvements in classification accuracy also translate into above-average stock market returns. | null | [
"https://arxiv.org/pdf/2202.02268v1.pdf"
]
| 246,608,240 | 2202.02268 | f21e7b4dff8c62938ab73387f3775d4d7ac2f55b |
StonkBERT: Can Language Models Predict Medium-Run Stock Price Movements? 1
Stefan Pasch
Daniel Ehnes
StonkBERT: Can Language Models Predict Medium-Run Stock Price Movements? 1
1stock marketnatural language processing (NLP)transformersfinanceBERTdeep learningfinancial news
To answer this question, we fine-tune transformer-based language models, including BERT, on different sources of company-related text data for a classification task to predict the one-year stock price performance. We use three different types of text data: News articles, blogs, and annual reports. This allows us to analyze to what extent the performance of language models is dependent on the type of the underlying document. StonkBERT, our transformer-based stock performance classifier, shows substantial improvement in predictive accuracy compared to traditional language models. The highest performance was achieved with news articles as text source. Performance simulations indicate that these improvements in classification accuracy also translate into above-average stock market returns.
Introduction
In recent years, transformer-based language models have received strong interest in both academic circles and the industry. The development of these language models has accelerated since the publication of BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al. 2018), which set new high scores in various NLP tasks and applications, such as question-answering, fact-checking, or hate-speech detection.
Unsurprisingly, transformer models have influenced the literature and work that links textual information, such as corporate communication or tweets, with stock price performance.
While initial attempts to use NLP methods for this purpose have mostly relied on bag-of-word type models (Loughran and McDonald 2011;Loughran and McDonald 2016;Jiang et al. 2019), transformer-based approaches allow to capture more complex aspects of textual information.
By integrating contextual information into the evaluation of words, initial evidence suggests that transformer models indeed achieve higher accuracy in predicting stock price movements compared to previous approaches that are, for example, based on naive bayes or dictionary methods (Chen 2021;Sivri et al. 2022).
Most of this work has focused on the stock movements in the immediate aftermath of the publication of the corresponding documents, i.e., 5 to 30 days after a publication event. Yet, many investors do not engage in day-trading and are interested in stock performance over a longer time span. A natural question that arises is whether predictions would also work on longer time horizons. Moreover, most language models that try to predict stock prices analyze only a single source of text data, e.g., one type of corporate communication. However, some types of company-related text data may be more informative than others. We aim to fill these research gaps in the following way:
First, in this paper, we investigate whether language models are able to predict the stock performance for longer time frames. To do this, we analyze the stock price movements on a one-year time frame after the publication of related text data. Second, we compare three different sources of text data and evaluate to what extent the performance of language models is dependent on the type of underlying documents. Specifically, we use news articles, blog posts and annual reports.
To address these questions, we use a financial news dataset (Gennadiy 2020), containing news articles and opinion pieces, as well as official Form-10K annual report filings for 250 firms between 2012 and 2019. Additionally, we gathered information on the stock performance of these companies. We link these data sources to fine-tune transformer models that classify the one-year stock performance based on the corresponding text data.
Analyzing the performance of these language models for our sample data reveals that our language models are able to classify the one-year performance with an accuracy of up to 10 percentage points above the expected accuracy of a random stock movement classifier.
Additionally, we find that BERT-based language models outperform traditional language classifiers in all our specifications. Further, the analysis confirms that the performance of these models is highly dependent on the underlying text data, showing a clear ranking, with news data leading to highest, blog articles to the second highest, and company reports leading to the lowest performance. This also provides interesting economic insights, as our results suggests that news articles contain information that are the most "valuable" to an AI. Potentially, blog articles, in their speculative nature, only add noise compared to news articles, whereas the informational content of company reports may be too sparse.
In our supplementary analysis, we investigate to what extent the performance of the newsbased model translates into stock return, showing that for our sample and observation period, the recommended picks indeed perform well, compared to the average performance of the entire sample.
Related Work
Finance, accounting, and economics scholars have long been interested in the interaction of textual information and stock price movements (Cutler et al. 1989;Tetlock 2007;Groß-Klußmann and Hautsch 2011). Most of these studies link news coverage or corporate communications to stock price movements, initially using rather unsophisticated but nonetheless effective bag-of-words methods, such as sentiment analysis based on dictionaries, to investigate these relationships (Loughran and McDonald 2011). Particularly official corporate disclosures have been scrutinized, with studies finding relationships between the readability of documents and stock returns. Additionally, the tone of the documents and the information provided to investors has an effect on stock market returns (Loughran and McDonald 2016). News articles and their relation to stock return have also received a lot of attention: Researchers found evidence suggesting that rising negative sentiment in a firm's news coverage lowers a firm's returns (Ahmad et al. 2016). Media coverage effects may, however, not be persistent, as increased coverage and visibility, for example, can also generate momentum returns that tend to wane in the long run (Hillert et al. 2014). Similarly, social media sentiment has also been linked to stock market returns (Duz Tan and Tas 2021;Sprenger et al. 2014).
In recent years, deep learning based language models made great strides in better understanding textual information, specifically taking the surrounding context into consideration when building their word representations (Chan et al. 2020). Starting with LSTMbased models, such as ELMO (Peters et al. 2018), and with the introduction of transformerbased models, particularly BERT in 2018 (Devlin et al. 2018), deep learning models have strongly improved the ability of NLP models to answer questions, inference textual meaning or summarize text. Moreover, specifically for text classification, BERT models achieve higher accuracy compared to traditional NLP models, such as TF-IDF based models, in various text classification applications (Gonzalez-Carvajal and Garrido-Merchan 2020).
In the domain of finance, transformer-based approaches also outperform traditional NLP approaches, such as dictionary methods, in various tasks, including sentiment analysis, sentence boundary detection, and question-answering for finance-related texts. Moreover, further performance increases could be achieved by pre-training finance-specific language models like FinBERT (Liu et al. 2020).
Not surprisingly, researchers have integrated these deep learning approaches to predict stock price movements by applying such language models on company-related text data, such as tweets or company reports. For example, Sawhney et al. (2020) combine the word encodings in twitter texts with graph neural networks to predict whether stocks decrease or increase within a 5-day lag window. Similarly, Sonkiya et al. (2021) use BERT and GAN to predict stock prices based on news articles in a 5 to 30 days window. However, to the best of our knowledge, there has been no investigation on the ability to predict medium-run stock movements utilizing this technology.
Data
Company-Related Articles
One of our principal goals is to test what type of textual information is most useful to predict medium-run stock price development. Accordingly, we scrutinize three types of company-related text data: First, we utilized a dataset of historical financial news coverage (Gennadiy 2020), containing news articles (which we refer to as "news") and stock market opinion pieces (which we refer to as "blogs"). Second, we use annual reports as filed with the Securities and Exchange Commission (SEC), the so-called Form-10K filings.
First, since the amount of coverage an individual equity receives is highly skewed, for example towards large firms and stocks that have larger trading volumes, we truncated the number of equities to the 250 corporations that received the most coverage in the dataset. This ensures that every equity has at least 170 individual items associated with it. Additionally, the temporal distribution of entries is also highly skewed toward the later years. Therefore, we truncated the dataset to only include 2012 forward in our analysis, as years before then have partially less than 1000 news items in total.
For the annual reports we relied on the data provided by Bill McDonald and Tim Loughran at the University of Notre Dame. We relied on all available annual reports from the same companies that we selected from the news and blog dataset. In contrast to the news and blog data, annual reports are only published once a year, and are overall much more extensive.
To deal with this, we split the reports into individual paragraphs.
Sampling
To prepare the data further, we split the data into a train, development, and test set. The train set covers the years 2012-2017 and comprises 225 equities. The dev set covers 10% of the companies (25) and also comprises the years 2012-2017. Our test set covers the same 225 firms from the train set, but in the entire year of 2019. 4 Essentially, there is a one-year gap between the train and the test set. This is necessary because we want to predict the average stock return after one year. Therefore, if we include data from 2018 there would be a large overlap in the stock-price developments of the observations in the training set and the test set, which could lead to spurious results. Accordingly, we did not use the news and blog data from 2018 in our main specification. 5
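A minimal pandas sketch of this temporal split; the column names ('ticker', 'year', 'text') and the explicit set of development-set firms are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

def split_by_period(articles: pd.DataFrame, dev_tickers: set) -> dict:
    """Split articles into train/dev (2012-2017) and test (2019) sets.

    Assumes a DataFrame with at least the columns 'ticker', 'year' and 'text'.
    The year 2018 is deliberately left out so that the one-year return windows
    of the training and test observations do not overlap.
    """
    in_window = articles["year"].between(2012, 2017)
    train = articles[in_window & ~articles["ticker"].isin(dev_tickers)]
    dev = articles[in_window & articles["ticker"].isin(dev_tickers)]
    test = articles[(articles["year"] == 2019) & ~articles["ticker"].isin(dev_tickers)]
    return {"train": train, "dev": dev, "test": test}
```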
Stock Price Movements
For these companies we also gathered daily stock price data from Yahoo Finance. To abstract from general market movements, we look at the abnormal price increases or decreases for each stock, that is, a stock's price change compared to the average price change of the market. These abnormal one-year return data were then split into tertiles, dividing stocks into three equally large groups of over-, under-, and average-performers. Hence, from a random classifier we would expect an accuracy of 33%.
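The labeling step can be sketched as follows; the input layout, the 252-trading-day horizon and the use of pd.qcut for the tertile split are our own illustrative choices rather than the paper's exact procedure.

```python
import pandas as pd

def label_abnormal_returns(prices: pd.DataFrame) -> pd.DataFrame:
    """Assign tertile labels (bad / average / good) to one-year abnormal returns.

    Expects one row per (ticker, date) with a 'close' price and a market index
    column 'market_close'; both column names are assumptions about the layout.
    """
    df = prices.sort_values(["ticker", "date"]).copy()
    horizon = 252  # roughly one trading year
    # Forward one-year return per stock and for the market index.
    df["fwd_stock"] = df.groupby("ticker")["close"].transform(
        lambda s: s.shift(-horizon) / s - 1.0)
    df["fwd_market"] = df.groupby("ticker")["market_close"].transform(
        lambda s: s.shift(-horizon) / s - 1.0)
    df["abnormal_ret"] = df["fwd_stock"] - df["fwd_market"]
    # Tertile split: equally large groups of under-, average- and over-performers.
    df["label"] = pd.qcut(df["abnormal_ret"], q=3, labels=["bad", "average", "good"])
    return df
```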
Training
Transformer-Based Models
We fine-tuned various transformer-based language models to classify the above-mentioned texts based on the expected stock performance of the corresponding company. We will mainly present results for fine-tuning BERT-Base (Devlin et al. 2018). For all models, we use the following fine-tuning specifications:
Traditional Text Classifiers
A classical way to conduct supervised NLP tasks is to consider the input text as a bag-of-words and analyze the frequency of word occurrences (Gonzalez-Carvajal and Garrido-Merchan 2020). A common approach to do so is by forming Term Frequency - Inverse Document Frequency (TF-IDF) matrices that measure how often a word occurs in a text relative to the inverse number of occurrences in the entire document corpus. Hence, we vectorize the text inputs using TfidfVectorizer from sklearn, similar to Gonzalez-Carvajal and Garrido-Merchan (2020). Based on these vectorized text inputs, in a second step, traditional machine learning algorithms can be applied to predict the class of the corresponding text inputs. In particular, we will use Logistic Regressions and XGBoost as benchmarks.
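A compact sketch of these two baselines; since the paper does not report the exact settings, library defaults are used and the labels are assumed to be encoded as integers 0/1/2.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

def build_baselines():
    """TF-IDF features combined with the two traditional classifiers we compare."""
    logreg = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    xgb = make_pipeline(TfidfVectorizer(), XGBClassifier())
    return {"logistic_regression": logreg, "xgboost": xgb}

# Usage (texts: list of article strings, labels: integers 0/1/2 for bad/average/good):
# models = build_baselines()
# models["logistic_regression"].fit(texts, labels)
# preds = models["logistic_regression"].predict(test_texts)
```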
Results
Classifier Results
More importantly, we find that the text source we use has a sizable influence on the achieved accuracy. The lowest performance is achieved using annual reports, indicating that the information therein is largely already priced in or at least of limited long-term value to investors. In contrast, models trained on blogs and news articles performed better. Comparing the accuracy between news and blogs, we find that the informational content in news is most valuable. Our best model achieves an accuracy of 43%, i.e., 10 percentage points over a random draw. Note that this is roughly in line with the work that focuses on short-term stock price
Performance Analysis
Though the language models were trained on a simple classification task with three categories, we also analyze in how far the model's predictions translate into stock market returns. This means, we analyze the average abnormal one-year return in our news article test sample based on the predictions from StonkBERT. First, we calculate the performance of the three prediction groups (good, medium, bad) in our test period, where performance is measured as the average abnormal one-year performance in a rolling window. 6 Table 3 reports the results. In our test set, the included companies showed an average performance of 6.29%. The firms that had been predicted as "good" by StonkBERT, however, showed an average performance of 16.83% in the one-year period after the predictions were made. The firms that were predicted as "average" just showed a performance of 4.72% and the "bad" predictions of -3.17%, respectively.
Correspondingly, the classification into the three performance groups was actually associated with substantial differences in the one-year stock returns. Moreover, we looked at the Top-10 predictions, which include the 10 firms where StonkBERT predicted the highest probability of being a "good" firm. Those firms in fact outperformed the market by an even higher margin (an average abnormal performance of 41.02%, see Table 3). For the corresponding Flop-10 predictions, however, we do not find that they further underperformed compared to all firms that were predicted as "bad".
We also conducted performance simulations for articles published in year 2018 and also find performance differences between the predicted categories, albeit the outperformance of the Top-10 compared to the market appeared to be more modest. only. Interestingly, we find that the firms predicted as "good" outperformed the other categories in both time periods before and after the Covid related stock market crash in March 2020.
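The simulation behind Table 3 can be sketched as a simple group-by over the predicted classes; all column names below are illustrative assumptions.

```python
import pandas as pd

def simulate_performance(preds: pd.DataFrame) -> pd.Series:
    """Average one-year abnormal return per predicted class, plus Top-10/Flop-10.

    Expects columns 'ticker', 'pred_class' (bad/average/good), 'prob_good'
    and 'abnormal_ret' (realized one-year abnormal return after publication).
    """
    by_class = preds.groupby("pred_class")["abnormal_ret"].mean()
    per_firm = preds.groupby("ticker").agg(prob_good=("prob_good", "mean"),
                                           abnormal_ret=("abnormal_ret", "mean"))
    top10 = per_firm.nlargest(10, "prob_good")["abnormal_ret"].mean()
    flop10 = per_firm.nsmallest(10, "prob_good")["abnormal_ret"].mean()
    return pd.concat([by_class, pd.Series({"top10": top10, "flop10": flop10})])
```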
FIGURE 1 Performance Simulations over Time
Discussion
We set out to test whether transformer-based language models can learn valuable information from text data to predict the stock performance of affiliated firms over a one-year time-period.
Our results provide two interesting findings: First, the predictive capability of the model heavily depends on the informational value of the underlying text data, where in all specifications and models the text data from the news sample outperformed both the blog as well as the annual reports sample. Further, we found that state-of-the-art transformer-models indeed outperform traditional NLP approaches to predict stock returns. While our results are encouraging, there are several important limitations that need to be kept in mind.
Economic Explanation
Among finance scholars the dominant theory is the efficient market hypothesis (Fama 1970), which suggests that stock prices reflect all available information of the market, making it impossible to systematically outperform the market. Correspondingly, our results should be taken with a grain of salt and there exist various explanations why NLP models including StonkBERT may not outperform the market in the long run.
First, one explanation for the successful prediction may be the result of our specific time period, which includes very strong outperformance of tech-based stocks in general that were further amplified after the Covid crisis emerged. Correspondingly, our model may have learned that tech-based stocks outperformed their peers during the training period and inferred this trend to continue in the future. However, a purely industry-based effect should have been detected by traditional models as well. Potentially, transformer-based models may be able to pick up more fine-grained information, for example specific technological trends within industries (e.g. cloud computing, or machine learning). Another reason why we suspect that the results are not entirely driven by industry based effects is that the annual report based text data was unable to pick up on such industry based effects, despite convincing and thorough evidence that their contents are an excellent predictor of a firm's industry (Hoberg and Phillips 2016).
Similarly, our models may mainly capture a momentum-effect that has been widely studied in the finance literature (Jegadeesh and Titman 2001). Moreover, as our models learn from historical success factors it could also be susceptible to run into stock bubbles. However, it should be noted that the StonkBERT model not just predicted outperformers, which potentially reflect a bubble, but was also able to detect underperformers.
Differences Between Different Text Data
We find the highest accuracy for language models working with news data, the second highest with blog articles, and the worst accuracy for annual reports as text source. A potential explanation for the comparably weak results for our annual report models could be that the information density in annual reports are too sparse, because, for example, they contain various standard phrases and largely contain information on past events that could be already priced in by the market (Yuan et al. 2021). Another concern is the limited frequency, which means that only new information with a close temporal proximity to the report can be learned by the model.
Further, seasonal fluctuations are also not included, as the annual reports are generally released around the same time each year. An interesting avenue of future research could therefore be to test other more frequent corporate communication such as earnings reports, quarterly reports or form-8K filings, which could disentangle the frequency concerns from the considerations around limited informational content in such documents.
For the performance difference between blogs and news, a potential explanation in our estimation is the increased noise that is created through speculative attempts by bloggers to beat the market. Additionally, since these articles usually are framed as opinion pieces, their average news content should be lower compared to news articles.
Technical Considerations
For most language tasks, larger transformer models tend to outperform smaller models, e.g.
BERT-Large outperforms BERT-Base in the seminal BERT paper (Devlin et al. 2018). We, however, find that no model, including various "large" models could outperform a BERT-Base model. Moreover, we achieved the best results with just one epoch of fine-tuning the language model for the stock performance classification task. A potential explanation could be that the classification task at hand is susceptible to overfitting.
Another open question is the length of the training period and test period. In our approach, we used a simple heuristic based on the amount of data available. For example, including news data from the 1980s is unlikely to improve the predictive performance of the model for today, it may even introduce noise or outdated information that decrease the models' performance. On the other hand, using a longer training period could prevent an overfitting on short term trends that do not reflect fundamental values. Similarly, for the test period, it is so far unclear how far the performance differences between predicted groups persist after the one-year period.
Together, these sources represent examples of three very important sources of text-based information for investors. Annual reports reflect the corporate communications with their stakeholders directly (other examples would be Earnings Calls or Form-8K Filings), news coverage represents how financial news organizations cover changes in the companies' operations and prospects, while blogs often incorporate professional and semi-professional analysis of stock price movements. The historical financial news dataset covers 95,578 news articles and 125,935 blogs related to U.S. publicly traded equities covering 800 different corporations (all listed at the NASDAQ or NYSE). The timeframe of the dataset itself starts in October 2008 and continues through February 2020. We made certain restrictions and did not include the entire dataset, as described in the Data section above.
BERT-Base gave us the highest accuracy, but we also show results for FinBERT (Araci 2019), RoBERTa-Large (Liu et al. 2019), BERT-Large (Devlin et al. 2018), and Electra-Large (Clark et al. 2020) in Appendix A1.
movements. For instance, Babbe et al. (2019) outperform a simple one-class classifier by 5 percentage points and Sawhney et al. (2020) outperform a random classifier by 10 percentage points. Training language models on multiple text sources combined did not result in higher accuracy (not shown).
TABLE 1 Hyperparameters (Transformer-Based Models)
Note that in Appendix A2 we also test BERT-Base with a higher number of epochs. If anything, the performance tends to decrease with increasing epochs.

Hyperparameter        Value
Learning Rate         1e-5
Batch Size            16
Max Sequence Length   200
Number of Epochs      1
Dropout Rate          0.1
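A minimal Hugging Face fine-tuning sketch using the hyperparameters from Table 1; the model name, dataset wiring and Trainer call are illustrative assumptions rather than the exact StonkBERT training script (BERT-Base's default hidden dropout already equals the 0.1 listed above).

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # BERT-Base; could be swapped for FinBERT, RoBERTa-Large, etc.

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def encode(batch):
    # Max sequence length of 200 tokens as in Table 1.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=200)

args = TrainingArguments(
    output_dir="stonkbert",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

# train_dataset / eval_dataset are assumed to be tokenized datasets with a 'labels' column:
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```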
TABLE 2 Classification Results

Model                 News                   Blogs                  Company Reports
Random                Acc.: 0.33, F1: 0.33   Acc.: 0.33, F1: 0.33   Acc.: 0.33, F1: 0.33
Logistic Regression   Acc.: 0.38, F1: 0.36   Acc.: 0.36, F1: 0.35   Acc.: 0.33, F1: 0.33
XGBoost               Acc.: 0.33, F1: 0.26   Acc.: 0.33, F1: 0.27   Acc.: 0.24, F1: 0.17
StonkBERT             Acc.: 0.43, F1: 0.43   Acc.: 0.39, F1: 0.39   Acc.: 0.36, F1: 0.37
TABLE 3 Performance Simulations

Grouping              Year 2019 (Test Set)   Year 2018 (Robustness Check)
Whole Sample          6.29%                  5.24%
Prediction: Good      16.83%                 12.33%
Prediction: Average   4.72%                  6.40%
Prediction: Bad       -3.17%                 -5.49%
Prediction: Top-10    41.02%                 8.65%
Prediction: Flop-10   -1.85%                 -0.75%
The following figures show the average stock price development for the different groups
comprising the entire two-year period, with analyzed articles covering the entire year of 2019
TABLE A1 Comparing Transformer Models (News Articles)
Please note that since the data for annual reports stops after 2018 we conduct the same splitting rule as for the news-& blog articles with one year lag. This means, the test period includes reports from 2018.
In a robustness check, we used the 2018 data as our test set achieving comparable results.
We calculated the one-year return for every trading day, over the entire year and then calculated the average over all days.
Appendix A2
Ahmad, Khurshid; Han, JingGuang; Hutson, Elaine; Kearney, Colm; Liu, Sha (2016): Media-expressed negative tone and firm-level stock returns. In Journal of Corporate Finance 37, pp. 152-172. DOI: 10.1016/j.jcorpfin.2015.12.014.
FinBERT: Financial Sentiment Analysis with Pre-trained Language Models. Dogu Araci, Araci, Dogu (2019): FinBERT: Financial Sentiment Analysis with Pre-trained Language Models. Available online at https://arxiv.org/pdf/1908.10063.
BERT is the Word: Predicting Stock Prices with Language Models. Mark ; Babbe, Cory ; Nguyen, Lee, ; Won, Hanny Noueilaty, In medium.com, 3/16/2019. Available online atBabbe, Mark; Nguyen, Cory; Lee, Won; Noueilaty, Hanny (2019): BERT is the Word: Predicting Stock Prices with Language Models. In medium.com, 3/16/2019. Available online at https://babbemark.medium.com/bert-is-the-word-predicting-stock-prices-with-language- models-8d5205b8537c.
German's Next Language Model. Branden ; Chan, Stefan ; Schweter, Timo Möller, Chan, Branden; Schweter, Stefan; Möller, Timo (2020): German's Next Language Model. Available online at https://arxiv.org/pdf/2010.10906.
Stock Movement Prediction with Financial News using Contextualized Embedding from BERT. Qinkai Chen, Chen, Qinkai (2021): Stock Movement Prediction with Financial News using Contextualized Embedding from BERT. Available online at https://arxiv.org/pdf/2107.08721.
Clark, Kevin; Luong, Minh-Thang; Le, Quoc V.; Manning, Christopher D. (2020): ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. Available online at https://arxiv.org/pdf/2003.10555.
What moves stock prices. David M Cutler, James M Poterba, Summers, H Lawrence, 10.3905/jpm.1989.409212JPM 15 (3). Cutler, David M.; Poterba, James M.; Summers, Lawrence H. (1989): What moves stock prices? In JPM 15 (3), pp. 4-12. DOI: 10.3905/jpm.1989.409212.
Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2018): BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Available online at https://arxiv.org/pdf/1810.04805.
Social Media Sentiment in International Stock Returns and Trading Activity. Duz Tan, ; Selin, Oktay Tas, 10.1080/15427560.2020.1772261In Journal of Behavioral Finance. 222Duz Tan, Selin; Tas, Oktay (2021): Social Media Sentiment in International Stock Returns and Trading Activity. In Journal of Behavioral Finance 22 (2), pp. 221-234. DOI: 10.1080/15427560.2020.1772261.
Efficient Capital Markets: A Review of Theory and Empirical Work. Eugene F Fama, 10.2307/2325486In The Journal of Finance. 252383Fama, Eugene F. (1970): Efficient Capital Markets: A Review of Theory and Empirical Work. In The Journal of Finance 25 (2), p. 383. DOI: 10.2307/2325486.
Gennadiy, R. (2020): us-equities-news-data.
Gonzalez-Carvajal, Santiago; Garrido-Merchan, Eduardo (2020): Comparing BERT against traditional machine learning text classification.
When machines read the news: Using automated text analytics to quantify high frequency news-implied market reactions. Axel ; Groß-Klußmann, Nikolaus Hautsch, 10.1016/j.jempfin.2010.11.009In Journal of Empirical Finance. 182Groß-Klußmann, Axel; Hautsch, Nikolaus (2011): When machines read the news: Using automated text analytics to quantify high frequency news-implied market reactions. In Journal of Empirical Finance 18 (2), pp. 321-340. DOI: 10.1016/j.jempfin.2010.11.009.
Media Makes Momentum. Alexander ; Hillert, Jacobs, ; Heiko, Sebastian Müller, 10.1093/rfs/hhu061Rev. Financ. Stud. 2712Hillert, Alexander; Jacobs, Heiko; Müller, Sebastian (2014): Media Makes Momentum. In Rev. Financ. Stud. 27 (12), pp. 3467-3501. DOI: 10.1093/rfs/hhu061.
Text-Based Network Industries and Endogenous Product Differentiation. Gerard ; Hoberg, Gordon Phillips, 10.1086/688176In Journal of Political Economy. 1245Hoberg, Gerard; Phillips, Gordon (2016): Text-Based Network Industries and Endogenous Product Differentiation. In Journal of Political Economy 124 (5), pp. 1423-1465. DOI: 10.1086/688176.
Profitability of Momentum Strategies: An Evaluation of Alternative Explanations. Jegadeesh, Sheridan Narasimhan; Titman, In The Journal of Finance. 562Jegadeesh, Narasimhan; Titman, Sheridan (2001): Profitability of Momentum Strategies: An Evaluation of Alternative Explanations. In The Journal of Finance 56 (2), pp. 699-720. Available online at http://www.jstor.org/stable/222579.
Jiang, Fuwei; Lee, Joshua; Martin, Xiumin; Zhou, Guofu (2019): Manager sentiment and stock returns. In Journal of Financial Economics 132 (1), pp. 126-149. DOI: 10.1016/j.jfineco.2018.10.001.
Liu, Yinhan; Ott, Myle; Goyal, Naman; Du, Jingfei; Joshi, Mandar; Chen, Danqi et al. (2019): RoBERTa: A Robustly Optimized BERT Pretraining Approach. Available online at https://arxiv.org/pdf/1907.11692.
FinBERT: A Pretrained Financial Language Representation Model for Financial Text Mining. Zhuang ; Liu, Huang, ; Degen, Huang, ; Kaiyu, Li, ; Zhuang, Zhao, 10.24963/ijcai.2020/622Liu, Zhuang; Huang, Degen; Huang, Kaiyu; Li, Zhuang; Zhao, Jun (2020): FinBERT: A Pre- trained Financial Language Representation Model for Financial Text Mining, pp. 4513-4519. DOI: 10.24963/ijcai.2020/622.
Textual Analysis in Accounting and Finance: A Survey. Tim ; Loughran, Bill Mcdonald, 10.1111/1475-679X.12123Journal of Accounting Research. 544Loughran, Tim; McDonald, Bill (2016): Textual Analysis in Accounting and Finance: A Survey. In Journal of Accounting Research 54 (4), pp. 1187-1230. DOI: 10.1111/1475-679X.12123.
When Is a Liability Not a Liability? Textual Analysis, Dictionaries, and 10-Ks. Tim Loughran, Bill Mcdonald, 10.1111/j.1540-6261.2010.01625.xThe Journal of Finance. 661Loughran, Tim.; McDonald, Bill (2011): When Is a Liability Not a Liability? Textual Analysis, Dictionaries, and 10-Ks. In The Journal of Finance 66 (1), pp. 35-65. DOI: 10.1111/j.1540- 6261.2010.01625.x.
Peters, Matthew E.; Neumann, Mark; Iyyer, Mohit; Gardner, Matt; Clark, Christopher; Lee, Kenton; Zettlemoyer, Luke (2018): Deep contextualized word representations. Available online at https://arxiv.org/pdf/1802.05365.
Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations. Ramit ; Sawhney, Agarwal, ; Shivam, Wadhwa, ; Arnav, Rajiv Shah, Ratn, 10.18653/v1/2020.emnlp-main.676Sawhney, Ramit; Agarwal, Shivam; Wadhwa, Arnav; Shah, Rajiv Ratn (2020): Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations, pp. 8415-8426. DOI: 10.18653/v1/2020.emnlp-main.676.
Sivri, Mahmut Sami; Korkmaz, Buse Sibel; Ustundag, Alp (2022): From Statistical to Deep Learning Models: A Comparative Sentiment Analysis Over Commodity News. In Cengiz Kahraman, Selcuk Cebi, Sezi Cevik Onar, Basar Oztaysi, A. Cagri Tolga, Irem Ucal Sari (Eds.): Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation. Cham: Springer International Publishing, 2022, pp. 155-162.
Stock price prediction using BERT and GAN. Priyank ; Sonkiya, Bajpai, ; Vikas, Anukriti Bansal, Sonkiya, Priyank; Bajpai, Vikas; Bansal, Anukriti (2021): Stock price prediction using BERT and GAN. Available online at https://arxiv.org/pdf/2107.09055.
Tweets and Trades: the Information Content of Stock Microblogs. Timm O Sprenger, Tumasjan, ; Andranik, Sandner, G Philipp, Isabell M Welpe, 10.1111/j.1468-036X.2013.12007.xEur Financial Management. 205Sprenger, Timm O.; Tumasjan, Andranik; Sandner, Philipp G.; Welpe, Isabell M. (2014): Tweets and Trades: the Information Content of Stock Microblogs. In Eur Financial Management 20 (5), pp. 926-957. DOI: 10.1111/j.1468-036X.2013.12007.x.
Giving Content to Investor Sentiment: The Role of Media in the Stock Market. Paul C Tetlock, 10.1111/j.1540-6261.2007.01232.xIn The Journal of Finance. 623Tetlock, Paul C. (2007): Giving Content to Investor Sentiment: The Role of Media in the Stock Market. In The Journal of Finance 62 (3), pp. 1139-1168. DOI: 10.1111/j.1540- 6261.2007.01232.x.
Yuan, Zixuan; Zhu, Yada; Zhang, Wei; Huang, Ziming; Ye, Guangnan; Xiong, Hui (2021): Multi-Domain Transformer-Based Counterfactual Augmentation for Earnings Call Analysis. Available online at https://arxiv.org/pdf/2112.00963.
| []
|
[
"THE INTRINSIC METRIC ON THE UNIT SPHERE OF A NORMED SPACE",
"THE INTRINSIC METRIC ON THE UNIT SPHERE OF A NORMED SPACE"
]
| [
"Miek Messerschmidt ",
"Marten Wortel "
]
| []
| []
| Let S denote the unit sphere of a real normed space. We show that the intrinsic metric on S is strongly equivalent to the induced metric on S. | null | [
"https://arxiv.org/pdf/1510.07442v2.pdf"
]
| 119,691,449 | 1510.07442 | 867ee3774f3d6a26603a2b3c14b589e6dcea6ec2 |
THE INTRINSIC METRIC ON THE UNIT SPHERE OF A NORMED SPACE
8 Mar 2017
Miek Messerschmidt
Marten Wortel
THE INTRINSIC METRIC ON THE UNIT SPHERE OF A NORMED SPACE
8 Mar 2017arXiv:1510.07442v2 [math.FA]
Let S denote the unit sphere of a real normed space. We show that the intrinsic metric on S is strongly equivalent to the induced metric on S.
Introduction
Consider the following problem which arose in other questions under investigation by the first author:
Question. For the unit sphere of a real normed space, is the induced metric strongly equivalent to the sphere's intrinsic metric?
This paper will answer this question in the affirmative. More precisely: For a unit sphere S of a real normed space, the length of a path ρ : [0, 1] → S (assumed to be continuous), is given by
L(ρ) := sup { Σ_{j=0}^{n−1} ‖ρ(t_j) − ρ(t_{j+1})‖ : n ∈ ℕ, 0 = t_0 < … < t_n = 1 },
and the intrinsic metric on S is defined by taking the infimum of the above quantity over all paths between two points. I.e., for x, y ∈ S, we define the intrinsic metric on S by
d(x, y) := inf { L(ρ) : ρ : [0, 1] → S continuous, ρ(0) = x, ρ(1) = y }.
The above question is then rephrased as: For the unit sphere S in a normed space, do there exist constants A, B > 0 such that, for all x, y ∈ S,
A‖x − y‖ ≤ d(x, y) ≤ B‖x − y‖ ?
To the authors' knowledge, the answer to this question does not appear in the literature 1 , a fact that is perhaps more surprising than the positive solution to the problem which will be presented in this paper. The problem can essentially be reduced to one in two dimensions and apart from relying on John Ellipsoids-a fundamental structure from local theory-the result follows from entirely elementary (albeit somewhat technical) arguments.
We will now describe the structure of the paper. After introducing the needed notation, definitions and preliminary results in Section 2, in Section 3 our goal will be to prove Theorem 3.6:
Theorem 3.6. For any norm ‖·‖ on a real vector space V, let d denote the intrinsic metric on the unit sphere S of V. For all x, y ∈ S,
‖x − y‖ ≤ d(x, y) ≤ √2π ‖x − y‖.
A crucial ingredient is that of the John Ellipsoid: the largest ellipsoid (Euclidean ball of largest volume) that can be contained inside a unit ball of a finite dimensional normed space. Theorem 1.1 (John's Theorem [1, Theorem 12.1.4]). Let W be any normed space of dimension n > 1. With ‖·‖_E denoting the Euclidean norm on ℝⁿ, there exists a norm one isomorphism T : (ℝⁿ, ‖·‖_E) → W with inverse T⁻¹ : W → (ℝⁿ, ‖·‖_E) whose norm is at most √n.
Specifically, all two-dimensional subspaces of a normed space have a Banach-Mazur distance of at most √2 from two-dimensional Euclidean space. Of course, the intrinsic metric on a normed space V's unit sphere is bounded above by the "planar" intrinsic metric, where all paths in the defining infimum are taken to live in any two-dimensional subspace of V. This allows us to reduce the question to ℝ², where S lies between a Euclidean unit sphere S_E and √2 S_E. In this setting, the crucial ingredients are Lemmas 3.3 and 3.4, which allow us to conclude the local bi-Lipschitzness of the map σ : S → S_E defined by σ(x) := x/‖x‖_E, where both S and S_E are endowed with the Euclidean induced metric. Since the Euclidean induced and intrinsic metrics on S_E are easily calculated and related (Lemma 2.1), our main results, Theorems 3.5 and 3.6, then easily follow.
We note that the constant √2π obtained in our main result (Theorem 3.6) is likely not optimal. In two dimensions, the ‖·‖_∞-norm provides a worst case for the John Ellipsoid. Let S_∞ be the unit sphere of this norm, with intrinsic metric d_∞; then for x, y ∈ S_∞, it is easily seen that ‖x − y‖_∞ ≤ d_∞(x, y) ≤ 2‖x − y‖_∞. This prompts the following conjecture: Conjecture 1.2. For any norm ‖·‖ on a real vector space V, let d denote the intrinsic metric on the unit sphere S of V. For all x, y ∈ S,
‖x − y‖ ≤ d(x, y) ≤ 2‖x − y‖.
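As a quick numerical illustration of the ‖·‖_∞ example above (not part of the argument): on S_∞, the boundary of the unit square, the intrinsic distance is the shorter way around the boundary measured in the sup norm, so it can be approximated by discretizing the boundary. The script below assumes this boundary-arc description of d_∞ and simply checks that the worst-case ratio approaches 2.

```python
import numpy as np

def square_boundary(n=800):
    """Points on S_inf (boundary of the sup-norm unit ball) with their arc position."""
    t = np.linspace(0.0, 8.0, n, endpoint=False)  # the perimeter of the square is 8
    pts = np.empty((n, 2))
    for i, s in enumerate(t):
        side, u = int(s // 2), s % 2
        if side == 0:
            pts[i] = (1.0, -1.0 + u)       # right edge, going up
        elif side == 1:
            pts[i] = (1.0 - u, 1.0)        # top edge, going left
        elif side == 2:
            pts[i] = (-1.0, 1.0 - u)       # left edge, going down
        else:
            pts[i] = (-1.0 + u, -1.0)      # bottom edge, going right
    return t, pts

t, pts = square_boundary()
# Intrinsic distance = shorter way around the boundary (sup-norm arc length).
arc = np.abs(t[:, None] - t[None, :])
d_intrinsic = np.minimum(arc, 8.0 - arc)
d_norm = np.max(np.abs(pts[:, None, :] - pts[None, :, :]), axis=-1)
mask = d_norm > 1e-9
print("max d_inf / ||x - y||_inf ratio:", (d_intrinsic[mask] / d_norm[mask]).max())
```

The printed maximum ratio is 2.0 (attained, for example, by opposite edge midpoints), consistent with the bound d_∞(x, y) ≤ 2‖x − y‖_∞ stated above.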
Definitions, notation and preliminary results
This section will explicitly define all notation used in this paper. Since we will translate between many different metrics on many different sets, we will take extreme care to make our notation as explicit as possible.
Let V be a real vector space. Let A be an arbitrary index symbol and ‖·‖_A any norm on V. We will denote the unit sphere, closed unit ball and open unit ball with respect to ‖·‖_A respectively by S_A, B_A and B̊_A.
For any subset M ⊆ V, we define the induced metric d_A : M × M → ℝ_{≥0} on M by d_A(v, w) := ‖v − w‖_A for v, w ∈ M.
We define the A-M -path-space by
P_A(M) := {ρ | ρ : [0, 1] → M, ‖·‖_A-continuous},
and the planar A-M-path-space by
P^planar_A(M) := {ρ ∈ P_A(M) | dim(span(Im ρ)) = 2}.
We define the A-path-length operator L_A : P_A(V) → ℝ_{≥0} ∪ {∞} by
L_A(ρ) := sup { Σ_{j=0}^{n−1} ‖ρ(t_j) − ρ(t_{j+1})‖_A : n ∈ ℕ, 0 = t_0 < … < t_n = 1 }   (ρ ∈ P_A(V)).
We define the (extended) A-M-intrinsic metric d_{A,M} : M × M → ℝ_{≥0} ∪ {∞} by
d_{A,M}(v, w) := inf { L_A(ρ) : ρ ∈ P_A(M), ρ(0) = v, ρ(1) = w }   (v, w ∈ M),
and the (extended) A-M-planar-intrinsic metric d^planar_{A,M} : M × M → ℝ_{≥0} ∪ {∞} by
d^planar_{A,M}(v, w) := inf { L_A(ρ) : ρ ∈ P^planar_A(M), ρ(0) = v, ρ(1) = w }   (v, w ∈ M).
We introduce the following abbreviated notation that will aid in readability of the paper: For (extended) metrics d and d ′ on some set D, subset M ⊆ D, and constant K > 0, by
"d ≤ Kd ′ on M " we will mean d(a, b) ≤ Kd ′ (a, b) for all a, b ∈ M .
For any real normed space (V, ‖·‖_A) and subset M ⊆ V, since P^planar_A(M) ⊆ P_A(M) ⊆ P_A(V), the chain of inequalities d_A = d_{A,V} ≤ d_{A,M} ≤ d^planar_{A,M} ≤ ∞ on M is easy to verify.
An elementary calculation establishes the following lemma:
Lemma 2.1. With ‖·‖_E denoting the Euclidean norm on ℝ², d_E ≤ d_{E,S_E} ≤ (π/2) d_E on S_E.
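For the reader's convenience, the elementary calculation behind Lemma 2.1 can be sketched as follows (our own sketch; the fact that the shortest path in S_E between two points is the connecting circular arc is taken for granted). For x, y ∈ S_E subtending an angle θ ∈ [0, π],

‖x − y‖_E = 2 sin(θ/2)   and   d_{E,S_E}(x, y) = θ.

Since sin(u) ≤ u on [0, π/2] and, by concavity, sin(u) ≥ 2u/π on [0, π/2], taking u = θ/2 gives

2 sin(θ/2) ≤ θ ≤ π sin(θ/2) = (π/2) · 2 sin(θ/2),

which is exactly d_E ≤ d_{E,S_E} ≤ (π/2) d_E on S_E.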
If ℝ² is endowed with the Euclidean norm ‖·‖_E arising from an inner product ⟨· | ·⟩, for elements x, y ∈ ℝ² the ray from x through y is denoted by r_{x,y} and defined by r_{x,y} := {(1 − t)x + ty | t ≥ 0}. For a point x and points y, z ∈ ℝ² distinct from x, when referring to the size of the angle between r_{x,y} and r_{x,z} we will mean the quantity
arccos( ⟨y − x | z − x⟩ / (‖y − x‖_E ‖z − x‖_E) ) ∈ [0, π].
For points v, w, x, y ∈ ℝ², we will say the ray r_{x,y} lies between the rays r_{x,v} and r_{x,w} if v, w and x are in general position and r_{x,y} ∩ {(1 − t)v + tw | t ∈ [0, 1]} ≠ ∅, or r_{x,y} = r_{x,v} = r_{x,w}.
3. The intrinsic metric on unit spheres in R 2
In this section we will prove our main results. Although somewhat technical, our results follow mostly from elementary trigonometry and Euclidian plane geometry.
Let ‖·‖_E denote the Euclidean norm on R² and let ‖·‖_X be any norm on R² satisfying B_E ⊆ B_X ⊆ KB_E for some K ≥ 1. A large part of our attention will be devoted to proving that the map σ : S_X → S_E defined by σ(x) := x/‖x‖_E is locally bi-Lipschitz when S_X and S_E are both endowed with the Euclidean induced metrics. Once this has been achieved through Lemmas 3.3 and 3.4, a straightforward calculation will prove our main results, Theorems 3.5 and 3.6.
Let ‖·‖_E denote the Euclidean norm on R² and let ‖·‖_X be any other norm on R² satisfying B_E ⊆ B_X. We will first relate points on S_X to lines tangent to S_E. Specifically, for any point x ∈ S_X that is not in S_E, the two lines through x that are tangent to S_E are such that points in S_X "close to" x are "wedged between" the tangent lines. Also, if x ∈ S_X ∩ S_E, then the whole of S_X lies on the same side of the line {y ∈ R² : ⟨x | y⟩ = 1}.
Let x ∈ R² \ B_E, and let τ(x) ∈ S_E be a point at which a line through x tangent to S_E touches S_E. Then the angle between r_{0,x} and r_{0,τ(x)} equals arccos(‖x‖_E^{−1}). Let x^⊥ ∈ S_E ∩ {x}^⊥ be such that ⟨τ(x) | x^⊥⟩ ≥ 0. If we now define
a(x) := cos(arccos(1/‖x‖_E)) x/‖x‖_E = x/‖x‖_E²  and  b(x) := sin(arccos(1/‖x‖_E)) x^⊥ = √(1 − 1/‖x‖_E²) x^⊥,
then τ(x) = a(x) + b(x).
Lemma 3.1. Let ⟨· | ·⟩ be an inner product and ‖·‖_E be the associated Euclidean norm on R². Let ‖·‖_X be any other norm on R² such that B_E ⊆ B_X. For all x ∈ S_X:
(1) For all t ∈ R, ⟨tx + (1 − t)τ(x) | τ(x)⟩ = 1.
(2) For all t ∈ [0, 1], ‖tx + (1 − t)τ(x)‖_X ≤ 1.
(3) For all t > 1, ‖tx + (1 − t)τ(x)‖_X ≥ 1.
(4) If x ∈ S_X ∩ S_E and y ∈ S_X, then ⟨x | y⟩ ≤ 1.
Proof. We prove (1). Let x ∈ S X . For all t ∈ R,
⟨tx + (1 − t)τ(x) | τ(x)⟩ = t⟨x | τ(x)⟩ + (1 − t)⟨τ(x) | τ(x)⟩ = t⟨x | x/‖x‖_E²⟩ + (1 − t) = t + (1 − t) = 1.
We prove (2). Let x ∈ S X . Since τ (x) ∈ S E ⊆ B X , and B X is convex, the result follows.
We prove (3). Let x ∈ S_X. Since ‖x‖_X = 1, if τ(x) = x, then ‖tx + (1 − t)τ(x)‖_X = 1, and the result is trivial. We therefore assume τ(x) ≠ x. Since τ(x) ∈ S_E, so that ‖τ(x)‖_X ≤ 1, by the reverse triangle inequality and the intermediate value theorem there exists some t_0 ≤ 0 such that
1 = ‖t_0 x + (1 − t_0)τ(x)‖_X = ‖1·x + (1 − 1)τ(x)‖_X
(here we used τ(x) ≠ x). Since the map t ↦ ‖tx + (1 − t)τ(x)‖_X is convex, we cannot have ‖tx + (1 − t)τ(x)‖_X < 1 for any t > 1, as this would contradict 1 = ‖t_0 x + (1 − t_0)τ(x)‖_X = ‖1·x + (1 − 1)τ(x)‖_X. We conclude that ‖tx + (1 − t)τ(x)‖_X ≥ 1 for all t > 1.
We prove (4). Let x ∈ S_X ∩ S_E and y ∈ S_X, but suppose ⟨x | y⟩ > 1.
If y and x are linearly dependent, then ‖y‖_X > 1, contradicting y ∈ S_X, and we therefore may assume that y and x are linearly independent.
Let L denote the line through x and y, parameterized by the affine map η(t) := (1 − t)y + tx for t ∈ R. The line L is not tangent to S_E (else we would have ⟨x | y⟩ = 1). Therefore L intersects S_E in two distinct points, one being x; let t_0 ∈ R be such that η(t_0) ∈ S_E ∩ L is the other. We must have t_0 > 1, since 1 < ⟨x | η(t)⟩ for all t < 1. Since η is an affine map and (R², ‖·‖_E) is a strictly convex space,
‖η((1 + t_0)/2)‖_X = ‖(η(t_0) + η(1))/2‖_X ≤ ‖(η(t_0) + η(1))/2‖_E < 1.
Let λ := 2(1 + t_0)^{−1} ∈ (0, 1), so that λ(1 + t_0)/2 + (1 − λ)·0 = 1. Then, again since η is affine,
1 = ‖x‖_X = ‖η(1)‖_X = ‖η(λ(1 + t_0)/2 + (1 − λ)·0)‖_X = ‖λ η((1 + t_0)/2) + (1 − λ)η(0)‖_X
≤ λ‖η((1 + t_0)/2)‖_X + (1 − λ)‖η(0)‖_X = λ‖η((1 + t_0)/2)‖_X + (1 − λ)‖y‖_X < λ·1 + (1 − λ)·1 = 1,
which is absurd. We conclude that ⟨x | y⟩ ≤ 1 for all x ∈ S_X ∩ S_E and all y ∈ S_X.
Next, we show that for points x, y ∈ S X that are "sufficiently close", the size of the angle formed by the rays r 0,x and r 0,y bounds the size of the acute angle formed by the ray r x,y and the perpendicular line to r 0,x through x.
Lemma 3.2. Let ‖·‖_E denote the Euclidean norm on R² and ‖·‖_X be any norm on R² such that B_E ⊆ B_X. Let x, y ∈ S_X and let x^⊥ ∈ S_E ∩ {x}^⊥ be such that ⟨x^⊥ | y⟩ ≥ 0, and define v := x + x^⊥. If K ≥ 1 and x, y ∈ S_X are such that ‖x‖_E ≤ K and the size of the angle between the rays r_{0,x} and r_{0,y} is at most arccos(K^{−1}), then α, the size of the angle between the rays r_{x,v} and r_{x,y}, is also at most arccos(K^{−1}).
Proof. As a visual aid, the reader is referred to Figure 3.1. Let β := arccos(K^{−1}) and u ∈ S_E be such that ⟨x^⊥ | u⟩ > 0 and that the size of the angle formed by the rays r_{0,x} and r_{0,u} equals β (i.e., ⟨x | u⟩ = ‖x‖_E cos β). Let τ_1(x), τ_2(x) ∈ S_E be the point(s) on the lines through x that are tangent to S_E, such that ⟨x^⊥ | τ_1(x)⟩ ≥ 0. Let
w := v if x = τ_1(x) = τ_2(x), and w := 2x − τ_2(x) otherwise,
so that w ∈ r_{τ_2(x),x} is distinct from x, and is such that r_{x,w} ⊆ r_{τ_2(x),x}. Let P_u denote the orthogonal projection onto the span of u. Then the size of the angle formed between the rays r_{x,P_u x} and r_{x,v} is exactly β. Since ‖x‖_E ≤ K, the point P_u x lies on the line segment {tu | t ∈ (0, 1]} (if ‖x‖_E = K, then P_u x = u = τ_1(x)), and therefore the size of the angle between the rays r_{x,u} and r_{x,v} is at most β. Since r_{x,τ_1(x)} is between the rays r_{x,u} and r_{x,v}, and since r_{x,v} bisects the angle formed by the rays r_{x,τ_1(x)} and r_{x,w}, the size of the angle formed by r_{x,v} and r_{x,w} is also at most β. Finally, by Lemma 3.1 (2), (3) and (4) and the fact that B_E ⊆ B_X, the ray r_{x,y} lies either between the rays r_{x,u} and r_{x,v} or between the rays r_{x,v} and r_{x,w} (the point y can only lie in the shaded area in Figure 3.1). We conclude that α, the size of the angle between the rays r_{x,v} and r_{x,y}, is at most β = arccos(K^{−1}).
Lemma 3.3. Let ‖·‖_E denote the Euclidean norm on R² and ‖·‖_X be any norm on R² such that B_E ⊆ B_X ⊆ KB_E for some K ≥ 1. If x, y ∈ S_X are such that θ, the size of the angle between the rays r_{0,x} and r_{0,y}, is at most arccos(K^{−1}), then
‖x − y‖_E ≤ K² ‖ x/‖x‖_E − y/‖y‖_E ‖_E.
Proof. As a visual aid, the reader is referred to Figure 3.2.
Let x^⊥ ∈ S_E ∩ {x}^⊥ be such that ⟨x^⊥ | y⟩ > 0 and define v := x + x^⊥. Let P_x and P_y be the orthogonal projections onto the spans of x and y respectively. Define u := P_y(x/‖x‖_E), and λ := ‖P_x y‖_E^{−1}, so that P_x(λy) = x/‖x‖_E. Let α denote the size of the angle formed by the rays r_{x,y} and r_{x,v}. We note that the size of the angle formed between the rays r_{x/‖x‖_E, λy} and r_{x/‖x‖_E, u} also equals θ. Elementary trigonometry will establish
‖x − y‖_E = (1/cos α) ‖y − P_x y‖_E = (1/(λ cos α)) ‖λy − P_x(λy)‖_E = (1/(λ cos α)) ‖λy − x/‖x‖_E‖_E = (1/(λ cos α cos θ)) ‖u − x/‖x‖_E‖_E.
Now we note that ‖u − x/‖x‖_E‖ ≤ ‖y/‖y‖_E − x/‖x‖_E‖, since u is the closest point (with respect to ‖·‖_E) in the span of y to the point x/‖x‖_E. Also, by Lemma 3.2 we have α ≤ arccos(K^{−1}), so that cos α ≥ K^{−1}. Furthermore, λ^{−1} = ‖P_x y‖_E = ‖y‖_E cos θ ≤ K cos θ. Finally we conclude
‖x − y‖_E = (1/(λ cos α cos θ)) ‖u − x/‖x‖_E‖_E ≤ (K cos θ/(cos α cos θ)) ‖y/‖y‖_E − x/‖x‖_E‖ = K² ‖y/‖y‖_E − x/‖x‖_E‖.
Lemma 3.4. Let ‖·‖_E denote the Euclidean norm on R² and ‖·‖_X be any norm on R² such that B_E ⊆ B_X. If x, y ∈ S_X, then
‖ x/‖x‖_E − y/‖y‖_E ‖_E ≤ ‖x − y‖_E.
Proof. As a visual aid we refer the reader to Figure 3.3. Let x, y ∈ S_X. By exchanging the roles of x and y if necessary, we may assume ‖y‖_E ≥ ‖x‖_E ≥ 1. Let P_y be the orthogonal projection onto the span of y and let u := P_y(x/‖x‖_E). Then ‖u‖_E ≤ 1 and ‖y/‖x‖_E‖_E ≥ 1 = ‖y/‖y‖_E‖_E. Then, by the Pythagorean theorem,
‖ x/‖x‖_E − y/‖y‖_E ‖_E² = ‖ x/‖x‖_E − u ‖_E² + ‖ u − y/‖y‖_E ‖_E² ≤ ‖ x/‖x‖_E − u ‖_E² + ‖ u − y/‖x‖_E ‖_E² = ‖ x/‖x‖_E − y/‖x‖_E ‖_E² = (1/‖x‖_E²) ‖x − y‖_E² ≤ ‖x − y‖_E².
In essence, the previous two lemmas together establish the local bi-Lipschitzness of the map σ : S_X → S_E defined by σ(x) := x/‖x‖_E when B_E ⊆ B_X ⊆ KB_E.
We will now use the previous results to prove one of our main results which relates the intrinsic metric on S X to the induced metric on S X when B E ⊆ B X ⊆ KB E for some K ≥ 1.
Theorem 3.5. Let ‖·‖_E denote the Euclidean norm on R² and ‖·‖_X be any norm on R² such that B_E ⊆ B_X ⊆ KB_E for some K ≥ 1. Then
d_X ≤ d_{X,S_X} ≤ K³ (π/2) d_X on S_X.
Proof. We have already noted in Section 2 that d_X ≤ d_{X,S_X} on S_X. Let x, y ∈ S_X be arbitrary. Let c : R → R² be the map defined by c(θ) := (cos(θ), sin(θ)) for θ ∈ R. Let θ_x, θ_y ∈ R be such that c(θ_x)/‖c(θ_x)‖_X = x and c(θ_y)/‖c(θ_y)‖_X = y. By switching the roles of x and y, if necessary, we may assume that 0 ≤ θ_y − θ_x ≤ π. Consider the path ρ : [θ_x, θ_y] → S_X defined by ρ(θ) := c(θ)/‖c(θ)‖_X for θ ∈ [θ_x, θ_y].
Let ε > 0 be arbitrary and θ x = θ 0 < θ 1 < . . . < θ n = θ y be a partition of [θ x , θ y ] such that
Σ_{j=0}^{n−1} ‖ρ(θ_j) − ρ(θ_{j+1})‖_X ≥ L_X(ρ) − ε.
We may assume that 0 < θ_{j+1} − θ_j ≤ arccos(K^{−1}), since the triangle inequality ensures that every refinement of {θ_j}_{j=0}^{n} still satisfies the above inequality. We note that B_E ⊆ B_X ⊆ KB_E implies ‖w‖_X ≤ ‖w‖_E ≤ K‖w‖_X for all w ∈ R². Then, by Lemmas 3.3, 2.1 and 3.4, we obtain
d_{X,S_X}(x, y) ≤ L_X(ρ)
≤ Σ_{j=0}^{n−1} ‖ρ(θ_j) − ρ(θ_{j+1})‖_X + ε
≤ Σ_{j=0}^{n−1} ‖ρ(θ_j) − ρ(θ_{j+1})‖_E + ε
≤ Σ_{j=0}^{n−1} K² ‖c(θ_j) − c(θ_{j+1})‖_E + ε
≤ K² sup { Σ_{j=0}^{m−1} ‖c(φ_j) − c(φ_{j+1})‖_E : m ∈ N, θ_x = φ_0 < … < φ_m = θ_y } + ε
≤ K² d_{E,S_E}( x/‖x‖_E , y/‖y‖_E ) + ε
≤ K² (π/2) ‖ x/‖x‖_E − y/‖y‖_E ‖_E + ε
≤ K² (π/2) ‖x − y‖_E + ε
≤ K³ (π/2) ‖x − y‖_X + ε.
Since ε > 0 was chosen arbitrarily, the result follows.
Our final result now follows through an easy application of the previous result and John's Theorem (Theorem 1.1):
Theorem 3.6. For any norm ‖·‖_X on a real vector space V,
d_X ≤ d_{X,S_X} ≤ √2 π d_X on S_X.
Proof. We have already noted in Section 2 that d_X ≤ d_{X,S_X} ≤ d_{X,S_X}^planar on S_X. Let x, y ∈ S_X be arbitrary and let W ⊆ V be any two-dimensional subspace containing x and y, noting that then d_{X,S_X}^planar(x, y) ≤ d_{X,S_X∩W}(x, y). By John's Theorem (Theorem 1.1), there exists a Euclidean norm ‖·‖_E on W such that ‖w‖_X ≤ ‖w‖_E ≤ √2‖w‖_X for all w ∈ W, i.e., B_E ⊆ B_X ∩ W ⊆ √2 B_E. Then, by Theorem 3.5, we may conclude that d_X ≤ d_{X,S_X} ≤ √2 π d_X on S_X.
Figure 3.1.
Figure 3.2.
Figure 3.3.
False! See the correction above!
² This conjecture is proven true in [2, Theorem 3.5].
[1] F. Albiac and N.J. Kalton, Topics in Banach space theory, Graduate Texts in Mathematics, Springer, New York, 2006.
[2] J.J. Schäffer, Inner diameter, perimeter, and girth of spheres, Math. Ann. 173 (1967), 59-79; addendum, ibid. 173 (1967), 79-82.
Miek Messerschmidt; Unit for BMI; North-West University; Private Bag X6001; Potchefstroom; South Africa; 2520
E-mail address: [email protected]
Marten Wortel; Unit for BMI; North-West University; Private Bag X6001; Potchefstroom; South Africa; 2520
E-mail address: [email protected]
| []
|
[
"Continuous Gravitational Waves and Magnetic Monopole Signatures from Single Neutron Stars",
"Continuous Gravitational Waves and Magnetic Monopole Signatures from Single Neutron Stars"
]
| [
"P V S Pavan Chandra \nIndian Institute of Science Education and Research\nHomi Bhabha road\n411008Pashan, PuneIndia\n",
"Mrunal Korwar [email protected] \nDepartment of Physics\nUniversity of Wisconsin-Madison\n53706MadisonWIUSA\n",
"Arun M Thalapillil [email protected] \nIndian Institute of Science Education and Research\nHomi Bhabha road\n411008Pashan, PuneIndia\n"
]
| [
"Indian Institute of Science Education and Research\nHomi Bhabha road\n411008Pashan, PuneIndia",
"Department of Physics\nUniversity of Wisconsin-Madison\n53706MadisonWIUSA",
"Indian Institute of Science Education and Research\nHomi Bhabha road\n411008Pashan, PuneIndia"
]
| []
| Future observations of continuous gravitational waves from single neutron stars, apart from their monumental astrophysical significance, could also shed light on fundamental physics and exotic particle states. One such avenue is based on the fact that magnetic fields cause deformations of a neutron star, which results in a magnetic-field-induced quadrupole ellipticity. If the magnetic and rotation axes are different, this quadrupole ellipticity may generate continuous gravitational waves which may last decades, and may be observable in current or future detectors. Light, milli-magnetic monopoles, if they exist, could be pair-produced non-perturbatively in the extreme magnetic fields of neutron stars, such as magnetars. This non-perturbative production furnishes a new, direct dissipative mechanism for the neutron star magnetic fields. Through their consequent effect on the magnetic-fieldinduced quadrupole ellipticity, they may then potentially leave imprints in the early stage continuous gravitational wave emissions. We speculate on this possibility in the present study, by considering some of the relevant physics and taking a very simplified toy model of a magnetar as the prototypical system. Preliminary indications are that new-born millisecond magnetars could be promising candidates to look for such imprints. Deviations from conventional evolution, and comparatively abrupt features in the early stage gravitational waveforms, distinct from other astrophysical contributions, could be distinguishable signatures for these exotic monopole states. | 10.1103/physrevd.101.075028 | [
"https://arxiv.org/pdf/1909.12855v2.pdf"
]
| 203,593,949 | 1909.12855 | 98ee46a96dc6af8fa0f2718810742d31098cf0fc |
Continuous Gravitational Waves and Magnetic Monopole Signatures from Single Neutron Stars
27 Sep 2019
P V S Pavan Chandra
Indian Institute of Science Education and Research
Homi Bhabha road
411008Pashan, PuneIndia
Mrunal Korwar [email protected]
Department of Physics
University of Wisconsin-Madison
53706MadisonWIUSA
Arun M Thalapillil [email protected]
Indian Institute of Science Education and Research
Homi Bhabha road
411008Pashan, PuneIndia
Continuous Gravitational Waves and Magnetic Monopole Signatures from Single Neutron Stars
27 Sep 2019
Future observations of continuous gravitational waves from single neutron stars, apart from their monumental astrophysical significance, could also shed light on fundamental physics and exotic particle states. One such avenue is based on the fact that magnetic fields cause deformations of a neutron star, which results in a magnetic-field-induced quadrupole ellipticity. If the magnetic and rotation axes are different, this quadrupole ellipticity may generate continuous gravitational waves which may last decades, and may be observable in current or future detectors. Light, milli-magnetic monopoles, if they exist, could be pair-produced non-perturbatively in the extreme magnetic fields of neutron stars, such as magnetars. This non-perturbative production furnishes a new, direct dissipative mechanism for the neutron star magnetic fields. Through their consequent effect on the magnetic-fieldinduced quadrupole ellipticity, they may then potentially leave imprints in the early stage continuous gravitational wave emissions. We speculate on this possibility in the present study, by considering some of the relevant physics and taking a very simplified toy model of a magnetar as the prototypical system. Preliminary indications are that new-born millisecond magnetars could be promising candidates to look for such imprints. Deviations from conventional evolution, and comparatively abrupt features in the early stage gravitational waveforms, distinct from other astrophysical contributions, could be distinguishable signatures for these exotic monopole states.
1 Introduction
Recent observation of gravitational waves (GWs) by the LIGO-VIRGO collaboration [1,2] have ushered in a new era of multi-messenger astronomy. Apart from its significant astrophysical [3][4][5] and cosmological [6,7] implications, gravitational wave astronomy also has the potential to illuminate many important questions in fundamental physics [8][9][10][11][12]. A fast emerging area in this context is the endeavour to detect continuous GWs from single neutron stars. As opposed to GW signals from binary coalescence, which are short lived, the continuous gravitational waves are due to intrinsic deformations or other phenomena of the compact star itself, and may last decades or centuries. The cause for these continuous GWs may be due to various distinct phenomena-stellar seismic activity, mode instabilities, mountains, oscillations or glitches in the angular velocity (see for instance [13][14][15] and references therein). There has been rapid progress in this area, with many recent searches [16][17][18], and future third-generation GW detectors, such as the Einstein Telescope, expected to significantly improve the sensitivity and reach in the relevant frequency bands [19][20][21][22].
Magnetic fields are known to cause a star to become oblate or prolate, depending on the field configuration [23,24]. This generates a quadrupole moment and associated quadrupole ellipticity. In cases where the rotation and magnetic axes do not coincide, this opens up the possibility of generating continuous gravitational waves [25][26][27]. As opposed to gravitational waves from binary coalescences, these waveforms will last for much longer durations-days or years. This enables the application of a plethora of signal processing techniques in their analyses and understanding. The LIGO-VIRGO collaboration is already searching earnestly for such signals from pulsars [18]. Future third-generation detectors are expected to increase the reach much further and into the niche frequency ranges of such signals [19].
Magnetic monopoles have so far not been observed in nature. They are however a very generic prediction of many quantum field theories [28,29] and may be awaiting discovery. Current bounds on magnetic monopoles come from colliders [30][31][32][33], terrestrial and balloon observations [34][35][36], considerations of galactic magnetic field attenuation [37][38][39], searches in bulk matter [40,41], and limits on monopole-catalysed proton decay in compact stars [42][43][44]. Very interesting limits have also been placed on heavy magnetic monopoles by considering their non-perturbative production in heavy ion collisions and in the extreme magnetic fields of neutron stars [45].
We are specifically interested in the case of milli-magnetic monopoles (MMM), with masses below O(1 eV). They are monopoles with fractional effective magnetic charges, and which appear in many Standard Model extensions, especially those involving kinetic mixing [46] with a gauge-singlet dark sector. There are previous works that have considered milli-magnetic monopoles [47][48][49][50], in various contexts. Recently, it was also demonstrated that using energetic arguments from a magnetar, one may place very stringent, non-trivial bounds on the magnetic charge of such light MMMs [50]. Similar bounds have also been placed on light milli-electrically charged particles [51], for which the relevant pair-production and astrophysical considerations are very different from MMMs.
If MMMs exist, they may be non-perturbatively pair-produced [52,53], via Schwinger pair-production, in the extreme magnetic fields of a neutron star, such as a magnetar [54,55]. This causes a decay of the magnetic field hitherto different from conventional mechanisms operational in a neutron star. The modified magnetic field evolution in turn may affect the time evolution of the quadrupole ellipticity, assuming the concerned neutron star crustal strains are below the breaking limit [56,57]. This opens up an avenue for probing these exotic states by their imprints on the gravitational waves emitted. A time evolution of the magnetic-field-induced quadrupole ellipticity, and its impact on gravitational wave emissions, has been considered previously, in other contexts [58][59][60][61]. We would like to explore if MMMs could potentially leave markers in the gravitational waveforms, from single neutron stars, that are distinguishable from common astrophysical features.
In Sec. 2 we briefly review the relevant theoretical underpinnings behind the generation of continuous gravitational waves, from single neutron stars, and outline how magnetic fields may generically lead to mass quadrupole moments. In Sec. 3 we then briefly review how MMMs may be incorporated in SM extensions, involving kinetic mixing, and also the relevant theoretical background on Schwinger pair production of MMMs. With the foundations laid, in Sec. 4 we then present our analyses and main results. We summarise and conclude in Sec. 5. There, we also highlight some of the shortcomings of the study, along with a few future directions.
2 Gravitational waves from single neutron stars
Continuous gravitational waves
Isolated neutron stars may emit GWs through various processes (Please see [15] and references therein for a comprehensive discussion). A neutron star may sustain a deformation in some cases, and if not axisymmetric with respect to its rotation axis, then emit GWs. Such sustained distortions, due to the elasticity of the neutron star crust [62][63][64][65], are generically termed neutron star mountains. Neutron star mountains may be caused by thermal gradients [62,66] or magnetic fields [25][26][27]67]. We will be interested in the latter, in the context of MMMs, and will elaborate on this further in subsection 2.2. Let us briefly review the theory behind the generation of continuous GWs, from single neutron stars, in this subsection.
In the transverse traceless gauge and an asymptotically Cartesian and mass centred coordinate system (S) (see [68,69] for instance), the leading contribution to the gravitational wave amplitude is given by [70,71]
h^TT_ij = (1/r) Λ_{ij;kl}(n̂) (2G/c⁴) Q̈_kl(t − r/c). (2.1)
Figure 1: An illustrative representation of a neutron star, with its rotation and magnetic field axes misaligned with respect to each other. The quadrupole deformation due to the magnetic field is exaggerated for clarity. The internal field configuration is not illustrated and only the most salient features pertaining to the study are shown. The presence of a quadrupole ellipticity, with respect to the rotation axis, leads to the generation of continuous gravitational waves.
Here, for propagation direction n̂ and P_ij(n̂) = δ_ij − n̂_i n̂_j, one defines the transverse projection operator as Λ_{ij;kl} = P_ik P_jl − (1/2) P_ij P_kl. Q is the mass quadrupole moment of the object. In the Newtonian limit, i.e. for weak gravitational fields, the mass quadrupole moment may be written explicitly in terms of the trace-free part of the moment of inertia tensor
Q_ij ≃ −I_ij + (1/3) I^k_k δ_ij. (2.2)
Here, the moment of inertia tensor I_ij is defined in the usual way, in terms of the mass density ρ(x), as I_ij = ∫ d³x ρ(x)(x_k x^k δ_ij − x_i x_j).
Pulsars and magnetars are rotating neutron stars. If they are endowed with a quadrupole moment, there is the possibility of generating continuous GWs. The case of interest to us is where the deformations are such that there is a privileged direction-as in cases of a magnetic-field-induced deformation (see subsection 2.2). Here, the star's magnetic moment furnishes a privileged direction, as illustrated in Fig. 1. We also neglect any precession. Such deformations are usually parametrised either by a surface ellipticity ε S = (R equator − R polar )/R polar [23] or by a quadrupole ellipticity, defined as [25][26][27]
ε_Q = −Q/I. (2.3)
Here, I is the mean moment of inertia about the rotation axis, defined in terms of angular momentum J as I = J/Ω. In the Newtonian limit, and for a simple distortion with a privileged direction, we have the relevant ε Q ∝ (I 33 − I 22 ). ε S and ε Q quantify slightly different physics, geometrical and bulk distortions respectively, and coincide only for a star with a constant-density equation of state [26]. ε Q , which quantifies the star's bulk deformation, is the most relevant quantity in our case. Contributions to ε Q , purely due to stellar rotations, will not contribute to continuous GWs. For the case of magnetic deformations, with the privileged direction for the deformations making two of the mass quadrupole moment eigenvalues equal, we may write the relevant quadrupole ellipticityε Q as [25]
ε̃_Q = −(3/2) Q̃_33/I_3. (2.4)
Here, Q̃ is the mass quadrupole moment due to the magnetic field, in a frame of reference (S̃) where it is diagonal. I_3 is the principal moment of inertia about the rotation axis. The S and S̃ coordinate system quantities are related by Q = R Q̃ Rᵀ, where R is an appropriate rotation matrix. The additional factor of 3/2 is introduced to recover the classical definition of ellipticity in the Newtonian limit [25,72]. Consider now a neutron star, rotating with an angular speed Ω_NS, whose rotational and magnetic field axes are misaligned by a wobble angle α. Then, from Eq. (2.1), we may derive the leading GW waveform to be [25]
h_+ = h_0 sin α [ (1/2) cos α sin θ cos θ cos(Ω_NS t_r) − sin α ((1 + cos²θ)/2) cos(2Ω_NS t_r) ],
h_× = h_0 sin α [ (1/2) cos α sin θ sin(Ω_NS t_r) − sin α cos θ sin(2Ω_NS t_r) ]. (2.5)
In the above expressions, we have defined
h_0 = −(6G/c⁴) Q̃_33 Ω²_NS / r. (2.6)
+ and × denote the two polarizations. r is the distance to the source and the retarded time is defined as t_r = t − r/c. θ is the line-of-sight angle to the observer, measured from the rotation axis. Through Eq. (2.4), note that Eq. (2.5) indeed has a dependence on ε̃_Q. From above, we see that for a general wobble angle, GWs may be emitted at Ω_NS or 2Ω_NS frequencies. Eq. (2.5) is valid under the assumption that the magnetic field and angular velocity do not change significantly during a single period of the neutron star's rotation. This "slow-roll" assumption is generally true for most neutron stars and will specifically be valid for the cases we study.
The GW amplitude (h 0 ) may be directly related to the strain (∆L/L) of the GW detector arms. The reach in h 0 , for Advanced LIGO 1 and the proposed Einstein telescope 2 , are around 10 −24 − 10 −26 and 10 −26 − 10 −27 respectively [13,15,18,20], in the 10 − 100 Hz frequency range of interest. This is assuming a year of phase-coherent observations and signal integration times [13,15]. There have been many pioneering searches already for continuous GWs [16][17][18], and future third-generation GW detectors are expected to significantly improve the sensitivities in the niche frequency bands [19][20][21][22].
Eq. (2.5) may now be used in detail, to understand how the magnetic-field-induced deformations affect continuous GWs, and how specifically modifications induced by the production of MMMs will impact it. As we will remark later, we will specifically concentrate on the 2Ω NS frequency mode, without much loss of generality, for making our estimates. This choice will help us express the GW amplitude h 0 almost solely in terms of observable parameters, like the neutron star time period and spin-down rate.
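As a rough numerical orientation (a sketch, not part of the analysis in this paper), combining Eqs. (2.4) and (2.6) gives h_0 = 4G ε̃_Q I_3 Ω²_NS/(c⁴ r), which can be evaluated for fiducial numbers. The moment of inertia, spin period and distance used below are illustrative assumptions and are not values quoted in the text.

```python
import numpy as np

G, C = 6.674e-11, 2.998e8            # SI units
KPC = 3.086e19                       # metres per kiloparsec

def h0(eps_Q, I3, Omega_NS, r):
    """h0 = 4 G eps_Q I3 Omega_NS^2 / (c^4 r), from combining Eqs. (2.4) and (2.6)."""
    return 4.0 * G * eps_Q * I3 * Omega_NS**2 / (C**4 * r)

# Illustrative, assumed fiducial numbers (not values quoted in the text).
I3 = 1.0e38                          # kg m^2, canonical neutron-star moment of inertia
Omega_NS = 2.0 * np.pi / 30.0e-3     # rad/s, a ~30 ms spin period
r = 1.0 * KPC                        # source distance

for eps_Q in (1e-4, 1e-6, 1e-8):
    print(f"eps_Q = {eps_Q:.0e}  ->  h0 ~ {h0(eps_Q, I3, Omega_NS, r):.1e}")
```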
Magnetic field induced quadrupole moments
Let us now briefly consider the rudimentary ideas behind stellar deformations induced by magnetic fields. It has long been known that a magnetic field threading a star could have a significant effect on its equilibrium configuration, and analogous to rotations, may induce mass quadrupole moments [23,24]. The basic underlying physics behind this phenomena may be understood based on simple energetic arguments.
To sharpen the discussion, consider a special case for the potential deformation, in a simple model for the neutron star-a perfect sphere, of radius R, comprising an incompressible fluid [23]. Assume that there is a uniform magnetic field in the interior and a dipolar magnetic field in the exterior. The respective field profiles are
B_r = B_0 cos θ,  B_θ = −B_0 sin θ   (r < R);
B_r = B_0 (R/r)³ cos θ,  B_θ = (1/2) B_0 (R/r)³ sin θ   (r > R). (2.7)
Consider now a small deformation of the neutron star, parametrised as
r(cos θ) = R + ζ P_l(cos θ)   (ζ ≪ R). (2.8)
P_l(cos θ) are the Legendre polynomials. Note also in passing that ζ may be related to the surface ellipticity, through ε_S ∼ −ζ/R. If the net change in energy due to this deformation is negative, then the deformation is more stable, relative to the initial, perfectly spherical configuration. It may be shown that the non-trivial change is mainly for the spherical harmonic mode l = 2 [23,24], and hence we focus on this. Such quadrupole deformations are also the ones most relevant to continuous GWs.
The net change in the energy stored in the magnetic fields may be readily computed, by summing the interior and exterior contributions. This gives [23]
δE_B = (9/20) ζ B_0² R². (2.9)
Note that this is first order in ζ. This change in magnetic field energy is positive if ζ > 0 (prolate) and negative if ζ < 0 (oblate). The corresponding change in gravitational energy, due to the deformation, is
δE_G = (3/25) (ζ/R)² (GM²/R). (2.10)
Note that in contrast to δE B , this is second order in ζ and is thus always positive. The total change in energy is obtained by summing the magnetic and gravitational energy contributions. This gives
δE = (3/25) (ζ/R)² (GM²/R) + (9/20) ζ B_0² R². (2.11)
Note from above that, for ζ ≪ R, the sign of the net change in energy will be determined directly by the sign of ζ.
To obtain the most stable configuration, we need to minimise δE; if the minimum comes out to be negative, this would suggest an energetically more favorable configuration [23]. Minimisation gives
ζ/R = −(15/8) B_0² R⁴/(GM²) = −(9/2) (B_0/B_*)². (2.12)
Here, B_*² = 12GM²/(5R⁴) is the limit on the magnetic field coming from the virial theorem [23], and corresponds to around 10¹⁸ G for neutron stars. Thus, under this magnetic configuration, the incompressible fluid star undergoes an oblate deformation, departing from pure spherical symmetry. This is the basic idea behind how quadrupole moments are generated by magnetic fields threading a star. This is in fact a generic phenomenon, with the exact nature and extent of the deformation depending on the magnetic field configuration and the star's specific equation of state.
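For orientation, a minimal numerical sketch of Eq. (2.12) and of the virial field B_* follows; the assumed stellar mass and radius (1.4 solar masses, 10 km) are fiducial choices, not parameters fixed by the text.

```python
import numpy as np

G_CGS, M_SUN = 6.674e-8, 1.989e33          # cgs units
M, R = 1.4 * M_SUN, 1.0e6                  # assumed fiducial mass (g) and radius (cm)

B_star = np.sqrt(12.0 * G_CGS * M**2 / (5.0 * R**4))   # virial-limit field, Gauss

for B0 in (1e14, 1e15, 1e16):              # Gauss
    zeta_over_R = -4.5 * (B0 / B_star)**2  # Eq. (2.12)
    print(f"B0 = {B0:.0e} G: zeta/R ~ {zeta_over_R:.1e}   (B_* ~ {B_star:.1e} G)")
```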
For an external dipolar magnetic field configuration in a neutron star, let us now examine a few simple equations of state, and their effects on bulk deformation (quantified by ε̃_Q). To simplify discussions, define a dimensionless deformation parameter (D) through the relation
ε̃_Q = D B²/B_*². (2.13)
Without loss of generality, we have made the normalisation with respect to B * . The deformation parameter D, may be related to the magnetic distortion factor defined in [25].
Consider the case of a constant density fluid. In this case, the quadrupole ellipticity may be computed as [67]
ε̃_Q^const. = (2/15) B²/B_*², (2.14)
giving D = 2/15. For the case of an n = 1 polytrope, again with an exterior dipolar magnetic field, we have [67]
ε̃_Q^{1-poly.} = [36π⁵(12 − π²)/(5(π² − 6)³)] B²/B_*², (2.15)
in which case D = 36π⁵(12 − π²)/(5(π² − 6)³).
For almost the same magnetic field magnitude and exterior field configuration, the latter polytropic equation of state leads to a larger deformation.
Considering the values of the deformation parameter, in these examples, it seems D ∼ [10 −1 , 10 2 ]. These ranges for D are also believed to be typical for more realistic equation of states and field configurations [25,67], and we will use them for making our estimates. The effects due to rotations have been neglected in these estimates [67].
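The two deformation parameters above can be compared with a few lines of code. The sketch below evaluates Eqs. (2.13)-(2.15) for an assumed fiducial star and an assumed illustrative field of 10¹⁵ G; these inputs are placeholders rather than values prescribed by the text.

```python
import numpy as np

G_CGS, M_SUN = 6.674e-8, 1.989e33                    # cgs units
M, R = 1.4 * M_SUN, 1.0e6                            # assumed fiducial mass (g), radius (cm)
B_star = np.sqrt(12.0 * G_CGS * M**2 / (5.0 * R**4)) # virial field, Gauss

D_const = 2.0 / 15.0                                                          # Eq. (2.14)
D_poly1 = 36.0 * np.pi**5 * (12.0 - np.pi**2) / (5.0 * (np.pi**2 - 6.0)**3)   # Eq. (2.15)

B = 1.0e15                                           # Gauss, an assumed illustrative field
for label, D in [("constant density", D_const), ("n = 1 polytrope ", D_poly1)]:
    print(f"{label}: D = {D:7.3f},  eps_Q = D (B/B_*)^2 ~ {D * (B / B_star)**2:.1e}")
```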
There are a few observational upper bounds on ε̃_Q, for neutron stars in their early stages. X-ray light curves from short gamma ray bursts have been used to constrain ε̃_Q of post-merger stable neutron stars, giving mean bounds in the range [15,73]
ε̃_Q^{Obs. GRB} ≲ 10⁻² − 10⁻¹. (2.16)
For pulsars in their later stages, there are constraints from continuous GW searches by the LIGO-VIRGO collaboration, giving fiducial ellipticity bounds in the range [10⁻², 10⁻⁸] [16][17][18]. Theoretical models suggest bounds on fiducial ellipticities of compact stars in the range 10⁻² − 10⁻⁷ [27,[62][63][64][65], depending on the stellar mass, hadron composition, epoch, equation of state and theoretical approximations used. Interestingly, there is even possibly an indication for a lower bound on ε̃_Q, of about 10⁻⁹, from analyses of millisecond pulsars [74]. We will always work with values well below the mean bounds in Eq. (2.16). The main difference from taking lower values for ε̃_Q, or equivalently D, will be to make the GW signal undetectable much earlier in time since the neutron star's birth, or completely undetectable if D is extremely small.
In summary, the elastic properties of the neutron star crust [27,[62][63][64][65], and the presence of very strong magnetic fields, may lead generically to the presence of sustained deformations, resulting in a non-zero quadrupole ellipticity. As remarked earlier, there may even be a time evolution of the magnetic-field-induced quadrupole ellipticity in these early phases. This is a plausible scenario assuming that the concerned crustal stresses and strains, due to the magnetic pressure, are below the breaking limit [56,57]. An evolving quadrupole ellipticity has been previously studied, in other GW contexts [58][59][60][61], and we would like to explore if the presence of MMMs may leave imprints on this quadrupole ellipticity evolution, and consequent GW generation.
3 Milli-magnetic monopoles and non-perturbative production
Milli-magnetic monopoles and theoretical foundations
Magnetic monopoles are yet to be observed in nature. They nevertheless seem to be a very generic prediction of many quantum field theories and model frameworks (see for instance [75], and related references).
In conventional Maxwellian electrodynamics, the homogeneous equation ∇ · B = 0, or equivalently the Bianchi identity of the field tensor F αβ , presupposes the non-existence of magnetic monopoles. In this framework, the manifestly covariant equations in vacuum take the form
∂_μ F^{μν} = 0,  ∂_μ F̃^{μν} = 0. (3.1)
Here, F̃^{μν} = (1/2) ε^{μνρσ} F_{ρσ} is the dual field tensor, and the Bianchi identity implies F_{μν} = ∂_μ A_ν − ∂_ν A_μ. As is well known, the vacuum equations are symmetric under the duality transformation
F^{μν} → F̃^{μν},  F̃^{μν} → −F^{μν}. (3.2)
Once we introduce an electric source, say J α , this symmetry is lost. To consider restoration of the symmetry, we may speculate the addition of an analogous magnetic source term K α . The equations then take the form
∂_μ F^{μν} = −e J^ν,  ∂_μ F̃^{μν} = −g K^ν, (3.3)
which are clearly symmetric under the transformations
F^{μν} → F̃^{μν},  F̃^{μν} → −F^{μν},  e J^ν → g K^ν,  g K^ν → −e J^ν. (3.4)
The addition of the K α term introduces magnetic monopoles. The theoretical underpinnings for milli-magnetic monopoles, in the context of kinetic mixings, were discussed in [50], and put on a firmer theoretical foundation later in [76]. Among the theoretical subtleties, in incorporating magnetic monopoles directly in a quantum field theory, is the fact that it is not possible to write a local, Lorentz invariant Lagrangian containing both electric and magnetic charges [77][78][79][80]. We briefly review the theoretical framework [76] for incorporating MMMs, through kinetic mixing, as a specific example of incorporating MMMs into beyond Standard Model extensions. This will also help fix notations.
One theoretical strategy to incorporate magnetic monopoles, by Zwanziger [79], contains two gauge potentials A α andà α , with a local Lagrangian, but without any manifest Lorentz invariance [79,81]. In this formulation, one of the gauge potentials, A α , couples locally to the electric current J α , while the other,à α , couples to the magnetic current K α . The Lagrangian density takes the form [76,79,81]
L = −(n^α n^μ / 2n²) [ η^{βν} ( F^A_{αβ} F^A_{μν} + F^Ã_{αβ} F^Ã_{μν} ) − (1/2) ε_μ^{νγδ} ( F^Ã_{αν} F^A_{γδ} − F^A_{αν} F^Ã_{γδ} ) ] − e J_μ A^μ − (4π/e) K_μ Ã^μ. (3.5)
Here, F^A_{αβ} = ∂_α A_β − ∂_β A_α and F^Ã_{αβ} = ∂_α Ã_β − ∂_β Ã_α are the respective field tensors. n^α is an arbitrary four-vector, corresponding to the direction of the Dirac string in certain gauge choices. The presence of n^α projects out two on-shell photon polarizations, breaking manifest Lorentz invariance [76,79,81]. It has been argued that physical observables of the theory are independent of n^α [80]. The above Lagrangian density correctly gives the modified Maxwell's equations in Eq. (3.3), with the definition
F^{μν} = (n_α/n²) [ n^μ F_A^{αν} − n^ν F_A^{αμ} − ε^{μναβ} n^γ F^Ã_{γβ} ]. (3.6)
Let us now understand how MMMs may specifically be included, in this framework, in the context of kinetic mixing [46]. For this, consider the Lagrangian density [76] incorporating kinetic mixing with a dark sector (whose low-energy states are all Standard Model gauge singlets; labelled by subscript 'D')
L_MMM ⊃ −(n^α n^μ / 2n²) [ η^{βν} ( F^A_{αβ} F^A_{μν} + F^Ã_{αβ} F^Ã_{μν} ) − (1/2) ε_μ^{νγδ} ( F^Ã_{αν} F^A_{γδ} − F^A_{αν} F^Ã_{γδ} ) ] − e J_μ A^μ − (4π/e) K_μ Ã^μ
− (n^α n^μ / 2n²) [ η^{βν} ( F^{A_D}_{αβ} F^{A_D}_{μν} + F^{Ã_D}_{αβ} F^{Ã_D}_{μν} ) − (1/2) ε_μ^{νγδ} ( F^{Ã_D}_{αν} F^{A_D}_{γδ} − F^{A_D}_{αν} F^{Ã_D}_{γδ} ) ] − (m²_{D_A}/2) A_{Dμ} A_D^μ − e_D J_{Dμ} A_D^μ − (4π/e_D) K_{Dμ} Ã_D^μ
+ χ (n^α n^μ / n²) η^{βν} ( F^{A_D}_{αβ} F^A_{μν} − F^{Ã_D}_{αβ} F^Ã_{μν} ). (3.7)
F^{A_D} and F^{Ã_D} are the field tensors corresponding to the dark gauge potentials A_D and Ã_D. J_D and K_D are the dark electric and magnetic currents, with e_D being the dark electric charge. e and e_D are in general independent parameters of the model. Without loss of generality, we take the n^α four-vector to be the same in both sectors; this can always be achieved with appropriate gauge transformations. The two sectors are connected by kinetic mixing, via the last term in Eq. (3.7). This term is equivalent to (χ/2) F^{μν} F_{D μν}, from the definition in Eq. (3.6). The mass term for A_{Dμ} breaks the SO(2) symmetry of the kinetic terms and is uniquely responsible for MMMs [76].
Considering A µ D to be massive, after field redefinitions, we get magnetic monopoles that have effective milli-magnetic charges [50,76], at low energies. Explicitly, consider the field redefinitions
A^μ → A^μ + χ A_D^μ,  Ã^μ → Ã^μ,
A_D^μ → A_D^μ,  Ã_D^μ → Ã_D^μ − χ Ã^μ. (3.8)
Note that the above field transformations, ensure that the visible-sector gauge potentials (A µ ,Ã µ ) do not get mass terms, and hence U (1) EM remains unbroken. After these field redefinitions, making the kinetic terms canonical, the relevant interaction terms become
L_int. ⊃ e J_μ A^μ + eχ J_μ A_D^μ + e_D J_{Dμ} A_D^μ + (4π/e) K_μ Ã^μ + (4π/e_D) K_{Dμ} Ã_D^μ − (4πχ/e_D) K_{Dμ} Ã^μ. (3.9)
After making the kinetic terms canonical, one now has an effective interaction of the form 4πχ/e D K DµÃ µ . This makes the dark-sector magnetic monopoles milli-magnetically charged under the visible photon, with an interaction strength of 4πχ/e D . χ in general is an arbitrary, irrational number. This is the origin of the fractional magnetic charge, and of MMMs. Naively, χ being an irrational number may seem to violate the Dirac charge quantization condition at low energies. The emergence of milli-magnetically charged particles, through kinetic mixing, is nevertheless still consistent with a global Dirac quantization condition [47,76].
Figure 2: The pair-production rates per unit volume (log₁₀[Γ₀/1 m⁻³ s⁻¹]), for milli-magnetic monopoles at zero temperature, are shown. The magnetic field has been taken to be 10¹⁶ G. The zero temperature rates bracket the true rates that may be operational in systems with a finite temperature.
Moving forward, let us henceforth define all MMM charges with respect to the visible sector g ≡ 4π/e. Towards this end, define the MMM charge parameter ξ as
ξ ≡ χ g_D/g. (3.10)
Here, we have defined g D ≡ 4π/e D . With respect to our photon, MMMs therefore have magnetic charges ξg ≡ χg D . We will express all analyses and limits with respect to ξ henceforth.
Non-perturbative pair production of milli-magnetic monopoles
In Quantum Electrodynamics, when the field strengths are very large, one may have non-perturbative production of electrically or magneticallly charged particles, through the Schwinger pair-production mechanism [52,53,[82][83][84]. This is a distinct phenomena compared to, for instance, perturbative electron-positron pair-production (γ + γ → e + + e − ). For field strengths comparable to the particle masses, the non-perturbative rates may be exponentially enhanced.
For zero temperature and homogeneous magnetic fields, as compared to the Compton wavelength and separation of the particles, the average MMM pair-production rate, per unit volume, is given by [52,53]
Γ₀ = (ξ²g²B²/8π³) exp(−πm²/(ξgB)). (3.11)
The zero temperature rate assuming a magnetic field of 10¹⁶ G is shown in Fig. 2. This is the first term in the vacuum decay rate [52,53,85]. Recently, this computation was also extended to strong coupling and finite temperatures [86]. We are interested in light, milli-magnetically charged monopoles of mass m ≲ O(1 eV), with effective magnetic charges ξg ≪ 1, as in Eq. (3.9). We assume that g_D ∼ g ≡ 4π/e, and that any higher order instanton corrections to the MMM pair-production rates [52,53,85,86] may be neglected, to good approximation. Also note that for the MMM mass ranges we consider, the Compton wavelengths (λ^max_Compt. ≲ 1 m) are such that local magnetic field inhomogeneities in the neutron star may be neglected, to leading order.
Based on theoretical models and measurements, currently observed neutron stars are believed to have mean surface temperatures of the order of 10 6 K. It is believed that in the early stages of their formation, the mean temperatures may have been even higher (∼ 10 11 K). In the standard cooling scenario for neutron stars, it is presumed that a neutron star when formed has internal temperatures approaching 10 11 K or more, and subsequently cools down by various processes-neutrino emissions (through the URCA and modified URCA processes), neutrino pair bremsstrahlung, thermal photon emissions and so on (see, for instance, [72,87] and references therein). The rate of cooling differs widely during the many stages, with timescales varying from seconds to thousands of years. The neutron star mean temperature is thought to evolve from around 10 11 K to 10 4 K over a few million years [72,87].
Thus, a more relevant quantification of the MMM production rate, at least in the initial phases of the neutron star's life, should try to incorporate the effects of this finite temperature. As mentioned earlier, there has been tremendous progress recently in computing Schwinger pair-production rates at finite temperature, both for electrically charged as well as for strongly-coupled magnetic monopoles [45,86,[88][89][90][91][92][93][94][95][96][97][98]. There is currently some disagreement on the exact functional form of the worldline instanton (see for instance discussions in [45,86,[94][95][96][97][98]). Nevertheless, there seem to be a few generic predictions-an exponential enhancement in the pair-production rate relative to zero temperature rates, and a critical temperature below which the thermal enhancements switch off [45,86,90,91,94,95,97].
The critical temperature (T_C) is a function of the magnetic field, monopole mass and magnetic charge [45,86,90,91,94,95,97],
T_C(m, ξ, B) ≡ ξgB/(2m). (3.12)
Below this critical temperature, the thermal enhancements turn off and the rate subsequently follows the zero temperature rate, given by Eq. (3.11). The critical temperature estimates for our regions of interest are illustrated in Fig. 3. The thermal rate, at a finite temperature T ≡ β −1 , may be approximated as [95,97]
Γ_T(m, ξ, B, T) ≃ Σ_{p=1}^{∞} (−1)^{p+1} [ξ²g²B²/(8π³p²)] exp(−pπm²/(ξgB))
+ Θ(T − T_C) Σ_{p=0}^{∞} Σ_{n=1}^{n_max} [2(−1)^p (ξgB)² / ((2π)^{3/2} (nmβ)^{1/2} ϑ²)] [1 − (nβξgB/(2m))²]^{−1/4} exp{ −(m²/(2ξgB)) [2π(p + 1) − 2 arcsin(nT_C/T)] + (nm/(2T)) √(1 − n²T_C²/T²) }, (3.13)
following the notion of an electromagnetic dual to Schwinger pair production by an electric field [52,86,95,97]. Here, Θ(x) is the Heaviside step function, n_max ≡ ⌊2m/(ξgBβ)⌋ = ⌊T/T_C⌋, and ϑ = 2π(p + 1) − 2 arcsin(nT_C/T). ⌊x⌋ denotes the largest integer less than or equal to x. This explicit analytic expression, derived in the worldline instanton framework utilising a saddle-point approximation, is valid for the semi-classical parameter ξgB/m² ≲ 2π [52,86,95,97,99]. Note that the enhancement is present only when T > T_C, as already mentioned, and changes abruptly below it. In fact, Eq. (3.13) seems to suggest that the rate also changes abruptly at all integer multiples of T_C, owing to n_max = ⌊T/T_C⌋. We will utilise the above expression, in regions satisfying ξgB/m² ≲ 2π, to estimate Schwinger pair-production rates at finite temperatures.
Figure 3: Plot of log₁₀[T_C/1 K] is shown, for a fixed magnetic field of 10¹⁶ G. Certain regions are irrelevant, due to the exponential suppression of Schwinger pair-production rates. Mean energetic arguments from magnetars [50] also render regions with ξ ≳ 10⁻¹⁷ (gray band) unviable, for m ≲ 1 eV.
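For concreteness, the zero-temperature rate of Eq. (3.11) and the critical temperature of Eq. (3.12) can be evaluated directly; the sketch below works in natural units with g = 4π/e, and the particular (m, ξ, B) point is an illustrative assumption (chosen near the values used later in the text). The full thermal expression of Eq. (3.13) is not reproduced here.

```python
import numpy as np

# Natural units: hbar = c = k_B = 1, everything expressed in eV.
E_CHARGE = np.sqrt(4.0 * np.pi / 137.036)   # electric charge, Heaviside-Lorentz convention
G_MAG = 4.0 * np.pi / E_CHARGE              # g = 4*pi/e, as defined in the text
GAUSS_TO_EV2 = 1.95e-2                      # 1 G ~ 1.95e-2 eV^2
EV_TO_KELVIN = 1.1605e4
EV4_TO_PER_M3_S = (5.068e6)**3 * 1.519e15   # eV^4 -> m^-3 s^-1

def gamma0(m, xi, B_gauss):
    """Zero-temperature pair-production rate per unit volume, Eq. (3.11)."""
    x = xi * G_MAG * B_gauss * GAUSS_TO_EV2             # xi*g*B in eV^2
    return (x**2 / (8.0 * np.pi**3)) * np.exp(-np.pi * m**2 / x)

def t_crit(m, xi, B_gauss):
    """Critical temperature of Eq. (3.12), in Kelvin."""
    return xi * G_MAG * B_gauss * GAUSS_TO_EV2 / (2.0 * m) * EV_TO_KELVIN

m, xi, B = 20.0e-3, 1.0e-19, 1.0e16         # eV, dimensionless, Gauss (illustrative point)
print(f"T_C ~ {t_crit(m, xi, B):.2e} K")
print(f"Gamma_0 ~ {gamma0(m, xi, B) * EV4_TO_PER_M3_S:.2e} m^-3 s^-1")
```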
Note that at a characteristic worldline sphaleron temperature, much higher than T_C, the pair production transitions from a quantum tunnelling phenomenon to a classical, thermal process, described by a worldline sphaleron [96]. The characteristic worldline sphaleron temperature [96], where this transition occurs, is greater than ∼ 10¹¹ K for the parameter space of interest to us. Since the neutron star is believed to cool to around 10¹¹ K within just a few seconds of its formation, we are mostly outside the sphaleron regime.
For the MMM and dark photon mass ranges we will consider, the MMM Compton wavelength and string separation between monopole and anti-monopole [50,76,100] are also such that the magnetic field spatial inhomogeneities may be neglected, to good approximation. The temporal variation of the magnetic field is also very gradual, and its effects may similarly be neglected while computing rates, to leading order.
The additional magnetic field dissipation, due to Schwinger pair production of MMMs, may cause a deviation in the time evolution of the gravitational wave amplitude, and frequency, relative to the conventional case. The fact that the non-perturbative pair-production rate reverts to the zero temperature rate, below a characteristic temperature T C [45,86,90,91,94,95,97], also opens up an intriguing possibility. As the neutron star cools down during its lifetime, if milli-magnetic monopoles exist, there could potentially be an abrupt change in the monopole production rate, in the vicinity of T C , that relatively brusquely affects the gravitational wave amplitude and frequency subsequent to it. As emphasised before, T C itself is a function of the magnetic field, monopole mass and magnetic charge ξ. Note that as the MMMs we are considering have very small masses and tiny magnetic charges, we do not expect them to drastically affect the ordinary thermal evolution or dynamic processes in the neutron star in a very significant way.
These comparatively abrupt features in the waveform would be a universal signature, potentially visible across different magnetar systems, in their early phase continuous gravitational wave emissions. They should also be distinct from signals originating due to typical astrophysical phenomena, and hence potentially distinguishable. As may be deduced from Fig. 3, for a field of 10 16 G, the critical temperature may be as high as 10 8 K, in the viable (m, ξ) parameter space of interest.
4 Effects of milli-magnetic monopoles on gravitational waves
With the basic concepts in place from the previous sections, we may now undertake a study of what potential effects MMMs may have on continuous gravitational waves from single neutron stars.
The MMMs are generally confined objects with a string connecting the monopole and anti-monopole [50,76,100]. They behave like magnetically charged objects only beyond a particular distance O(1/m_{D_A}). This suggests a characteristic lower value for the dark photon mass m_{D_A}. There is also an upper bound to m_{D_A} that must be considered. The external magnetic field will accelerate the MMMs out of the magnetar, as long as the string tension between the pair produced MMMs (O(m²_{D_A})) is smaller than the external electromagnetic force. The gravitational forces on the MMMs, due to the neutron star, are many orders of magnitude smaller than the Lorentz forces, and hence do not furnish any further bounds. These requirements altogether translate finally to [50]
1/R_NS ≲ m_{D_A} ≲ √(ξgB). (4.1)
For the parameter space of interest, the upper bound gives m_{D_A} ≲ 10⁸ km⁻¹, which may be trivially incorporated. Neutron stars have typical radii ∼ 10 km and we set the lower limit for the dark photon mass by it. This will also make robust our assumption of magnetic field homogeneity, relative to the particle Compton wavelength and separation. We will work assuming the above two bounds for m_{D_A}. Lower dark photon masses and corresponding modifications may be readily incorporated phenomenologically, by assuming an exponential suppression [50] of the external field, as felt by the monopole and anti-monopole.
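A minimal numerical sketch of the window in Eq. (4.1) is given below, converting to inverse kilometres; the (ξ, B, R_NS) inputs are assumed illustrative values, and the upper limit is implemented as √(ξgB), following the string-tension argument above.

```python
import numpy as np

G_MAG = 4.0 * np.pi / np.sqrt(4.0 * np.pi / 137.036)   # g = 4*pi/e
GAUSS_TO_EV2 = 1.95e-2                                  # 1 G ~ 1.95e-2 eV^2
EV_TO_INV_KM = 5.068e9                                  # 1 eV ~ 5.07e9 km^-1

xi, B, R_NS_km = 1.0e-19, 1.0e16, 10.0                  # assumed illustrative values

lower = 1.0 / R_NS_km                                             # 1/R_NS, km^-1
upper = np.sqrt(xi * G_MAG * B * GAUSS_TO_EV2) * EV_TO_INV_KM     # sqrt(xi*g*B), km^-1
print(f"{lower:.1e} km^-1  <~  m_DA  <~  {upper:.1e} km^-1")
```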
The subsequent history of the MMMs, after they are pair-produced and expelled by the magnetic field, is not important, as they do not return energy back into the magnetic fields. As mentioned earlier, due to the tiny MMM mass and charge, any direct imposition on the thermal or dynamical evolution of the neutron star should also be very marginal, after production. This is in sharp contrast to heavy magnetic monopoles, if they exist, that may be captured and trapped by neutron stars, and which may impact the internal neutron star processes and dynamics more drastically. For instance, these heavy magnetic monopoles may efficiently catalyse nucleon decays in the neutron star [42][43][44]. It is also distinct from interesting scenarios where very heavy dark matter states could be captured by neutron stars, sometimes through multiple scatterings, heating them up kinetically or through subsequent annihilations [101,102]. In such cases, measuring the temperatures of very old neutron stars could lead to very interesting constraints [101,102].
It was pointed out recently, in [50], that by considering an average magnetar field of 10¹⁵ G, monopole anti-monopole pair-production rates bracketed by the zero temperature rate, and an assumed magnetar active lifetime of 10⁴ yrs, one may place strong bounds on viable MMMs. For magnetars with magnetic fields in the range 10¹⁵ − 10¹⁶ G, and for various dark photon masses, such energetic considerations give limit estimates of
ξ ≲ 10⁻¹⁷, (4.2)
for m ≲ O(1 eV). Following [50], we will explicitly compute the limit on ξ and impose it, at each MMM mass of interest, before utilising that point in studying the evolution of the gravitational wave amplitude.
Let us now turn to the GW waveforms that could be expected. To be concrete, let us focus specifically on the GW mode with frequency 2Ω NS . Assuming the dominance of electromagnetic dipole radiation, from Eq.(2.5), the amplitude corresponding to the 2Ω NS frequency mode may be expressed as
h_0^{2Ω_NS,+} = (8/5) D (R²_NS/(c r)) (Ṗ/P) (1 + cos²θ)/2,   h_0^{2Ω_NS,×} = (8/5) D (R²_NS/(c r)) (Ṗ/P) cos θ. (4.3)
Note that when expressed in terms of the observablesṖ and P in this fashion, the amplitude at frequency 2Ω NS is independent of the unknown wobble angle α. This is an advantage to considering this specific frequency mode, as we had alluded to earlier. There is a dependence on the line-of-sight angle θ, that just gives an O(1) factor, and may be ignored for our order of magnitude estimates. The dominance of electromagnetic dipole radiation may be explicitly checked for reasonable values ofε Q , and we shall comment further on this later. From Eq. [13,15,18,20], in the 10 − 100 Hz frequency range of relevance to these continuous GWs. This is assuming 1-year signal integration times [13,15]. We note therefore from above that the amplitude is typically very small, except when the compact object is spinning rapidly, undergoing rapid braking with largeṖ or has large magnetic field induced deformations. One may therefore intuit, from Eq. (4.4), that one must search for candidate compact stars with aforementioned characteristics. This may be further sharpened by estimating the typical GW amplitudes one may expect from observed pulsars and magnetars, due to their assumed magnetic-field-induced quadrupole ellipticities, for reasonable ranges of the deformation parameter D. These estimates are shown in Fig. 4, for a few representative pulsar and magnetar candidates. The parameter values were taken from the ATNF 3 pulsar [103] and McGill 4 magnetar [104] catalogues. Estimates in Fig. 4 suggest that magnetars with large time periods (∼ 10 s) and conventional radio pulsars with relatively small magnetic fields (∼ 10 11 G), or equivalently smallṖ , may not be the most promising candidates to look for persistent GWs; or for that matter MMM imprints in them. Based on these broad inspections, perhaps the most promising candidates are a class of newlyborn magnetars, in their early stages of evolution-the so called millisecond magnetars [105][106][107][108][109][110][111][112]. Millisecond magnetars are new-born neutron stars with very high magnetic fields and very small time periods, and have already been speculated to be promising sources for continuous GWs [13,15,108]. They have also garnered much interest recently, in the context of fast radio bursts [112,113]. The other reason for optimism, while considering these candidates, is that the internal magnetic fields and temperatures are presumed to be much higher, during the early stages of the magnetar's formation; relative to their mean values taken over the entire magnetar lifetime. This opens up the possibility that detectable signatures may still be present in the early stages. The mean temperature of the neutron star is also varying very rapidly in the early epochs, and as we shall discuss later, this increases the possibility of MMM induced abrupt features in the GW waveforms. We therefore explore imprints on gravitational waves from millisecond magnetars, induced by MMMs; with magnetic charges below the bound set by mean energetic limits, as in Eq. (4.2).
Figure 4 (legend): J1808-2024, J1846-0258, J1714-3810, B0540-69, J1550-5418, B0531+21, Vela, J1640-4631, J0437-… (catalogue reference codes omitted).
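The observable-based scaling of Eq. (4.3) is easy to explore numerically. In the sketch below the (P, Ṗ, distance) combinations are purely illustrative assumptions (they are not entries taken from the ATNF or McGill catalogues), and the O(1) angular factor is dropped.

```python
C, KPC = 2.998e8, 3.086e19                    # SI units

def h0_2omega(D, P, Pdot, r_m, R_NS=1.0e4):
    """Leading 2*Omega_NS amplitude of Eq. (4.3); the O(1) angular factor is dropped."""
    return (8.0 / 5.0) * D * R_NS**2 * Pdot / (C * r_m * P)

# Purely illustrative (P [s], Pdot [s/s], distance [kpc]) combinations -- assumptions only.
examples = {
    "slow, high-B magnetar   ": (10.0, 1e-11, 5.0),
    "young millisecond object": (30e-3, 1e-10, 1.0),
}
for label, (P, Pdot, r_kpc) in examples.items():
    for D in (0.1, 100.0):
        print(f"{label}  D = {D:5.1f}:  h0 ~ {h0_2omega(D, P, Pdot, r_kpc * KPC):.1e}")
```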
Let us therefore look at the effects of MMM non-perturbative pair production in a very simplified toy model, for a newly-born millisecond magnetar. Consider specifically the magnetic field evolution in this toy model, assuming an external dipolar and uniform internal magnetic field, that attempts to capture the salient features. The simplified evolution equation [45,55,[114][115][116][117] may be written as
dB_NS(t)/dt ≃ [B_NS(t)/τ_dyn.] e^{−t/τ_dyn.} − B_NS(t)/τ_ohm − B²_NS(t)/(B_NS(0) τ_hall) − 2ξg l (V_m/R³_NS) Γ_T(m, ξ, B_NS(t), T(t)). (4.5)
The various terms try to crudely encapsulate the characteristic time-scales of the various relevant processes that are operational.
The first term is a dynamo term [55], that is believed to be operational for the first few seconds of a neutron star's birth, after which it winds down. It amplifies and regenerates the magnetic field in the magnetar. The second and third terms are the Ohmic and Hall drift terms, that contribute conventionally to the decay of the magnetic fields in a neutron star. Following standard literature, we take the dynamo, Ohmic and Hall drift time constants as τ dyn. = 10 s, τ ohm = 10 6 yrs and τ hall = 10 4 yrs [114,115] respectively. The respective time constants are in reality non-trivial functions of temperature and density, but the above values have been found to capture relevant effects [115]. A toy model of the magnetic field evolution, as encapsulated by Eq. (4.5), has also been seen to semiquantitaively reproduce [115] essential results from more detailed magneto-thermal simulations [115][116][117]. A similar evolution equation was also considered recently in [45], to set interesting limits on strongly-coupled, heavy magnetic monopoles.
The last term in Eq. (4.5) is due to the Schwinger pair production of MMMs, and is derived from energy conservation arguments. Specifically, it is obtained by equating the loss of energy from the electromagnetic field, to the energy needed for Schwinger pair production and to the work done in accelerating the monopole anti-monopole pairs outward. V m is the active volume over which MMMs are being non-perturbatively pair produced, and is taken to be the volume of the neutron star. l is the mean distance over which MMMs are being accelerated by the magnetic field, after production, and is equated to the diameter of the neutron star. The Schwinger pair production of the MMMs causes a non-perturbative decay of the magnetic flux. This is a potentially new source of flux decay in neutron stars, different from classical processes. Energy is being expended from the magnetic field during pair-production and during their expulsion.
Eq. (4.5) must be solved in tandem with the neutron star spin-down equation
\[ \frac{d\Omega_{\rm NS}(t)}{dt} \simeq -\frac{5}{12}\frac{R^{4}_{\rm NS}}{M_{\rm NS}}\, B^{2}_{\rm NS}(t)\,\Omega^{3}_{\rm NS}(t) - \frac{64}{25}\, G M_{\rm NS} R^{2}_{\rm NS}\, \epsilon^{2}_{Q}(t)\,\Omega^{5}_{\rm NS}(t)\, . \qquad (4.6) \]
In this spin-down equation, we have assumed that the magnetic axis is orthogonal to the rotation axis, i.e., α = π/2 [111]. Note from Eq. (2.5) that this choice would also cause continuous gravitational emissions solely at 2Ω_NS frequencies. In the above expression, the neutron star has been idealised as an almost spherical object, with moment of inertia ∼ (2/5) M_NS R^2_NS. The first term in Eq. (4.6) is due to electromagnetic dipole radiation, and the second term incorporates the gravitational quadrupole radiation. The latter term incorporates braking due to GW emissions and is proportional to ε^2_Q(t). The GW emission contribution is small compared to the dipole term, for all ε_Q values of interest to us, as may be explicitly verified. It hence validates the assumption in Eq. (4.3). We neglect effects due to precession in the time evolution.
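As a quick numerical illustration of that check, the sketch below evaluates the ratio of the quadrupole (GW) braking term to the dipole braking term of Eq. (4.6), with the factors of c restored for CGS inputs; the field, ellipticity and spin values are representative placeholders.

```python
# Sketch: ratio of the GW (quadrupole) braking term to the magnetic-dipole
# braking term in the spin-down equation, with factors of c restored for CGS.
# All input values are illustrative placeholders.
import math

G, c = 6.674e-8, 2.998e10
M, R = 2.8e33, 1.0e6                 # ~1.4 Msun, 10 km

def braking_ratio(B, eps_Q, Omega):
    dip = (5.0 / 12.0) * R**4 * B**2 * Omega**3 / (M * c**3)
    gw  = (64.0 / 25.0) * G * M * R**2 * eps_Q**2 * Omega**5 / c**5
    return gw / dip

Omega0 = 2 * math.pi / 0.03          # 30 ms spin period
print(braking_ratio(B=1e16, eps_Q=1e-4, Omega=Omega0))   # ~1e-8: dipole dominates
```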
When there is non-perturbative pair production of MMMs, the full gravitational waveform is plausibly affected, relative to the conventional case, in both amplitude and frequency. As seen from Eqs. (2.4), (2.5), (2.6) and (2.13), the amplitude of the waveform is modified directly due to the refinement of the quadrupole ellipticity. It is also affected indirectly through the adjustments in Ω_NS(t), induced via the modified magnetic field evolution of Eq. (4.5) and by the GW emission term in Eq. (4.6). The latter effects also modify the frequency of the emitted gravitational waveform, 2Ω_NS(t).
\[ h_0(t) \propto \epsilon_Q(t)\, \Omega_{\rm NS}(t)^{2}\, , \qquad \dot{\Omega}_{\rm NS}(t) \propto B^{2}_{\rm NS}(t)\, ,\; \epsilon^{2}_{Q}(t)\, . \qquad (4.7) \]
Remembering that ε_Q(t) ∝ B_NS(t)^2, ultimately all the altered characteristics are a consequence of the MMM-modified magnetic field evolution, condensed in the simplified Eq. (4.5). Thus, a revised modulation in the frequency and amplitude envelope of the GW waveform should be a consequence of MMM production in general. On a related note, observe from Eq. (4.5) that during the first many seconds after the millisecond magnetar's birth (say around time t_0) one may in some instances have a steady-state situation (Ḃ_NS(t_0) ∼ 0). This may be prompted by a near cancellation of the positive dynamo and negative MMM contributions
\[ \frac{B_{\rm NS}(t_0)}{\tau_{\rm dyn.}}\, e^{-t_0/\tau_{\rm dyn.}} \sim \frac{2\xi g\, l\, V_m}{R^{3}_{\rm NS}}\, \Gamma_T\big(m, \xi, B_{\rm NS}(t_0), T(t_0)\big)\, . \qquad (4.8) \]
This quasi steady-state, if achieved, should also reflect in the persistent GW emissions during these brief intervals, before the dynamo shuts off after O(10 s). The time-scales for the Ohmic and Hall-drift processes are much longer, and should not play a significant role at these very early times. The possibility of such a steady state was also effectively leveraged in [45], to place very interesting lower bounds on the mass of heavy magnetic monopoles.
To explore further, we numerically solve Eqs. (4.5) and (4.6), with a starting point taken as 10 yrs after the millisecond magnetar's formation [105-110,112], in a binary neutron star merger or supernova explosion. For the estimates, initial starting values of B^0_NS = 10^16 G, Ω^0_NS = 2π/(30 ms) and T^0_NS,pole = 4.5 × 10^6 K, as well as temperature evolution profiles, are taken following representative values in the literature [109,110,112,116]. The neutron star equatorial temperature is usually much lower than the polar temperature [116], and the internal temperatures are believed to be much higher. Discounting magnetic fields, the interior temperature is thought to be related to the surface temperature via an approximate scaling that roughly goes as T_NS,in ∼ T^2_NS,surf. [118]. To reduce model assumptions, to the extent possible, we will take the neutron star polar temperature prediction [116] as a crude proxy for the mean neutron star temperature. Assumption of a higher mean temperature would cause a further enhancement of the thermal Schwinger pair-production rate, and would only cause more pronounced deviations from conventional evolution. D is taken to be 81, corresponding to the case of an n = 1 polytropic equation of state. This gives an initial ε_Q of about 10^−4. This magnitude seems to be consistent with typical expectations for millisecond magnetars [113]. The distance to the source is taken as 1 kpc. For a magnetic charge of ξ = 10^−19, the MMM masses have been taken to be 15 meV, 20 meV, and 25 meV. The magnetic charge adopted for these masses satisfies the limit from mean energetic arguments, as derived in [50]. The parameter space points also satisfy ξgB/m^2 ≲ 2π, making Eq. (3.13) valid, and hence directly usable in Eq. (4.5). The dark photon mass has been taken as m_DA = 10^3 m^−1.
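A minimal integration sketch of this coupled toy evolution, Eqs. (4.5) and (4.6), is given below. The thermal pair-production rate Γ_T, the cooling curve T(t), and the MMM prefactor k_mmm are crude stand-in placeholders (the realistic rate and magneto-thermal profiles used for Fig. 5 are not reproduced here); the braking coefficients are the standard dipole and quadrupole ones with c restored.

```python
# Minimal sketch of the coupled toy evolution, Eqs. (4.5) and (4.6), in CGS.
# Gamma_T (thermal MMM pair-production rate), the cooling curve T(t) and the
# MMM prefactor k_mmm are crude placeholders, NOT the realistic inputs.
import numpy as np
from scipy.integrate import solve_ivp

G, c, YR = 6.674e-8, 2.998e10, 3.156e7
M, R = 2.8e33, 1.0e6                                   # ~1.4 Msun, 10 km
tau_dyn, tau_ohm, tau_hall = 10.0, 1e6 * YR, 1e4 * YR  # [s]
B0, Omega0 = 1e16, 2 * np.pi / 0.03                    # [G], [rad/s]
k_dip = (5.0 / 12.0) * R**4 / (M * c**3)               # dipole braking coefficient
k_gw = (64.0 / 25.0) * G * M * R**2 / c**5             # quadrupole braking coefficient
k_mmm = 1e-20                                          # placeholder MMM flux-decay prefactor

def T_of_t(t):                                         # placeholder cooling curve [K]
    return 4.5e6 / np.sqrt(1.0 + t / YR)

def Gamma_T(B, T):                                     # placeholder thermal Schwinger rate
    return B * np.exp(-1e7 / max(T, 1.0))

def rhs(t, y):
    B, Omega = y
    dB = (B / tau_dyn) * np.exp(-t / tau_dyn) - B / tau_ohm \
         - B**2 / (B0 * tau_hall) - k_mmm * Gamma_T(B, T_of_t(t))
    eps_Q = 1e-4 * (B / 1e16) ** 2                     # calibrated as in the text
    dOmega = -k_dip * B**2 * Omega**3 - k_gw * eps_Q**2 * Omega**5
    return [dB, dOmega]

sol = solve_ivp(rhs, [10 * YR, 60 * YR], [B0, Omega0], method="LSODA", rtol=1e-8)
print(sol.y[:, -1])                                    # toy B and Omega after ~60 yr
```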
Using Eq. (4.4), the results of these numerical evolutions are displayed in Fig. 5. As is clearly seen from these curves, the amplitudes deviate drastically from the conventional case in the first few decades after the millisecond magnetar's birth. If ε_Q, or equivalently D, is even smaller, the main difference will be that the GW amplitudes will fall below detectability much earlier in the epoch. As already mentioned, assuming a higher mean temperature would cause more conspicuous deviations with respect to conventional evolution. For the MMM masses and charges adopted in Fig. 5, the neutron star temperature, for the time period displayed, is always higher than the respective critical temperatures T_C(m, ξ, B(t)). Thus, for these parameter points, one does not expect, nor see, any relatively abrupt features in the gravitational wave amplitudes. Note also that the mean energetic arguments [50] for these MMM masses, and the corresponding limits on ξ based on them, are still relevant. The thermal Schwinger pair-production rates are very prolific in the early epochs, but almost completely switch off once the magnetic field value decreases below the critical field value ∼ m^2/(ξg); this happens after just a few decades. Thus, taken as an average over the entire lifetime of the magnetar, the mean energetic arguments should still furnish meaningful and interesting limits, while still being consistent with the enhanced rates and prominences in the early stages.
In general, as emphasised in subsection 3.2, one should expect to see comparatively abrupt features in the gravitational wave amplitude and frequency. They would have a distinct pattern, correlated
with temperature and magnetic field evolution. The presence or absence of such abrupt patterns in the GW waveform would of course depend on the (m, ξ) values of the MMMs that may exist in nature. More specifically, such abrupt patterns may appear if the mean temperature of the neutron star T_NS(t) falls below the MMM critical temperature T_C(t) at some point in time (equivalently, it may manifest through some evolution of a temperature gradient across neutron star layers). After this cross-over there should be a relatively abrupt change in the MMM pair-production rates, and hence a relatively abrupt change in the gravitational wave amplitude and frequency evolution. Assume one is starting at an initial time t_0, with
\[ T_{\rm NS}(t_0) > T_C(t_0)\, . \qquad (4.9) \]
For a cross-over to occur, a necessary criterion that the monotonically decreasing mean temperature and mean magnetic field profiles should satisfy, at some point subsequent to t_0, is
\[ \dot{T}_{\rm NS}(t) \lesssim \dot{B}(t)\, \frac{\xi g}{2m}\, . \qquad (4.10) \]
Here, the dot denotes a first time derivative.
For the gravitational waves to be detectable, such a crossing should also occur in the early stages of the millisecond magnetar's life. Depending on the allowed values of ε_Q, this may mean a time frame of seconds to decades following birth. An MMM imprint detection is also more plausible during the early stages, since the internal magnetic fields are at their highest (implying large pair-production rates), and the temperatures are also varying rapidly (implying that Eq. (4.10) is more prone to be satisfied). As seen from Fig. 3, in the viable ξ range, for MMM masses m ≲ 10^−5 eV, the critical temperatures can vary from 10^5 − 10^8 K. As the neutron star is expected to cool from 10^11 K to 10^6 K over its initial phase of a few hundred years, if MMMs exist with the above-mentioned masses and charges, they may leave imprints in the amplitude and frequency evolution that have a comparatively discontinuous character. During these epochs, they should also fall within the sensitivity ranges of future third generation gravitational wave detectors.
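As a simple illustration of how the criterion of Eq. (4.10) might be scanned, the snippet below checks it along placeholder cooling and field-decay histories; the profiles and the ξ, m values are assumptions chosen only for demonstration, and the Dirac charge g = 2π/e is used.

```python
# Sketch: check whether the cross-over criterion, Eq. (4.10), is met at any time,
# given placeholder mean temperature and field histories.  All profiles and
# parameter values here are illustrative assumptions.
import numpy as np

e = 0.30282                      # electric charge in natural (Heaviside-Lorentz) units
g = 2 * np.pi / e                # Dirac quantisation
xi, m_eV = 1e-19, 2e-2           # MMM charge fraction and mass [eV] (placeholders)

t = np.linspace(1e5, 3e9, 4000)                     # [s]
T_NS = 4.5e6 / np.sqrt(1 + t / 3.156e7)             # placeholder cooling curve [K]
B = 1e16 * np.exp(-t / (1e4 * 3.156e7))             # placeholder field decay [G]

K_PER_EV, EV2_PER_G = 1.1605e4, 1.95e-2             # unit conversions
Tdot = np.gradient(T_NS / K_PER_EV, t)              # dT/dt in eV/s
Bdot = np.gradient(B * EV2_PER_G, t)                # dB/dt in eV^2/s
crossing = Tdot <= (xi * g / (2 * m_eV)) * Bdot     # Eq. (4.10), both sides negative
print("criterion met at some epoch:", bool(crossing.any()))
```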
If they exist, these MMM imprints on GWs must be an almost universal feature across different newly-born millisecond magnetars. They must have a very unique pattern, correlated with the temperature and magnetic field evolution, and hence should be potentially distinguishable from many other astrophysical phenomena. At the moment, it is difficult to quantitatively demonstrate this in a satisfactory manner, through an explicit rate computation and evolution, even in the simplified toy model. This is because, in the potentially interesting (m, ξ) regions where such abrupt features may show up, we have ξgB/m^2 ≳ 2π. Therefore, in these regions, all the known analytic expressions for thermal Schwinger pair production break down, and their applicability is unclear [45,86,90-98].
Summary and conclusions
The search for continuous gravitational waves from neutron stars is well underway [16-18]. Exotic particle states beyond the Standard Model have the potential to leave their imprints on these waveforms. In this work, we speculated on the effect of milli-magnetic monopoles on persistent gravitational wave signals, sourced by single neutron stars.
Magnetic fields are known to cause distortions from spherical symmetry in compact astrophysical objects, generating a quadrupole moment [23,24]. If the magnetic and rotation axes are misaligned, this may produce detectable gravitational wave signals. Milli-magnetic monopoles may be copiously pair-produced in the extreme magnetic fields of neutron stars, such as magnetars, through the Schwinger pair-production mechanism [52,53]. This causes an additional attenuation of the magnetic field, relative to conventional field decay mechanisms operational in a magnetar. Consequently, through a modification of the quadrupole moment time evolution, this may leave imprints in the continuous gravitational waves during the early stages of a neutron star's life. A time evolution of the neutron star quadrupole moment has been considered previously in other contexts [58-61]. We found that the most promising candidate compact objects are a class of newly born magnetars, the so-called millisecond magnetars [105-110,112]. In addition to deviations from conventional evolution, an imprint may potentially be present, as comparatively discontinuous features in the gravitational waveform amplitude and frequency, in the early phases of a millisecond magnetar's life. Since the temperatures are rapidly evolving in the early stages, and the internal magnetic fields during these periods are also at their highest, these early times hold much promise. These signatures, if they exist as evidence for milli-magnetic monopoles, should be universally seen across new-born millisecond magnetars, with a very distinct pattern, and may therefore be potentially distinguishable from other astrophysical signatures.
A more detailed implementation of the neutron star magneto-thermal evolution [115-117], incorporating milli-magnetic monopole non-perturbative production, should help further clarify and add to the ideas of the present study. Another crucial aspect is reaching a consensus on the functional form of the thermal Schwinger pair-production rates [45,86,94-98] and striving to extend them to regions beyond the weak-field regime [52,119,120]. This would facilitate quantitative analyses in all regions of the viable (m, ξ) parameter space, and directly probing the presence of abrupt features in the GW waveforms. Incorporating effects due to field inhomogeneities [99] and finite chemical potentials [89,121], to account for the baryon environment and finite densities in a neutron star, would further sharpen future studies. Another crucial question is how prevalent millisecond magnetars are [109,110,112], and what their detection prospects are, across the lifetime of Advanced LIGO and future third generation GW detectors. We hope to address some of these in future works.
Figure 4: Estimates for the magnetic-field-induced GW amplitudes, from a few representative pulsar (Left) and magnetar (Right) candidates. The relevant parameter values were taken from the ATNF pulsar [103] and McGill magnetar [104] databases. D is varied in the range [10^−1, 10^2].
Figure 5: Evolution of the gravitational wave amplitude, a decade into the birth of the millisecond magnetar. The MMM charge has been fixed at 10^−19, and the MMM masses have been taken as 15 meV (dashed), 20 meV (dot-dashed), and 25 meV (dotted). The evolution of the gravitational wave amplitude when there are no MMMs is shown as a solid line. The initial conditions for the polar temperature (4.5 × 10^6 K), time period (30 ms) and mean magnetic field (10^16 G) were taken from representative values in the literature [112,116]. The distance to the source is assumed to be 1 kpc. D has been assumed to be 81, corresponding to an n = 1 polytropic equation of state. The amplitude should potentially be observable in third generation gravitational wave detectors, like the Einstein telescope, which is expected to have a sensitivity of 10^−26 − 10^−27 in the 10 − 100 Hz frequency range, assuming integration times of one year.
https://dcc.ligo.org/cgi-bin/DocDB/ShowDocument?.submit=Identifier&docid=T1800044&version=5
3 https://www.atnf.csiro.au/research/pulsar/psrcat/
4 http://www.physics.mcgill.ca/~pulsar/magnetar/main.html
Acknowledgments

We thank Martin Hendry, Anson Hook, Adam Martin, Dipanjan Mitra, Sunil Mukhi and Prasad Subramanian for discussions. A.T. would like to thank the organisers of the Gordon Research Conference on Particle Physics 2019, where parts of this work were completed, and would also like to acknowledge support from an SERB Early Career Research Award.
Observation of Gravitational Waves from a Binary Black Hole Merger. 10.1103/PhysRevLett.116.0611021602.03837Phys. Rev. Lett. 11661102LIGO Scientific, Virgo collaboration, Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102 [1602.03837].
GW150914: First results from the search for binary black hole coalescence with Advanced LIGO. 10.1103/PhysRevD.93.1220031602.03839Phys. Rev. 93122003LIGO Scientific, Virgo collaboration, GW150914: First results from the search for binary black hole coalescence with Advanced LIGO, Phys. Rev. D93 (2016) 122003 [1602.03839].
GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral. 10.1103/PhysRevLett.119.1611011710.05832Phys. Rev. Lett. 119161101LIGO Scientific, Virgo collaboration, GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119 (2017) 161101 [1710.05832].
GW170817: Measurements of neutron star radii and equation of state. 10.1103/PhysRevLett.121.1611011805.11581Phys. Rev. Lett. 121161101LIGO Scientific, Virgo collaboration, GW170817: Measurements of neutron star radii and equation of state, Phys. Rev. Lett. 121 (2018) 161101 [1805.11581].
Astrophysical Implications of the Binary Black-Hole Merger GW150914. 10.3847/2041-8205/818/2/L22L22 [1602.03846Astrophys. J. 818LIGO Scientific, Virgo collaboration, Astrophysical Implications of the Binary Black-Hole Merger GW150914, Astrophys. J. 818 (2016) L22 [1602.03846].
First Measurement of the Hubble Constant from a Dark Standard Siren using the Dark Energy Survey Galaxies and the LIGO/Virgo Binary?Black-hole Merger GW170814. Ligo Des, Virgo collaborationScientific, Virgo collaboration10.3847/2041-8213/ab14f1Astrophys. J. 876L7 [1901.01540DES, LIGO Scientific, Virgo collaboration, First Measurement of the Hubble Constant from a Dark Standard Siren using the Dark Energy Survey Galaxies and the LIGO/Virgo Binary?Black-hole Merger GW170814, Astrophys. J. 876 (2019) L7 [1901.01540].
A gravitational-wave measurement of the Hubble constant following the second observing run of Advanced LIGO and Virgo. 6060LIGO Scientific, Virgo collaboration, A gravitational-wave measurement of the Hubble constant following the second observing run of Advanced LIGO and Virgo, 1908.06060.
Constraints on cosmic strings using data from the first Advanced LIGO observing run. 10.1103/PhysRevD.97.1020021712.01168Phys. Rev. 97102002LIGO Scientific, Virgo collaboration, Constraints on cosmic strings using data from the first Advanced LIGO observing run, Phys. Rev. D97 (2018) 102002 [1712.01168].
Search for Tensor, Vector, and Scalar Polarizations in the Stochastic Gravitational-Wave Background. 10.1103/PhysRevLett.120.201102Phys. Rev. Lett. 1202011021802.10194LIGO Scientific, Virgo collaboration, Search for Tensor, Vector, and Scalar Polarizations in the Stochastic Gravitational-Wave Background, Phys. Rev. Lett. 120 (2018) 201102 [1802.10194].
Tests of General Relativity with GW170817. 10.1103/PhysRevLett.123.0111021811.00364Phys. Rev. Lett. 12311102LIGO Scientific, Virgo collaboration, Tests of General Relativity with GW170817, Phys. Rev. Lett. 123 (2019) 011102 [1811.00364].
Constraints on Lorentz Invariance Violations from Gravitational Wave Observations, in 8th Meeting on CPT and Lorentz Symmetry (CPT'19). Bloomington, Indiana, USALIGO Scientific, Virgo collaboration, Constraints on Lorentz Invariance Violations from Gravitational Wave Observations, in 8th Meeting on CPT and Lorentz Symmetry (CPT'19) Bloomington, Indiana, USA, May 12-16, 2019, 2019, 1906.05933.
. B S Sathyaprakash, Extreme Gravity and Fundamental Physics. 9221B. S. Sathyaprakash et al., Extreme Gravity and Fundamental Physics, 1903.09221.
Structure, Deformations and Gravitational Wave Emission of Magnetars. L Gualtieri, R Ciolfi, V Ferrari, 10.1088/0264-9381/28/11/114014114014 [1011.2778Class. Quant. Grav. 28L. Gualtieri, R. Ciolfi and V. Ferrari, Structure, Deformations and Gravitational Wave Emission of Magnetars, Class. Quant. Grav. 28 (2011) 114014 [1011.2778].
Detecting gravitational waves from mountains on neutron stars in the Advanced Detector Era. B Haskell, M Priymak, A Patruno, M Oppenoorth, A Melatos, P D Lasky, 10.1093/mnras/stv7261501.06039Mon. Not. Roy. Astron. Soc. 4502393B. Haskell, M. Priymak, A. Patruno, M. Oppenoorth, A. Melatos and P. D. Lasky, Detecting gravitational waves from mountains on neutron stars in the Advanced Detector Era, Mon. Not. Roy. Astron. Soc. 450 (2015) 2393 [1501.06039].
Gravitational waves from single neutron stars: an advanced detector era survey. K Glampedakis, L Gualtieri, 1709.07049K. Glampedakis and L. Gualtieri, Gravitational waves from single neutron stars: an advanced detector era survey, 1709.07049.
Virgo collaboration, First search for gravitational waves from known pulsars with Advanced LIGO. 10.3847/1538-4357/aa9aee,10.3847/1538-4357/aa677f1701.07709Astrophys. J. 83912LIGO Scientific, Virgo collaboration, First search for gravitational waves from known pulsars with Advanced LIGO, Astrophys. J. 839 (2017) 12 [1701.07709].
First narrow-band search for continuous gravitational waves from known pulsars in advanced detector data. 10.1103/PhysRevD.96.122006,10.1103/PhysRevD.97.1299031710.02327Phys. Rev. 96122006LIGO Scientific, Virgo collaboration, First narrow-band search for continuous gravitational waves from known pulsars in advanced detector data, Phys. Rev. D96 (2017) 122006 [1710.02327].
Searches for Gravitational Waves from Known Pulsars at Two Harmonics in 2015-2017 LIGO Data. 8507LIGO Scientific, Virgo collaboration, Searches for Gravitational Waves from Known Pulsars at Two Harmonics in 2015-2017 LIGO Data, 1902.08507.
The third generation of gravitational wave observatories and their science reach. M Punturo, 10.1088/0264-9381/27/8/084007Class. Quant. Grav. 2784007M. Punturo et al., The third generation of gravitational wave observatories and their science reach, Class. Quant. Grav. 27 (2010) 084007.
Sensitivity Studies for Third-Generation Gravitational Wave Observatories. S Hild, 10.1088/0264-9381/28/9/0940131012.0908Class. Quant. Grav. 2894013S. Hild et al., Sensitivity Studies for Third-Generation Gravitational Wave Observatories, Class. Quant. Grav. 28 (2011) 094013 [1012.0908].
Scientific Potential of Einstein Telescope. B Sathyaprakash, Proceedings. nullLa Thuile, ItalyB. Sathyaprakash et al., Scientific Potential of Einstein Telescope, in Proceedings, 46th Rencontres de Moriond on Gravitational Waves and Experimental Gravity: La Thuile, Italy, March 20-27, 2011, pp. 127-136, 2011, 1108.1423.
Scientific Objectives of Einstein Telescope. B Sathyaprakash, 10.1088/0264-9381/29/12/124013,10.1088/0264-9381/30/7/0795011206.0331Class. Quant. Grav. 29124013B. Sathyaprakash et al., Scientific Objectives of Einstein Telescope, Class. Quant. Grav. 29 (2012) 124013 [1206.0331].
Problems of Gravitational Stability in the Presence of a Magnetic Field. S Chandrasekhar, E Fermi, 10.1086/145732Astrophys. J. 118116S. Chandrasekhar and E. Fermi, Problems of Gravitational Stability in the Presence of a Magnetic Field., Astrophys. J. 118 (1953) 116.
On the Equilibrium of Magnetic Stars. V C A Ferraro, 10.1086/145838Astrophys. J. 119407V. C. A. Ferraro, On the Equilibrium of Magnetic Stars., Astrophys. J. 119 (1954) 407.
Gravitational waves from pulsars: Emission by the magnetic field induced distortion. S Bonazzola, E Gourgoulhon, astro-ph/9602107Astron. Astrophys. 312675S. Bonazzola and E. Gourgoulhon, Gravitational waves from pulsars: Emission by the magnetic field induced distortion, Astron. Astrophys. 312 (1996) 675 [astro-ph/9602107].
Relativistic models of magnetars: structure and deformations. A Colaiuda, V Ferrari, L Gualtieri, J A Pons, 10.1111/j.1365-2966.2008.12966.xMon. Not. Roy. Astron. Soc. 38520800712.2162A. Colaiuda, V. Ferrari, L. Gualtieri and J. A. Pons, Relativistic models of magnetars: structure and deformations, Mon. Not. Roy. Astron. Soc. 385 (2008) 2080 [0712.2162].
Structure and deformations of strongly magnetized neutron stars with twisted torus configurations. R Ciolfi, V Ferrari, L Gualtieri, 10.1111/j.1365-2966.2010.16847.xMon. Not. Roy. Astron. Soc. 40625401003.2148R. Ciolfi, V. Ferrari and L. Gualtieri, Structure and deformations of strongly magnetized neutron stars with twisted torus configurations, Mon. Not. Roy. Astron. Soc. 406 (2010) 2540 [1003.2148].
Magnetic Monopoles in Unified Gauge Theories. G Hooft, 10.1016/0550-3213(74)90486-6Nucl. Phys. 79276G. 't Hooft, Magnetic Monopoles in Unified Gauge Theories, Nucl. Phys. B79 (1974) 276.
A M Polyakov, Particle Spectrum in the Quantum Field Theory. 20194A. M. Polyakov, Particle Spectrum in the Quantum Field Theory, JETP Lett. 20 (1974) 194.
Search for highly ionizing particles in e+ e-annihilations at s**(1/2) = 91.1-GeV. K Kinoshita, R Du, G Giacomelli, L Patrizii, F Predieri, P Serra, 10.1103/PhysRevD.46.R881Phys. Rev. 46881K. Kinoshita, R. Du, G. Giacomelli, L. Patrizii, F. Predieri, P. Serra et al., Search for highly ionizing particles in e+ e-annihilations at s**(1/2) = 91.1-GeV, Phys. Rev. D46 (1992) R881.
Direct search for Dirac magnetic monopoles in pp collisions at √ s = 1.96 TeV. 10.1103/PhysRevLett.96.201801hep-ex/0509015Phys. Rev. Lett. 96201801CDF collaboration, Direct search for Dirac magnetic monopoles in pp collisions at √ s = 1.96 TeV, Phys. Rev. Lett. 96 (2006) 201801 [hep-ex/0509015].
Search for Dirac magnetic monopoles in e+e-collisions with the OPAL detector at LEP2. 10.1016/j.physletb.2008.03.057Phys. Lett. 663370707.0404OPAL collaboration, Search for Dirac magnetic monopoles in e+e-collisions with the OPAL detector at LEP2, Phys. Lett. B663 (2008) 37 [0707.0404].
Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC. 10.1007/JHEP08(2016)0671604.06645JHEP. 0867MoEDAL collaboration, Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC, JHEP 08 (2016) 067 [1604.06645].
Final results of magnetic monopole searches with the MACRO experiment. 10.1140/epjc/s2002-01046-9hep-ex/0207020Eur. Phys. J. 25511MACRO collaboration, Final results of magnetic monopole searches with the MACRO experiment, Eur. Phys. J. C25 (2002) 511 [hep-ex/0207020].
Relativistic Magnetic Monopole Flux Constraints from RICE. D P Hogan, D Z Besson, J P Ralston, I Kravchenko, D Seckel, 10.1103/PhysRevD.78.075031Phys. Rev. 78750310806.2129D. P. Hogan, D. Z. Besson, J. P. Ralston, I. Kravchenko and D. Seckel, Relativistic Magnetic Monopole Flux Constraints from RICE, Phys. Rev. D78 (2008) 075031 [0806.2129].
Ultra-Relativistic Magnetic Monopole Search with the ANITA-II Balloon-borne Radio Interferometer. 10.1103/PhysRevD.83.0235131008.1282Phys. Rev. 8323513ANITA-II collaboration, Ultra-Relativistic Magnetic Monopole Search with the ANITA-II Balloon-borne Radio Interferometer, Phys. Rev. D83 (2011) 023513 [1008.1282].
The Origin of Magnetic Fields. E N Parker, 10.1086/150442Astrophys. J. 160383E. N. Parker, The Origin of Magnetic Fields, Astrophys. J. 160 (1970) 383.
Magnetic Monopoles and the Survival of Galactic Magnetic Fields. M S Turner, E N Parker, T J Bogdan, 10.1103/PhysRevD.26.1296Phys. Rev. 261296M. S. Turner, E. N. Parker and T. J. Bogdan, Magnetic Monopoles and the Survival of Galactic Magnetic Fields, Phys. Rev. D26 (1982) 1296.
Extension of the Parker bound on the flux of magnetic monopoles. F C Adams, M Fatuzzo, K Freese, G Tarle, R Watkins, M S Turner, 10.1103/PhysRevLett.70.2511Phys. Rev. Lett. 702511F. C. Adams, M. Fatuzzo, K. Freese, G. Tarle, R. Watkins and M. S. Turner, Extension of the Parker bound on the flux of magnetic monopoles, Phys. Rev. Lett. 70 (1993) 2511.
New Superconducting Quantum Interface Device Based Constraints on the Abundance of Magnetic Monopoles Trapped in Matter: An Investigation of Deeply Buried Rocks. J M Kovalik, J L Kirschvink, 10.1103/PhysRevA.33.1183Phys. Rev. 331183J. M. Kovalik and J. L. Kirschvink, New Superconducting Quantum Interface Device Based Constraints on the Abundance of Magnetic Monopoles Trapped in Matter: An Investigation of Deeply Buried Rocks, Phys. Rev. A33 (1986) 1183.
Search for magnetic monopoles trapped in matter. H Jeon, M J Longo, 10.1103/PhysRevLett.76.159,10.1103/PhysRevLett.75.1443hep-ex/9508003Phys. Rev. Lett. 751443H. Jeon and M. J. Longo, Search for magnetic monopoles trapped in matter, Phys. Rev. Lett. 75 (1995) 1443 [hep-ex/9508003].
Monopole Catalysis of Nucleon Decay in Neutron Stars. E W Kolb, S A Colgate, J A Harvey, 10.1103/PhysRevLett.49.1373Phys. Rev. Lett. 491373E. W. Kolb, S. A. Colgate and J. A. Harvey, Monopole Catalysis of Nucleon Decay in Neutron Stars, Phys. Rev. Lett. 49 (1982) 1373.
Catalyzed Nucleon Decay in Neutron Stars. S Dimopoulos, J Preskill, F Wilczek, 10.1016/0370-2693(82)90679-7Phys. Lett. 119320S. Dimopoulos, J. Preskill and F. Wilczek, Catalyzed Nucleon Decay in Neutron Stars, Phys. Lett. 119B (1982) 320.
Monopole Catalysis of Nucleon Decay in Old Pulsars. K Freese, M S Turner, D N Schramm, 10.1103/PhysRevLett.51.1625Phys. Rev. Lett. 511625K. Freese, M. S. Turner and D. N. Schramm, Monopole Catalysis of Nucleon Decay in Old Pulsars, Phys. Rev. Lett. 51 (1983) 1625.
Magnetic monopole mass bounds from heavy ion collisions and neutron stars. O Gould, A Rajantie, 10.1103/PhysRevLett.119.2416011705.07052Phys. Rev. Lett. 119241601O. Gould and A. Rajantie, Magnetic monopole mass bounds from heavy ion collisions and neutron stars, Phys. Rev. Lett. 119 (2017) 241601 [1705.07052].
Two U(1)'s and Epsilon Charge Shifts. B Holdom, 10.1016/0370-2693(86)91377-8Phys. Lett. 166196B. Holdom, Two U(1)'s and Epsilon Charge Shifts, Phys. Lett. B166 (1986) 196.
Minicharges and Magnetic Monopoles. F Brummer, J , 10.1016/j.physletb.2009.04.041Phys. Lett. 6753600902.3615F. Brummer and J. Jaeckel, Minicharges and Magnetic Monopoles, Phys. Lett. B675 (2009) 360 [0902.3615].
Magnetic Mixing: Electric Minicharges from Magnetic Monopoles. F Brummer, J Jaeckel, V V Khoze, 10.1088/1126-6708/2009/06/037JHEP. 06370905.0633F. Brummer, J. Jaeckel and V. V. Khoze, Magnetic Mixing: Electric Minicharges from Magnetic Monopoles, JHEP 06 (2009) 037 [0905.0633].
Monopoles, strings and dark matter. C , Gomez Sanchez, B Holdom, 10.1103/PhysRevD.83.123524123524 [1103.1632Phys. Rev. 83C. Gomez Sanchez and B. Holdom, Monopoles, strings and dark matter, Phys. Rev. D83 (2011) 123524 [1103.1632].
Bounding millimagnetically charged particles with magnetars. A Hook, J Huang, 10.1103/PhysRevD.96.0550101705.01107Phys. Rev. 9655010A. Hook and J. Huang, Bounding millimagnetically charged particles with magnetars, Phys. Rev. D96 (2017) 055010 [1705.01107].
Novel Astrophysical Probes of Light Millicharged Fermions through Schwinger Pair Production. M Korwar, A M Thalapillil, 10.1007/JHEP04(2019)0391709.07888JHEP. 0439M. Korwar and A. M. Thalapillil, Novel Astrophysical Probes of Light Millicharged Fermions through Schwinger Pair Production, JHEP 04 (2019) 039 [1709.07888].
Pair Production at Strong Coupling in Weak External Fields. I K Affleck, O Alvarez, N S Manton, 10.1016/0550-3213(82)90455-2Nucl. Phys. 197509I. K. Affleck, O. Alvarez and N. S. Manton, Pair Production at Strong Coupling in Weak External Fields, Nucl. Phys. B197 (1982) 509.
Monopole Pair Production in a Magnetic Field. I K Affleck, N S Manton, 10.1016/0550-3213(82)90511-9Nucl. Phys. 19438I. K. Affleck and N. S. Manton, Monopole Pair Production in a Magnetic Field, Nucl. Phys. B194 (1982) 38.
Formation of very strongly magnetized neutron stars -Implications for gamma-ray bursts. R C Duncan, C Thompson, 10.1086/186413Astrophys. J. Let. 3929R. C. Duncan and C. Thompson, Formation of very strongly magnetized neutron stars -Implications for gamma-ray bursts, Astrophys. J. Let. 392 (1992) L9.
Neutron star dynamos and the origins of pulsar magnetism. C Thompson, R C Duncan, 10.1086/172580Astrophys. J. 408194C. Thompson and R. C. Duncan, Neutron star dynamos and the origins of pulsar magnetism, Astrophys. J. 408 (1993) 194.
Magnetically-driven crustquakes in neutron stars. S K Lander, N Andersson, D Antonopoulou, A L Watts, 10.1093/mnras/stv432Mon. Not. Roy. Astron. Soc. 44920471412.5852S. K. Lander, N. Andersson, D. Antonopoulou and A. L. Watts, Magnetically-driven crustquakes in neutron stars, Mon. Not. Roy. Astron. Soc. 449 (2015) 2047 [1412.5852].
Breaking properties of neutron star crust. D A Baiko, A I Chugunov, 10.1093/mnras/sty22591808.06415Mon. Not. Roy. Astron. Soc. 4805511D. A. Baiko and A. I. Chugunov, Breaking properties of neutron star crust, Mon. Not. Roy. Astron. Soc. 480 (2018) 5511 [1808.06415].
Gravitational radiation from neutron stars deformed by crustal Hall drift. A G Suvorov, A Mastrano, U Geppert, 10.1093/mnras/stw9091604.04305Mon. Not. Roy. Astron. Soc. 4593407A. G. Suvorov, A. Mastrano and U. Geppert, Gravitational radiation from neutron stars deformed by crustal Hall drift, Mon. Not. Roy. Astron. Soc. 459 (2016) 3407 [1604.04305].
Gravitational Waves from Pulsars and Their Braking Indices: The Role of a Time Dependent Magnetic Ellipticity. J C N De Araujo, J G Coelho, C Costa, 10.3847/0004-637X/831/1/35Astrophys. J. 831351610.07955J. C. N. de Araujo, J. G. Coelho and C. Costa, Gravitational Waves from Pulsars and Their Braking Indices: The Role of a Time Dependent Magnetic Ellipticity, Astrophys. J. 831 (2016) 35 [1610.07955].
Gravitational waves from pulsars in the context of magnetic ellipticity. J C N De Araujo, J G Coelho, C A Costa, 10.1140/epjc/s10052-017-4925-3Eur. Phys. J. 773501610.10092J. C. N. de Araujo, J. G. Coelho and C. A. Costa, Gravitational waves from pulsars in the context of magnetic ellipticity, Eur. Phys. J. C77 (2017) 350 [1610.10092].
Gravitational Waves From Pulsars Due To Their Magnetic Ellipticity. J C N De Araujo, J G Coelho, S M Ladislau, C A Costa, 15th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG15). Rome, ItalyJ. C. N. de Araujo, J. G. Coelho, S. M. Ladislau and C. A. Costa, Gravitational Waves From Pulsars Due To Their Magnetic Ellipticity, in 15th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG15) Rome, Italy, July 1-7, 2018, 2019, 1906.00774.
Deformations of accreting neutron star crusts and gravitational wave emission. G Ushomirsky, C Cutler, L Bildsten, 10.1046/j.1365-8711.2000.03938.xastro-ph/0001136Mon. Not. Roy. Astron. Soc. 319902G. Ushomirsky, C. Cutler and L. Bildsten, Deformations of accreting neutron star crusts and gravitational wave emission, Mon. Not. Roy. Astron. Soc. 319 (2000) 902 [astro-ph/0001136].
Maximum elastic deformations of compact stars with exotic equations of state. B J Owen, 10.1103/PhysRevLett.95.211101astro-ph/0503399Phys. Rev. Lett. 95211101B. J. Owen, Maximum elastic deformations of compact stars with exotic equations of state, Phys. Rev. Lett. 95 (2005) 211101 [astro-ph/0503399].
B Haskell, D I Jones, N Andersson, 10.1111/j.1365-2966.2006.10998.xastro-ph/0609438Mountains on Neutron Stars: Accreted vs. Non-Accreted crusts. 3731423B. Haskell, D. I. Jones and N. Andersson, Mountains on Neutron Stars: Accreted vs. Non-Accreted crusts, Mon. Not. Roy. Astron. Soc. 373 (2006) 1423 [astro-ph/0609438].
Maximum elastic deformations of relativistic stars. N K Johnson-Mcdaniel, B J Owen, 10.1103/PhysRevD.88.0440041208.5227Phys. Rev. 8844004N. K. Johnson-McDaniel and B. J. Owen, Maximum elastic deformations of relativistic stars, Phys. Rev. D88 (2013) 044004 [1208.5227].
Gravitational radiation and rotation of accreting neutron stars. L Bildsten, 10.1086/311440astro-ph/9804325Astrophys. J. 50189L. Bildsten, Gravitational radiation and rotation of accreting neutron stars, Astrophys. J. 501 (1998) L89 [astro-ph/9804325].
Modelling magnetically deformed neutron stars. B Haskell, L Samuelsson, K Glampedakis, N Andersson, 10.1111/j.1365-2966.2008.12861.xMon. Not. Roy. Astron. Soc. 3855310705.1780B. Haskell, L. Samuelsson, K. Glampedakis and N. Andersson, Modelling magnetically deformed neutron stars, Mon. Not. Roy. Astron. Soc. 385 (2008) 531 [0705.1780].
M Maggiore, Theory and Experiments, Oxford Master Series in Physics. Oxford University Press1M. Maggiore, Gravitational Waves. Vol. 1: Theory and Experiments, Oxford Master Series in Physics. Oxford University Press, 2007.
Gravitational waves. A Buonanno, Les Houches Summer School -Session 86: Particle Physics and Cosmology: The Fabric of Spacetime Les Houches. France4682A. Buonanno, Gravitational waves, in Les Houches Summer School -Session 86: Particle Physics and Cosmology: The Fabric of Spacetime Les Houches, France, July 31-August 25, 2006, 2007, 0709.4682, https://inspirehep.net/record/762437/files/arXiv:0709.4682.pdf.
Gravitational Radiation from Slowly Rotating, Fully Relativistic Stars. J R Ipser, 10.1086/150948Astrophys. J. 166175J. R. Ipser, Gravitational Radiation from Slowly Rotating, Fully Relativistic Stars, Astrophys. J. 166 (1971) 175.
Multipole expansions of gravitational radiation. K S Thorne, 10.1103/RevModPhys.52.299Rev. Mod. Phys. 52299K. S. Thorne, Multipole expansions of gravitational radiation, Rev. Mod. Phys. 52 (1980) 299.
Black holes, white dwarfs, and neutron stars: The physics of compact objects. S L Shapiro, S A Teukolsky, S. L. Shapiro and S. A. Teukolsky, Black holes, white dwarfs, and neutron stars: The physics of compact objects. 1983.
Observationally constraining gravitational wave emission from short gamma-ray burst remnants. P D Lasky, K Glampedakis, 10.1093/mnras/stw4351512.05368Mon. Not. Roy. Astron. Soc. 4581660P. D. Lasky and K. Glampedakis, Observationally constraining gravitational wave emission from short gamma-ray burst remnants, Mon. Not. Roy. Astron. Soc. 458 (2016) 1660 [1512.05368].
Evidence for a Minimum Ellipticity in Millisecond Pulsars. G Woan, M D Pitkin, B Haskell, D I Jones, P D Lasky, 10.3847/2041-8213/aad86a1806.02822Astrophys. J. 86340G. Woan, M. D. Pitkin, B. Haskell, D. I. Jones and P. D. Lasky, Evidence for a Minimum Ellipticity in Millisecond Pulsars, Astrophys. J. 863 (2018) L40 [1806.02822].
Magnetic monopoles. J , 10.1146/annurev.ns.34.120184.002333Ann. Rev. Nucl. Part. Sci. 34461J. Preskill, Magnetic monopoles, Ann. Rev. Nucl. Part. Sci. 34 (1984) 461.
Dark Monopoles and SL(2, Z) Duality. J Terning, C B Verhaaren, 10.1007/JHEP12(2018)1231808.09459JHEP. 12123J. Terning and C. B. Verhaaren, Dark Monopoles and SL(2, Z) Duality, JHEP 12 (2018) 123 [1808.09459].
The theory of magnetic poles. P A M Dirac, 10.1103/PhysRev.74.817Phys. Rev. 74817P. A. M. Dirac, The theory of magnetic poles, Phys. Rev. 74 (1948) 817.
Noncovariance of the Dirac Monopole. C R Hagen, 10.1103/PhysRev.140.B804Phys. Rev. 140804C. R. Hagen, Noncovariance of the Dirac Monopole, Phys. Rev. 140 (1965) B804.
Local Lagrangian quantum field theory of electric and magnetic charges. D Zwanziger, 10.1103/PhysRevD.3.880Phys. Rev. 3880D. Zwanziger, Local Lagrangian quantum field theory of electric and magnetic charges, Phys. Rev. D3 (1971) 880.
Remarks on zwanziger's local quantum field theory of electric and magnetic charge. R A Brandt, F Neri, 10.1103/PhysRevD.18.2080Phys. Rev. D. 182080R. A. Brandt and F. Neri, Remarks on zwanziger's local quantum field theory of electric and magnetic charge, Phys. Rev. D 18 (1978) 2080.
Anomaly Constraints on Monopoles and Dyons. C Csaki, Y Shirman, J Terning, 10.1103/PhysRevD.81.1250281003.0448Phys. Rev. 81125028C. Csaki, Y. Shirman and J. Terning, Anomaly Constraints on Monopoles and Dyons, Phys. Rev. D81 (2010) 125028 [1003.0448].
Uber das Verhalten eines Elektrons im homogenen elektrischen Feld nach der relativistischen Theorie Diracs. F Sauter, 10.1007/BF01339461Z. Phys. 69742F. Sauter, Uber das Verhalten eines Elektrons im homogenen elektrischen Feld nach der relativistischen Theorie Diracs, Z. Phys. 69 (1931) 742.
W Heisenberg, H Euler, 10.1007/BF01343663Folgerungen aus der diracschen theorie des positrons. 98714W. Heisenberg and H. Euler, Folgerungen aus der diracschen theorie des positrons, Zeitschrift für Physik 98 (1936) 714.
On gauge invariance and vacuum polarization. J S Schwinger, 10.1103/PhysRev.82.664Phys. Rev. 82664J. S. Schwinger, On gauge invariance and vacuum polarization, Phys. Rev. 82 (1951) 664.
The Schwinger mechanism revisited. T D Cohen, D A Mcgady, 10.1103/PhysRevD.78.036008Phys. Rev. 78360080807.1117T. D. Cohen and D. A. McGady, The Schwinger mechanism revisited, Phys. Rev. D78 (2008) 036008 [0807.1117].
Thermal schwinger pair production at arbitrary coupling. O Gould, A Rajantie, 10.1103/PhysRevD.96.076002Phys. Rev. D. 9676002O. Gould and A. Rajantie, Thermal schwinger pair production at arbitrary coupling, Phys. Rev. D 96 (2017) 076002.
Neutron Stars and Pulsars. W. BeckerBerlin, GermanySpringerW. Becker (ed.), Neutron Stars and Pulsars. Springer, Berlin, Germany, 2009.
Effective Lagrangians at finite temperature. W Dittrich, 10.1103/PhysRevD.19.2385Phys. Rev. 192385W. Dittrich, Effective Lagrangians at finite temperature, Phys. Rev. D19 (1979) 2385.
QED effective action at finite temperature and density. P Elmfors, D Persson, B.-S Skagerstam, 10.1103/PhysRevLett.71.480hep-th/9305004Phys. Rev. Lett. 71480P. Elmfors, D. Persson and B.-S. Skagerstam, QED effective action at finite temperature and density, Phys. Rev. Lett. 71 (1993) 480 [hep-th/9305004].
QED effective action at finite temperature. H Gies, 10.1103/PhysRevD.60.105002hep-ph/9812436Phys. Rev. 60105002H. Gies, QED effective action at finite temperature, Phys. Rev. D60 (1999) 105002 [hep-ph/9812436].
QED effective action at finite temperature: Two loop dominance. H Gies, 10.1103/PhysRevD.61.085021hep-ph/9909500Phys. Rev. 6185021H. Gies, QED effective action at finite temperature: Two loop dominance, Phys. Rev. D61 (2000) 085021 [hep-ph/9909500].
Comment on fermionic and bosonic pair creation in an external electric field at finite temperature. A K Ganguly, hep-th/9804134A. K. Ganguly, Comment on fermionic and bosonic pair creation in an external electric field at finite temperature, hep-th/9804134.
Nonperturbative QED Effective Action at Finite Temperature. S P Kim, H K Lee, Y Yoon, 10.1103/PhysRevD.82.0250161006.0774Phys. Rev. 8225016S. P. Kim, H. K. Lee and Y. Yoon, Nonperturbative QED Effective Action at Finite Temperature, Phys. Rev. D82 (2010) 025016 [1006.0774].
Schwinger pair production at nonzero temperatures or in compact directions. A R Brown, 1512.05716A. R. Brown, Schwinger pair production at nonzero temperatures or in compact directions, 1512.05716.
Schwinger pair production at finite temperature. L Medina, M C Ogilvie, 10.1103/PhysRevD.95.056006Phys. Rev. D. 9556006L. Medina and M. C. Ogilvie, Schwinger pair production at finite temperature, Phys. Rev. D 95 (2017) 056006.
Worldline sphaleron for thermal Schwinger pair production. O Gould, A Rajantie, C Xie, 1806.02665O. Gould, A. Rajantie and C. Xie, Worldline sphaleron for thermal Schwinger pair production, 1806.02665.
Finite temperature Schwinger pair production in coexistent electric and magnetic fields. M Korwar, A M Thalapillil, 10.1103/PhysRevD.98.0760161808.01295Phys. Rev. 9876016M. Korwar and A. M. Thalapillil, Finite temperature Schwinger pair production in coexistent electric and magnetic fields, Phys. Rev. D98 (2018) 076016 [1808.01295].
Virtual and Thermal Schwinger Processes. P Draper, 10.1103/PhysRevD.98.1250141809.10768Phys. Rev. 98125014P. Draper, Virtual and Thermal Schwinger Processes, Phys. Rev. D98 (2018) 125014 [1809.10768].
Worldline instantons and pair production in inhomogeneous fields. G V Dunne, C Schubert, 10.1103/PhysRevD.72.105004hep-th/0507174Phys. Rev. 72105004G. V. Dunne and C. Schubert, Worldline instantons and pair production in inhomogeneous fields, Phys. Rev. D72 (2005) 105004 [hep-th/0507174].
Metastable strings in Abelian Higgs models embedded in nonAbelian theories: Calculating the decay rate. M Shifman, A Yung, 10.1103/PhysRevD.66.045012hep-th/0205025Phys. Rev. 6645012M. Shifman and A. Yung, Metastable strings in Abelian Higgs models embedded in nonAbelian theories: Calculating the decay rate, Phys. Rev. D66 (2002) 045012 [hep-th/0205025].
Multiscatter stellar capture of dark matter. J Bramante, A Delgado, A Martin, 10.1103/PhysRevD.96.0630021703.04043Phys. Rev. 9663002J. Bramante, A. Delgado and A. Martin, Multiscatter stellar capture of dark matter, Phys. Rev. D96 (2017) 063002 [1703.04043].
Neutron stars at the dark matter direct detection frontier. N Raj, P Tanedo, H.-B Yu, 10.1103/PhysRevD.97.0430061707.09442Phys. Rev. 9743006N. Raj, P. Tanedo and H.-B. Yu, Neutron stars at the dark matter direct detection frontier, Phys. Rev. D97 (2018) 043006 [1707.09442].
The Australia Telescope National Facility pulsar catalogue. R N Manchester, G B Hobbs, A Teoh, M Hobbs, 10.1086/428488astro-ph/0412641Astron. J. 129R. N. Manchester, G. B. Hobbs, A. Teoh and M. Hobbs, The Australia Telescope National Facility pulsar catalogue, Astron. J. 129 (2005) 1993 [astro-ph/0412641].
The McGill Magnetar Catalog. S A Olausen, V M Kaspi, 10.1088/0067-0049/212/1/6Astrophys. J. Suppl. 212 (2014) 6 [1309.4167S. A. Olausen and V. M. Kaspi, The McGill Magnetar Catalog, Astrophys. J. Suppl. 212 (2014) 6 [1309.4167].
Gamma-ray bursts and afterglows from rotating strange stars and neutron stars. Z G Dai, T Lu, 10.1103/PhysRevLett.81.4301astro-ph/9810332Phys. Rev. Lett. 814301Z. G. Dai and T. Lu, Gamma-ray bursts and afterglows from rotating strange stars and neutron stars, Phys. Rev. Lett. 81 (1998) 4301 [astro-ph/9810332].
Gamma-ray burst afterglow with continuous energy injection: Signature of a highly magnetized millisecond pulsar. B Zhang, P Meszaros, 10.1086/320255astro-ph/0011133Astrophys. J. 55235B. Zhang and P. Meszaros, Gamma-ray burst afterglow with continuous energy injection: Signature of a highly magnetized millisecond pulsar, Astrophys. J. 552 (2001) L35 [astro-ph/0011133].
Signatures of magnetar central engines in short GRB lightcurves. A Rowlinson, P T O'brien, B D Metzger, N R Tanvir, A J Levan, 10.1093/mnras/sts6831301.0629Mon. Not. Roy. Astron. Soc. 4301061A. Rowlinson, P. T. O'Brien, B. D. Metzger, N. R. Tanvir and A. J. Levan, Signatures of magnetar central engines in short GRB lightcurves, Mon. Not. Roy. Astron. Soc. 430 (2013) 1061 [1301.0629].
Formation of Stable Magnetars from Binary Neutron Star Mergers. B Giacomazzo, R Perna, 10.1088/2041-8205/771/2/L26Astrophys. J. 771261306.1608B. Giacomazzo and R. Perna, Formation of Stable Magnetars from Binary Neutron Star Mergers, Astrophys. J. 771 (2013) L26 [1306.1608].
Supernova Light Curves Powered by Young Magnetars. D Kasen, L Bildsten, 10.1088/0004-637X/717/1/245Astrophys. J. 7172450911.0680D. Kasen and L. Bildsten, Supernova Light Curves Powered by Young Magnetars, Astrophys. J. 717 (2010) 245 [0911.0680].
The protomagnetar model for gamma-ray bursts. B D Metzger, D Giannios, T A Thompson, N Bucciantini, E Quataert, 10.1111/j.1365-2966.2011.18280.x1012.0001Mon. Not. R. Astron. Soc. 4132031B. D. Metzger, D. Giannios, T. A. Thompson, N. Bucciantini and E. Quataert, The protomagnetar model for gamma-ray bursts, Mon. Not. R. Astron. Soc. 413 (2011) 2031 [1012.0001].
Gravitational waves from massive magnetars formed in binary neutron star mergers. S Osso, B Giacomazzo, R Perna, L Stella, The Astrophysical Journal. 79825S. Dall'Osso, B. Giacomazzo, R. Perna and L. Stella, Gravitational waves from massive magnetars formed in binary neutron star mergers, The Astrophysical Journal 798 (2015) 25.
Millisecond Magnetar Birth Connects FRB 121102 to Superluminous Supernovae and Long Duration Gamma-ray Bursts. B D Metzger, E Berger, B Margalit, 10.3847/1538-4357/aa633d1701.02370Astrophys. J. 84114B. D. Metzger, E. Berger and B. Margalit, Millisecond Magnetar Birth Connects FRB 121102 to Superluminous Supernovae and Long Duration Gamma-ray Bursts, Astrophys. J. 841 (2017) 14 [1701.02370].
Young magnetars with fracturing crusts as fast radio burst repeaters. A G Suvorov, K D Kokkotas, 10.1093/mnras/stz20521907.10394Mon. Not. Roy. Astron. Soc. 4885887A. G. Suvorov and K. D. Kokkotas, Young magnetars with fracturing crusts as fast radio burst repeaters, Mon. Not. Roy. Astron. Soc. 488 (2019) 5887 [1907.10394].
Magnetic field decay in isolated neutron stars. P Goldreich, A Reisenegger, 10.1086/171646Astrophys. J. 395250P. Goldreich and A. Reisenegger, Magnetic field decay in isolated neutron stars, Astrophys. J. 395 (1992) 250.
2D Cooling of Magnetized Neutron Stars. D N Aguilera, J A Pons, J A Miralles, 10.1051/0004-6361:20078786Astron. Astrophys. 4862550710.0854D. N. Aguilera, J. A. Pons and J. A. Miralles, 2D Cooling of Magnetized Neutron Stars, Astron. Astrophys. 486 (2008) 255 [0710.0854].
Unifying the observational diversity of isolated neutron stars via magneto-thermal evolution models. D Vigano, N Rea, J A Pons, R Perna, D N Aguilera, J A Miralles, 10.1093/mnras/stt1008Mon. Not. Roy. Astron. Soc. 4341231306.2156D. Vigano, N. Rea, J. A. Pons, R. Perna, D. N. Aguilera and J. A. Miralles, Unifying the observational diversity of isolated neutron stars via magneto-thermal evolution models, Mon. Not. Roy. Astron. Soc. 434 (2013) 123 [1306.2156].
Magnetic field dissipation in neutron star crusts: from magnetars to isolated neutron stars. J A Pons, U Geppert, 10.1051/0004-6361:20077456astro-ph/0703267Astronomy and Astrophysics. 470303J. A. Pons and U. Geppert, Magnetic field dissipation in neutron star crusts: from magnetars to isolated neutron stars, Astronomy and Astrophysics 470 (2007) 303 [astro-ph/0703267].
Structure of neutron star envelopes. E H Gudmundsson, C J Pethick, R I Epstein, 10.1086/161292Astrophys. J. 272286E. H. Gudmundsson, C. J. Pethick and R. I. Epstein, Structure of neutron star envelopes, Astrophys. J. 272 (1983) 286.
Schwinger pair production via instantons in a strong electric field. S P Kim, D N Page, 10.1103/PhysRevD.65.105002hep-th/0005078Phys. Rev. 65105002S. P. Kim and D. N. Page, Schwinger pair production via instantons in a strong electric field, Phys. Rev. D65 (2002) 105002 [hep-th/0005078].
Schwinger pair production in electric and magnetic fields. S P Kim, D N Page, 10.1103/PhysRevD.73.065020hep-th/0301132Phys. Rev. 7365020S. P. Kim and D. N. Page, Schwinger pair production in electric and magnetic fields, Phys. Rev. D73 (2006) 065020 [hep-th/0301132].
Holographic Schwinger effect with chemical potential at finite temperature. L Zhang, D.-F Hou, J Li, 10.1140/epja/i2018-12524-4Eur. Phys. J. A54. 94L. Zhang, D.-F. Hou and J. Li, Holographic Schwinger effect with chemical potential at finite temperature, Eur. Phys. J. A54 (2018) 94.
Starlink : A Solution to the Digital Connectivity Divide in Education in the Global South
H M V R Herath
Department of Electrical and Electronic Engineering
University of Peradeniya
Sri Lanka
The digital connectivity gap in the global south hampered the education of millions of school children during the COVID-19 pandemic. If actions are not taken to remedy this problem, the future prospects of millions of children around the world will be bleak. This paper explores the feasibility of using the SpaceX Starlink satellite constellation as a means to alleviate the digital connectivity divide in the global south. First, the paper discusses the issues of digital connectivity in education in rural Sri Lanka and other countries in the global south. Then, the paper gives an introduction to Starlink broadband internet technology and discusses its advantages over traditional technologies. After that, the paper discusses a possible mechanism for adopting Starlink technology as a solution to the rural digital connectivity problem in the global south. Technological as well as economic aspects of such a scheme are discussed. Finally, challenges that may arise in deploying a system such as Starlink to improve rural digital connectivity in Sri Lanka or any other country in the global south are discussed, with possible remedies.
I. INTRODUCTION
The online teaching/learning paradigm adopted all around the world as a consequence of the COVID-19 pandemic put a sharp focus on digital connectivity, or the lack thereof, in the rural regions of the Global South, including Sri Lanka [1-4]. Primary, secondary, and tertiary education systems of countries around the world had to adapt to an online delivery mode in haste due to pandemic-imposed movement restrictions that made face-to-face learning/teaching impossible. In Sri Lanka the mechanisms of delivery varied from the synchronous delivery of lessons using video conferencing technologies to sharing lessons and assignments through social media platforms. This is largely true for the other countries of the global south as well. As a result of this diversity, the quality of education received by students varied widely, with rural students seeing the already poor quality of education they receive erode further. This widened the rural-urban education quality gap further. The impact was felt hardest in primary and secondary education, where UNICEF estimated that more than 168 million children lost more than a full year of schooling.
Urban centers of Sri Lanka, for example, are provided with internet connectivity via fiber to the home (FTTH) and mobile broadband technologies. These technologies provide reasonably sufficient bandwidth to the subscribers [1].
But in certain urban pockets the connectivity is not reliable, due to the insufficient capacity of base stations and, in the case of mobile broadband, shadowing effects. When it comes to rural areas, fiber connectivity to homes is not available due to economic factors such as the cost of deployment and an insufficient subscriber base caused by low population density, as well as poverty among the rural population. Furthermore, mobile broadband connectivity is sketchy in rural areas due to the light deployment of base stations. As a result, certain locations have no connectivity at all, and in many locations where connectivity is available, poor signal quality makes the user experience below par.
The economics of deployment discussed earlier prevent internet service providers from expanding their networks deep into rural areas. As a consequence of the above factors, internet penetration in Sri Lanka is only about 50% according to 2020 data [5].
One could argue that, as education is a basic human right, access to the medium through which education is carried out needs to be a basic human right as well. In that context, it is imperative that people who live in every region of a country have reasonable digital connectivity, thereby allowing them to participate in educational and economic activities through the internet on an equal footing with every other citizen. This paper discusses how the connectivity provided by the Starlink satellite system can be utilized to provide equitable digital connectivity across any country in the global south [6].
The paper is organized as follows. Section II gives an introduction to the Starlink technology and compares it with other technologies that provide digital connectivity. Some field results of Starlink connectivity performance are discussed in Section III. After that, in Section IV, a framework for incorporating Starlink to improve rural digital connectivity is discussed. Possible challenges to this scheme and potential solutions are discussed in Section V. Finally, the conclusions summarize the content of the paper.
II. THE STARLINK TECHNOLOGY
Sir Arthur C. Clarke's concept of worldwide radio coverage using three geostationary satellites, presented in 1945, marks the beginning of the satellite age. Satellite technology has advanced many fold since that publication, through various technology cycles. The concept of low-latency, high-bandwidth communication via a network of medium earth orbit (MEO) and low earth orbit (LEO) satellites gathered momentum in the late 80s. Iridium, ICO, and Globalstar are such systems, deployed in the late 90s. But with the rapid improvements in terrestrial optical fiber and cellular wireless communication systems, the necessity of such satellite networks diminished. As a result, those MEO/LEO systems failed financially. In the second decade of the 21st century, with the rapid advancement of ICT and increased requirements for communication capacity, the importance of LEO satellite constellations, with their low latency and high throughput, re-emerged [7]. The inability of optical fiber and wireless communication technologies to penetrate remote areas of the globe, due to various reasons, acts as a catalyst to this development.
Out of this background, several LEO constellation concepts emerged such as Kuiper, Starlink, Telesat, and OneWeb.
These systems are at various stages of development and use advanced technologies for spectrum usage, satellite and constellation throughput, ground equipment development, and system management. Out of these constellations, Starlink is the most extensive and the one that has progressed furthest towards deployment. Therefore, Starlink can be considered the most promising solution to overcome the digital connectivity divide in education at this moment in time [6-9]. In 2018, the Federal Communications Commission (FCC) of the USA granted SpaceX Starlink permission to deploy a constellation of LEO satellites in five orbital shells. SpaceX Starlink has so far deployed more than 1500 satellites of the first shell at an orbit of 560 km and at 53.0° inclination [6,7,[9][10][11][12][13]. Figure 1 shows the Starlink satellite map as of 21st May 2021 [10]. The satellites are stationed on 72 orbital planes with approximately 20 satellites on each plane [13]. They operate in the Ku band and each has a mass of approximately 240 kg [6]. The system uses phased-array antennas for the up and downlinks and laser communication in the inter-satellite link (ISL) [14,15]. Because the inter-satellite laser links operate in a vacuum, light travels about 47% faster than in a terrestrial fiber network, and a latency of less than 30 ms is expected in this network. The architecture of the Starlink system is shown in Figure 2. Ground stations, or Starlink gateways, are in constant communication with the satellites [9,14,15]. They provide internet access and control information to user terminals.
User-satellite communication uses the Ku band, while ground station-satellite communication uses the Ku band for the downlink and the Ka band for the uplink [6,16]. SpaceX's satellites generate ultra-small spot-size beams because they are much closer to the earth than geostationary satellites. This close proximity to the earth provides higher speed and lower latency. The estimated total system throughput at the start of commercial deployment is 23.7 Tbps [6]. This is a plug-and-play system. The dish is 23 inches in diameter and can be easily handled by a single person [15,17]. It can be placed on the ground or on a rooftop with clear sky visibility. The dish consists of a phased antenna array with a stacked honeycomb structure and automatically aligns with the available Starlink satellite. Starlink uses advanced phased-array technology in both the satellite and the customer dish [15][16][17], which allows nearly instantaneous hand-offs between different satellites with no mechanical transitions.
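As a rough illustration of the latency argument, the following back-of-the-envelope sketch (in Python) compares straight-line propagation delays for the 560 km LEO shell against a geostationary orbit, and light in vacuum against light in silica fiber. The 560 km altitude and the 47% figure come from the text above; the 1,000 km hop length and the neglect of processing, queuing, and routing delays are illustrative assumptions, not claims from the paper.

```python
# Back-of-the-envelope propagation-latency comparison (illustrative only).
C_VACUUM = 299_792.458           # speed of light in vacuum, km/s
C_FIBER = C_VACUUM / 1.47        # light in silica fiber is ~47% slower

def round_trip_ms(altitude_km: float) -> float:
    """User -> satellite -> user, straight up and down."""
    return 2 * altitude_km / C_VACUUM * 1000

leo_ms = round_trip_ms(560)      # Starlink first shell
geo_ms = round_trip_ms(35_786)   # geostationary orbit

# a 1,000 km hop: terrestrial fiber vs inter-satellite laser link in vacuum
fiber_ms = 1000 / C_FIBER * 1000
laser_ms = 1000 / C_VACUUM * 1000

print(f"LEO round trip  : {leo_ms:6.1f} ms")
print(f"GEO round trip  : {geo_ms:6.1f} ms")
print(f"1000 km in fiber: {fiber_ms:4.2f} ms, in vacuum: {laser_ms:4.2f} ms")
```

The LEO round trip works out to a few milliseconds versus roughly a quarter of a second for a geostationary link, which is why sub-30 ms end-to-end latency is plausible for this architecture.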
The router is equipped with a Gigabit Ethernet port and Wi-Fi to provide connectivity. The satellite dish is connected to the router and both are powered using Power over Ethernet (PoE). One router can support up to 128 devices simultaneously [18]. It is operated with a 56 V DC supply provided through PoE. The router complies with IEEE 802.11 standard and operates at 2.4 GHz and 5 GHz. OFDM modulation technology is used for transmission [15][16][17].
III. STARLINK USER END PERFORMANCE
The public beta trial program for subscribers in the northern United States and Canada between the latitudes of 45° and 52° commenced in October 2020. Starlink expects to provide full equatorial coverage by the beginning of 2022, and according to the Starlink website, services will be available in Sri Lanka in 2022, subject to regulatory approval [19]. At present Starlink provides unlimited internet access to its subscribers.
An independent performance analysis of the Starlink user terminal was carried out by the ROADMAP-5G research group of the Carinthia University of Applied Sciences, Austria, in June 2021 [15]. Key findings of the experiment are shown in Table 1. According to the experiment, YouTube streaming performed satisfactorily, with rare interruptions of 4-6 seconds [15]. Automatic switching between satellites follows a pre-defined 15-second schedule according to the observations, and latencies fluctuated almost continuously during these switching periods [15].
The monthly subscription during the public beta is 99 USD for download speeds between 50-150 Mbps and latency between 20-40 milliseconds; a one-time equipment fee of 499 USD is also charged [19]. No data cap has been imposed on the connection so far. According to its promotional materials, Starlink expects to improve the download speed to 300 Mbps and to reduce the equipment cost further.
The main advantage Starlink has over terrestrial cellular and fiber technologies is that it can be deployed economically in remote rural areas without much supporting infrastructure. In addition, Starlink can provide performance comparable to or better than cellular systems in rural areas. The primary objective of the proposed digital connectivity center would be to enhance the educational opportunities of children via reliable digital connectivity. Additionally, the resources could be used by adults for e-commerce in the evening and at night, provided that there is sufficient energy storage. For a grid-connected digital connectivity center, a solar energy generation system is optional.
Considering that this center is mainly used for educational activities, a local school would be the best location to establish it. Locations such as an "e-nanasala", a digital connectivity center in Sri Lanka, or a community center could be alternatives to schools for establishing the proposed center.
V. CHALLENGES AND POSSIBLE SOLUTIONS
In order for Starlink to operate in any country, it is necessary to obtain regulatory approval to provide telecommunication services in that country. So far there is no evidence that Starlink has received such approval from any of the countries in the global south. It is expected that they will start that process soon.
Considering the Starlink network architecture, users could access the internet without being subject to the control of the governments of the respective countries. Governments could consider this a threat to their sovereignty. Therefore, it is essential that Starlink come to a workable agreement with governments to solve this issue to the satisfaction of all parties concerned. One solution could be establishing at least one gateway in the country of interest and routing traffic through that gateway.
There will be pushback from established internet service providers (ISPs) within a country who might consider Starlink a competitor. Furthermore, the quality of broadband services in many countries of the global south is poor, and Starlink may be able to provide better broadband service than the local ISPs even in urban areas. Therefore, it is essential that all the parties come together and find ways of collaborating such that everyone, including customers, benefits. One possibility is using Starlink to improve the performance of the cellular backhaul of the local service providers. Starlink would be able to connect user terminals to the local internet infrastructure through a gateway.
VI. CONCLUSIONS
This paper puts forward a conceptual idea of how the Starlink satellite network could be utilized to provide digital connectivity to remote rural locations in order to overcome the digital connectivity divide in education in the global south. A proposal to establish rural digital connectivity centers is discussed. Through a few simple calculations, it is shown that such a project would have long-term economic viability and could provide adequate capacity to serve 50 users simultaneously with the existing technology. Possible regulatory and commercial challenges to a project of this nature were discussed along with possible solutions.
Providing better educational opportunities to rural youth via reliable internet access could move them out of poverty and allow them to contribute more to the local economy. Furthermore, the proposed centers could promote e-commerce in rural areas.
Figure 1: Starlink satellite map as of 21st May 2021 [10].
Figure 2: Starlink system architecture.
Starlink customer premises equipment (CPE) consists of a satellite dish, a Wi-Fi router, and a power supply unit.
Figure 3 presents the conceptual idea of a rural off-grid digital connectivity center. In this center, internet connectivity is provided via Starlink, and the center is powered by solar energy. First, let us do an approximate power requirement calculation. Assume that the center is equipped with 5 desktop computers and 5 laptops, and that 40 tablets can be used simultaneously at the location. The physical structure of the center can either be purpose-built or be a re-purposed existing building.
Figure 3: Proposed rural digital connectivity center.
It is reasonable to assume that a desktop PC with an LED display consumes approximately 150 W of power, a laptop consumes 60 W, and a tablet consumes approximately 10 W. All the devices combined therefore consume approximately 1.45 kW. Considering all the other requirements, a 3 kW solar energy system would suffice to power such a center; the capital cost of such a system would be about 3,500 USD in equivalent Sri Lankan currency. It is also useful to consider the economic viability of such a center with respect to the connectivity cost. Here, three possible utilization schemes are considered. In scheme 1, each user gets a 4-hour time slot per day; considering 8 hours of daytime usage, 100 users can use the center per day. In scheme 2, each user gets a 3-hour time slot per day; considering 9 hours of daytime usage, 150 users can use the center per day. In scheme 3, each user gets a 2-hour time slot per day; considering 8 hours of daytime usage, 200 users can use the center per day. These numbers can be adjusted to the ground situation, such as the number of students and the area of service. Assuming that the same set of users uses the center every day, the connectivity cost per user per month under a Starlink subscription of 99 USD/month would be 99 cents, 66 cents, and 49 cents for schemes 1, 2, and 3 respectively. The one-time equipment cost per head would be 4.99 USD, 3.32 USD, and 2.49 USD respectively; considering an equipment lifetime of 3 years, this cost is negligible. In comparison, two ISPs in Sri Lanka provide unlimited broadband access at up to 2 Mbps for a monthly rental of about 10 USD, according to their promotional materials. The above calculations show that deploying rural digital connectivity centers linked by Starlink is an economically viable proposition for countries in the global south; the fact that services can be provided to 100-200 people per day justifies the capital expenditure to establish such a system. A government institution, private corporations, or NGOs could fund the establishment of the centers, while the cost of maintenance and subscription could be recovered from the users.
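The power-budget and cost-per-user arithmetic above can be reproduced with a short script. The device counts, wattages, the 99 USD subscription, the 499 USD equipment fee, and the three usage schemes are taken from the text; the 50-users-per-slot figure is implied by the scheme descriptions, and the script itself is only an illustrative sketch, not part of the proposal.

```python
# Sketch of the rural connectivity centre's power budget and per-user costs.
DESKTOPS, LAPTOPS, TABLETS = 5, 5, 40
LOAD_W = DESKTOPS * 150 + LAPTOPS * 60 + TABLETS * 10   # combined device load
print(f"Combined device load: {LOAD_W / 1000:.2f} kW (3 kW solar system proposed)")

SUBSCRIPTION_USD = 99.0     # Starlink monthly fee during the beta
EQUIPMENT_USD = 499.0       # one-time dish/router cost

# scheme number -> (slot length in hours, centre opening hours)
schemes = {1: (4, 8), 2: (3, 9), 3: (2, 8)}
USERS_PER_SLOT = 50         # implied by 100/150/200 users per day

for scheme, (slot_h, open_h) in schemes.items():
    users_per_day = (open_h // slot_h) * USERS_PER_SLOT
    monthly_cents = SUBSCRIPTION_USD / users_per_day * 100
    equipment_usd = EQUIPMENT_USD / users_per_day
    print(f"Scheme {scheme}: {users_per_day} users/day, "
          f"{monthly_cents:.0f} cents/user/month, "
          f"one-time equipment {equipment_usd:.2f} USD/user")
```

Rounding differences aside, this reproduces the per-user figures quoted above.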
Table 1: Key performance parameters of Starlink [15]
Average download throughput: ~170 Mbps
Maximum download throughput: ~330 Mbps
Average upload throughput: ~17 Mbps
Maximum upload throughput: ~60 Mbps
Latency: 30 ms - 2 s
Percentage of time latency is below 90 ms: 98%
Percentage of time latency is below 50 ms: 77%
Downtime: 2.4%
Average power consumption: 105 W
Peak power consumption: 190 W

IV. RURAL DIGITAL CONNECTIVITY VIA STARLINK
This section proposes a framework to provide digital connectivity to remote rural locations via Starlink. The proposal considers an off-grid location as an example; it can easily be modified for a grid-connected location.
Wimal Nanayakkara. 2021. Closing the digital divide in Sri Lanka amid COVID-19. (May 2021). Retrieved September 01, 2021 from https://development.asia/insight/closing-digital-divide-sri-lanka-amid-covid-19.
Md Badiuzzaman, Md. Rafiquzzaman, Md Insiat Rabby, and Mohammad Mustaneer Rahman. 2021. The latent digital divide and its drivers in e-learning among Bangladeshi students during the COVID-19 pandemic. Information 12, 8 (2021), 287. DOI: http://dx.doi.org/10.3390/info12080287.
Winnie M. Makau. 2021. The impact of COVID-19 on the growing North-South divide. (March 2021). Retrieved September 01, 2021 from https://www.e-ir.info/2021/03/15/the-impact-of-covid-19-on-the-growing-north-south-divide/.
Paul Ong. 2021. COVID-19 and the digital divide in virtual learning. (May 2021). Retrieved September 01, 2021 from https://knowledge.luskin.ucla.edu/2020/10/28/covid-19-and-the-digital-divide-in-virtual learning/.
Simon Kemp. 2021. Digital in Sri Lanka: All the statistics you need in 2021 - DATAREPORTAL - global digital insights. (February 2021). Retrieved September 05, 2021 from https://datareportal.com/reports/digital-2021-sri-lanka.
John Garrity and Arndt Husar. 2021. Digital connectivity and low Earth orbit satellite constellations: Opportunities for Asia and the Pacific. (May 2021). Retrieved September 01, 2021 from https://www.adb.org/publications/digital-connectivity-low-earth-orbit-satellite-opportunities.
Jinhui Huang and Jiang Cao. 2020. Recent development of commercial satellite communications systems. Lecture Notes in Electrical Engineering (2020), 531-536. DOI: http://dx.doi.org/10.1007/978-981-15-0187-6_63.
Chris Daehnick, Isabelle Klinghoffer, Ben Maritz, and Bill Wiseman. 2021. Large LEO satellite constellations: Will it be different this time? (September 2021). Retrieved September 01, 2021 from https://www.mckinsey.com/industries/aerospace-and-defense/our-insights/large-leo-satellite-constellations-will-it-be-different-this-time.
Ogutu B. Osoro and Edward J. Oughton. 2021. A techno-economic cost framework for satellite networks applied to low Earth orbit constellations: Assessing Starlink, OneWeb and Kuiper. (August 2021). Retrieved from https://arxiv.org/abs/2108.10834.
Starlink Satellite Tracker. Retrieved September 07, 2021 from https://satellitemap.space/.
Mark Handley. 2018. Delay is not an option. Proceedings of the 17th ACM Workshop on Hot Topics in Networks (2018). DOI: http://dx.doi.org/10.1145/3286062.3286075.
Jonathan C. McDowell. 2020. The low earth orbit satellite population and impacts of the SpaceX Starlink constellation. The Astrophysical Journal 892, 2 (2020). DOI: http://dx.doi.org/10.3847/2041-8213/ab8016.
Tong Duan and Venkata Dinavahi. 2021. Starlink space network-enhanced cyber-physical power system. IEEE Transactions on Smart Grid 12, 4 (2021), 3673-3675. DOI: http://dx.doi.org/10.1109/tsg.2021.3068046.
Yongtao Su, Yaoqi Liu, Yiqing Zhou, Jinhong Yuan, Huan Cao, and Jinglin Shi. 2019. Broadband LEO satellite communications: Architectures and key technologies. IEEE Wireless Communications 26, 2 (2019), 55-61. DOI: http://dx.doi.org/10.1109/mwc.2019.1800299.
Starlink analysis. Retrieved September 13, 2021 from https://forschung.fh-kaernten.at/roadmap 5g/files/2021/07/Starlink-Analysis.pdf.
Starlink Services, LLC. 2021. Petition of Starlink Services, LLC for designation as an eligible telecommunications carrier. Available at: Federal Communications Commission.
Fcc.report. 2020. Federal Communications Commission Test Report. Retrieved September 03, 2021 from https://fcc.report/FCC-ID/2AWHPR201/4805897.pdf.
Starlink Beta frequently asked questions - r/Starlink. Retrieved September 22, 2021 from https://libredd.it/r/Starlink/comments/jjx5dq/starlink_beta_frequently_asked_questions/?sort=confidence.
Starlink. 2021. http://www.starlink.com/.
Temple College. 2020. A Technology Survival Guide for Online Learning. Retrieved September 08, 2021 from https://www.templejc.edu/live/files/479-technologysurvivalguide.pdf.
| [ "A UNIFIED TREATMENT OF DIVIDEND PAYMENT PROBLEMS UNDER FIXED COST AND IMPLEMENTATION DELAYS" ]
| [ "Erhan Bayraktar", "Masahiko Egami" ]
| [] | [] | 10.1007/s00186-009-0292-7 | [ "https://arxiv.org/pdf/math/0703825v3.pdf" ] | 7,540,803 | math/0703825 | 930058f488b0d71679458b70edb46f7367ec49fe

A UNIFIED TREATMENT OF DIVIDEND PAYMENT PROBLEMS UNDER FIXED COST AND IMPLEMENTATION DELAYS
Erhan Bayraktar and Masahiko Egami
21 Jan 2009

In this paper we solve the dividend optimization problem for a corporation or a financial institution when the managers of the corporation are facing (regulatory) implementation delays. We consider several cash reservoir models for the firm, including two mean-reverting processes, the Ornstein-Uhlenbeck and square-root processes. Since the cash flow structures of different companies have different qualitative behaviors, it makes sense to use different diffusions to model them. We provide a unified mathematical framework to analyze all these models and find the optimal barrier strategies. Our solution depends on a new characterization of the value function for one-dimensional diffusions and provides easily implementable algorithms to find the optimal control and the value function.
2000 Mathematics Subject Classification. Primary: 93E20, Secondary: 60J60.
Introduction
In this paper, we solve the dividend optimization problem for a corporation or a financial institution. The corporation controls the timing and the amount of dividends, and its objective is to maximize the total discounted dividends paid out to shareholders until the time of bankruptcy, given that the dividend payments are subject to regulatory delay. The payment of a dividend is not automatic: payments can be made only after a certain amount of time elapses. The amount and the timing of a payment are decided by the company managers, but these are subject to the approval of the company's owners (shareholders) and possibly also of debt holders, and therefore it takes some time before the dividends are paid. Recently, there have been other papers on optimally controlling a state variable subject to implementation delays in different modeling contexts; see e.g. [2], [3], [4], [16], [21] and [25]. Our methodology for solving this problem is in the spirit of [4] and differs from the other papers cited above, as will be made clear below.
We model the problem of the corporation as an impulse control problem and assume that when dividend is paid out, the firm has to pay a fixed cost representing the resources it has to devote to the distribution of dividends. This amount is independent of the size of the dividend payment. Other papers modeling the dividend payment problem as an impulse control problem are [7], [13] and [23]. There are several other papers which model the dividend payment problem as a singular stochastic control problem by assuming that there is no fixed cost at the time of dividend payment; see e.g. [10], [11], [12], [13] and [26].
Applying an appropriate transformation to the value of a particular control, we transform the problem into a non-linear programming problem. Using the new characterization of the value function, we give an easy-to-implement algorithm to determine the optimal control and the value function. A secondary result of our paper is the set of sufficient conditions under which the smooth fit holds (see Remark 4.1 and Proposition 4.1). In contrast, in the literature impulse control problems are solved by first finding a classical solution to a system of quasi-variational inequalities. The optimal thresholds are determined using the so-called "smooth fit principle" (by hypothesizing that the smooth fit holds). Once a classical solution to this system is determined, it can be shown to be equal to the value function by the so-called verification lemma. See e.g. Bensoussan and Lions [5] and Øksendal and Sulem [20].
In this paper, the time horizon is the time of ruin, and this makes the analysis more difficult than that of [4], which only considers infinite horizon problems. Since the cash flows of different companies have different qualitative behaviors, a manager needs a portfolio of tractable models to choose from. Here we consider four models for the aggregate income/cash reservoir of the firm: i) Brownian motion with drift, ii) Ornstein-Uhlenbeck, iii) square-root process, iv) geometric Brownian motion. Most of the papers related to stochastic impulse control assume, in order to obtain analytical solutions, that the uncontrolled process is a Brownian motion with drift. In addition to using Brownian motion to model the cash reservoir, we also propose two mean-reverting processes as possible modeling alternatives, as suggested by the Cash Flow Hypothesis in Jensen [14]; see [12] for further motivation. On the other hand, geometric Brownian motion is used to model the firm value in the structural models in credit risk modeling. Our solution for the geometric Brownian motion model can also be interpreted as the optimal dividend distribution to the stockholders of a given company, since geometric Brownian motion is frequently used to model the value of a company (e.g. in structural credit risk models) [18]; see [10] for further motivation. As far as we know, our paper is the first one that explicitly handles the dividend payment problem for the square-root process (with or without delays).
The rest of the paper is organized as follows: In Section 2, we present the models for the cash reservoir and state the dividend payment problem. In Section 3, we provide a characterization of the value function for a given threshold strategy. In Section 4, we provide an easily implementable algorithm to find the optimal threshold strategy. We also provide theoretical justification for our algorithm in this section (see e.g. Proposition 4.1). We then check that the models satisfy the sufficient assumptions of optimality in Section 4.3. Finally, in Section 5 we present some numerical examples.
Statement of The Problem
Let (Ω, F, P) be a complete probability space with a standard Brownian motion W = {W t ; t ≥ 0}. We model the aggregate income process X 0 as either the Brownian motion
(2.1) $dX^0_t = \mu\,dt + \sigma\,dW_t, \qquad X^0_0 = x > 0,$
for some constants µ, σ > 0; or the Ornstein Uhlenbeck process
(2.2) $dX^0_t = -\rho X^0_t\,dt + dW_t, \qquad X^0_0 = x > 0,$
for some constant ρ > 0, or the square root process
(2.3) $dX^0_t = (1 - 2\rho X^0_t)\,dt + 2\sqrt{X^0_t}\,dW_t, \qquad X^0_0 = x > 0.$
Note that if the initial condition of (2.3) is properly chosen, then the solution of it is the square of the solution of (2.2). We will also consider the case when the aggregate income process follows the geometric Brownian motion
(2.4) $dX^0_t = \mu X^0_t\,dt + \sigma X^0_t\,dW_t, \qquad X^0_0 = x > 0.$
The firm will pay dividends to its shareholders out of the aggregate income process X 0 and the net holdings of the firm, i.e. the net income process will be denoted by X. We assume that the company pays out dividends to its shareholders in order to maximize the expected value of discounted dividends paid out until the time of ruin. There will be a fixed amount of transaction cost for making a dividend payment. In this framework a dividend payment scheme that a firm follows can be represented by a doubly stochastic sequence
$\nu = (T_1, T_2, \ldots, T_i, \ldots;\ \xi_1, \xi_2, \ldots, \xi_i, \ldots),$
where 0 ≤ T 1 < T 2 < .... is an increasing sequence of F-stopping times such that T i+1 − T i ≥ ∆, and ξ 1 , ξ 2 ... are F (T i +∆)− measurable random variables representing the dividend amount paid out. The firm decides to make dividend payments at (random) time T i , but it can not act until time T i + ∆ (where ∆ ≥ 0 is a constant). It decides on the magnitude of the dividend amount at T i + ∆ depending on the level of its revenues. We will in particular consider benchmark strategies. These strategies are determined by specifying two numbers 0 ≤ a < b as follows: At the time the aggregate profit (or the firm value) hits a large enough level b, the shareholders ask the firm to commit to making dividend payments and reduce the level of net profits (or the firm value) to a. We denote by V the set of strategies that fit into this description. We will refer to them as the admissible strategies. The net income process follows (until after the first dividend payment)
(2.5) $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t, \quad 0 \le t < T_1 + \Delta, \qquad X_{T_1+\Delta} = X_{(T_1+\Delta)-} - \xi_1,$
for appropriate functions µ and σ depending on which case we are inspecting. For the first three cases we assume that 0 is the absorbing state and define τ 0 (the time of ruin) as :
$\tau_0 \triangleq \inf\{t \ge 0 : X_t \le 0\}.$
When the aggregate income process follows the geometric Brownian motion, the time of ruin is defined as
(2.6) $\tau_d \triangleq \inf\{t \ge 0 : X_t \le d\},$
for some fixed d > 0. The purpose of the firm is to maximize expected value of the discounted dividend payments until the time of ruin, i.e.,
(2.7) $J^\nu(x) \triangleq \mathbb{E}^x\Big[\sum_{i:\,T_i+\Delta<\tau_0} e^{-\alpha(T_i+\Delta)}\,K\big(X_{(T_i+\Delta)-}, X_{T_i+\Delta}\big)\Big],$
over all the admissible strategies. We will assume that
$K\big(X_{(T_1+\Delta)-}, X_{T_1+\Delta}\big) = X_{(T_1+\Delta)-} - X_{T_1+\Delta} - \lambda,$
where λ > 0 is a fee associated with a transaction. We could also consider
$K\big(X_{(T_1+\Delta)-}, X_{T_1+\Delta}\big) = k\cdot\big(X_{(T_1+\Delta)-} - X_{T_1+\Delta}\big) - \lambda,$
for k ∈ (0, 1), in which 1 − k can be considered as the tax rate. This does not affect the analysis and therefore we will focus on the case when k = 1.
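The performance criterion (2.7) can also be estimated by brute-force simulation, which is sometimes a useful sanity check on the analytical results derived below. The sketch that follows runs an Euler discretization of the Brownian-motion model (2.1) under a threshold strategy (a, b) with implementation delay Δ. It is illustrative only; the strategy and model parameters used in the example call are placeholders, not values from the paper.

```python
# Monte Carlo sketch of J^nu(x) in (2.7) for the Brownian-motion model (2.1)
# under an (a, b) threshold strategy with implementation delay Delta.
import numpy as np

def simulate_J(x, a, b, mu, sigma, alpha, lam, Delta,
               T=50.0, dt=0.01, n_paths=500, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        X, t, payout = x, 0.0, 0.0
        decision_time = None                 # time at which the firm committed
        while t < T and X > 0.0:
            if decision_time is None and X >= b:
                decision_time = t            # payment executes Delta later
            elif decision_time is not None and t >= decision_time + Delta:
                # K(X_{(T_i+Delta)-}, X_{T_i+Delta}) = X - a - lambda
                payout += np.exp(-alpha * t) * (X - a - lam)
                X, decision_time = a, None
            X += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        total += payout
    return total / n_paths

# hypothetical parameters and thresholds (not taken from the paper)
print(simulate_J(x=1.0, a=0.75, b=2.7, mu=0.5, sigma=1.0,
                 alpha=0.1, lam=0.2, Delta=0.25))
```

If ruin occurs during the delay window, no payment is made, which mirrors the indicator $T_i+\Delta<\tau_0$ in (2.7).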
Let us denote the value function of this problem by
(2.8) $v(x) \triangleq \sup_{\nu\in\mathcal{V}} J^\nu(x) = J^{\nu^*}(x).$
When X 0 is the Ornstein Uhlenbeck process, in addition to considering the performance function in (2.7) we will also consider the following performance
(2.9) $J^\nu(x) \triangleq \mathbb{E}^x\Big[\sum_{i:\,T_i+\Delta<\tau_0} e^{-\alpha(T_i+\Delta)}\,K\big(X_{(T_i+\Delta)-}, X_{T_i+\Delta}\big) - P e^{-\alpha\tau_0}\Big],$
for some constant P > 0. The rationale for considering this penalty is to penalize declaring bankruptcy. As we shall see, if the purpose is to maximize the performance function in (2.7) when $X^0$ follows an OU process, it is optimal to declare bankruptcy when the aggregate income process reaches a certain level. Therefore the OU process might be used to model the income process of firms in distress, and this might give creditors an idea of how this type of firm might behave. The extra cost in (2.9), on the other hand, deters firms from declaring bankruptcy.
Characterization of the Value Function
In this section, we will show that when we apply a suitable transformation (see 3.6) to the value function corresponding to a particular threshold strategy (that is identified by a pair (a, b)), the transformed value function is linear on (F (0), F (b)). This characterization will become important in determining the optimal threshold strategy in the next section. Equation (2.7) can be developed as
J ν (x) = E x
In this case J ν (0) = −P and
$J^\nu(b) = \mathbb{E}^b\Big[1_{\{\Delta<\tau_0\}} e^{-\alpha\Delta}\big(K(X_\Delta, a) + J^\nu(a)\big) - P\,1_{\{\Delta>\tau_0\}} e^{-\alpha\tau_0}\Big].$
We will denote the infinitesimal generator of the process X 0 by A. Let us denote the increasing and decreasing solutions of the second-order ordinary differential equation (A − α)u = 0 by ψ(·) and ϕ(·) respectively (these are uniquely determined up to a multiplication). We can write
(3.5) $\mathbb{E}^x\big[e^{-\alpha\tau_r}1_{\{\tau_r<\tau_l\}}\big] = \dfrac{\psi(l)\varphi(x)-\psi(x)\varphi(l)}{\psi(l)\varphi(r)-\psi(r)\varphi(l)}, \qquad \mathbb{E}^x\big[e^{-\alpha\tau_l}1_{\{\tau_l<\tau_r\}}\big] = \dfrac{\psi(x)\varphi(r)-\psi(r)\varphi(x)}{\psi(l)\varphi(r)-\psi(r)\varphi(l)},$
for $x \in [l, r]$, where $\tau_l \triangleq \inf\{t > 0;\ X^0_t = l\}$ and $\tau_r \triangleq \inf\{t > 0;\ X^0_t = r\}$ (see e.g. Dayanik and Karatzas [9]). Let us introduce the increasing function
$F(x) \triangleq \dfrac{\psi(x)}{\varphi(x)}.$
By defining
(3.6) $W \triangleq (J^\nu/\varphi)\circ F^{-1}$ on $x \in [0, b]$; using (3.5), equation (3.3) on $0 \le x \le b$ becomes
(3.7) $W(F(x)) = W(F(b))\,\dfrac{F(x)-F(0)}{F(b)-F(0)} + W(F(0))\,\dfrac{F(b)-F(x)}{F(b)-F(0)}, \qquad 0 \le x \le b,$
which shows that the value function is linear in the transformed space. Next, we will compute
(3.8) $B \triangleq \mathbb{E}^x\big[1_{\{\Delta<\tau_0\}} e^{-\alpha\Delta}\big(K(X_\Delta, a) + J^\nu(a)\big)\big].$
3.1.1. Ornstein-Uhlenbeck Process. Let us first consider the case when $X^0$ is the Ornstein-Uhlenbeck process given by (2.2). Recall that $X^0_t$ can be written as (this can be derived using Theorem 4.6 of Karatzas and Shreve [15])
(3.9) $X^0_t = x e^{-\rho t} + B_{Q(t)}$, where $Q(t) = \dfrac{1-e^{-2\rho t}}{2\rho}$, or
(3.10) $e^{\rho t} X^0_t = x + \tilde{B}_{\tilde{Q}(t)}$, where $\tilde{Q}(t) = \dfrac{e^{2\rho t}-1}{2\rho}$,
and B andB are Brownian motions. This implies that the distribution of X 0 t is Gsn xe −αt , Q(t) . (We use Gsn(a, b) to denote a Gaussian random variable with mean a and variance b.) As a result of the representation in (3.10)
(3.11) $\mathbb{P}^x(\tau_0 > \Delta) = \mathbb{P}^x\big(\tau^{\tilde B}_0 > \tilde Q(\Delta)\big) = 1 - \dfrac{2}{\sqrt{2\pi}}\int_{x/\sqrt{\tilde Q(\Delta)}}^{\infty} e^{-u^2/2}\,du = 2N\!\left(\dfrac{x}{\sqrt{\tilde Q(\Delta)}}\right) - 1 = 2N\!\left(\dfrac{x e^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\right) - 1,$
where τB 0 is the first time the Brownian motion x +B hits zero. Here, we used the distribution of the hitting times of Brownian motion (see page 96 of Karatzas and Shreve). We also used the notation that
N (x) = x −∞ 1/( √ 2π)e −u 2 /2 du. Let us try to identify the density function of Y ∆ X 0 ∆ 1 {τ 0 >∆} .
To this end we first compute
$\mathbb{P}^x\{X^0_\Delta \ge y,\ \tau_0 > \Delta\} = \mathbb{P}^x\{X^0_\Delta \ge y\} - \mathbb{P}^x\{X^0_\Delta \ge y,\ \tau_0 \le \Delta\} = \mathbb{P}^x\{X^0_\Delta \ge y\} - \mathbb{P}^x\{X^0_\Delta \le -y,\ \tau_0 \le \Delta\} = \mathbb{P}^x\{X^0_\Delta \ge y\} - \mathbb{P}^x\{X^0_\Delta \le -y\}$
$= \dfrac{1}{\sqrt{2\pi Q(\Delta)}}\left(\int_y^\infty \exp\!\Big(-\dfrac{(u-xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)du - \int_{-\infty}^{-y} \exp\!\Big(-\dfrac{(u-xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)du\right).$ (3.12)
Here, the second equality follows from the fact that OU process satisfies a reflection principle around zero, and the third inequality follows from the fact that {X 0 ∆ ≤ −y} ⊃ {τ 0 ≤ ∆} since y > 0. The last line implies that (after taking the derivative with respect to y and flipping the sign) the density of the random variable Y ∆ = X 0 ∆ 1 {τ 0 >∆} is given by
(3.13) $q(y) = \dfrac{1}{\sqrt{2\pi Q(\Delta)}}\left[\exp\!\Big(-\dfrac{(y-xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big) - \exp\!\Big(-\dfrac{(y+xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)\right], \qquad y > 0.$
Using (3.11) and (3.13), we can write
(3.14) $B = \mathbb{E}^x\big[1_{\{\Delta<\tau_0\}} e^{-\alpha\Delta}\big(K(X_\Delta,a)+J^\nu(a)\big)\big] = e^{-\alpha\Delta}\left[\Big(2N\!\Big(\dfrac{xe^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\Big)-1\Big)\big(J^\nu(a)-a-\lambda\big) + A\right],$
in which $A \triangleq \int_0^\infty y\,q(y)\,dy$. Since
$\int_0^{xe^{-\rho\Delta}} y \exp\!\Big(-\dfrac{(y-xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)dy = -\int_{-xe^{-\rho\Delta}}^{0} y \exp\!\Big(-\dfrac{(y+xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)dy,$
we can write $A$ as
$A = \dfrac{1}{\sqrt{2\pi Q(\Delta)}}\left(\int_{xe^{-\rho\Delta}}^\infty y \exp\!\Big(-\dfrac{(y-xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)dy - \int_{-xe^{-\rho\Delta}}^\infty y \exp\!\Big(-\dfrac{(y+xe^{-\rho\Delta})^2}{2Q(\Delta)}\Big)dy\right).$
For any $\mu\in\mathbb{R}$ and $\sigma>0$ we have that
$\int_\mu^\infty \dfrac{x}{\sqrt{2\pi\sigma^2}}\exp\!\Big(-\dfrac{(x-\mu)^2}{2\sigma^2}\Big)dx = \dfrac{\sigma}{\sqrt{2\pi}} + \dfrac{\mu}{2}.$
As a result,
(3.15) $A = x e^{-\rho\Delta}.$
Observe from the above calculations that X 0 ∆ 1 {τ 0 >∆} and X 0 ∆ have the same expectation. We will also compute the quantity
(3.16) $\tilde B \triangleq \mathbb{E}^x\Big[1_{\{\Delta<\tau_0\}} e^{-\alpha\Delta}\big(K(X_\Delta,a)+J^\nu(a)\big) - P\,1_{\{\Delta>\tau_0\}}e^{-\alpha\tau_0}\Big],$
for this case. Using the density of the hitting time of 0, which can be derived by differentiating (3.11), we can write
(3.17) $\tilde B = B - P\int_0^\Delta e^{-\alpha t}\,\dfrac{x}{\sqrt{2\pi}}\left(\dfrac{\rho}{\sinh(\rho t)}\right)^{3/2}\exp\!\left(-\dfrac{\rho x^2 e^{-\rho t}}{2\sinh(\rho t)} + \dfrac{\rho t}{2}\right)dt.$
There is no explicit expression available for the integral term (even in terms of special functions, except when ∆ = ∞, see e.g. [6] and [8], in which case this integral is the Laplace transform of the distribution of $\tau_0$), but the NIntegrate function of Mathematica is able to evaluate it with very high numerical precision.
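For readers who want to check these formulas numerically, the sketch below evaluates B via (3.11), (3.14) and (3.15), and approximates the integral in (3.17) with standard quadrature (SciPy), in the spirit of the Mathematica computation mentioned above. The value passed for $J^\nu(a)$ is a placeholder and the parameter values are illustrative, not taken from the paper.

```python
# Numerical sketch of B in (3.14)-(3.15) and of the ruin-penalty correction
# in (3.17) for the OU model.  J_nu_a is a placeholder for J^nu(a).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def B_ou(x, a, J_nu_a, rho, alpha, lam, Delta):
    Q = (1.0 - np.exp(-2 * rho * Delta)) / (2 * rho)                  # Q(Delta), (3.9)
    survive = 2 * norm.cdf(x * np.exp(-rho * Delta) / np.sqrt(Q)) - 1  # (3.11)
    A = x * np.exp(-rho * Delta)                                       # (3.15)
    return np.exp(-alpha * Delta) * (survive * (J_nu_a - a - lam) + A)  # (3.14)

def ruin_correction(x, P, rho, alpha, Delta):
    """P times the integral term of (3.17), evaluated by quadrature."""
    def integrand(t):
        s = np.sinh(rho * t)
        return (np.exp(-alpha * t) * x / np.sqrt(2 * np.pi)
                * (rho / s) ** 1.5
                * np.exp(-rho * x ** 2 * np.exp(-rho * t) / (2 * s) + rho * t / 2))
    val, _ = quad(integrand, 0.0, Delta)
    return P * val

print(B_ou(x=6.0, a=4.3, J_nu_a=5.0, rho=0.01, alpha=0.05, lam=0.2, Delta=0.25))
print(ruin_correction(x=6.0, P=10.0, rho=0.01, alpha=0.05, Delta=0.25))
```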
Remark 3.1. We can compute $B$ in (3.8) explicitly even for the cases when $X^0$ follows
(3.18) $dX^0_t = (\phi - \rho X^0_t)\,dt + \sigma\,dW_t, \qquad X^0_0 = x > 0,$
for $\phi, \sigma > 0$, by using the strong Markov property to compute $\mathbb{E}^x[X_\Delta 1_{\{\Delta<\tau_0\}}] = \mathbb{E}^x[X_\Delta] - \mathbb{E}^x[1_{\{\Delta\ge\tau_0\}}X_\Delta]$. The strong Markov property is used to compute
$\mathbb{E}^x\big[1_{\{\Delta\ge\tau_0\}}X_\Delta\big] = \mathbb{E}^x\big[1_{\{\Delta\ge\tau_0\}}\,\mathbb{E}^x[X_\Delta\,|\,\mathcal{F}_{\tau_0}]\big] = \mathbb{E}^x\big[1_{\{\Delta\ge\tau_0\}}\,\mathbb{E}^0[X_{\Delta-\tau_0}]\big] = \int_0^\Delta f(u)\,\mathbb{E}^0[X_{\Delta-u}]\,du = \phi\int_0^\Delta f(u)\big(1-\exp(-\rho(\Delta-u))\big)\,du,$
where $f$ is the density function of $\tau_0$. Several representations for $f$ are available; see e.g. [1].
3.1.2. Square-root Process. To evaluate $B$ in (3.8) when the aggregate income process is modeled by the square-root process in (2.3), we need to compute
(3.19) $C \triangleq \int_0^\infty y^2\,\tilde q(y)\,dy,$
in which $\tilde q(y)$ is equal to the $q(y)$ in (3.13) with $x$ replaced by $\sqrt{x}$. This follows because if (2.2) is started from $\sqrt{x}$, then its solution is the square root of the solution of (2.3). Let us first evaluate
1 2πQ(∆) ∞ 0 y 2 exp − (y − √ xe −ρ∆ ) 2 2Q(∆) dy = 1 √ 2π ∞ − √ xe −ρ∆ √ Q(∆) (y Q(∆) + √ xe −ρ∆ ) 2 exp − y 2 2 dy = Q(∆) √ 2π ∞ − √ xe −ρ∆ √ Q(∆) y 2 exp − y 2 2 dy + 2 Q(∆) √ 2π √ xe −ρ∆ ∞ − √ xe −ρ∆ √ Q(∆) y exp − y 2 2 dy + xe −2ρ∆ N √ xe −ρ∆ Q(∆) = Q(∆) √ 2π ∞ − √ xe −ρ∆ √ Q(∆) y 2 exp − y 2 2 dy + 2 Q(∆) √ 2π √ xe −ρ∆ exp − xe −2ρ∆ 2Q(∆) + xe −2ρ∆ N √ xe −ρ∆ Q(∆) .
(3.20)
Since 1 √ 2πQ(∆) ∞ 0 y 2 exp − (y+ √ xe −ρ∆ ) 2 2Q(∆)
dy can be obtained by flipping the sign in front of √ x in (3.20), the computation of C will follow. We also have that
Q(∆) √ 2π ∞ − √ xe −ρ∆ √ Q(∆) y 2 exp − y 2 2 dy = Q(∆) 2 + Q(∆) √ 2π 0 − √ xe −ρ∆ √ Q(∆) y 2 exp − y 2 2 dy = Q(∆) 2 + Q(∆)N √ xe −ρ∆ Q(∆) − xQ(∆) √ 2π e −ρ∆ exp − xe −2ρ∆ 2Q(∆) . (3.21)
From (3.20) and (3.21), we can evaluate $C$ as
(3.22) $C = \big(x e^{-2\rho\Delta} + Q(\Delta)\big)\left(2N\!\Big(\dfrac{\sqrt{x}\,e^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\Big)-1\right) + \sqrt{\dfrac{2Q(\Delta)x}{\pi}}\,e^{-\rho\Delta}\exp\!\left(-\dfrac{x e^{-2\rho\Delta}}{2Q(\Delta)}\right).$
When $X^0$ is the square-root process, then $B$ defined in (3.8) equals
(3.23) $B = e^{-\alpha\Delta}\left[\Big(2N\!\Big(\dfrac{\sqrt{x}\,e^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\Big)-1\Big)\big(J^\nu(a)-a-\lambda\big) + C\right].$
3.1.3. Brownian Motion with Drift.
Similarly, using reflection principle, Girsanov's Theorem and the spatial homogeneity of Brownian motion we will obtain B in (3.14) when X 0 is a Brownian motion given by (2.1). We will first need the following lemma, which is Corollary B.3.4 in [19].
Lemma 3.1. Let $Y_t = \sigma W_t + \mu t$ and $m^Y_t = \min_{u\in[0,t]} Y_u$. Then
(3.24) $\mathbb{P}\{Y_t \ge y,\ m^Y_t \ge m\} = N\!\left(\dfrac{-y+\mu t}{\sigma\sqrt{t}}\right) - e^{2\mu m/\sigma^2}\,N\!\left(\dfrac{2m - y + \mu t}{\sigma\sqrt{t}}\right),$
for every $m \le 0$ and $y \ge m$.
We can write $B$ as
(3.25) $B = \mathbb{E}^{x+\theta}\big[1_{\{\Delta<\tau_\theta\}}X^0_\Delta\big],$
in which $\theta \triangleq J^\nu(a) - a - \lambda$; this follows from the spatial homogeneity of Brownian motion. Note that for any $y > \theta$,
$Z \triangleq 1_{\{\Delta<\tau_\theta\}}X^0_\Delta \ge y \quad\text{if and only if}\quad X^0_\Delta \ge y \ \text{and}\ m^{X^0}_\Delta \ge \theta.$
We will find the probability density function of $Z$. Let us first define $Y_t \triangleq X^0_t - (\theta+x)$, which implies that $m^Y_t = m^{X^0}_t - (\theta+x)$. With this new definition,
(3.26) $\mathbb{P}\{Z \ge y\} = \mathbb{P}\{Y_\Delta \ge y-(x+\theta),\ m^Y_\Delta \ge -x\} = N\!\left(\dfrac{-y+x+\theta+\mu\Delta}{\sigma\sqrt{\Delta}}\right) - e^{-2\mu x/\sigma^2}N\!\left(\dfrac{-x-y+\theta+\mu\Delta}{\sigma\sqrt{\Delta}}\right).$
Here, the second equality follows from Lemma 3.1. Now, the density of the random variable Z is easy to calculate and using that we can compute B by calculating the expectation of Z and get
(3.27) $B = e^{-\alpha\Delta}\Bigg[\big(x+\mu\Delta+J^\nu(a)-a-\lambda\big)N\!\Big(\dfrac{x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) + \dfrac{\sigma\sqrt{\Delta}}{\sqrt{2\pi}}\exp\!\Big(-\dfrac{(x+\mu\Delta)^2}{2\sigma^2\Delta}\Big) - e^{-2\mu x/\sigma^2}\Big(\big(-x+\mu\Delta+J^\nu(a)-a-\lambda\big)N\!\Big(\dfrac{-x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) + \dfrac{\sigma\sqrt{\Delta}}{\sqrt{2\pi}}\exp\!\Big(-\dfrac{(-x+\mu\Delta)^2}{2\sigma^2\Delta}\Big)\Big)\Bigg].$
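The closed form (3.27) is straightforward to transcribe numerically; the following sketch does so using SciPy's normal CDF and density. As before, the value supplied for $J^\nu(a)$ is a placeholder and the parameter values are illustrative, not taken from the paper.

```python
# Transcription of (3.27) for the Brownian-motion model; J_nu_a stands in
# for J^nu(a) and is a placeholder input here.
import numpy as np
from scipy.stats import norm

def B_bm(x, a, J_nu_a, mu, sigma, alpha, lam, Delta):
    s = sigma * np.sqrt(Delta)
    theta = J_nu_a - a - lam
    up, dn = (x + mu * Delta) / s, (-x + mu * Delta) / s
    plus = (x + mu * Delta + theta) * norm.cdf(up) + s * norm.pdf(up)
    minus = (-x + mu * Delta + theta) * norm.cdf(dn) + s * norm.pdf(dn)
    return np.exp(-alpha * Delta) * (plus - np.exp(-2 * mu * x / sigma ** 2) * minus)

print(B_bm(x=2.0, a=0.75, J_nu_a=3.0, mu=0.5, sigma=1.0,
           alpha=0.1, lam=0.2, Delta=0.25))
```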
3.1.4. Geometric Brownian Motion. We will use the down and out European call option price, see e.g. [24] , (when we take the strike price to be zero) to evaluate
(3.28) $\mathbb{E}^x\big[1_{\{\tau_d>\Delta\}}X_\Delta\big] = e^{\mu\Delta}\,x\left(N(d_1) - \Big(\dfrac{d}{x}\Big)^{1+2\mu/\sigma^2}N(-d_2)\right),$
in which
(3.29) $d_1 = \dfrac{\log\frac{x}{d} + (\mu-\frac12\sigma^2)\Delta}{\sigma\sqrt{\Delta}}, \qquad d_2 = \dfrac{\log\frac{x}{d} - (\mu-\frac12\sigma^2)\Delta}{\sigma\sqrt{\Delta}}.$
In order to calculate B we also need to compute P{τ d > ∆}. In fact
$\mathbb{P}^x\{\tau_d > \Delta\} = \mathbb{P}^x\{\tau^{\tilde B}_{\tilde d} > \Delta\},$
in which $\tau^{\tilde B}_{\tilde d}$ is the hitting time of $\tilde d < 0$ by the Brownian motion $\tilde B_t \triangleq \gamma t + \sigma B_t$, where
$\tilde d \triangleq \log\dfrac{d}{x}, \qquad \gamma = \mu - \tfrac12\sigma^2.$
Using the hitting time distribution for Brownian motion (with drift), (which can be obtained from Lemma 3.1), we deduce
(3.30) $\mathbb{P}^x\{\tau_d > \Delta\} = \mathbb{P}^x\{m^{\tilde B}_\Delta \ge \tilde d\} = N\!\left(\dfrac{-\tilde d + \gamma\Delta}{\sigma\sqrt{\Delta}}\right) - \exp\!\left(\dfrac{2\gamma\tilde d}{\sigma^2}\right)N\!\left(\dfrac{\tilde d + \gamma\Delta}{\sigma\sqrt{\Delta}}\right).$
Therefore, B can be written as
(3.31) $B = e^{-\alpha\Delta}\left[e^{\mu\Delta}\,x\left(N(d_1) - \Big(\dfrac{d}{x}\Big)^{1+2\mu/\sigma^2}N(-d_2)\right) + \big(J^\nu(a)-a-\lambda\big)\,\mathbb{P}^x\{\tau_d > \Delta\}\right].$
4. An Efficient Algorithm to Calculate the Value Function
$D_1 = \dfrac{-\mu+\sqrt{\mu^2+2\alpha\sigma^2}}{\sigma^2} \quad\text{and}\quad D_2 = \dfrac{-\mu-\sqrt{\mu^2+2\alpha\sigma^2}}{\sigma^2}.$
When X 0 is the Ornstein-Uhlenbeck process in (2.2), then
(4.2) $\psi(x) = \exp\!\Big(\dfrac{\rho x^2}{2}\Big)\,D_{-\alpha/\rho}\big(-x\sqrt{2\rho}\big), \qquad \varphi(x) = \exp\!\Big(\dfrac{\rho x^2}{2}\Big)\,D_{-\alpha/\rho}\big(x\sqrt{2\rho}\big), \qquad x\in\mathbb{R},$
where D ν (·) is the parabolic cylinder function given in the Appendices 1.14 and 2.9 in [6] which is defined as
$D_\nu(x) \triangleq 2^{-\nu/2}e^{-x^2/4}H_\nu\!\Big(\dfrac{x}{\sqrt 2}\Big), \qquad x\in\mathbb{R},$
in which H ν is the Hermite polynomial of order ν, which has the integral representation (see e.g. [17])
$H_\nu(x) = \dfrac{1}{\Gamma(-\nu)}\int_0^\infty \exp(-t^2 - 2tx)\,t^{-\nu-1}\,dt, \qquad \mathrm{Re}(\nu)<0.$
On the other hand, when X 0 is the square root process whose dynamics follows (2.3), then
(4.3) $\psi(x) = x^{-1/4}\exp\!\Big(\dfrac{\rho x}{2}\Big)\,M_{-\frac{\alpha}{2\rho}+\frac14,\,-\frac14}(\rho x), \qquad \varphi(x) = x^{-1/4}\exp\!\Big(\dfrac{\rho x}{2}\Big)\,W_{-\frac{\alpha}{2\rho}+\frac14,\,-\frac14}(\rho x),$ in which $W_{-\frac{\alpha}{2\rho}+\frac14,-\frac14}$ and $M_{-\frac{\alpha}{2\rho}+\frac14,-\frac14}$
are Whittaker functions (see e.g. Appendix 2.10 of [6]). These functions satisfy
$W_{-\frac{\alpha}{2\rho}+\frac14,\,-\frac14}\!\Big(\dfrac{x^2}{2}\Big) = 2^{\frac{\alpha}{2\rho}-\frac14}\sqrt{x}\,D_{-\alpha/\rho}(x), \quad x\ge0, \qquad M_{-\frac{\alpha}{2\rho}+\frac14,\,-\frac14}\!\Big(\dfrac{x^2}{2}\Big) = \dfrac{\Gamma\big((1+\alpha/\rho)/2\big)}{2\sqrt{\pi}}\,\sqrt{x}\,\big(D_{-\alpha/\rho}(-x) - D_{-\alpha/\rho}(x)\big), \quad x\ge0,$ (4.4)
in which Γ stands for the Gamma function Γ(x) = ∞ 0 u x−1 e −u du. When, X 0 is the geometric Brownian motion, then
(4.5) $\psi(x) = x^{\sqrt{\kappa^2+2\alpha/\sigma^2}\,-\,\kappa}, \qquad \varphi(x) = x^{-\sqrt{\kappa^2+2\alpha/\sigma^2}\,-\,\kappa},$
in which κ = µ/σ 2 − 1/2.
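The special functions appearing in (4.1) and (4.2) are available in standard numerical libraries; in particular, SciPy's pbdv evaluates the parabolic cylinder function $D_\nu$. The sketch below assembles $\psi$ and $\varphi$ for the Brownian-motion and Ornstein-Uhlenbeck cases. It is illustrative only, and the parameter values are arbitrary.

```python
# Evaluating the increasing/decreasing solutions psi, phi of (A - alpha)u = 0:
# (4.1) for the Brownian-motion case, (4.2) for the OU case via D_nu.
import numpy as np
from scipy.special import pbdv

def psi_phi_bm(x, mu, sigma, alpha):
    root = np.sqrt(mu ** 2 + 2 * alpha * sigma ** 2)
    D1, D2 = (-mu + root) / sigma ** 2, (-mu - root) / sigma ** 2
    return np.exp(D1 * x), np.exp(D2 * x)

def psi_phi_ou(x, rho, alpha):
    nu = -alpha / rho
    pref = np.exp(rho * x ** 2 / 2)
    psi = pref * pbdv(nu, -x * np.sqrt(2 * rho))[0]   # D_nu(-x*sqrt(2*rho))
    phi = pref * pbdv(nu, x * np.sqrt(2 * rho))[0]    # D_nu(+x*sqrt(2*rho))
    return psi, phi

print(psi_phi_bm(1.0, mu=0.5, sigma=1.0, alpha=0.1))
print(psi_phi_ou(1.0, rho=0.01, alpha=0.05))
```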
4.2. An Algorithm to Find the Optimal Control. In this section we will describe a numerical algorithm to find the value function. First we will introduce some notation to facilitate our description: with $B$ as in (3.8), we write $B = e^{-\alpha\Delta}\big(r(x; a) + h(x)J^\nu(a)\big)$ (cf. (4.6) and (4.11)). We transform $r$ and $h$ by
(4.7) $R(\cdot\,; a) \triangleq \dfrac{r(F^{-1}(\cdot);\, a)}{\varphi(F^{-1}(\cdot))}, \qquad H(\cdot) \triangleq \dfrac{h(F^{-1}(\cdot))}{\varphi(F^{-1}(\cdot))}.$
Note that r(a; a) < 0 and that sup x r(x; a) > 0 in all the cases considered above (see Section 4.3). First stage: For a given pair (a, b) ∈ R 2 + we will determine W in (3.6) using the linear characterization in (3.7). On [F (0), F (b)] we will find the line W (y) = βy + ξ that passes through the point F (0), − P ϕ(0) , i.e.,
Now the function J ν (x) in (3.4) can be written as
(4.11) $J^\nu(x) = \begin{cases} \beta\psi(x) + \xi\varphi(x), & 0 \le x \le b,\\[2pt] e^{-\alpha\Delta}\big(r(x;a) + h(x)J^\nu(a)\big), & x \ge b.\end{cases}$
Note that $(\mathcal{A}-\alpha)J^\nu(x) = 0$ for $x < b$. Second stage: Let us fix $a$ and treat $\beta$ as a function of $b$, parametrized by $a$. We will maximize the function $\beta$ in (4.10). Taking the derivative of (4.9) with respect to $b$ and setting $\partial\beta/\partial b = 0$, we obtain
(4.12) $\beta = e^{-\alpha\Delta}\left[\dfrac{\partial}{\partial y}R(y;a)\Big|_{y=F(b)} + H'(F(b))\,\varphi(a)\Big(\beta\cdot(F(a)-F(0)) - \dfrac{P}{\varphi(0)}\Big)\right],$
in which β is as in (4.10). To find the optimal b given a we solve the non-linear and implicit equation (4.12). The right derivative of W at F (b) is
(4.14) $W'(F(b)) = e^{-\alpha\Delta}\left[W(F(a))\,\varphi(a)\,H'(F(b)) + \dfrac{\partial}{\partial y}R(y;a)\Big|_{y=F(b)}\right] = \beta,$
where we used (4.12). This implies that the left and the right derivatives of W are equal at F (b) (smooth fit), since the left derivative at F (b) is also equal to β.
Third stage: Now, we vary a ∈ R + and choose a * that maximizes a → β(a). We also find the corresponding b * = b(a * ). Now, the value function is given by (4.11) when a and b are replaced by a * and b * respectively. The next proposition justifies the second stage of our algorithm.
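To make the three stages concrete, here is a minimal numerical sketch for the Brownian-motion model with no ruin penalty (P = 0). It builds r and h from (4.22), forms R and H through the transformation (4.7), obtains β from the value-matching condition at F(b), and replaces the root-finding of the implicit equation (4.12) by a direct grid maximization of β over b (the smooth-fit equation is the first-order condition of that maximization). The parameter values are placeholders and this is not the authors' implementation.

```python
# Sketch of the three-stage procedure of Section 4.2 for the Brownian-motion
# model (2.1) with no ruin penalty (P = 0); placeholder parameters.
import numpy as np
from scipy.stats import norm

mu, sigma, alpha, lam, Delta = 0.5, 1.0, 0.1, 0.2, 0.25
root = np.sqrt(mu ** 2 + 2 * alpha * sigma ** 2)
D1, D2 = (-mu + root) / sigma ** 2, (-mu - root) / sigma ** 2

psi = lambda x: np.exp(D1 * x)          # increasing solution, (4.1)
phi = lambda x: np.exp(D2 * x)          # decreasing solution, (4.1)
F = lambda x: psi(x) / phi(x)

def r(x, a):                            # r(x; a) from (4.22)
    s = sigma * np.sqrt(Delta)
    up, dn = (x + mu * Delta) / s, (-x + mu * Delta) / s
    plus = (x + mu * Delta - a - lam) * norm.cdf(up) + s * norm.pdf(up)
    minus = (-x + mu * Delta - a - lam) * norm.cdf(dn) + s * norm.pdf(dn)
    return plus - np.exp(-2 * mu * x / sigma ** 2) * minus

def h(x):                               # h(x) from (4.22)
    s = sigma * np.sqrt(Delta)
    return (norm.cdf((x + mu * Delta) / s)
            - np.exp(-2 * mu * x / sigma ** 2) * norm.cdf((-x + mu * Delta) / s))

def beta(a, b):
    """Slope of the linear piece of W from value matching at F(b), P = 0."""
    R_b, H_b = r(b, a) / phi(b), h(b) / phi(b)        # R(F(b); a), H(F(b))
    den = F(b) - F(0) - np.exp(-alpha * Delta) * H_b * phi(a) * (F(a) - F(0))
    return np.exp(-alpha * Delta) * R_b / den if den > 0 else -np.inf

a_grid = np.linspace(0.0, 3.0, 61)      # third stage: search over a
b_grid = np.linspace(0.05, 15.0, 600)   # second stage: maximise beta over b
best = (-np.inf, None, None)
for a in a_grid:
    for b in b_grid[b_grid > a]:
        val = beta(a, b)
        if val > best[0]:
            best = (val, a, b)
beta_star, a_star, b_star = best
print(f"a* ~ {a_star:.2f}, b* ~ {b_star:.2f}, beta* ~ {beta_star:.3f}")
```

The grid search is deliberately crude; a production implementation would refine b with a one-dimensional solver around the smooth-fit point characterized by (4.12).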
Proposition 4.1. Assume that r(a; a) < 0 and sup x r(x; a) > 0. Furthermore, if the functions R(·; a) and H(·) defined in (4.7) are increasing and concave on (y, ∞) for some y ≥ F (a), and the function h(·) defined in (4.6) satisfies h(·) ∈ (0, 1), then for any given a ≥ 0, (4.12) has a unique solution.
The proof essentially follows from Remark 4.1. But we will have to introduce a series of lemmas before we justify our claim.
First, let us also introduce a family of value functions parameterized by γ ∈ R as
(4.15) $V^\gamma_a(x) \triangleq \sup_{\tau\in\mathcal{S}}\mathbb{E}^x\big[e^{-\alpha\tau}r^\gamma(X_\tau;a)\big], \quad\text{where}\quad r^\gamma(x;a) \triangleq e^{-\alpha\Delta}\big(r(x;a) + \gamma\cdot h(x)\big),$
in which S is the set of stopping times of the natural filtration of X. Here, X is a diffusion on [0, ∞), which is absorbed at the left boundary. (In the case of geometric Brownian motion this left boundary is taken to be d > 0.) Then we have the following result.
If we define
(4.16) $R^\gamma(\cdot\,;a) \triangleq \dfrac{r^\gamma(F^{-1}(\cdot);\,a)}{\varphi(F^{-1}(\cdot))},$
then the function
(4.17) $W^\gamma_a(\cdot) \triangleq \dfrac{V^\gamma_a(F^{-1}(\cdot))}{\varphi(F^{-1}(\cdot))}$
is the smallest non-negative concave majorant of R γ that passes through (F (0), 0). Moreover under the assumptions of Proposition 4.1 this majorant is linear in the continuation region (the region in which W γ is strictly greater than R γ ).
Proof. The first part of the proof follows Proposition 5.3 of Dayanik and Karatzas [9]. The second part of the proof follows from the first and the fact that R γ (·; a) is increasing and concave on (y, ∞).
The following technical lemma will be used in showing the existence of γ such that V γ a (a) = γ for any a ≥ 0.
Lemma 4.3. For any $\gamma_1 \ge \gamma_2$ and $x \ge 0$,
(4.18) $V^{\gamma_1}_a(x) - V^{\gamma_2}_a(x) \le \gamma_1 - \gamma_2.$
Proof. It is clear from (4.6) that $\gamma \to V^\gamma_a(x)$ is an increasing convex function. Therefore the right-derivative
$D^+_\gamma V^{\gamma'}_a(x) \triangleq \lim_{h\downarrow 0}\dfrac{V^{\gamma'+h}_a(x) - V^{\gamma'}_a(x)}{h}$
exists for any $\gamma' > 0$ and it satisfies
(4.19) $\dfrac{V^{\gamma_1}_a(x) - V^{\gamma_2}_a(x)}{\gamma_1-\gamma_2} \le D^+_\gamma V^{\gamma_1}_a(x),$
for any $\gamma_1 \ge \gamma_2$ (see e.g. [15], pages 213-214). Note that since $h(\cdot)\in(0,1)$ we have that
(4.20) $0 < D^+_\gamma V^{\gamma'}_a(x) \le 1.$
Now, (4.19) and (4.20) together imply (4.18).
Proof of Lemma 4.4. Consider the function $\gamma \to V^\gamma_a(a)$. Our aim is to show that there exists a fixed point of this function. Let us consider $V^0_a(a)$ first. Since $\sup_x r(x;a) > 0$ we have that $V^0_a(a) > 0$. Now let us consider the case when $\gamma > 0$. First, note that $W^\gamma_a(F(a)) \ge R^\gamma(F(a),a)$ for all $\gamma$. Since by Lemma 4.3 $V$ has less than linear growth in $\gamma$ and $R^\gamma$ is linear in $\gamma$, we can find a $\gamma'$ large enough such that $W^\gamma_a(F(a)) = R^\gamma(F(a),a)$ for $\gamma \ge \gamma'$. This implies, however, that $V^{\gamma'}_a(a) = \varphi(a)R^{\gamma'}(F(a);a) = e^{-\alpha\Delta}\big(r(a;a) + \gamma'h(a)\big) < \gamma'$. Since $\gamma \to V^\gamma_a$ is continuous, which follows from the fact that this function is convex, $V^0_a > 0$ and $V^{\gamma'}_a(a) < \gamma'$ imply that $\gamma \to V^\gamma_a$ crosses the line $\gamma \to \gamma$. Since $\gamma \to V^\gamma_a$ is increasing and convex, it crosses this line only once.
Proof of Proposition 4.1. The smallest concave majorant W γ a in (4.17) is linear on (F (0), F (b γ )) for a unique b γ ∈ [0, ∞) and smoothly fits to R γ (·; a) at b γ and coincides with R γ (·, a) on [b γ , d). Together with Lemma 4.4 this implies that there exists a unique γ * such that equations (4.13) and (4.14) are satisfied when W is replaced by W γ * a and b is replaced by b γ * . If the solution of equations (4.13) and (4.14) were not unique, on the other hand, then one would be able to find multiple smooth fit points b γ * , which yields a contradiction.
4.3.
Are the Assumptions of Proposition 4.1 Satisfied? The following remark will be helpful in the analysis that follows:
Remark 4.2. Given a function $k$, let us denote $K(y) \triangleq \dfrac{k}{\varphi}\circ F^{-1}(y)$, $y > 0$. If $k$ is twice differentiable at $x \ge 0$ and we denote $y \triangleq F(x)$, then $K'(y) = m(x)$ and $K''(y) = \dfrac{m'(x)}{F'(x)}$, with
(4.21) $m(x) \triangleq \dfrac{1}{F'(x)}\Big(\dfrac{k}{\varphi}\Big)'(x), \quad\text{and}\quad K''(y)\,\big[(\mathcal{A}-\alpha)k(x)\big] \ge 0, \quad y = F(x),$
with strict inequality if $K''(y) \ne 0$. The inequality in (4.21) is useful in identifying the concavity of $K$.
4.3.1. Brownian Motion with Drift.
In this case r(x; a) and h(x) defined in (4.6) are given by
(4.22) $r(x;a) = (x+\mu\Delta-a-\lambda)\,N\!\Big(\dfrac{x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) + \sigma\sqrt{\Delta}\,\phi\!\Big(\dfrac{x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) - e^{-2\mu x/\sigma^2}\left[(-x+\mu\Delta-a-\lambda)\,N\!\Big(\dfrac{-x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) + \sigma\sqrt{\Delta}\,\phi\!\Big(\dfrac{-x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big)\right],$
$h(x) = N\!\Big(\dfrac{x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big) - e^{-2\mu x/\sigma^2}\,N\!\Big(\dfrac{-x+\mu\Delta}{\sigma\sqrt{\Delta}}\Big),$
in which φ(x) = (1/ √ 2π)e −x 2 /2 . First note that h(x) ∈ (0, 1). It is enough to show that R(·; a) and H(·) are eventually increasing, and are eventually concave. First, we will show that they are eventually increasing. The derivative of R(·; a) has the same sign as
ϕ(x) 2 r(x; a) ϕ(x) ′ (x) = ϕ(x) N x + µ∆ σ √ ∆ − (a + λ) σ √ ∆ φ x + µ∆ σ √ ∆ + e −2µx/σ 2 N −x + µ∆ σ √ ∆ − (a + λ) σ √ ∆ φ −x + µ∆ σ √ ∆ − D 1 σ 2 (−x + µ∆ − a − λ)N −x + µ∆ σ √ ∆ + σ √ ∆φ −x + µ∆ σ √ ∆ − D 2 e −2µx/σ 2 (x + µ∆ − a − λ)N x + µ∆ σ √ ∆ + σ √ ∆φ x + µ∆ σ √ ∆ ,(4.23)
since F is an increasing function. If we take x > a (a is fixed) large enough, the third line of (4.23) dominates the other lines. Since $D_1 > 0$, we can conclude that there exists $x' \ge a$ such that
r(x;a) ϕ(x) ′ (x) > 0 on x ∈ (x ′ , ∞).
On the other hand, directly taking the derivative, h(x) can be shown to be an increasing function in x ∈ R + , from which it follows that H(y) = h(F −1 (y))/ϕ(F −1 (y)) is also increasing.
Next, we will show that R and H are eventually concave. Consider the equation (A−α)r(x; a) = p(x; a) so that p(x; a) = µr ′ (x; a) + 1 2 σ 2 r ′′ (x; a) − αr(x; a).
Directly taking the derivatives and letting x → ∞, we obtain r ′ (x; a) → 1, r ′′ (x; a) → 0 and r(x; a) → ∞. Therefore lim x→∞ p(x; a) = −∞. Similarly, we consider the equation q(x) (A − α)h(x). By letting x → ∞, we have h(x) → 1, h ′ (x) → 0 and h ′′ (x) → 0 so that lim x→∞ q(x) < 0. Together with Remark 4.2, these facts imply that R(·, a) and H(·) are concave on y ∈ (y ′′ , +∞) for some y ′′ F (a).
4.3.2. Ornstein-Uhlenbeck Process. We will only consider the case when the performance function is as in (3.3). The analysis for the case when declaring bankruptcy is penalized can be performed similarly, since the first and the second derivatives of the integral term in (3.17) with respect to the x variable go to zero as x → ∞.
In this case r(x; a) and h(x) defined in (4.6) are given by
(4.24) $r(x;a) = x e^{-\rho\Delta} - \Big(2N\!\Big(\dfrac{x e^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\Big)-1\Big)(a+\lambda), \qquad h(x) = 2N\!\Big(\dfrac{x e^{-\rho\Delta}}{\sqrt{Q(\Delta)}}\Big)-1.$
First, observe that r(x; a) > 0 on (x 1 , ∞) with some x 1 > a. By taking the derivative of r(x; a), we have
r ′ (x; a) = e −ρ∆ 1 − 2(a + λ) Q(∆) φ xe −ρ∆ Q(∆) .
From this expression we see that r ′ (x; a) > 0 on x ∈ (x 2 , ∞) with some x 2 > a. Let us denote x ′ max(x 1 , x 2 ). It follows that R(y) is increasing on y ∈ (y ′ , ∞) with y ′ = F (x ′ ) because
r ϕ ′ = r ′ ϕ − rϕ ′ ϕ 2 with ϕ ′ < 0.
Observe also that h(x) ∈ (0, 1) and h ′ (x) > 0 on x ∈ R + . Next, we will analyze the concavity properties of R(·; a) and H. Observe that r(x; a) > 0 on (x 1 , ∞) with some x 1 ≥ a since the only negative term in the first equation in (4.25) is bounded from below by −(a + λ). Taking the derivative of r(x; a) we obtain
r(x; a) = (xe −2ρ∆ + Q(∆) − (a + λ)) 2N √ xe −ρ∆ Q(∆) − 1 + 2 Q(∆)xe −ρ∆ φ √ xe −ρ∆ Q(∆) , h(x) = 2N √ xe −ρ∆ Q(∆) − 1.r ′ (x; a) = e −2ρ∆ 2N √ xe −ρ∆ Q(∆) − 1 + e −ρ∆ Q(∆) φ √ xe −ρ∆ Q(∆) √ xe −2ρ∆ + a + λ √ x − e −ρ∆ √ x φ √ xe −ρ∆ Q(∆) Q(∆) + xe −ρ∆ + Q(∆) Q(∆) .
(4.26)
The second term on the first line of (4.26) is positive and it dominates as x → ∞; therefore $r'(x;a) > 0$ on $x \in (x_2, \infty)$ for some $x_2 \ge a$. Take $x' \triangleq \max(x_1, x_2)$. It follows that R(y) is increasing on
y ∈ (y ′ , ∞), in which y ′ = F (x ′ ). On the other hand, h(x) ∈ (0, 1) and h ′ (x) = − e −ρ∆ √ x φ √ xe −ρ∆ √ Q(∆) < 0.
However, h ′ goes to zero as x → ∞, which implies that (h/ϕ)
′ = h ′ ϕ−hϕ ′ ϕ 2 > 0 on (x ′′ , ∞) for some sufficiently large x ′′ .
Next, we analyze the concavity properties of R(·; a) and H(·). Let us define p(x; a) (A − α)r(x; a).
r(x; a) = e µ∆ x N (d 1 ) − d x 1+2µ/σ 2 N (−d 2 ) − (a + λ) N −d + γ∆ σ √ ∆ − e 2γd/σ 2 N d + γ∆ σ √ ∆ , h(x) = N −d + γ∆ σ √ ∆ − e 2γd/σ 2 N d + γ∆ σ √ ∆ .r ′ (x; a) = e µ∆ N (d 1 ) − d x 1+2µ/σ 2 N (−d 2 ) − (a + λ)h ′ (x) + e µ∆ φ(d 1 ) σ √ ∆ + (1 + 2µ/σ 2 ) d x 1+2µ/σ 2 + φ(−d 2 ) σ √ ∆ d x 1+2µ/σ 2 ,
which is positive on x ∈ (x 2 , ∞) for some x 2 ≥ a. Take x ′ max(x 1 , x 2 ). It follows that R(y) is increasing on y ∈ (y ′ , ∞) with y ′ = F (x ′ ). Similarly, since h(x) ∈ (0, 1) and h ′ (x) goes to zero as x → ∞, so that H ′ (y) > 0 on (y ′′ , ∞) for sufficiently large y ′′ . Next, we analyze the concavity of R(·; a) and H(·). Let us denote p(x; a) (A − α)r(x; a). The function p(·; a) is given by
p(x; a) = µxr ′ (x; a) + 1 2 σ 2 r ′′ (x; a) − αr(x; a) = (µ − α)xe µ∆ N (d 1 ) − d x 1+2µ/σ 2 N (−d 2 ) − α(a + λ)h(x) + T (x; a),
where T (x; a) is the terms that involve φ(·) or d x 1+2µ/σ 2 and lim x→+∞ T (x; a) = 0. Observe that lim x→+∞ p(x; a) = −∞ when µ ≤ α. Similarly, we consider the equation q(x) (A − α)h(x). h(x) → 1, h ′ (x) → 0 and h ′′ (x) → 0 implies that lim x→∞ q(x) < 0. Using Remark 4.2, we can conclude that R(·; a) and H(·) are eventually concave.
Numerical Examples
See Figures 1-4 for numerical illustrations. In our examples we quantify the effect of delay in dividend payments. In each case we find the optimal dividend payment barrier, b * , the optimal amount of dividend payment, b * − a * , and the value function v. Then we compare them to b 0 , b 0 − a 0 and v 0 , the analogues of the previous quantities when there is no delay. As expected the value function is smaller, v < v 0 when there is delay in dividend payments. Since in Figures 2 (b), 2 (e) and 4 (b), the value functions v and v 0 are not distinguishable, in Figures (2) and (4) we plot the difference of v 0 − v.
When the aggregate income process $X^0$ is modeled by a Brownian motion with drift or a square-root process, we observe that a * < a 0 , b * < b 0 , b * − a * < b 0 − a 0 and β * < β 0 . The same conclusion holds if $X^0$ is an Ornstein-Uhlenbeck process and declaring bankruptcy is penalized. On the other hand, when $X^0$ is modeled by an Ornstein-Uhlenbeck process (the case in which declaring ruin is not penalized) or a geometric Brownian motion, we obtain that a * = a 0 , b * > b 0 , b * − a * > b 0 − a 0 and β * < β 0 . Note that in both of these cases declaring bankruptcy is optimal as soon as the aggregate income level hits b * , regardless of the magnitude of the delay.
Observe that in the numerical examples considered, the function β(a), which is obtained from (4.10) after we plug in for b that we obtain from (4.12) (say b(a)), is concave. It is either strictly decreasing or has a unique local maximum. We leave the proof of these features of the function β(a) as an open problem.
Remark 5.1. In our framework, it is easy to deal with solvency constraints. The optimal a * may not be acceptable and may be prohibited by regulatory constraints. This was studied by Paulsen [22] in a singular control setting (with no delays). Let us consider the case with ∆ = 0 and assume that the firm is not allowed to reduce its aggregate cash flow to below ã. If the properties described above hold for a → β(a), it is easy to argue that if a * > ã, then every time it pays out dividends the firm would reduce its reservoir to a * (the constraint is not binding); else if a * < ã, then every time it pays out dividends the firm would reduce its reservoir to ã.
Conclusion
We study optimal dividend payout problems with delay using various types of diffusions. Our method greatly facilitates the solution procedure thanks to the new characterization of the value function. The existence of a finite value function and the uniqueness of the optimal threshold strategy reduce to verifying the assumptions of Proposition 4.1. The models here are more realistic since delays in dividend payments are explicitly handled.
in (3.3) for all the different models of aggregate income process. (In the case of geometric Brownian motion we will replace τ 0 by τ d in (3.8). Moreover the function J ν (x) for this case is given by replacing 0's with d's in (3.4).) 3.1. Computation of B in (3.8).
4.1. Increasing and Decreasing Solutions of (A − α)u = 0. When $X^0$ is the Brownian motion in (2.1), then the increasing and decreasing solutions of $(\mathcal{A}-\alpha)u = 0$ are
(4.1) $\psi(x) = e^{D_1 x}$ and $\varphi(x) = e^{D_2 x}$, in which
(4.6) $e^{\alpha\Delta}B =: r(x; a) + h(x)J^\nu(a),$
(4.9) $\beta F(b) + \xi = e^{-\alpha\Delta}\big[R(F(b); a) + H(F(b))\,\varphi(a)\,\big(\beta F(a) + \xi\big)\big].$
$F(b) - F(a) - e^{-\alpha\Delta}(F(a) - F(0))\,H(F(b))\,\varphi(a).$
Remark 4.1. On $y \ge F(b)$, the function $W$ is given by
(4.13) $W(y) = e^{-\alpha\Delta}\big[W(F(a))\,\varphi(a)\,H(y) + R(y; a)\big].$
Lemma 4.4. Under the assumptions of Proposition 4.1, there exists a unique $\gamma$ such that $V^\gamma_a(a) = \gamma$ for $a \ge 0$.
Consider the equation (A−α)r(x; a) = p(x; a) so that p(x; a) = −ρxr ′ (x; a) + 1 2 r ′′ (x; a) − αr(x; a). We have r(x; a) → +∞, xr ′ (x; a) → +∞ and r ′′ (x; a) → 0 as x → ∞. Thus, we have lim x→∞ p(x; a) = −∞. Similarly, we consider the equation q(x) (A − α)h(x). By letting x → ∞, we have h(x) → 1, xh ′ (x) → 0 and h ′′ (x) → 0 so that lim x→∞ q(x) < 0. Together with Remark 4.2, this analysis shows that there exists y ′′ ≥ F (a) such that R(·; a) and H(·) are concave on (y ′′ , ∞).
4.3.3. Square Root Process. In this case the functions r and h are given by
x; a) = (1 − 2ρx)r ′ (x; a) + 2xr ′′ (x; a) − αr(x; a).We have r(x; a) → +∞, xr ′ (x; a) → +∞ and xr ′′ (x; a) → 0 as x → ∞. Thus, we have lim x→∞ p(x; a) = −∞. Similarly, we consider the equationq(x) (A − α)h(x). By letting x → ∞, we have h(x) → 1, xh(x) → 0 and xh ′′ (x) → 0 so that lim x→∞ q(x) < 0.Using Remark 4.2, we observe that R(·; a) and H(·) are eventually concave.4.3.4. Geometric Brownian Motion.When the aggregate income process X 0 is modeled by a geometric Brownian motion a sufficient condition for the hypothesis of the Proposition 4.1 to hold is µ ≤ α. In this case the functions r and h are given by
< 0 since x > d. Moreover, N (d 1 ) → 1, N (−d 2 ) → 0 and N (d) → 0 as x → +∞. Also, r(x; a) > 0 on (x 1 , ∞)with some x 1 > a since the negative term in the first equation in (4.27) is bounded. On the other hand h(x) > 0 for x ∈ R + and h ′ (x) → 0 as x → ∞. The derivative of r is
Figure 1. A numerical example of a Brownian motion with drift with parameters (µ, α, σ, λ, ∆): (a) The graph of β(a) that attains the global maximum at a* = 0.755 with β* = 1.443. (b) The value function v(x) (below) with b* = 2.719. It is compared with the case of ∆ = 0 (above) with (a0, b0, β0) = (0.850, 2.895, 1.466).
Figure 2. A numerical example of an OU process with parameters (ρ, α, λ, ∆) = (0.01, 0.05, 0.2, 0.25): (a) The graph of β(a) that attains the global maximum at a* = 0 with β* = 8.706. (b) The value function v(x) (below) with b* = 1.785. It is compared with the case v0(x) of ∆ = 0 (above) with (a0, b0, β0) = (0, 1.783, 8.841). (c) Plot of the difference v0(x) − v(x). (d) In the case of a penalty at ruin, P = 10, the graph of β(a) that attains the global maximum at a* = 4.290 with β* = 0.953. (e) The value function v(x) (below) with b* = 6.811. It is compared with the case v0(x) of ∆ = 0 (above) with (a0, b0, β0) = (4.349, 7.141, 0.979). (f) Plot of the difference v0(x) − v(x).
Figure 3. A numerical example of a square-root process with parameters (ρ, α, λ, ∆) = (1, 0.1, 0.1, 0.25): (a) The graph of β(a) that attains the global maximum at a* = 0.09 with β* = 2.807. (b) The value function v(x) (below) with b* = 0.662. It is compared with the case of ∆ = 0 (above) with (a0, b0, β0) = (0.165, 1.014, 3.561).
Figure 4. A numerical example of a geometric Brownian motion with parameters (µ, σ, α, λ, ∆) = (0.05, √2, 0.1, 0.1, 0.25) and ruin level d = 1: (a) The graph of β(a) that attains the global maximum at a* = 1 with β* = 0.853. (b) The value function v(x) (below) with b* = 9.138. It is compared with the case of ∆ = 0 (above) with (a0, b0, β0) = (1, 7.318, 0.865). (c) Plot of the difference v0(x) − v(x).
(E. Bayraktar) Department of Mathematics, University of Michigan, Ann Arbor, MI 48109. E-mail address: [email protected]
(M. Egami) Graduate School of Economics, Kyoto University, Sakyo-Ku, Kyoto 606-8501, Japan. E-mail address: [email protected]
Representations of the first hitting time density of an Ornstein-Uhlenbeck process. L Alili, P Patie, J L Pedersen, Stochastic Models. 21L. Alili, P. Patie, and J. L. Pedersen. Representations of the first hitting time density of an Ornstein-Uhlenbeck process. Stochastic Models, 21:967-980, 2005.
The impact of delivery lags on irreversible investment under uncertainty. L H R Alvarez, J Keppo, European Journal of Operations Research. 136L. H. R. Alvarez and J. Keppo. The impact of delivery lags on irreversible investment under uncertainty. European Journal of Operations Research, 136:173-180, 2002.
A model of sequential invetment. A Bar-Ilan, W C Strange, Journal of Economic Dynamics and Control. 22A. Bar-Ilan and W. C. Strange. A model of sequential invetment. Journal of Economic Dynamics and Control, 22:437-463, 1998.
The effects of implementation delay on decision-making under uncertainty. E Bayraktar, M Egami, Stochastic Processes and Their Applications. 117E. Bayraktar and M. Egami. The effects of implementation delay on decision-making under uncertainty. Stochastic Processes and Their Applications, 117 (3):333-358, 2007.
Impulse Control and Quai-Variational Inequalities. A Bensoussan, J L Lions, Gauthier-VillarsParisA. Bensoussan and J. L. Lions. Impulse Control and Quai-Variational Inequalities. Gauthier-Villars, Paris, 1982.
Handbook of Brownian Motion Facts and Formulae. A N Borodin, P Salminen, BirkhäuserBostonA. N. Borodin and P. Salminen. Handbook of Brownian Motion Facts and Formulae. Birkhäuser, Boston, 2002.
Optimal dividend policy with mean-reverting cash reservoir. A Cadenillas, S Sarkar, F Zapatero, Mathematical Finance. 171A. Cadenillas, S. Sarkar, and F. Zapatero. Optimal dividend policy with mean-reverting cash reservoir. Mathematical Finance, 17 (1):81-109.
The first passage problem for a continuous markov process. D A Darling, A J F Siegert, Annals of Mathematical Statistics. 24D. A. Darling and A. J. F. Siegert. The first passage problem for a continuous markov process. Annals of Mathematical Statistics, 24:624-639, 1953.
): (a) The graph of β(a) that attains the global maximum at a * = 0.09 with β * = 2.807. ( b) The value function v(x) (below) with b * = 0.662. S Dayanik, I Karatzas, Stochastic Processes and Their Applications. 107On the optimal stopping problem for one-dimensional diffusions. It is compared with the case of ∆ = 0 (above) with (a 0 , b 0 , β 0 )=(0.165, 1.014, 3.561S. Dayanik and I. Karatzas. On the optimal stopping problem for one-dimensional diffusions. Stochastic Processes and Their Applications, 107 (2):173-212, 2003. (1, 0.1, 0.1, 0.25): (a) The graph of β(a) that attains the global maximum at a * = 0.09 with β * = 2.807. ( b) The value function v(x) (below) with b * = 0.662. It is compared with the case of ∆ = 0 (above) with (a 0 , b 0 , β 0 )=(0.165, 1.014, 3.561).
Geometric Brownian Motion Models for Assets and Liabilities : From Pension Funding to Optimal Dividends. H U Gerber, E S W Shiu, North American Actuarial Journal. 73H. U. Gerber and E. S. W. Shiu. Geometric Brownian Motion Models for Assets and Liabilities : From Pension Funding to Optimal Dividends. North American Actuarial Journal, 7 (3):37-56, 2003.
Optimal dividends: Analysis with Brownian motion. H U Gerber, E S W Shiu, North American Actuarial Journal. 81H. U. Gerber and E. S. W. Shiu. Optimal dividends: Analysis with Brownian motion. North American Actuarial Journal, 8 (1):1-20, 2004.
Optimal Dividends in an Ornstein-Uhlenbeck Type Model with Credit and Debit Interest. H U Gerber, E S W Shiu, H Yang, North American Actuarial Journal. 102H. U. Gerber, E. S. W. Shiu and H. Yang. Optimal Dividends in an Ornstein-Uhlenbeck Type Model with Credit and Debit Interest. North American Actuarial Journal, 10 (2):94-108, 2006.
Optimization of the flow of dividends. M Jeanblanc-Picque, A N Shiryaev, Russian Mat. Surveys. 502M. Jeanblanc-Picque and A. N. Shiryaev. Optimization of the flow of dividends. Russian Mat. Surveys, 50 (2):257-277, 1995.
Agency cost of free cash flow, corporate finance and takeovers. M Jensen, American Economic Reveiew. 76M. Jensen. Agency cost of free cash flow, corporate finance and takeovers. American Economic Reveiew, 76:323-329, 1986.
Brownian Motion and Stochastic Calculus. I Karatzas, S E Shreve, Springer-VerlagNew YorkI. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus. Springer-Verlag, New York, 1991.
Optimal bank capital with costly recapitalization. J Keppo, S Peura, Journal of Business. To appear in theJ. Keppo and S. Peura. Optimal bank capital with costly recapitalization. To appear in the Journal of Business, 2005.
Special Functions and Their Applications. N N Lebedev, Dover PublicationsNew YorkN. N. Lebedev. Special Functions and Their Applications. Dover Publications, New York, 1972.
On the pricing of corporate debt: The risk structure of interest rates. R Merton, Journal of Finance. 29R. Merton. On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance, 29:449-470, 1974.
Martingale Methods in Financial Modelling. M Musiela, M Rutkowski, SpringerNew YorkM. Musiela and M. Rutkowski. Martingale Methods in Financial Modelling. Springer, New York, 1997.
Applied Stochastic Control of Jump Diffusions. B Øksendal, A Sulem, Springer VerlagNew YorkB. Øksendal and A. Sulem. Applied Stochastic Control of Jump Diffusions. Springer Verlag, New York, 2005.
Optimal stochastic impulse control with delayed reaction. B Øksendal, A Sulem, University of OsloPreprintB. Øksendal and A. Sulem. Optimal stochastic impulse control with delayed reaction. Preprint. University of Oslo, 2005.
Optimal dividend payouts for diffusions with solvency constraints. J Paulsen, Finance and Stochastics. 7J. Paulsen. Optimal dividend payouts for diffusions with solvency constraints. Finance and Stochastics, 7:457-473, 2003.
Optimal dividend payments until ruin of diffusion processes when payments are subject to both fixed and proportional costs. J Paulsen, Advances in Applied Probability. 3J. Paulsen. Optimal dividend payments until ruin of diffusion processes when payments are subject to both fixed and proportional costs. Advances in Applied Probability, 3:669-689, 2007.
Breaking down the barriers. M Rubinstein, E Reiner, Risk. M. Rubinstein and E. Reiner. Breaking down the barriers. Risk, September:28-35, 1991.
The liquidity discount. A Subramanian, R A Jarrow, Mathematical Finance. 11A. Subramanian and R. A. Jarrow. The liquidity discount. Mathematical Finance, 11:447-474, 2001.
Optimal risk and dividend distribution control models for an insurance company. M Taksar, Mathematical Methods of Operations Research. 51M. Taksar. Optimal risk and dividend distribution control models for an insurance company. Mathematical Methods of Operations Research, 51:1-42, 2000.
| []
|
[
"Conformal covariance and the split property",
"Conformal covariance and the split property"
]
| [
"Vincenzo Morinelli [email protected] ",
"Yoh Tanimoto ",
"Mihály Weiner [email protected] ",
"\nDipartimento di Matematica\nDepartment of Analysis\nUniversitá di Roma Tor Vergata Via della Ricerca Scientifica\nI-00133RomaItaly\n",
"\nBudapest University of Technology\n& Economics (BME) Műegyetem rk. 3-9, H1111BudapestHungary\n"
]
| [
"Dipartimento di Matematica\nDepartment of Analysis\nUniversitá di Roma Tor Vergata Via della Ricerca Scientifica\nI-00133RomaItaly",
"Budapest University of Technology\n& Economics (BME) Műegyetem rk. 3-9, H1111BudapestHungary"
]
| []
| We show that for a conformal local net of observables on the circle, the split property is automatic. Both full conformal covariance (i.e. diffeomorphism covariance) and the circle-setting play essential roles in this fact, while by previously constructed examples it was already known that even on the circle, Möbius covariance does not imply the split property.On the other hand, here we also provide an example of a local conformal net living on the two-dimensional Minkowski space, which -although being diffeomorphism covariant -does not have the split property. | 10.1007/s00220-017-2961-3 | [
"https://arxiv.org/pdf/1609.02196v2.pdf"
]
| 119,638,119 | 1609.02196 | 7b26ebb52fb3b9cdf6503ae10ca0a7a6df950e24 |
Conformal covariance and the split property
7 Sep 2016
Vincenzo Morinelli [email protected]
Yoh Tanimoto
Mihály Weiner [email protected]
Dipartimento di Matematica
Department of Analysis
Universitá di Roma Tor Vergata Via della Ricerca Scientifica
I-00133RomaItaly
Budapest University of Technology
& Economics (BME) Műegyetem rk. 3-9, H1111BudapestHungary
Conformal covariance and the split property
7 Sep 2016
We show that for a conformal local net of observables on the circle, the split property is automatic. Both full conformal covariance (i.e. diffeomorphism covariance) and the circle-setting play essential roles in this fact, while by previously constructed examples it was already known that even on the circle, Möbius covariance does not imply the split property.On the other hand, here we also provide an example of a local conformal net living on the two-dimensional Minkowski space, which -although being diffeomorphism covariant -does not have the split property.
Introduction
More than half a century passed away since the first formulation of an axiomatic quantum field theory. There are several existing different settings (differing e.g. on the chosen spacetime, or whether their fundamental notion is that of a quantum field or a local observable) with many "additional" properties that are sometimes included among the defining axioms. For an introduction and overview of the topic we refer to the book of Haag [24].
Whereas properties like locality are unquestionably among the basic axioms, some other properties are less motivated and accepted. Haag-duality has an appealing mathematical elegance, but there seems to be no clear physical motivation for that assumption. Technicalities, like the separability of the underlying Hilbert space are sometimes required with no evident physical reason.
The split property is the statistical independence of local algebras associated to regions with a positive (spacelike) separation. It might be viewed as a stronger version of locality, and contrary to the previous two examples, it was formulated on direct physical grounds. However, traditionally it is not included among the defining axioms, as in the beginning it was unclear how much one can believe in it. Indeed, many years passed till this stronger version of locality was first established at least for the massive free field by Buchholz [2]. Only after the introduction of the nuclearity condition (which was originally motivated by the need of a particle interpretation [25]) it became more of a routine to verify the split property in various models, when its connection to nuclearity was discovered [6]. Another important step was the general mathematical understanding of split inclusions brought by the work of Doplicher and Longo [15].
In the meantime, interest rose in conformal quantum field theories, especially in the low dimensional case; i.e. conformal models given on the 2-dimensional Minkowski space and their chiral components that can be naturally extended onto the compactified lightray, the circle. The theory of conformal net of local algebras on S 1 is rich in examples and it provides an essential "playground" to people studying operator algebras as it turned out to have incredibly deep connections to the modular theory of von Neumann algebras as well as to subfactor theory; see e.g. [19,41,29]. In particular, the modular group associated to a local algebra and the vacuum vector always acts in a certain geometric manner: the so-called Bisognano-Wichmann property is automatic. In turn, this was used to conclude that several further important structural properties -e.g. Haag-duality and Additivityare also automatic in this setting. We refer to the original works [19,18,1,4] for more details on this topic.
The case of the split property seemed to be different -but there is an important detail to mention here. Initially, when studying chiral conformal nets, in the so-far cited works only Möbius covariance was exploited. There were several reasons behind this choice. First, because it is a spacetime symmetry implemented by a unitary representation for which the vacuum is an invariant vector. This is exactly how things go in higher dimension, but this is not how diffeomorphism covariance is implemented (no invariant vectors and one is forced to consider projective representations rather than true ones). Second, because the mentioned connection to modular theory of von Neumann algebras relies on Möbius covariance only. Thus, the listed structural properties -with the exception of the split property -are already automatic even if diffeomorphism covariance is not assumed.
From the physical point of view, however, diffeomorphism covariance is natural in the low dimensional conformal setting; by an argument of Lüscher and Mack, it should merely be a consequence of the existence of a stress-energy tensor [20]. All important models are diffeomorphism covariant with the exception of some "pathological" counterexamples; see [30,9]. It is worth noting that the example constructed in [9] by infinite tensor products, has neither diffeomorphism symmetry nor the split property. Thus, unlike the mentioned other properties, the split property surely cannot be derived in the Möbius covariant setting. However, as we shall prove it here, the split property is automatic if diffeomorphism covariance is assumed. Note that together with the result of Longo and Xu in [34] regarding strong additivity, this shows that a diffeomorphism covariant local net on S 1 is completely rational if and only if its µ-index is finite.
The crucial points of our proof are the following. We consider a conformal net A on the circle with conformal Hamiltonian L 0 , and fix two (open, proper) intervals I a , I b ∈ I with positive distance from each other. Inspired by the complex analytic argument used in [18] to prove the conformal cluster theorem, for an element X of the * -algebra A(I a ) ∨ alg A(I b ) generated by A(I a ) and A(I b ) with decomposition X = n k=1 A k B k (where n ∈ N, A k ∈ A(I a ), B k ∈ A(I b )), we consider the function on the complex unit disc
z → n k=1 Ω, A k z L 0 B k Ω .
For every |z| ≤ 1, this defines a functional φ z on A(I a ) ∨ alg A(I b ). For z = 1 this is simply the vacuum state ω, but for z = 0 this is the product vacuum state AB → ω(A)ω(B) (A ∈ A(I a ), B ∈ A(I b )). The split property is essentially equivalent to saying that φ 0 is normal (actually, here some care is needed: in general one needs the product state to be normal and faithful. Fortunately, general results on normality and conormality in a Möbius covariant net [23] of the inclusions A(I 1 ) ⊂ A(I 2 ) for an
I 1 ⊂ I 2 imply that A(I a ) ∨ A(I b )
is a factor; see more details in the preliminaries. It then turns out that the normality of φ 0 is indeed equivalent to the split property).
However, we do not have a direct method to show that φ z is normal at z = 0. On the other hand, we can treat several points inside the disc. Using the positive energy projective representation U of Diff + (S 1 ) given with the theory, for example for any (fixed) r ∈ (0, 1) and I c , I d ∈ I covering the full circle we find a decomposition r L 0 = CD in which C r ∈ A(I c ) and D r ∈ A(I d ). Choosing the intervals I c and I d carefully, C will commute with the A k operators while D will commute with the B k operators and hence
φ r (X) = n k=1 Ω, A k r L 0 B k Ω = n k=1 Ω, A k CDB k Ω = n k=1 C * Ω, A k B k DΩ = C * Ω, X DΩ
showing that for our real r ∈ (0, 1), the functional φ r is normal as it is given by two vectors. Note that the origin of the decomposition r L 0 = CD is the fact that a rotation can be decomposed as a product of local diffeomorphisms; something that using Möbius transformations alone, cannot be achieved (as all nontrivial Möbius transformations are global). However, even using the full diffeomorphism group, the issue is tricky, since we need a decomposition that can be analytically continued over to some imaginary parameters -and of course the words "local" and "analytical" are usually in conflict with each other. Nevertheless, this kind of problem was already treated in [39], and the methods there developed were also used in the proof of [7,Theorem 2.16], so all we needed here was some adaptation of earlier arguments.
We then proceed by "deforming" our decomposition using the work [36] of Neretin, which allows us to access further regions inside the unit disk. In this way we establish normality along a ring encircling the origin, and thus we can use the Cauchy integral formula to conclude normality of φ z at z = 0.
Note that we have really made use of the fact that the conformal Hamiltonian L 0 generates a compact group. Indeed, for a generic complex number z, the very expression z L 0 is meaningful only because Sp(L 0 ) contains integer values only. However, unlike with chiral nets, in the 2-dimensional conformal case the theory does not necessarily extends in a natural way to the compactified spacetime. Thus one might wonder whether our result will remain valid or not: is this compactness of the spacetime just some technicality, or is it an essential ingredient of our proof? The answer turns out to be the latter one.
In fact, we manage to present an example of a diffeomorphism covariant local net on the 2-dimensional spacetime, which does not have the split property. More concretely, we consider a local extension B ⊃ A of the net A = A U (1) ⊗ A U (1) obtained by taking two copies of the U(1)-current net (here considered as "left" and "right" chiral parts). Irreducible sectors of the U(1)-current net are classified by a certain charge q ∈ R. Our construction is such that when considered as a representation of A U (1) ⊗ A U (1) , the net A ⊂ B decomposes as a direct sum ⊕ q∈R (σ q ⊗ σ q ) where σ q is the representation corresponding to the sector with charge q. This model is naturally diffeomorphism covariant, but because its Hilbert space is not separable, it cannot have the split property. Note that here "diffeomorphism covariance" means only that we have an action of Diff + (S 1 ) × Diff + (S 1 ) which factors through the spacelike 2π-rotation, but not that of Diff + (S 1 )×Diff + (S 1 ). This is in complete accordance with our earlier remark on the spectrum of L 0 . This paper is organized as follows. In Section 2 we introduce our operator-algebraic setting for conformal field theory and recall relevant technical results concerning conformal covariance and the split property. Sections 3 and 4 provide our technical ingredients, namely certain decompositions of z L 0 into local elements. In Section 5 we prove our main result, that the split property follows from diffeomorphism covariance, by proving the normality of φ 0 . A two-dimensional counterexample is provided in Section 6. In Section 7 we conclude with open problems.
Preliminaries
Let I be the set of nonempty, nondense, open connected intervals of the unit circle S 1 = {z ∈ C : |z| = 1}. A Möbius covariant net is a map A which assigns to every interval of the circle I ∈ I a von Neumann algebra A(I) acting on a fixed Hilbert space H satisfying the following properties: 1. Isotony: if I 1 , I 2 ∈ I and I 1 ⊂ I 2 , then A(I 1 ) ⊂ A(I 2 ); 2. Möbius covariance: there exists a strongly continuous, unitary representation U of the Möbius group Möb (≃ PSL(2, R)) on H such that U(g)A(I)U(g) * = A(gI), I ∈ I, g ∈ Möb;
3. Positivity of the energy: the conformal Hamiltonian L 0 , i.e. the generator of the rotation one-parameter subgroup has a non negative spectrum.
4.
Existence and uniqueness of the vacuum: there exists a unique (up to a phase) unit U-invariant vector Ω ∈ H, i.e. U(g)Ω = Ω for g ∈ Möb;
5. Cyclicity: Ω is cyclic for the von Neumann algebra I∈I A(I).
6. Locality: if I 1 , I 2 ∈ I and I 1 ∩ I 2 = ∅, then A(I 1 ) ⊂ A(I 2 ) ′ .
We will denote a Möbius covariant net with the triple (A, U, Ω). Some consequences of the axioms are (see e.g. [19,18,22] 13. Normality and conormality: for any inclusion I 1 ⊂ I 2 , it holds that A(
I 1 ) = A(I 2 ) ∩ (A(I 1 ) ′ ∩ A(I 2 )) ′ and A(I 2 ) = A(I 1 ) ∨ (A(I 1 ) ′ ∩ A(I 2 ))
From conormality, it follows that two-interval algebras are factors. Indeed, take I 1 ⊂ I 2 such that they have no common end points. Then I 1 and I ′ 2 are disjoint intervals with a finite distance. By Haag duality it follows that (A(I 1 ) ∨ A(I ′ 2 )) ′ = A(I 1 ) ′ ∩ A(I 2 ), and by conormality we have
(A(I 1 ) ∨ A(I ′ 2 )) (A(I 1 ) ∨ A(I ′ 2 )) ′ = A(I 1 ) ∨ A(I ′ 2 ) ∨ (A(I 1 ) ′ ∩ A(I 2 )) = A(I 2 ) ∨ A(I ′ 2 ) = B(H),
where the last equality is a consequence of Haag duality and factoriality. Let us add this to the list of consequences.
14. Factoriality of two-interval algebras: for disjoint intervals I 1 and I 2 with a finite distance, A(I 1 ) ∨ A(I 2 ) is a factor. Now, we briefly discuss diffeomorphism covariance. Let Diff + (S 1 ) be the group of orientation preserving diffeomorphisms of the circle. It is an infinite dimensional Lie group modelled on the real topological vector space Vect(S 1 ) of smooth real vector fields on S 1 with the C ∞ -topology [35]. Its Lie algebra has to be considered with the negative of the usual bracket on vector fields, in order to have the proper exponentiation of vector fields. We shall identify the vector field f (e iθ ) d dθ ∈ Vect(S 1 ) with the corresponding real function f ∈ C ∞ (S 1 , R). We denote with Diff + (I) the subgroup of Diff + (S 1 ) acting identically on I ′ , namely the diffeomorphisms of S 1 with support included in I.
A strongly continuous, projective unitary representation U of Diff + (S 1 ) on a Hilbert space H is a strongly continuous homomorphism of Diff + (S 1 ) into U(H)/T, the quotient of the group of unitaries in B(H) by T. The restriction of U to Möb ⊂ Diff + (S 1 ) always lifts to a unique strongly continuous unitary representation of the universal covering group Möb of Möb. U is said to have positive energy, if the generator L 0 of rotations, the conformal Hamiltonian, has a nonnegative spectrum in this lift. Let γ ∈ Diff + (S 1 ). Note that expressions Ad U(γ) makes sense as an action on B(H). We also write U(γ) ∈ M although U(γ) is defined only up to a scalar.
When one has a strongly continuous projective unitary representation U of Diff + (S 1 ) with positive energy, e i2πL 0 is a multiple of the identity and therefore L 0 has a pure pointspectrum. It follows that the linear span D fin of eigenvectors of L 0 (the so-called "finite energy vectors") form a dense set. U can then be "differentiated" to obtain a representation at the Lie algebra level [8, Appendix A] (see also [31]). Any smooth function f ∈ C ∞ (S 1 , R), as a vector field on S 1 , defines a one-parameter group of diffeomorphisms R ∋ t → γ t= Exp(tf ) ∈ Diff + (S 1 ), hence, up to an additive constant, defines the selfadjoint generator T (f ) of the unitary group t → U(γ t ). For any real smooth function f as above, T (f ) is essentially self-adjoint on the set C ∞ (L 0 ) := n∈N 0 Dom (L n 0 ). T shall be called the stress energy tensor.
Irreducible, projective, unitary positive energy representation of Diff + (S 1 ) are labelled by certain values of the central charge c > 0 and the lowest weight h ≥ 0. h is the lowest point in the discrete spectrum of the conformal Hamiltonian L 0 . There is a unique (up to a phase) vector Φ ∈ H corresponding to the lowest eigenvalue. See [21,26] for a detailed description of such representations.
One considers particular elements {L n : n ∈ Z}, L n = iT (y n ) − T (x n ), L −n = iT (y n ) + T (x n ) for n ∈ N, where x n (θ) := − 1 n sin nθ and y n (θ) := − 1 n cos nθ (there is a canonical way to fix the scalar part of T (x n ), T (y n ), as L n , L −n and L 0 generate a (projective) representation of Möb). These operators satisfy the so-called Virasoro algebra on finite energy vectors D fin . In particular for all n, m ∈ Z: D fin is an invariant common core for any closed operator L n ; if n > 0 then L n Φ = 0; L −n ⊂ L * n ; the family {L n } n∈Z satisfies the Virasoro algebra relations on D fin :
[L n , L m ] = (n − m)L n+m + c 12 (n 3 − n)δ −m,n ½.
Let f ∈ C ∞ (S 1 , R) be a vector field on S 1 , with Fourier coefficientŝ
f n = 1 2π 2π 0 f (θ)e −inθ dθ, n ∈ Z,
then, one can recover the stress-energy tensor by
T (f ) = n∈Zf n L n(1)
and
e iT (f ) = U(Exp(f ))
gives the correspondence between the infinitesimal generators and the representation of Diff + (S 1 ) (up to a scalar).
Throughout the next few sections we shall often consider the net of von Neumann algebras
A U (I) = {e iT (f ) | f ∈ C ∞ (S 1 , R), supp(f ) ⊂ I} ′′ (I ∈ I).(2)
Note that when U is a so-called vacuum representation associated to central charge c, A U is nothing else than the well-known Virasoro net with central charge c. In general though, A U is not a conformal net in the sense we are introducing them in this preliminary; e.g. we might not have a vacuum vector. Nevertheless, we still have the locality relation
A U (I 1 ) ⊂ A U (I 2 ) ′ whenever I 1 ∩ I 2 = ∅.
The stress energy tensor can be evaluated on a larger set of functions [9]. For a continuous function f : S 1 → R with Fourier coefficients {f n } n∈Z we shall set
f 3 2 = n∈Z |f n | 1 + |n| 3 2 . Then · 3 2 is a norm on the space {f ∈ C(S 1 , R)| f 3 2 < ∞}. By [9], if f ∈ C(S 1 , R) with f 3 2 < ∞, then T (f ), defined as in (1), is self-adjoint and moreover if f k → f in the norm · 3 2
, then T (f k ) → T (f ) in the strong resolvent sense. In particular, even for a non necessarily smooth function f with f 3
2 < ∞, supp f ⊂ I, the self-adjoint T (f ) is still affiliated to A U (I).
We shall say that a Möbius covariant net (A, U, Ω) is conformal (or diffeomorphism covariant) if the Möb representation U extends to a projective unitary representation Diff + (S 1 ) → U(H)/T of Diff + (S 1 ) (that with a little abuse of notation we continue to indicate the extension with U) and satisfying
• Ad U(γ)(A(I)) = A(γI), for γ ∈ Diff + (S 1 ) • Ad U(γ)(x) = x, for γ ∈ Diff + (I), x ∈ A(I ′ )
Now we recall the definition of the split property for von Neumann algebra inclusions and conformal nets. A Möbius covariant net (A, U, Ω) satisfies the split property if the von Neumann algebra inclusion A(I 1 ) ⊂ A(I 2 ) is split, for any inclusion of intervals I 1 ⋐ I 2 , namely when I 1 and I 2 have no common end points.
The following proposition provides an equivalent condition to the split property. Although similar statements are quite well-known to experts (see [11] and [15, Below Definition 1.4]), the precise assumptions we need are difficult to find in the literature (note, for example, that we do not assume neither the separability of the underlying Hilbert space 1 nor the faithfulness of the split state in the implication 2 ⇒ 1 below).
Proposition 2.2. Let (N ⊂ M, Ω) be a standard inclusion of von Neumann algebras. We further assume that that N ∨ M ′ is a factor. Then the following are equivalent.
N ⊂ M is split;
2. there exists a normal state φ on N ∨ M ′ such that the restrictions φ N and φ M ′ are faithful and φ is split, namely,
φ(xy) = φ(x)φ(y), x ∈ N , y ∈ M ′ .
Proof. If N ⊂ M is split, namely if there is an intermediate type I factor R ≃ B(K), then N ∨ M ′ is isomorphic to N ⊗ M ′ , from which the implication 1 ⇒ 2 follows. Conversely, let there be a split state as in 2. First of all, as φ are faithful on N and M ′ , their GNS representations π N , π M ′ are faithful and have a cyclic and separating vector. Next, as φ is normal on N ∨ M ′ , its GNS representation π N ∨M ′ is also normal. The Hilbert space supporting π N ∨M ′ is isomorphic to the closure of N ∨ alg M ′ w.r.t. the scalar product inherited by the normal state φ as x, y φ = φ(x * y). By the factorization assumption on φ, the Hilbert space is the tensor product L 2 (N , ·, · φ ) ⊗ L 2 (M ′ , ·, · φ ) and the GNS representation π N ∨M ′ restricted to N and M ′ are of the form π N ⊗½ and ½⊗π M ′ , respectively. Furthermore, as both N and M ′ have a cyclic and separating vector Ω, their GNS representations π N , π M ′ are actually unitary equivalences [37,Corollary 10.15]. As a consequence, by normality, we can assume that π N ∨M ′ (N ∨ M ′ ) = N ⊗ M ′ . Furthermore, by assumption N ∨ M ′ is a factor, hence the GNS representation is an isomorphism. Now, both N ∨ M ′ and N ⊗ M ′ have a cyclic and separating vector (Ω and Ω ⊗ Ω respectively), therefore, the GNS representation is actually a unitary equivalence. Then the preimage 3 Local decompositions of e −βL 0
R = π −1 N ∨M ′ (B(H) ⊗ C½) gives the intermediate subfactor N ⊂ R ⊂ M.
Throughout this section, we shall not need a conformal net, as we only work with a strongly continuous projective unitary representation U of Diff + (S 1 ) with positive energy. We shall use the notations introduced in the preliminaries for all associated objects (i.e. L n , (n ∈ N) will stand for the associated Virasoro algebra representation, T for the stressenergy tensor, A U for the system of von Neumann-algebras appearing at (2) etc.).
In what follows, for a β > 0, r = e −β and two open proper arcs (intervals) I c , I d ∈ I that cover the circle:
I c ∪ I d = S 1 , we shall find a decomposition e −βL 0 = r L 0 = C r D r with the bounded operators C r ∈ A U (I c ) and D r ∈ A U (I d ).
The main idea for producing such a decomposition was already presented and exploited in [39] and in the proof of [7, Theorem 2.16]. Here we shall recall the essential points of the argument presented there and then adjust and refine it to our purposes.
∋ r → C r ∈ A U (I c ) and (0, 1) ∋ r → D r ∈ A U (I d ) such that r L 0 = C r D r and C r , D r ≤ 1 r q where the exponent q = c 48 (N 2 − 1)
with N being a positive integer such that 6π/N is smaller than the lengths of both arcs that are obtained by taking the intersection I c ∩ I d (note that N must be at least 4).
Proof. Let us fix a positive integer N satisfying the condition of the proposition (see Figure 1). The operators H :
= 1 N L 0 + c 24 (N − 1 N )½, L + := 1 N L −N and L − := 1 N L N satisfy the following relations on D fin : [H, L ± ] = ∓L ± , [L − , L + ] = 2H, L ± = L * ∓ .
Moreover H is diagonalizable with non-negative eigenvalues only, the span of its eigenvectors is exactly D fin which is an invariant core for the operators L ± . It then follows that these operators generate a strongly continuous, positive energy unitary representation of the universal cover Möb of the Möbius group. This construction -both at the Lie algebra as well as the Lie group level -was already considered and used by various authors; see e.g. the work [34]. In particular,
P = 1 4 (2H − L + − L − ) andP = 1 4 (2H + L + + L − ) I d I c I k I k
for all s > 0. Let us now consider how P andP can be written in terms of the stress-energy T . We have
P = 1 4n (2L 0 − L −N − L N ) + c 48 N − 1 N ½ = T (p) + b½ and likewiseP = T (p) + b½, where b = c 48 N − 1 N
and p andp are the functions defined by the formulas
p(z) = 1 4n (2 − z N − z −N ) and p(z) = 1 4N (2 + z N + z −N )
. The function p is nonnegative on S 1 and it has exactly N points where its value is zero:
p(z) = 0 ⇐⇒ z = e i 2π N k for k = 1, . . . N.
All these null-points are of course local (and also global) minima, where the derivative is zero. We can thus "cut" p into N "nice" pieces: p = p 1 + . . . + p N where the support of the nonnegative function p k is the closure of the arc
I k = e iθ : k − 1 N < θ 2π < k N ,
and p k 3 2 < ∞. This latter follows from the fact that p k is once differentiable and its derivative is of bounded variations; see the similar considerations at [9, Lemma 5.3]. Thus for every k = 1, . . . N,
P k = T (p k ) + b N
½ is a well-defined self-adjoint operator affiliated to A U (I k ) and we have P = P 1 + . . . + P N .
Since the terms in this decomposition are affiliated to commuting factors, just as in the proof [39, Proposition 3.2], we have that
Sp(P 1 ) + . . . + Sp(P N ) = Sp(P ) = R + ∪ {0}.
On the other hand, the spectrum of the operators P k (k = 1, . . . N) must all coincide, since using rotations one can easily show that they are all unitary conjugate to each other. It then follows that each of them must be a positive operator. Thus for the bounded operator e −tanh( s 2 )P appearing in formula (3), we have the local decomposition into a product of commuting bounded operators
e −tanh( s 2 )P = N k=1 e −tanh( s 2 )P k
where the norm of each term is smaller or equal than 1. Let us turn toP . As we haveP = Ad e i π N L 0 (P ), the localization ofP k = Ad e i π N L 0 (P k ) are different from that of P :P k is affiliated to A U (Ĩ k ) whereĨ k = e i π N I k and A U is defined in Section 2 (we are considering the intervals as subsets in C). With this localization, we can still assure the strong commutation between P k andP j whenever k = j, j + 1 (mod N). So in the decomposition
e −2sH = e −tanh( s 2 )P e −sinh(s)P e −tanh( s 2 )P = N k=1 e −tanh( s 2 )P k N k=1 e −sinh(s)P k N k=1 e −tanh( s 2 )P k we can make some rearrangements. Note that e −2sH = r −L 0 r 2q , where q = c 48 (N 2 − 1) if we set r = e −2s/N . To shorten notations, let us introduce the self-adjoint contractions X k = e −tanh( s 2 )P k and Y k = e −sinh(s)P k .
For simplicity, we did not indicate their dependence on r, but note that in the range 0 < r < 1 they depend norm-continuously on r (for t > 0, x ≥ 0, the function e −tx is uniformly continuous in t).
All X-operators and separately, all Y -operators commute between themselves, and moreover [X l , Y m ] = 0 whenever l = m, m + 1 (mod N). Recall that 6π N is smaller than the length of each of the intervals of I c ∩I d . Therefore, by cyclically renaming the intervals (but keeping the relation between I k andĨ k and the corresponding localization of the operators), we may assume that there are 1 ≤ k < j ≤ N such that I k ∪ I k+1 and I j ∪ I j+1 are included in the different connected components of I c ∩ I d . Furthermore, to fix the notation, we Figure 2: Localization of the factors of C r . The indicated intervals I • ,Ĩ • correspond to thick segments. The operators j+1 l=k X l , j l=k Y l , j l=k+1 X l are localized in the arcs, from the inside, respectively. The corresponding factors in D r are localized in the complements of these arcs, respectively. may assume that I k ∪ · · · ∪ I j+1 ⊂ I c , while I j ∪ · · · ∪ I N ∪ I 1 · · · ∪ I k ⊂ I d . Note that I k ∪ · · · ∪Ĩ j ⊂ I c andĨ j ∪ · · · ∪ I N ∪ I 1 · · · ∪Ĩ k−1 ⊂ I d (see Figure 2).
I c I d I k I k I k+1 I j+1 I j I j
By the localization explained above, we obtain
r −L 0 r 2q = N l=1 X l N l=1 Y l N l=1 X l = k−1 l=1 X l j+1 l=k X l N l=j+2 X l k−1 l=1 Y l j l=k Y l N l=j+1 Y l k l=1 X l j l=k+1 X l N l=j+1 X l = j+1 l=k X l j l=k Y l j l=k+1 X l k−1 l=1 X l N l=j+2 X l k−1 l=1 Y l N l=j+1 Y l k l=1 X l N l=j+1 X l .(4)
Here the first part
C r = j+1 l=k X l j l=k Y l j l=k+1 X l is an element of A U (I c ), where whereas the second part D r = k−1 l=1 X l N l=j+2 X l k−1 l=1 Y l N l=j+1 Y l k l=1 X l N l=j+1 X l is an element of A U (I d ).
By construction, C r , D r ≤ 1. Thus, we have obtained the desired decomposition
r −L 0 = ( 1 r q C r )( 1 r q D r ).
In the above proposition we specifically worked with L 0 . However, by considering the adjoint actions of U(γ) for all diffeomorphisms γ ∈ Diff + (S 1 ) on the decompositions found above, it is now easy to draw the following conclusion.
(0, 1) ∋ r → C r ∈ A U (I c ) and (0, 1) ∋ r → D r ∈ A U (I d ) such that r T (f ) = C r D r .
Further decompositions
In this section, we shall consider further decompositions of r L 0 , for which the crucial ingredient will be a result of Neretin [36]. Though in his work the relevant theorem is stated for representations which are direct sums of those highest weight ones, as was already mentioned, a positive energy, strongly continuous, projective unitary representation of Diff + (S 1 ) can only be of that form. For better readability, we shall recall the statement that we are going to exploit. We will need the concept of analytic diffeomorphism; we will denote by Diff + a (S 1 ) the set of γ ∈ Diff + (S 1 ) that extends to an annulus around S 1 ⊂ C in a complex analytic manner. Then elements of the form U(γ)r L 0 U(γ) (r ∈ (0, 1], γ,γ ∈ Diff + a (S 1 )) form a projective semigroup: for any γ 1 , γ 2 ,γ 1 ,γ 2 ∈ Diff + a (S 1 ) and r 1 , r 2 ∈ (0, 1] there exist some γ 3 ,γ 3 ∈ Diff + a (S 1 )), r 3 ∈ (0, 1] such that
U(γ 1 )r L 0 1 U(γ 1 ) U(γ 2 )r L 0 2 U(γ 2 ) = U(γ 3 )r L 0 3 U(γ 3 ),
where of course equality is meant in the projective sense.
Actually, what we shall really use is a certain adaptation of the above result for the case of true (and not just projective) representations of the Möbius group. Note that every Möbius transformation is of course analytic. To make the necessary modifications, we shall first make an observation 2 . Proof. We need to show the "only if" part; the other direction is true by definition. If γ = id, the statement immediately follows, otherwise, by composing a Möbius element g, we may assume that γ fixes three points on S 1 . Let I 1 , I 2 , I 3 be three intervals with such end points.
By contradiction, let us assume that γ = id. Then there is a point s ∈ S 1 such that γ(s) = s. As γ fixes three points on S 1 , it also preserves each interval bounded by any pair of these points. Say lim n γ n (s) =: s ∞ (for a single element γ ∈ Diff + (S 1 ), S 1 can be decomposed into intervals such that γ is monotone on each interval, hence such a limit exists).
It is not restrictive to assume that I 1 , the closure of I 1 , contains neither {γ n (s)} nor s ∞ . Let us go to the real line picture (which is only necessary below, in order to simplify the conformal distance) and call I 1 := (t 1 , t 2 ), and I := (s, s ∞ ) (or (s ∞ , s) depending on in which direction s is moved, without losing the generality, we may assume the former case). Assume that I 1 and I are bounded intervals on the line. Now they are separated by a finite distance.
We can pick x ∈ A(I 1 ) and y ∈ A(I) such that xΩ, yΩ = 0 by Reeh-Schlieder property. We may further assume that Ω, xΩ = 0 = Ω, yΩ , as we can subtract their vacuum expectation. By the assumption that U(γ)Ω ∈ CΩ and [18, Conformal cluster theorem], we obtain
| xΩ, yΩ | = | x(U(γ) * ) n Ω, y(U(γ) * ) n Ω | = | Ad U(γ) n (x)Ω, Ad U(γ) n (y)Ω | ≤ (t 2 − t 1 )(s ∞ − γ n (s)) (γ n (s) − t 1 )(s ∞ − t 2 ) x y → 0,
as γ n (s) → s ∞ , while other distances remain finite. This is a contradiction, hence γ = id under the assumption that γ fixes three points. Then elements of the form V (γ)r L 0 V (γ) consist a semigroup: for any g 1 , g 2 ,g 1 ,g 2 ∈ Möb and r 1 , r 2 ∈ (0, 1], there exist some g 3 ,g 3 ∈ Möb, r 3 ∈ (0, 1) such that
V (g 1 )r L 0 1 V (g 1 ) V (g 2 )r L 0 2 V (g 2 ) = V (g 3 )r L 0 3 V (g 3 )
in the proper (not only projective) sense.
Proof. Consider the well-known conformal net usually referred as the U(1)-current net [5]. One may restrict its projective unitary representation U of Diff + (S 1 ) to the Möbius group and arrange its phase factors in such a way that the vacuum Ω will be an invariant vector (see Section 2). In this way we get a positive energy, strongly continuous, unitary representation V of Möb in which all such irreducible representations (i.e. every possible integer highest weight) appear: this is evident because for every n ≥ 1, the dimension of (n + 1) th energy space is strictly larger than the dimension of the n th one. Thus, if we can show the statement for our particular representation V , we have proved it for all positive energy representations of Möb. By applying Theorem 4.1 to U of the U(1)-current net, we obtain that for every g 1 , g 2 ,g 1 ,g 2 ∈ Möb and r 1 , r 2 ∈ (0, 1] there must exist some γ 3 ,γ 3 ∈ Diff + a (S 1 ) and an r 3 ∈ (0, 1] such that U(g 1 )r L 0 1 U(g 1 ) U(g 2 )r L 0 2 U(g 2 ) = U(γ 3 )r L 0 3 U(γ 3 ) in the projective sense. If r 1 = r 2 = 1, then of course g 3 andg 3 can be chosen to be in Möb. On the other hand, if r 1 r 2 < 1, then they must be in Möb. Indeed, in such a case r must be strictly smaller than 1 (the right hand side cannot be unitary as the left hand side does decrease the length of some vectors). Then "sandwiching" the left hand sides by Ωi.e. considering the scalar product Ω, · Ω for the left hand side (which is of course only defined up-to-phase) -gives a complex number of modulus 1, whereas by Lemma 4.2 and the fact CΩ is the unique eigenvector of L 0 with the eigenvalue 0, the same sandwiching of the right hand side gives a number of modulus 1 if and only if γ 3 ,γ 3 ∈ Möb.
Considering the relevant unitary operators rather than projective ones, we therefore have that for every g 1 , g 2 ,g 1 ,g 2 ∈ Möb and r 1 , r 2 ∈ (0, 1] there must exist some g 3 ,g 3 ∈ Möb and an r 3 ∈ (0, 1] such that V (γ 1 )r L 0
1 V (γ 1 ) V (γ 2 )r L 0 2 V (γ 2 ) and V (γ 3 )r L 0 3 V (γ 3 )
are proportional to each other. The proof is then finished by evaluating both sides on Ω and concluding that this proportion must be 1.
Corollary 4.4. Let V be a positive energy, strongly continuous, unitary representation of
Möb with associated conformal Hamiltonian L 0 . There exists some r, r 1 , r 2 ∈ (0, 1) and g, g 1 , g 2 ∈ Möb, g = id, such that
r L 0 = r H 1 1 r H 2 2 V (g) in the proper sense, where H j = Ad U(g k )(L 0 ) (k = 1, 2).
Proof. We choose two elementsg 1 ,g 2 ∈ Möb such thatH k = Ad V (g k )(L 0 ), andH 1 andH 2 do not strongly commute: such choices are actually abundant, since L 0 is maximally abelian in the Lie algebra. Then there must exist some r 1 , r 2 ∈ (0, 1) such that rH 1 1 and rH 2 2 do not commute (otherwise their generators would strongly commute by analytic continuation). Now we apply Corollary 4.3 to rH 1 1 rH 2 2 = V (g 1 )r L 0 1 V (g 1 ) * V (g 2 )r 2 L 0 V (g 2 ) * to obtain g 3 ,g 3 ∈ Möb and r ∈ (0, 1) such that
V (g 1 )r L 0 1 V (g 1 ) * V (g 2 )r L 0 2 V (g 2 ) * = V (g 3 )r L 0 V (g 3 ) *
, in the proper sense, or equivalently,
r L 0 = V (g −1 3g 1 )r L 0 1 V (g −1 3g 1 ) * V (g −1 3g 2 )r L 0 2 V (g −1 3g 2 ) * V (g 3g3 )
. By defining g k = g −1 3g k , hence accordingly H k := Ad V (g −1 3g k )(L 0 ) and g := g 3g3 , we obtain the desired equality. To check that g = id, note that by our choice ofH k , r H 1 1 and r H 2 2 do not commute as well. Yet, in the equality
r L 0 = r H 1 1 r H 2 2 V (g)
, the left-hand side is self-adjoint, while if g = id, the right-hand side would not be selfadjoint. Therefore, g = id.
Proposition 4.5. Let A be a conformal net, U be the associated projective unitary representation of Diff + (S 1 ), and L 0 the conformal Hamiltonian. For some r ∈ (0, 1), there exists a Möbius transformation g = id, such that for any I c , I d ∈ I be two open proper arcs such that I c ∪ I d = S 1 we have two bounded operators C ∈ A U (I c ) and D ∈ A U (I d ) such that we have the decomposition
r L 0 = CDU(g)
in the proper sense.
Proof. We apply Corollary 4.4 to obtain r, r 1 , r 2 ∈ (0, 1), g ∈ Möb, g = id and H 1 , H 2 such that r L 0 = r H 1 1 r H 2 2 U(g). Then we apply Corollary 3.2 to H k with the intervals K k,c , K k,d such that K k,c ⊂ I c , K k,d ⊂ I d and K 1,d ∩ K 2,c = ∅ (see Figure 3), to obtain operators C k , D k such that r H k k = C k D k . By the localization, C 2 and D 1 commute.
Hence it holds that
r L 0 = r H 1 1 r H 2 2 U(g) = C 1 D 1 C 2 D 2 U(g) = C 1 C 2 D 1 D 2 U(g), and C := C 1 C 2 is localized in K 1,c ∪ K 2,c ⊂ I c , while D := D 1 D 2 is localized in K 1,d ∪ K 2,d ⊂ I d , as desired. I d I c K 1,c K 1,d K 2,c K 2,d
Normality of the product vacuum state
We can now prove our main claim: for a conformal net on S 1 -where by "conformal" we mean that it has the full diffeomorphism covariance (see Section 2) -the split property is automatic. Let (A, U, Ω) be a conformal net, and assume I a , I b ∈ I are two open proper arcs separated by a positive distance.
Consider the * -algebra A(I a ) ∨ alg A(I b ) generated by the commuting factors A(I a ) and A(I b ). We shall now introduce a family {φ z } of functionals on this algebra indexed by a complex number z, |z| ≤ 1. For a generic element X ∈ A(I a ) ∨ alg A(I b ),
X = n k=1 A k B k (n ∈ N, A k ∈ A(I a ), B k ∈ A(I b ))(5)
and a complex number z in the closed unit disk
D 1 = {z ∈ C : |z| < 1}, let φ z (X) = n k=1 Ω, A k z L 0 B k Ω .
The above quantity is well-defined in the sense that it indeed depends only on z and X, but not on the particular decomposition chosen for X. Note that the expression z L 0 is indeed a well-defined bounded operator for every z ∈ D 1 (for z = 0, we define it by continuity in the strong operator topology, hence to be the projection P 0 onto CΩ): this is because Sp(L 0 ) ⊂ N. That is, we are using not just the positivity of L 0 , but also that elements of its spectrum are all integers (e.g. z 1 2 = √ z would be ambiguous).
For every X ∈ A(I 1 ) ∨ alg A(I 3 ), the map z → φ z (X) is analytic in D 1 . In fact, denoting by P m the spectral projection of L 0 associated to the eigenvalue m, we have the power series decomposition of φ z (X)
φ z (X) = n k=1 ∞ m=0 Ω, A k P m B k Ω z m .
Since P 0 = Ω, · Ω is the one-dimensional projection on the vacuum vector, we have that
φ 0 (X) = n k=1 ω(A k )ω(B k ),
i.e. φ 0 is the product vacuum state, whereas φ 1 = ω. Thus, in view of Proposition 2.2, in order to prove the split property, we need to show that while "changing" the parameter z from 1 to 0, the functional φ z remains normal. In particular, it would be desirable to obtain estimates on φ 1 − φ 0 .
The idea of considering φ z not only at the points z = 1 and z = 0, but on a larger area (so that its analytic dependence on z can be exploited) comes from [18]. There the authors work with the function z → φ z (AB) to obtain a bound on |φ 1 (AB) − φ 0 (AB)| for a pair of elements A ∈ A(I 1 ) and B ∈ A(I b ) thereby proving the conformal cluster theorem for a Möbius covariant net. However, their estimate involves the product of norms A B ; when it is reformulated for an element X of the considered form (5), we get some bounds in terms of k A k B k , rather than in terms of the norm of X. Hence their method does not give a useful estimate on φ 1 − φ 0 . In fact, they cannot obtain anything that would imply the split property: this is because they only use Möbius covariance, and as was mentioned in the introduction, counterexamples to the split property exist when diffeomorphism covariance is not assumed [9,Section 6].
Instead, our idea is the following: using diffeomorphism covariance and in particular the decompositions of r L 0 established in the previous sections, we can show that φ z depends norm-continuously on z and is normal (i.e. extends to a normal linear functional of the von Neumann algebra A(I a ) ∨ A(I b )) when z is in a certain region. Unfortunately, the region directly obtainable by such decompositions do not contain the desired point z = 0. However, if this region contains a ring encircling the point z = 0 (and as we shall see, this will exactly be the case) we can use general complex analytic arguments (essentially the Cauchy theorem) to deduce normality of φ 0 : Lemma 5.1. Let r 0 ∈ (0, 1) be a fixed radius and suppose that φ z is normal whenever |z| = r 0 and that on the circle with radius r 0 , r 0 S 1 ∋ z → φ z is norm-continuous. Then φ 0 is also normal.
Proof. We shall use some well-known technical facts. In particular, we shall exploit that the norm-limit of a sequence of normal functionals on a von Neumann algebra M is normal (see e.g. [27,Corollary 7.1.13]). To apply this fact, one should note that the norm is defined on the von Neumann algebra M, but by the Kaplansky density theorem, the norm of a normal functional on A(I a )∨A(I b ) is equal to the norm of its restriction to A(I a )∨ alg A(I b ). Therefore, in the following we do not distinguish them.
Thus one has -e.g. by considering Riemann-sums -that if ϕ : [s 1 , s 2 ] ∋ t → ϕ t ∈ M * is a norm-continuous family of normal linear functionals, then ϕ(·) = s 2 s 1 ϕ t (·)dt is also a well-defined normal functional on M.
Since D 1 ∋ z → φ z (X) is analytic, by the Cauchy integral formula we have
φ 0 (X) = 1 2πi r 0 S 1 φ r 0 e iθ (X) dz z for every X ∈ A(I a ) ∨ alg A(I b ).
Let us now discuss how the decompositions r L 0 help us out in different regions of D 1 . Let I c = I ′ a , and I d be an (open) interval containing the closure of I a but not intersecting I b .
(Such an "enlargement" of I b exists as I a and I b were assumed to have a positive distance from each other). We then have that I c ∪ I d = S 1 and we can consider the decomposition of r L 0 given by Proposition 3.1 with {C r } r∈(0,1) ⊂ A(I c ) and {D r } r∈(0,1) ⊂ A(I d ). Let us denote by R θ the rotation by θ. Then, as long as z = re iθ is such that r ∈ (0, 1) and the angle θ satisfies the condition
I d ∩ R θ (I b ) = ∅,(6)
we have that the action of ρ θ ≡ Ad e iθL 0 leaves A(I b ) inside A(I d ) ′ and thus for an X ∈ A(I a ) ∨ alg A(I b ) with decomposition (5), we can use locality to rewrite φ z (X) as
φ z (X) = n k=1 Ω, A k r L 0 e iθL 0 B k Ω = n k=1 Ω, A k r L 0 e iθL 0 B k e −iθL 0 Ω = n k=1 Ω, A k C r D r ρ θ (B k )Ω = C * r Ω, n k=1 A k ρ θ (B k ) D r Ω .(7)
Let (θ − , θ + ) be the largest open interval of angles satisfying our condition (6) and containing 0, i.e., θ ± is the smallest positive / largest negative angle for which R θ ± (I b ) intersects I d .
We can obviously find a smooth function f : S 1 → R such that f on I a is zero, but is constant 1 on the complement of I d (i.e. on the complement of the "enlarged" version of I a ). Viewing f as the vector field on S 1 formally written as f (e iθ ) d dθ , it gives rise to a one parameter group of diffeomorphisms
R ∋ θ → γ θ ≡ Exp(θf )
such that γ θ is "localized" in I ′ a , but if θ − < θ < θ + , then the action of γ θ on I b coincides with that of the rotation by θ. Thus, e iθT (f ) commutes with elements of A(I a ) but for θ ∈ (θ − , θ + ), its adjoint action on A(I b ) coincides with the action of ρ θ and hence we can write
n k=1 A k ρ θ (B k ) = n k=1 A k e iθT (f ) B k e −iθT (f ) = e iθT (f ) n k=1 A k B k e −iθT (f ) = = e iθT (f ) Xe −iθT (f ) .
Putting this back in (7), we get that for θ ∈ (θ − , θ + ),
φ re iθ (X) = η θ , Xζ θ where the vectors η θ = e −iθT (f ) C *
r Ω and ζ θ = e −iθT (f ) D r Ω. Corollary 5.2. {φ z } is a norm-continuous family of normal functionals in the region {re iθ |r ∈ (0, 1) and I d ∩ R θ (I b ) = ∅}.
Note that Proposition 3.1 gives some bounds on the norms of C r and D r , and so actually with the constant q > 0 defined there, in the discussed region we have
φ re iθ ≤ C * r Ω D r Ω ≤ 1 r 2q .
Unfortunately, though this estimate is nicely uniform in θ, it "blows up" at r → 0 and hence in itself it does not show that φ r converges to φ 0 in norm as r → 0. However, so far we have only used our first decomposition of r L 0 . We shall now exploit the second one derived in Section 4. Consider a radius r 0 ∈ (0, 1) such that the decomposition in Proposition 4.5 holds; that is, we have two bounded elements C ∈ A(I c ) and D ∈ A(I d ) and g = id a Möbius transformation such that r L 0 0 = CDU(g) (recall that this is valid in the proper sense, namely one can fix the phase of U(g) for g ∈ Möb unambiguously). Then, repeating the steps we did before with our previous decomposition and setting instead of (7), this time we get
φ r 0 e iθ (X) = n k=1 Ω, A k CDU(g)e iθL 0 B k Ω = C * Ω, n k=1 A k ρ θ (B k ) DΩ
whenever the disjointness condition
I d ∩ g • R θ (I b ) = ∅
holds. It is an easy exercise to show, that we can continue exactly as in the first case, and hence this time obtain the following.
Corollary 5.3. {φ z } is a norm-continuous family of normal functionals in the region
{r 0 e iθ |I d ∩ g • R θ (I b ) = ∅}.
Does the union of the two treated regions encircle the point 0? This might not be the case. However, note that the Möbius transformation g given by Proposition 4.5 is an "absolute" one; i.e. it does not depend on the intervals I c and I d (whereas of course the elements C and D obviously do). And though for some choices of I a , I b and I d ⊃ I a might lead to nowhere, it is enough for us to show that there is a "right" choice. Proof. By conformal covariance, we may assume that I a , I b and even the enlargement I d ⊃ I a are "tiny"; almost point-like intervals around two points which we will conveniently call a and b. Then the region guaranteed by Corollary 5.2 is D 1 minus a slightly enlarged version of the half-line {te iα |t ≥ 0} where α is the angle for which R α (b) = a. On the other hand, the region guaranteed by Corollary 5.3 is the circle r 0 S 1 minus a slightly enlarged version of the point r 0 e iα , whereα is the angle for which g • Rα(b) = a, which is of course equivalent to saying that Rα(b) = g −1 (a). Since g is a certain fixed, non trivial Möbius transformation, we might even assume that our choice of a is such that g −1 (a) = a. Then α =α and the union of the two regions covers the circle r 0 S 1 .
Lemma 5.1 shows that φ 0 is a normal linear functional on A(I 1 ) ∨A(I 3 ). Now our claim is concluded by Proposition 2.2 and a technical Lemma 5.5 below, by noting that
• A(I a ) ∨ A(I b )
is a factor (Section 2, Factoriality of two-interval algebras)
• The restrictions of φ 0 to A(I a ) and A(I b ) are equal to the vacuum state, hence faithful.
Lemma 5.5. φ 0 is a positive normal functional on A(I a ) ∨ A(I b ).
Proof. We first consider φ 0 on A(I a ) ∨ alg A(I b ). By [38,Proposition 4.20], the map τ :
k x k y k → x k ⊗ y k is well-defined and is a * -isomorphism from A(I 1 ) ∨ alg A(I 3 ) onto A(I a ) ⊙ A(I b ). Now the linear functional φ 0 ( k x k y k ) translates into A(I a ) ⊙ A(I b ) as Ω ⊗ Ω, · Ω ⊗ Ω . Namely, φ 0 = (ω ⊗ ω) • τ −1 . Now, ω ⊗ ω is clearly positive, and τ −1 (x * x) = τ −1 (x) * τ −1 (x), therefore, φ 0 (x * x) ≥ 0 for any x ∈ A(I a ) ∨ alg A(I b ).
We claim that, also on A(I a ) ∨ alg A(I b ), φ 0 is positive 3 . Indeed, take a positive element a ∈ A(I a ) ∨ alg A(I b ). The function f (x) = x 1 2 , x ∈ [0, a ] can be arbitrarily approximated by polynomials f n with real coefficients, uniformly on [0, a ] and f n (a) 2 tends to a in norm. We saw that φ 0 is a normal linear functionals, hence it is in particular continuous in norm.
Since φ 0 (x * x) ≥ 0 for x ∈ A(I a ) ∨ alg A(I b ) then φ 0 (f n (a) 2 ) ≥ 0, hence φ 0 (a) ≥ 0 by norm continuity of φ 0 .
Now, by the Kaplansky density theorem and the normality of φ 0 , φ 0 is a positive functional. 1. Compatibility: if I 1 , I 2 ∈ I and I 1 ⊂ I 2 then ρ I 2 | A(I 1 ) = ρ I 1
A non-split conformal net in two-dimensions
Covariance: Ad
U ρ (g) • ρ I = ρ gI • Ad U(g), g ∈ Möb A representation ρ is irreducible if I∈I ρ(A(I)) = B(H ρ ). The defining representation {id A(I) } is called the vacuum representation.
A representation of a conformal net ρ is said to be localizable in I 0 if ρ I ′ 0 ≃ id, where ≃ means unitary equivalence. The unitary equivalence class of ρ defines a superselection sector, also called a DHR (Doplicher-Haag-Roberts) sector [14]. By Haag duality we have that ρ(A(I)) ⊂ A(I) if I 0 ⊂ I. Thus we can always choose, within the sector of ρ, a representation ρ 0 on the defining Hilbert space H such that ρ 0,I 0 is an endomorphism of A(I 0 ). If each ρ I is an automorphism of A(I), we call ρ an automorphism of (A, U, Ω). Automorphisms can be composed in a natural way.
Let (A, U, Ω) be the U(1)-current net [5]. The main ingredients are (see [40] for a more detailed review): 2π 0 dθ f (e iθ )ϕ(e iθ ). We call this automorphism of the net σ q . Different functions ϕ with the conditions above with the same q give the equivalent sectors, while sectors with different q are inequivalent. It holds that σ q • σ q ′ = σ q+q ′ .
• Each irreducible sector is covariant: the projective representation γ → U q (γ) := σ q (U(γ)) of local diffeomorphisms extends to Diff + (S 1 ), hence makes the automorphism σ q covariant [10, Proposition 2] (in an irreducible representation σ q , the choice of U q (γ) is unique up to a scalar [10, Remark after Proposition 2]): Ad U q (γ)(σ q (x)) = σ q (Ad U(γ)(x)). Furthermore, we can fix the phase of U q (γ) and consider them as unitary operators (see [17,Proposition 5.1], where the phase does not depend on h, hence one can take the direct sum of multiplier representations (projective representations with fixed phases)). In this case, it holds that U q (γ 1 )U q (γ 2 ) = c(γ 1 , γ 2 )U(γ 1 , γ 2 ) where c(γ 1 , γ 2 ) ∈ C½. c(γ 1 , γ 2 ) can be chosen without dependence on q, and continuous in a neighborhood of the unit element. This projective representation (restricted to Möb) has positive energy [9].
• For two equivalent automorphisms ρ, ρ̃ localized in I, Ĩ, respectively, an operator which intertwines them is called a charge transporter. In the present case, as both ρ, ρ̃ are irreducible, such a charge transporter is unique up to a scalar. A charge transporter acts trivially on A((I ∪ Ĩ)'), hence belongs to A((I ∪ Ĩ)')'. In particular, it can be considered as an element of a local algebra containing I and Ĩ.
• The operator z_q(γ) := U(γ)U_q(γ)* is a charge transporter between σ_q and α_γ σ_q α_{γ^{-1}}.
• For a given pair of automorphisms ρ_1, ρ_2, one defines the braiding ε_{ρ_1,ρ_2}: one chooses equivalent automorphisms ρ̃_1, ρ̃_2 localized in Ĩ_1, Ĩ_2, respectively, such that Ĩ_1 ∩ Ĩ_2 = ∅, and charge transporters V_1, V_2 between ρ_1 and ρ̃_1, and ρ_2 and ρ̃_2, respectively. Define
ε^±_{ρ_1,ρ_2} := ρ_2(V_1^*) V_2^* V_1 ρ_1(V_2),
where + or − depends on whether Ĩ_1 is on the left/right of Ĩ_2 (which results from the choice of localization of the charge transporters above), but ε^±_{ρ_1,ρ_2} do not depend on the choice of ρ̃_k, V_k under such a configuration.
• For our concrete automorphisms σ_q, σ_{q'} on the U(1)-current net, one can take the charge transporters V_q, V_{q'} as Weyl operators and one finds that the braiding satisfies
ε^±_{σ_q,σ_{q'}} ∈ C½, with ε^+_{σ_q,σ_{q'}} the complex conjugate of ε^−_{σ_q,σ_{q'}}.
The following may be well known to experts, but it is difficult to find the right reference (for example, [32, Proposition 1.4] is proved for Möbius covariance). We note that a systematic formulation, closer to our needs, is to appear in [13]. Nevertheless, in part because we deal with multiplier representations, and in part for better readability, we include a formal statement with a proof. Proposition 6.1 (Tensoriality of cocycles). It holds that z_q(γ) σ_q(z_{q'}(γ)) = z_{q+q'}(γ).
Proof. First recall that z q (γ) is an intertwiner between σ q and α γ σ q α γ −1 , hence the product z q (γ)σ q (z q ′ (γ)) is an intertwiner between σ q σ q ′ = σ q+q ′ and α γ σ q α γ −1 • α γ σ q ′ α γ −1 = α γ σ q+q ′ α γ −1 . z q+q ′ (γ) also intertwines σ q+q ′ and α γ σ q+q ′ α γ −1 . As they are automorphisms, hence irreducible, the difference between z q (γ)σ q (z q ′ (γ)) and z q+q ′ (γ) must be a scalar.
Next we show that
U'_{q+q'}(γ) := (z_q(γ) σ_q(z_{q'}(γ)))* U(γ)
is a multiplier representation of Diff_+(S^1) such that
U'_{q+q'}(γ_1) U'_{q+q'}(γ_2) = c(γ_1, γ_2) U'_{q+q'}(γ_1 γ_2),
namely it has the same 2-cocycle c as U_{q+q'}. Indeed,
U'_{q+q'}(γ_1) U'_{q+q'}(γ_2)
  = (z_q(γ_1) σ_q(z_{q'}(γ_1)))* U(γ_1) (z_q(γ_2) σ_q(z_{q'}(γ_2)))* U(γ_2)
  = σ_q(z_{q'}(γ_1))* U_q(γ_1) σ_q(z_{q'}(γ_2))* U_q(γ_2)
  = σ_q(z_{q'}(γ_1))* σ_q(α_{γ_1}(z_{q'}(γ_2)))* · c(γ_1, γ_2) U_q(γ_1 γ_2)
  = σ_q(z_{q'}(γ_1)* α_{γ_1}(z_{q'}(γ_2))*) · c(γ_1, γ_2) U_q(γ_1 γ_2)
  = σ_q(U_{q'}(γ_1) U(γ_1)* U(γ_1) U_{q'}(γ_2) U(γ_2)* U(γ_1)*) · c(γ_1, γ_2) U_q(γ_1 γ_2)
  = σ_q(U_{q'}(γ_1 γ_2) U(γ_1 γ_2)*) · c(γ_1, γ_2) z_q(γ_1 γ_2)* U(γ_1 γ_2)
  = c(γ_1, γ_2) U'_{q+q'}(γ_1 γ_2),
where in the 3rd and 6th equalities we used that U and U_q share the same 2-cocycle c. Now let us define U''(γ) := U'_{q+q'}(γ)* U_{q+q'}(γ). As the difference between U'_{q+q'}(γ) and U_{q+q'}(γ) is just a phase and they share the same 2-cocycle c, it is easy to show that U'' is a C-valued true (with trivial multiplier) representation of Diff_+(S^1). It is well known that then U'' must be trivial, U''(γ) = ½. From this the claim immediately follows.
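The multiplicativity of U'' can be checked directly; this only expands the "easy to show" step above. Writing U'_{q+q'}(γ) = λ(γ) U_{q+q'}(γ) with λ(γ) a phase (this is the scalar difference established in the first paragraph of the proof), the two product formulas
\[
U'_{q+q'}(\gamma_1)U'_{q+q'}(\gamma_2) = c(\gamma_1,\gamma_2)\,U'_{q+q'}(\gamma_1\gamma_2),
\qquad
U_{q+q'}(\gamma_1)U_{q+q'}(\gamma_2) = c(\gamma_1,\gamma_2)\,U_{q+q'}(\gamma_1\gamma_2)
\]
force λ(γ_1)λ(γ_2) = λ(γ_1γ_2), hence U''(γ) = λ(γ)̄ ½ is indeed a scalar-valued true representation of Diff_+(S^1).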
Let G be the quotient of Möb × Möb by the normal subgroup generated by (R 2π , R −2π ), where Möb naturally includes the universal covering R of the rotation subgroup S 1 and R 2π , R −2π are the elements corresponding to 2π, −2π rotations, respectively. We call R×S 1 the Einstein cylinder E, where the Minkowski space is identified with a maximal square (−π, π) × (−π, π) (see [1]) 4 . The group G acts naturally on it. Furthermore, let Diff 0 (R) be the group of diffeomorphisms of the real line R with compact support. Then Diff 0 (R) × Diff 0 (R) acts naturally on the Minkowski space as the product of two lightrays 5 , and its action naturally extends to E by periodicity. Let us denote by Conf(E) the group generated by G and Diff 0 (R) × Diff 0 (R). A two-dimensional conformal net (Ã,Ũ ,Ω) consists of a family {Ã(O)} of von Neumann algebras parametrized by double cones {O} in the Minkowski space R 2 , a strongly-continuous unitary representation of G which extends to a projective unitary representation of Conf(E), and a vectorΩ such that the following axioms are satisfied [28, Section 2]:
• Isotony. If O_1 ⊂ O_2, then Ã(O_1) ⊂ Ã(O_2).
• Locality. If O_1 and O_2 are spacelike separated, then Ã(O_1) and Ã(O_2) commute.
• Covariance. For a double cone O, it holds that Ad Ũ(γ)(Ã(O)) = Ã(γO) for γ ∈ V ⊂ Conf(E), where V is a neighborhood of the unit element of Conf(E) such that γO ⊂ R² for γ ∈ V. For x ∈ Ã(O), if γ ∈ Diff_0(R) × Diff_0(R) acts identically on O, then Ad Ũ(γ)(x) = x.
• Existence and uniqueness of vacuum. Ω̃ is the unique (up to a scalar) invariant vector for Ũ|_G.
• Cyclicity. Ω̃ is cyclic for ⋁_{O⊂R²} Ã(O).
• Positivity of energy. The restriction of Ũ to the group of translations has spectrum contained in V_+ := {(x_0, x_1) : x_0 ≥ |x_1|}.
Now we construct a two-dimensional conformal net as follows, following the ideas of [16, 33]. Let us fix an interval I ⊂ R ⊂ S^1 and a real smooth function ϕ as above. On the Hilbert space H_q = H, we take the automorphism σ_q of the U(1)-current net A. The full Hilbert space is the non-separable direct sum H̃ = ⊕_{q∈R} H_q ⊗ H_q. The observable net A ⊗ A acts on H̃ as the direct sum σ̃(x ⊗ y) = ⊕_q σ_q(x) ⊗ σ_q(y). We can also define a multiplier representation of Diff_+(S^1) × Diff_+(S^1) by Ũ(γ_+, γ_-) := ⊕_q U_q(γ_+) ⊗ U_q(γ_-). The representation Ũ actually factors through Conf(E). This can be seen by noting that in each component U_q ⊗ U_q the generator of spacelike rotations is L_0^{σ_q} ⊗ ½ − ½ ⊗ L_0^{σ_q}, whose spectrum is included in Z, since the spectrum of L_0^{σ_q} is included in N + q²/2. As all the components are the same, H_q ⊗ H_q = H ⊗ H, the shift operators {ψ_q} ("fields") act naturally on H̃: for Ψ ∈ H̃, where (Ψ)_q ∈ H_q ⊗ H_q,
(ψ_{q'} Ψ)_q = (Ψ)_{q+q'}.
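In particular, directly from this definition the shift operators compose additively in the charge; this is the property used implicitly in the locality computation below:
\[
(\psi_{q_1}\psi_{q_2}\Psi)_q = (\psi_{q_2}\Psi)_{q+q_1} = (\Psi)_{q+q_1+q_2} = (\psi_{q_1+q_2}\Psi)_q,
\qquad\text{i.e.}\qquad \psi_{q_1}\psi_{q_2} = \psi_{q_1+q_2}.
\]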
It is useful to note how they behave under covariance:
(Ad Ũ(γ_+, γ_-)(ψ_{q'}) Ψ)_q
  = U_q(γ_+) ⊗ U_q(γ_-) (ψ_{q'} Ũ(γ_+, γ_-)* Ψ)_q
  = U_q(γ_+) ⊗ U_q(γ_-) (Ũ(γ_+, γ_-)* Ψ)_{q+q'}
  = (U_q(γ_+) ⊗ U_q(γ_-)) · (U_{q+q'}(γ_+)* ⊗ U_{q+q'}(γ_-)*) (Ψ)_{q+q'}
  = (z_q(γ_+)* z_{q+q'}(γ_+)) ⊗ (z_q(γ_-)* z_{q+q'}(γ_-)) (Ψ)_{q+q'}
  = (σ_q(z_{q'}(γ_+)) ⊗ σ_q(z_{q'}(γ_-))) (Ψ)_{q+q'}
  = (σ̃(z_{q'}(γ_+) ⊗ z_{q'}(γ_-)) ψ_{q'} Ψ)_q ,
where we used the tensoriality of cocycles in the 5th equality.
We define the local algebras, first for I × I ⊂ R × R ⊂ R², where the real lines are identified with the lightrays x_0 ± x_1 = 0, by
Ã(I × I) = {σ̃(x ⊗ y), ψ_q : x, y ∈ A(I), q ∈ R}'',
and for other bounded regions by covariance: take γ_± ∈ Diff_0(R) such that γ_± I = I_± and set Ã(I_+ × I_-) = Ad Ũ(γ_+, γ_-)(Ã(I × I)). This does not depend on the choice of γ_±. Indeed, if γ_± preserve I, then z_{q'}(γ_+) ⊗ z_{q'}(γ_-) ∈ A(I) ⊗ A(I) and Ad Ũ(γ_+, γ_-)(ψ_{q'}) ∈ Ã(I × I) by the above computation.
• Covariance. Ad Ũ(γ_+, γ_-)(Ã(O)) = Ã((γ_+, γ_-) · O) holds by definition. If (γ_+, γ_-) ∈ Diff_0(R) × Diff_0(R) acts trivially on I × I, then Ũ(γ_+, γ_-) = σ̃(U(γ_+) ⊗ U(γ_-)) and this commutes with Ã(I × I), as supp γ_± are disjoint from I.
• Isotony. By covariance, we may assume that I_± ⊃ I. Take γ_± such that γ_± I = I_±. From the expression Ad Ũ(γ_+, γ_-)(ψ_{q'}) = σ̃(z_{q'}(γ_+) ⊗ z_{q'}(γ_-)) ψ_{q'} and from the fact that z_{q'}(γ_±) ∈ A(I_±), the isotony follows.
• Positivity of energy. Each component U_q ⊗ U_q has positive energy.
• Existence and uniqueness of the vacuum. Only U_0 ⊗ U_0 contains the vacuum vector.
• Cyclicity. The fields ψ_q bring H_0 ⊗ H_0 to any H_q ⊗ H_q, while the local algebra σ̃(A(I) ⊗ A(I)) acts irreducibly on each H_q ⊗ H_q.
• Locality. In the two-dimensional situation, the spacelike separation of I × I and I_+ × I_- means either I_+ sits on the left of I and I_- on the right, or vice versa. We may assume the former case, as the latter is parallel.
The commutativity between the observables σ̃(x ⊗ y) is trivial. As for the observables and the fields {ψ_q}: if x, y ∈ A(I_±) respectively, as I_± are disjoint from I and the σ_q are localized in I, we have σ̃(x ⊗ y) = ⊕_q x ⊗ y and this commutes with the shifts ψ_q. Finally, we need to check the commutativity between the fields ψ_{q_1} and Ad Ũ(γ_+, γ_-)(ψ_{q_2}), where γ_± I = I_±. We can compute the commutator explicitly:
([ψ_{q_1}, Ad Ũ(γ_+, γ_-)(ψ_{q_2})] Ψ)_q
  = (ψ_{q_1} σ̃(z_{q_2}(γ_+) ⊗ z_{q_2}(γ_-)) ψ_{q_2} Ψ − σ̃(z_{q_2}(γ_+) ⊗ z_{q_2}(γ_-)) ψ_{q_2} ψ_{q_1} Ψ)_q
  = (σ̃(σ_{q_1}(z_{q_2}(γ_+)) ⊗ σ_{q_1}(z_{q_2}(γ_-))) ψ_{q_1+q_2} Ψ − σ̃(z_{q_2}(γ_+) ⊗ z_{q_2}(γ_-)) ψ_{q_1+q_2} Ψ)_q ,
and this vanishes because z_{q_2}(γ_+)* σ_{q_1}(z_{q_2}(γ_+)) ⊗ z_{q_2}(γ_-)* σ_{q_1}(z_{q_2}(γ_-)) = ε^+_{q_1,q_2} ⊗ ε^-_{q_1,q_2} = ½, as the braidings ε^±_{q_1,q_2} are scalar and conjugate to each other.
This net cannot satisfy the split property. Namely, if there were a type I factor R such that Ã(D_1) ⊂ R ⊂ Ã(D_2), then ⟨Ω̃, · Ω̃⟩ would define a faithful normal state on R, as Ω̃ is separating for R. As the full Hilbert space H̃ is non-separable, by conformal covariance R must be isomorphic to B(H̃). But this is impossible, because the existence of a faithful normal state implies that B(H̃) is σ-finite, while it is not when H̃ is non-separable.
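To make the last step fully explicit (this is just the standard σ-finiteness argument, spelled out): if {e_α}_{α∈A} is an orthonormal basis of the non-separable space H̃ and φ = ⟨Ω̃, · Ω̃⟩ were a faithful normal state on B(H̃), then the uncountably many mutually orthogonal projections P_α onto C e_α would satisfy φ(P_α) > 0, while
\[
\sum_{\alpha\in A}\varphi(P_\alpha)\;\le\;\varphi(\mathbb{1})\;=\;1 ,
\]
which is impossible for an uncountable index set A; hence B(H̃) admits no faithful normal state when H̃ is non-separable.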
Outlook
In general, a standard technique to prove the split property is to verify certain nuclearity conditions for the dynamics. In the Möbius covariant case, the most handy one is the trace class condition for the conformal Hamiltonian, namely that e^{−βL_0} is trace class [4]. The split property in turn implies certain compactness conditions [3]. With our result, one is led to conjecture that the trace class property should also be automatic.
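For orientation — stated here as a paraphrase of the usual formulations rather than a verbatim quotation of [3, 4] — the conditions alluded to above read as follows: the trace class condition requires
\[
\operatorname{Tr}\, e^{-\beta L_0} < \infty \qquad \text{for } \beta > 0,
\]
while the related nuclearity conditions require maps of the type A(I) ∋ x ↦ e^{-βL_0} x Ω ∈ H to be nuclear; either property is a quantitative strengthening of the split property discussed in this paper.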
The existence of an intermediate type I factor does not depend on the sector. Assume A to be a Möbius covariant net satisfying the split property (for instance, A a conformal net) and I_1 ⊂ I_2 an inclusion of intervals with no common end points. Any representation ρ of A is a family of isomorphisms of the local algebras onto their images, as any local algebra is a factor. Then an intermediate type I factor A(I_1) ⊂ R ⊂ A(I_2) is mapped through ρ onto an intermediate type I factor ρ_{I_2}(A(I_1)) ⊂ ρ_{I_2}(R) ⊂ ρ_{I_2}(A(I_2)), as ρ_{I_2} restricts to an isomorphism of R onto ρ_{I_2}(R). Furthermore, when ρ is localizable, then ρ_{I_1}(A(I_1)) ⊂ ρ_{I_2}(A(I_2)) is a standard split inclusion acting on a separable Hilbert space (we can unitarily identify the Hilbert spaces). At this point it is also natural to expect that the trace class property of L_0^ρ in irreducible or factorial sectors should be automatic. While the split property has important implications in algebraic QFT, it is almost never seen in other approaches to CFT, such as vertex operator algebras (VOAs). On the other hand, the trace class property, or even the finite-dimensionality of the eigenspaces of L_0, would be useful for the study of VOAs.
We are grateful to Marcel Bischoff, Sebastiano Carpi, Luca Giorgetti and Roberto Longo for various interesting discussions on two-dimensional CFT.
7. Reeh-Schlieder property: Ω is a cyclic and separating vector for each A(I), I ∈ I;
8. Haag duality: A(I')' = A(I), where I ∈ I and I' is the interior of S^1 \ I;
9. Bisognano-Wichmann property: U(δ_I(−2πt)) = Δ^{it}_{A(I),Ω}, where δ_I is the dilation subgroup associated to the interval I and Δ^{it}_{A(I),Ω} is the modular group of A(I) with respect to Ω;
10. Irreducibility: ⋁_{I∈I} A(I) = B(H);
11. Factoriality: the algebras A(I) are type III_1 factors;
12. Additivity: let {I_κ} ⊂ I be a covering of I, namely I ⊂ ⋃_κ I_κ; then A(I) ⊂ ⋁_κ A(I_κ).
The following seems relatively less known, yet it follows from Möbius covariance and has an important implication [23, Theorem 1.6].
Definition 2.1. Let (N ⊂ M, Ω) be a standard inclusion of von Neumann algebras, i.e., Ω is a cyclic and separating vector for N, M and N' ∩ M. A standard inclusion (N ⊂ M, Ω) is split if there exists a type I factor R such that N ⊂ R ⊂ M.
Proposition 3.1. Let I_c, I_d ∈ I be two open proper arcs covering the circle: I_c ∪ I_d = S^1. Then there exist two norm-continuous families of operators (0, 1)
Figure 1: Intervals I_c, I_d covering S^1 and I_k, Ĩ_k with N = 36.
are conjugate to each other by the unitary operator e^{iπH}, with P being the self-adjoint generator of "translations" with spectrum Sp(P) = Sp(P) = R_+ ∪ {0}. Moreover, by [4, Theorem 3.3] we have the relation e^{−2sH} = e^{−tanh(s/2)P} e^{−sinh(s)P} e^{−tanh(s/2)P}
Corollary 3.2. Let I_c, I_d ∈ I be two open proper arcs such that I_c ∪ I_d = S^1, and f a strictly positive smooth function on S^1. Then there exist two norm-continuous families of operators
Theorem 4.1. [36, Theorem 2] Let U be a positive energy, strongly continuous, projective unitary representation of Diff + (S 1 ) with the associated conformal Hamiltonian L 0 .
Lemma 4.2. Let (A, U, Ω) be a conformal net. Then U(γ)Ω ∈ CΩ if and only if γ ∈ Möb.
Corollary 4.3. Let V be a strongly continuous, unitary representation of Möb with positive energy, with the associated conformal Hamiltonian L_0.
Figure 3: Intervals I_c, I_d, K_{1,c}, K_{1,d}, K_{2,c}, K_{2,d}.
Indeed, since A(I_a) and A(I_b) are commuting factors, there is a natural isomorphism between the algebraic tensor product A(I_a) ⊙ A(I_b) and A(I_a) ∨_alg A(I_b), see [38, Proposition IV.4.20]. In particular, the bilinear form A(I_a) × A(I_b) ∋ (A, B) ↦ ⟨Ω, A z^{L_0} B Ω⟩ ∈ C extends to a unique linear functional φ_z on A(I_a) ∨_alg A(I_b).
Theorem 5.4. A conformal net (A, U, Ω) on the circle automatically has the split property.
Conformal nets on S^1 constitute the building blocks of two-dimensional conformal nets. Let us recall the relevant definitions. A locally normal, positive energy, Möbius covariant representation ρ of a conformal net (A, U, Ω) on S^1 is a family of normal representations {ρ_I : I ∈ I} of the von Neumann algebras {A(I) : I ∈ I} on a fixed Hilbert space H_ρ, together with a positive energy unitary representation U_ρ on H_ρ of the universal covering group of the Möbius group Möb, satisfying the compatibility and covariance conditions 1–2 recalled above, at the beginning of the section on the non-split two-dimensional conformal net.
• The Weyl operators W(f), parametrized by real smooth functions f on S^1, which satisfy the commutation relations W(f)W(g) = e^{(i/2) Im(f,g)} W(f + g), where (f, g) = ∫_0^{2π} f'(e^{iθ}) g(e^{iθ}) dθ.
• There is a distinguished realization ("vacuum representation") of the Weyl operators (which we denote again by W(f)) with a unitary positive energy representation of Möb which extends to a projective unitary representation U of Diff_+(S^1), and the vacuum vector Ω such that Ad U(γ)(W(f)) = W(f ∘ γ) and U(g)Ω = Ω if g ∈ Möb.
• The U(1)-current net A(I) := {W(f) : supp f ⊂ I}''.
• Irreducible sectors parametrized by q ∈ R: we fix a real smooth function ϕ such that (1/2π)∫_0^{2π} ϕ(e^{iθ}) dθ = 1. The map W(f) ↦ e^{iqϕ(f)} W(f) extends to an automorphism σ_{q,I} of A(I), where supp f ⊂ I and ϕ(f) = (1/2π)∫_0^{2π} dθ f(e^{iθ})ϕ(e^{iθ}).
Remark 2.3. The split property implies separability of the Hilbert space. Indeed, if we have a standard split inclusion of von Neumann algebras on a Hilbert space H, then H has to be separable: Ω is a cyclic and separating vector for the intermediate type I factor R. By considering the cardinality of a basis, either R or R' must be isomorphic to B(H), and Ω defines a faithful vector state on it, hence B(H) is σ-finite, which is only possible if H is separable.
If the Hilbert spaces are not separable, several well-known statements no longer hold. For example, an isomorphism between type III algebras might not be a unitary equivalence.
Roberto Longo suggested another idea for the proof of Lemma 4.2: as in the proof in the main text, we may assume that γ preserves three points. As U (γ) preserves the vacuum vector, it commutes with the modular group of the three intervals between these points, hence with the whole Möbius group. From this it is straightforward that γ = id.
A(I a ) ∨ alg A(I b ) is not a C * -algebra, in particular, a positive element a ∈ A(I a ) ∨ alg A(I b ) in the sense of B(H) is not necessarily of the form x * x, where x ∈ A(I a ) ∨ alg A(I b ).
Here the segments (−π, π) × {0} and {0} × (−π, π) are identified with the time and space axes, respectively.
The lightray decomposition R² = R × R is not compatible with the above identification of R² with (−π, π) × (−π, π), where the components correspond to the time and space axes.
Acknowledgment
We would like to thank James Tener for calling our attention to the article [36] of Neretin, which allowed us to bridge the gap in the concept of the proof we previously had.
Modular structure and duality in conformal quantum field theory. R Brunetti, D Guido, R Longo, Comm. Math. Phys. 1561R. Brunetti, D. Guido, and R. Longo. Modular structure and duality in conformal quantum field theory. Comm. Math. Phys., 156(1):201-219, 1993. http://projecteuclid.org/euclid.cmp/1104253522.
Product states for local algebras. Detlev Buchholz, Comm. Math. Phys. 36Detlev Buchholz. Product states for local algebras. Comm. Math. Phys., 36:287-304, 1974. http://projecteuclid.org/euclid.cmp/1103859773.
Nuclear maps and modular structures. I. General properties. Detlev Buchholz, D' Claudio, Roberto Antoni, Longo, 10.1016/0022-1236(90)90104-SJ. Funct. Anal. 882Detlev Buchholz, Claudio D'Antoni, and Roberto Longo. Nuclear maps and modular structures. I. General properties. J. Funct. Anal., 88(2):233-250, 1990. http://dx.doi.org/10.1016/0022-1236(90)90104-S.
Nuclearity and thermal states in conformal field theory. Detlev Buchholz, D' Claudio, Roberto Antoni, Longo, Communications in Mathematical Physics. 2701Detlev Buchholz, Claudio D'Antoni, and Roberto Longo. Nuclearity and thermal states in conformal field theory. Communications in Mathematical Physics, 270(1):267-293, 2007. http://arxiv.org/abs/math-ph/0603083.
The current algebra on the circle as a germ of local field theories. Detlev Buchholz, Gerhard Mack, Ivan Todorov, Nuclear Phys. B Proc. Suppl. 5Detlev Buchholz, Gerhard Mack, and Ivan Todorov. The current algebra on the cir- cle as a germ of local field theories. Nuclear Phys. B Proc. Suppl., 5B:20-56, 1988. https://www.researchgate.net/publication/222585851.
Causal independence and the energy-level density of states in local quantum field theory. Detlev Buchholz, H Eyvind, Wichmann, Comm. Math. Phys. 1062Detlev Buchholz and Eyvind H. Wichmann. Causal independence and the energy-level den- sity of states in local quantum field theory. Comm. Math. Phys., 106(2):321-344, 1986. http://projecteuclid.org/euclid.cmp/1104115703.
Representations of conformal nets, universal C * -algebras and K-theory. Sebastiano Carpi, Roberto Conti, Robin Hillier, Mihály Weiner, Comm. Math. Phys. 3201Sebastiano Carpi, Roberto Conti, Robin Hillier, and Mihály Weiner. Representations of conformal nets, universal C * -algebras and K-theory. Comm. Math. Phys., 320(1):275-300, 2013. http://arxiv.org/abs/1202.2543.
On the representation theory of Virasoro nets. Sebastiano Carpi, Comm. Math. Phys. 2442Sebastiano Carpi. On the representation theory of Virasoro nets. Comm. Math. Phys., 244(2):261-284, 2004. http://arxiv.org/abs/math/0306425.
On the uniqueness of diffeomorphism symmetry in conformal field theory. Sebastiano Carpi, Mihály Weiner, Comm. Math. Phys. 2581Sebastiano Carpi and Mihály Weiner. On the uniqueness of diffeomorphism sym- metry in conformal field theory. Comm. Math. Phys., 258(1):203-221, 2005. http://arxiv.org/abs/math/0407190.
Implementation of conformal covariance by diffeomorphism symmetry. Klaus Claudio D'antoni, Søren Fredenhagen, Köster, Lett. Math. Phys. 673Claudio D'Antoni, Klaus Fredenhagen, and Søren Köster. Implementation of confor- mal covariance by diffeomorphism symmetry. Lett. Math. Phys., 67(3):239-247, 2004. http://arxiv.org/abs/math-ph/0312017.
Interpolation by type I factors and the flip automorphism. D'antoni Claudio, Roberto Longo, 10.1016/0022-1236(83)90018-6J. Funct. Anal. 513Claudio D'Antoni and Roberto Longo. Interpolation by type I fac- tors and the flip automorphism. J. Funct. Anal., 51(3):361-371, 1983. http://dx.doi.org/10.1016/0022-1236(83)90018-6.
Conformal nets, maximal temperature and models from free probability. D' Claudio, Roberto Antoni, Florin Longo, Rădulescu, J. Operator Theory. 451Claudio D'Antoni, Roberto Longo, and Florin Rădulescu. Conformal nets, maximal tem- perature and models from free probability. J. Operator Theory, 45(1):195-208, 2001. https://arxiv.org/abs/math/9810003.
Infinite index extensions of local nets and defects. Del Simone, Luca Vecchio, Giorgetti, Work in progressSimone Del Vecchio and Luca Giorgetti. Infinite index extensions of local nets and defects. Work in progress.
Fields, observables and gauge transformations. Sergio Doplicher, Rudolf Haag, John E Roberts, I. Comm. Math. Phys. 13Sergio Doplicher, Rudolf Haag, and John E. Roberts. Fields, observ- ables and gauge transformations. I. Comm. Math. Phys., 13:1-23, 1969. http://projecteuclid.org/euclid.cmp/1103841481.
Standard and split inclusions of von Neumann algebras. S Doplicher, R Longo, Invent. Math. 753S. Doplicher and R. Longo. Standard and split inclusions of von Neumann algebras. Invent. Math., 75(3):493-536, 1984. https://eudml.org/doc/143108.
Why there is a field algebra with a compact gauge group describing the superselection structure in particle physics. Sergio Doplicher, John E Roberts, Comm. Math. Phys. 1311Sergio Doplicher and John E. Roberts. Why there is a field algebra with a compact gauge group describing the superselection structure in particle physics. Comm. Math. Phys., 131(1):51-107, 1990. http://projecteuclid.org/euclid.cmp/1104200703.
Quantum energy inequalities in two-dimensional conformal field theory. J Christopher, Stefan Fewster, Hollands, Rev. Math. Phys. 175Christopher J. Fewster and Stefan Hollands. Quantum energy inequalities in two-dimensional conformal field theory. Rev. Math. Phys., 17(5):577-612, 2005. http://arxiv.org/abs/math-ph/0412028.
Conformal Haag-Kastler nets, pointlike localized fields and the existence of operator product expansions. Klaus Fredenhagen, Martin Jörß, Comm. Math. Phys. 1763Klaus Fredenhagen and Martin Jörß. Conformal Haag-Kastler nets, pointlike localized fields and the existence of operator product expansions. Comm. Math. Phys., 176(3):541-554, 1996. https://projecteuclid.org/euclid.cmp/1104286114.
Operator algebras and conformal field theory. Fabrizio Gabbiani, Jürg Fröhlich, Comm. Math. Phys. 1553Fabrizio Gabbiani and Jürg Fröhlich. Operator algebras and conformal field theory. Comm. Math. Phys., 155(3):569-640, 1993. http://projecteuclid.org/euclid.cmp/1104253398.
P. Furlan, G. M. Sotkov, and I. T. Todorov. Two-dimensional conformal quantum field theory. Riv. Nuovo Cimento (3), 12(6):1-202, 1989. link.springer.com/content/pdf/10.1007/BF02742979.pdf
Roe Goodman, Nolan R Wallach, Projective unitary positiveenergy representations of Diff. S 1 )Roe Goodman and Nolan R. Wallach. Projective unitary positive- energy representations of Diff(S 1 ).
. J. Funct. Anal. 633J. Funct. Anal., 63(3):299-321, 1985. http://www.sciencedirect.com/science/article/pii/0022123685900904.
The conformal spin and statistics theorem. Daniele Guido, Roberto Longo, Comm. Math. Phys. 1811Daniele Guido and Roberto Longo. The conformal spin and statistics theorem. Comm. Math. Phys., 181(1):11-35, 1996. http://projecteuclid.org/euclid.cmp/1104287623.
Extensions of conformal nets and superselection structures. D Guido, R Longo, H.-W Wiesbrock, Comm. Math. Phys. 1921D. Guido, R. Longo, and H.-W. Wiesbrock. Extensions of conformal nets and superselection structures. Comm. Math. Phys., 192(1):217-244, 1998. http://arxiv.org/abs/hep-th/9703129.
Local quantum physics. Texts and Monographs in Physics. Rudolf Haag, Springer-VerlagBerlinsecond editionRudolf Haag. Local quantum physics. Texts and Monographs in Physics. Springer-Verlag, Berlin, second edition, 1996.
When does a quantum field theory describe particles?. R Haag, J A Swieca, Comm. Math. Phys. 1R. Haag and J. A. Swieca. When does a quantum field theory describe particles? Comm. Math. Phys., 1:308-320, 1965. https://projecteuclid.org/euclid.cmp/1103758947.
. V G Kac, A K Raina, Bombay Lectures on Highest Weight Representations of Infinite Dimensional Lie Algebras. World Scientific. V. G. Kac and A. K. Raina: Bombay Lectures on Highest Weight Representations of Infinite Dimensional Lie Algebras. World Scientific, Singapore, 1987.
R V Kadison, J R Ringrose, Advanced theory. Graduate Studies in Mathematics. Providence, RIAmerican Mathematical Society16Fundamentals of the theory of operator algebrasR. V. Kadison and J. R. Ringrose: Fundamentals of the theory of operator algebras. Vol. II. Advanced theory. Graduate Studies in Mathematics, 16. American Mathematical Society, Providence, RI, (1997).
Classification of two-dimensional local conformal nets with c < 1 and 2-cohomology vanishing for tensor categories. Yasuyuki Kawahigashi, Roberto Longo, Comm. Math. Phys. 2441Yasuyuki Kawahigashi and Roberto Longo. Classification of two-dimensional local conformal nets with c < 1 and 2-cohomology vanishing for tensor categories. Comm. Math. Phys., 244(1):63-97, 2004. http://arxiv.org/abs/math-ph/0304022.
Multi-interval subfactors and modularity of representations in conformal field theory. Yasuyuki Kawahigashi, Roberto Longo, Michael Müger, Comm. Math. Phys. 2193Yasuyuki Kawahigashi, Roberto Longo, and Michael Müger. Multi-interval subfactors and modularity of representations in conformal field theory. Comm. Math. Phys., 219(3):631-669, 2001. http://arxiv.org/abs/math/9903104.
Absence of stress energy tensor in CFT 2 models. S Koester, S. Koester. Absence of stress energy tensor in CFT 2 models. 2003. http://arxiv.org/abs/math-ph/0303053.
T Loke, Operator algebras and conformal field theory of the discrete series representation of Diff(S 1 ). University of CambridgePh.D. ThesisT. Loke: Operator algebras and conformal field theory of the discrete series representation of Diff(S 1 ). Ph.D. Thesis, University of Cambridge, 1994.
An analogue of the Kac-Wakimoto formula and black hole conditional entropy. Roberto Longo, Comm. Math. Phys. 1862Roberto Longo. An analogue of the Kac-Wakimoto formula and black hole conditional en- tropy. Comm. Math. Phys., 186(2):451-479, 1997. https://arxiv.org/abs/gr-qc/9605073.
Nets of subfactors. R Longo, K.-H Rehren, Workshop on Algebraic Quantum Field Theory and Jones Theory. Berlin7R. Longo and K.-H. Rehren. Nets of subfactors. Rev. Math. Phys., 7(4):567-597, 1995. Workshop on Algebraic Quantum Field Theory and Jones Theory (Berlin, 1994). http://arxiv.org/abs/hep-th/9411077.
Topological sectors and a dichotomy in conformal field theory. Roberto Longo, Feng Xu, Comm. Math. Phys. 2512Roberto Longo and Feng Xu. Topological sectors and a dichotomy in conformal field theory. Comm. Math. Phys., 251(2):321-364, 2004. http://arxiv.org/abs/math/0309366.
Remarks on infinite-dimensional Lie groups. J Milnor, Relativity, groups and topology II. Les Houches, Session XL. B.S. De Witt and R. Stora EdsAmsterdam, New YorkElsevierJ. Milnor: Remarks on infinite-dimensional Lie groups. In B.S. De Witt and R. Stora Eds.: Relativity, groups and topology II. Les Houches, Session XL, 1983, Elsevier, Amsterdam, New York, 1984, pp. 1007-1057.
Holomorphic continuations of representations of the group of diffeomorphisms of the circle. Y A Neretin, Translation in Math. USSR-Sb. 1805Mat. Sb.Y. A. Neretin. Holomorphic continuations of representations of the group of diffeomorphisms of the circle. Mat. Sb., 180(5):635-657, 720, 1989. Translation in Math. USSR-Sb. 67(1):75-97, 1990. http://www.mat.univie.ac.at/~neretin/holomorphic.pdf.
S. Strătilă, L. Zsidó: Lectures on von Neumann algebras. Editura Academiei and Abacus Press, Kent, 1979.
Theory of operator algebras I. M Takesaki, Springer-VerlagNew York-HeidelbergM. Takesaki: Theory of operator algebras I. Springer-Verlag, New York-Heidelberg, 2002.
Conformal covariance and positivity of energy in charged sectors. Mihály Weiner, Comm. Math. Phys. 2652Mihály Weiner. Conformal covariance and positivity of energy in charged sectors. Comm. Math. Phys., 265(2):493-506, 2006. http://arxiv.org/abs/math-ph/0507066.
Universitá di Roma "Tor Vergata. M Weiner, Conformal covariance and related properties of chiral QFT. Ph.D. thesisM. Weiner: Conformal covariance and related properties of chiral QFT. Ph.D. thesis, Univer- sitá di Roma "Tor Vergata" (2005). http://arxiv.org/abs/math/0703336
Conformal quantum field theory and half-sided modular inclusions of von Neumann algebras. Hans-Werner Wiesbrock, Comm. Math. Phys. 1583Hans-Werner Wiesbrock. Conformal quantum field theory and half-sided modular inclusions of von Neumann algebras. Comm. Math. Phys., 158(3):537-543, 1993. http://projecteuclid.org/euclid.cmp/1104254361.
| []
|
[
"A thermodynamic analysis of the spider silk and the importance of complexity",
"A thermodynamic analysis of the spider silk and the importance of complexity"
]
| [
"S ",
"U Pugliese \nDipartimento di Scienza Applicata e Tecnologia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly\n",
"Lucia \nDipartimento di Scienza Applicata e Tecnologia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly\n\nDipartimento di Energia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly\n"
]
| [
"Dipartimento di Scienza Applicata e Tecnologia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly",
"Dipartimento di Scienza Applicata e Tecnologia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly",
"Dipartimento di Energia\nPolitecnico di Torino\nCorso Duca degli Abruzzi 2410129TorinoItaly"
]
| []
| The spider silk is one of the most interesting bio-materials investigated in the last years. One of the main reasons that brought scientists to study this organized system is its high level of resistance if compared to other artificial materials characterized by higher density. Subsequently, researchers discovered that the spider silk is a complex system formed by different kinds of proteins, organized (or disorganized) to guarantee the required resistance, which is function of the final application and of the environmental conditions. Some spider species are able to make different silks, up to twelve, having a composition that seems to be function of the final use (i.e. dragline web, capture web, etc). The aim of this paper is to analyze the properties of the spider silk by means of a thermodynamic approach, taking advantage of the well-known theories applied to polymers, and to try to underline and develop some intriguing considerations. Moreover, this study can be taken as an example to introduce and discuss the importance of the concept of optionality and of the anti-fragile systems proposed by N. N. Thaleb in his book "Antifragile: Things that gain from disorder". | null | [
"https://arxiv.org/pdf/1703.06497v1.pdf"
]
| 119,230,828 | 1703.06497 | 0938fd0208ee51d47cc6f7d6fac8ae0304ea1c14 |
A thermodynamic analysis of the spider silk and the importance of complexity
March 21, 2017
S
U Pugliese
Dipartimento di Scienza Applicata e Tecnologia
Politecnico di Torino
Corso Duca degli Abruzzi 2410129TorinoItaly
Lucia
Dipartimento di Scienza Applicata e Tecnologia
Politecnico di Torino
Corso Duca degli Abruzzi 2410129TorinoItaly
Dipartimento di Energia
Politecnico di Torino
Corso Duca degli Abruzzi 2410129TorinoItaly
A thermodynamic analysis of the spider silk and the importance of complexity
March 21, 2017
The spider silk is one of the most interesting bio-materials investigated in the last years. One of the main reasons that brought scientists to study this organized system is its high level of resistance if compared to other artificial materials characterized by higher density. Subsequently, researchers discovered that the spider silk is a complex system formed by different kinds of proteins, organized (or disorganized) to guarantee the required resistance, which is function of the final application and of the environmental conditions. Some spider species are able to make different silks, up to twelve, having a composition that seems to be function of the final use (i.e. dragline web, capture web, etc). The aim of this paper is to analyze the properties of the spider silk by means of a thermodynamic approach, taking advantage of the well-known theories applied to polymers, and to try to underline and develop some intriguing considerations. Moreover, this study can be taken as an example to introduce and discuss the importance of the concept of optionality and of the anti-fragile systems proposed by N. N. Thaleb in his book "Antifragile: Things that gain from disorder".
General Introduction
The present work can be divided in three sections. In the first one, a general description of the silk using the olog approach [1] is proposed. This part is useful to schematize the possible behaviour of the spider silk. In the second section, the hierarchical nature of the spider silk is investigated through a thermodynamic approach. The right side of Figure 1 shows the spider web, while on the left side of the picture a scheme of the hierarchical spider silk structure is depicted. It was the starting point to build up the proposed description.
Figure 1: On the left, a spider web; on the right, a schematic picture of the spider silk structure [3].
Finally,
in the third section, it is discussed how the proposed interpretation can be taken as an example to explain the importance of the antifragile theory of N. N. Thaleb [2], where antifragility should no longer be interpreted merely as toughness. From another point of view, the proposed interpretation suggests that nature follows the ideas of complexity and optionality: perhaps the most important keys to survival. The olog approach was considered necessary to create an initial and general scheme of the structure of the spider silk (see Figure 2), putting in evidence important aspects necessary to build up the proposed interpretation. In order to approach the problem from a thermodynamic point of view, the starting point taken into account was the Helmholtz free energy equation (2.1):
dA = −T dS + µ dN + σ dx    (1)
The equation (2.1) collects all the terms that describe "where" the energy is stored in a system. The free energy (F) represents the work that the system is able to do on the environment or, from another point of view, the energy that the system can provide when it is stressed by an external force. In order to use the equation of the free energy, it was necessary to identify all possible actors that play a role in the "energy storage". Referring to the general scheme of the internal structure of the spider silk (see Figure 1) and starting from the internal structure deeply studied in literature at the level of protein composition [4,5] , different responses can occur at the microscopic level when a spider web is stressed. Upon the occurrence of an external stress, the following phenomena were considered at the microscopic level: • the crystalline part of the silk, formed by β-sheets based proteins, becomes oriented on the force direction, thus reaching an internal order (i.e. the rectangles represented in Figures 1 and 3).
• the amorphous phase, that links the β-sheets structures, is stretched and the hydrogen bonds (H-H) between these glycine chains are broken. At the same time, the distance between the β-sheets increases and the system starts to work as a spring.
• the same behaviour of the β-sheets occurs at the amorphous bonds level, i.e. the long chains of glycine are stretched.
After the third step, and before reaching rupture, the system may offer a final resistance (proportional to the Young's modulus E_silk of the entire system). However, in this study this behaviour was considered negligible and therefore was not taken into account. After these considerations, it is possible to re-write equation (2.1) as follows:
A = −T(S_ϑ + S_M + S_m) + µΘ + σx    (2)
The last term on the right side was neglected, and the three entropic terms were approached differently. The first entropic term, S_ϑ, refers to the β-sheet orientation and was considered as a first response of the material. Having the possibility to re-organize its internal structure, the spider silk re-arranges itself in order to exhibit the maximum level of resistance. It passes from a disordered phase to an ordered one. From a theoretical point of view, part of the internal energy stored is thus employed to change its configuration or, from another point of view, it is initially stored in the disordered configuration (i.e. the spiders exploit the disorder). Looking for a representation of this behaviour, the literature provides an exhaustive scheme of the spider silk response (see Figure 3) [6,7]. In particular, the phenomenon of β-sheet orientation is well depicted passing from panels g to h, before and after the stretching.
Thermodynamic Formulation
The entropic term due to the initial disorder can be written using the Boltzmann's relation (3.1):
S_ϑ = −k_B ln W    (3)
where k B is the Boltzmann's constant and W is the configuration assumed by the system. The possible mobility of the β-sheets in a plane was considered to describe the configuration of the system. If a β-sheet is not oriented in the stress direction, it rotates up to reach the longitudinal direction. In other words, the energy stored as disposition with respect to the longitudinal direction is utilized at this level. In light of this aspect, the configuration entropy was used considering the possibility to have many small systems able to rotate and having initial directions in the range [-π/2, π/2]; where 0 degrees is the condition when the β-sheets are aligned. Therefore, a β-sheet can assume two positions per rotation having the same energy quantity. Re-writing the equation (3.1), it is possible to assess that:
W(ϑ) = Σ_i p_i ln p_i = Σ_i n_i e^{−E_i/(k_B T)} ln(n_i e^{−E_i/(k_B T)})    (4)
where n_i is the number of states and E_i is the energy stored. Taking into account that energy is force times displacement, it is possible to consider a total rotation of the β-sheets when an external stress is applied. In particular:
E_i = F_i ϑ d = Al_% F_ext cos(π/2 − ϑ) ϑ d    (5)
where Al_% is the percentage of the entire volume composed of alanine, which is the main component of β-sheet structures, F_ext is the external force exerted and d is the entire length of the piece of web analyzed. By substituting the expression (5) for E_i into (4), it is possible to obtain:
W(ϑ) = Σ_i p_i ln p_i = Σ_i n_i e^{−Al_% F_ext cos(π/2−ϑ)ϑd/(k_B T)} ln(n_i e^{−Al_% F_ext cos(π/2−ϑ)ϑd/(k_B T)})    (6)
or, equivalently,
S_ϑ = −k_B ln W(ϑ)    (7)
The other two entropic terms were obtained starting from a thermodynamic model [8−11] commonly used to describe the behaviour of polymers. The idea was to employ the theory separately, describing the silk at two levels and building up two different entropic terms. Indeed, observing the model proposed for the spider silk, it can be intuitively noticed that before the breakage of all the H-H bonds, and immediately after the alignment of the β-sheets, the silk starts to be stretched and its internal rigid components are subjected to a sort of deformation that augments the distance between them. A general scheme used to describe this theoretical behaviour is depicted in Fig. 3 the entropy varies because of the changes in the organization of the structure. The entropic term describes the passage from an higher level of disorder to a lower one, with the occurrence of a transformation. The second and third entropic terms were built up starting from the same theory explained above but applied to a lower scale, where the H-H bonds in the amorphous phase are broken and another step of relaxation occurs. Referring to the Fig. 3.1, it is possible to put in evidence different aspects. The scheme of the structure (i.e. nodes and bonds) are composed by bonds with the same length, b, that link not contiguous species. In particular, the end-to-end distance is:
.1. At this level no breakage of bonds occurs, and
R_E = Σ_{i=1}^{N} r_i    (8)
R_E² = N b²    (9)
This result follows from the ones obtained by Khun on the rubber [8] . Passing to a lower level, the third entropic term was analyzed with a similar approach, the freely joined chains theory (FJC) [12] , which was considered much more significant to describe the system. Carrying out the calculus, it is possible to extrapolate two potentials related to the "macro" and "micro" applications of the theory.
T S M = k B T lnp(M ) = k B T H(M ) = 3 2 k B T b N Al % (10) T S M = k B T lnp(m) = k B T h(m) = 3 2 k B T b 3 2 √ N(11)
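For comparison, the textbook freely-jointed-chain (Gaussian-chain) result behind these expressions — quoted here only as a standard reference point, since the normalization used in (10)–(11) is specific to this paper — reads
\[
S(R) = S_0 - \frac{3 k_B R^2}{2 N b^2},
\qquad
F = -T\,\frac{\partial S}{\partial R} = \frac{3 k_B T}{N b^2}\, R ,
\]
i.e. a chain of N segments of length b behaves, for small end-to-end extension R, as an entropic spring of stiffness 3k_B T/(N b²).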
In the first relation, an important role is played by the Al % term, which is the percentage of the volume able to perform this macro-behaviour. Indeed, the entropic term SM was built with the aim to describe the macroscopic behaviour performed by the β-sheets components. Once all the entropic terms reported in equation (2.2) have been thoroughly discussed, the attention can be now focused on the term µΦ, which describes the energy stored in the H-H bonds between the proteins. The number of these bonds should not be fixed. Indeed, as the silk reology is strictly related to the RH level of the environment and as by nature the silk has an hydrophobic behaviour, it is possible that exposing a smaller surface to a higher humidity level the contraction of the structure could bring to the formation of a higher number of H-H bonds. Referring to the literature and on the experiments performed [13,14] , the resistance of the spider silk is strictly dependent on the relative humidity of the air. Intuitively, in conditions of high relative humidity (RH) the intrinsic hydrophobic nature of the silk leads to a contraction of the web and thus to a higher storage of energy. More specifically, in presence of water, even if from one side the silk is able to increase, at time 0, the quantity of energy stored, on the other hand there is a higher possibility for the proteins to restore H-H bonds with water molecules. This theory is supported by the fact that when the humidity is higher than a certain limit, the Young's module decreases and the resilience response of the silk increases. Referring to this interpretation, the energy stored and in particular the number of bonds are strictly related to the RH level:
µ ∝ RH, i.e., µ = µ(RH)    (12)
It is impossible to know exactly the location and the number of the bonds present in each structure. For this reason, the approach chosen is also for this case probabilistic. It is possible to define the number of bonds µ as follows, in function of the RH value:
µ(RH) = (1/√(2π σ²(RH))) e^{−µ²/(2σ²(RH))}    (13)
The number of bonds at different RH conditions should be determined through experimental tests. The choice to employ this kind of distribution came from the experimental evidence reported by Vehoff and co-workers [5]. Once all the terms have been made explicit, equation (2.2) can be re-written as follows:
A = −k_B T ln[ Σ_i n_i e^{−Al_% F_ext cos(π/2−ϑ_i)ϑ_i d/(k_B T)} ln(n_i e^{−Al_% F_ext cos(π/2−ϑ_i)ϑ_i d/(k_B T)}) ] + (3/2) k_B T b Al_% N + (3/2) k_B T b^{3/2} √N + Θ (1/√(2π σ²(RH))) e^{−µ²/(2σ²(RH))}    (14)
From this equation, it is finally possible to extrapolate the value of the force:
F = dA/dx    (15)
Experimental tests and comparison with other results taken from the literature will be conducted in order to calculate the value of equation 3.11.
Conclusion
This study deals with the simple description of a natural material such as the spider silk. The main goal of the present work is to start from the natural complexity of the spider silk to put in evidence a general formulation of its mechanical behaviour, useful to design new materials with the same operating principle [15] . From another point of view, this study wants to pose some considerations on how the complexity in collaborative systems is one of the prerogative to survive. This aspect is at the base of the anti-fragile theory and of its peculiar optionality characteristic. The optionality of the spider silk could be considered as an interesting example and supports the theory of N. N. Thaleb. One of the most fascinating aspects of the present study on the spider silk lies within its particular nature. Indeed, it seems that spiders had well understood how to exploit natural principles such as entropy and strength of collaborating systems [16] . When a spider makes a web, it has not the possibility to forecast all the different conditions in which the web will operate. The lack of this possibility makes the spider to develop a strategy for building up a versatile material, able to store energy at different scales, that interact each other, and to resist at different conditions. A social example can be extrapolated from this consideration: a social multicultural community such as the one in a small company in the R&D department able to face unpredictable events. For this reason, the spider silk can be considered as an anti-fragile material and a good example of social anti-fragile communities or behaviours. It is not so tough to resist to a specific external stress, but it is tough enough to learn from the environment, rearranging itself in function of the humidity and becoming able to resist at different external stresses. This last capacity comes from the possibility of the silk to adapt its structure: from a certain point of view, the silk learns and reorganizes its components once the environmental conditions taught to it in which conditions it is going to operate. The social communities should operate following an equivalent approach. Even if we are comparing a thinking being with an inanimate material, one more time the nature is teaching us how the forecast is much less important if we learn to be prepared to face all possible conditions. On the other hand, this is a prerogative of the wise people.
Figure 2: General possible olog scheme of the spider silk behaviour.
Figure 3: Scheme of the spider silk response.
By substituting the equation (3.3) in the equation (3.2), it is possible to obtain the equations (3.4).
Figure 4: Macrostructure where the β-sheets are linked one to each other (A). The structure, simplified with nodes and bonds, undergoes a relaxation when an external stress is applied (B).
J Y Wong, J Mcdonald, M Taylor-Pinney, D I Spivak, D L Kaplan, M J Buehler, Materials by Design: Merging Proteins and Music. J.Y. Wong, J. McDonald, M. Taylor-Pinney, D.I. Spivak, D.L. Kaplan, M.J. Buehler; Materials by Design: Merging Proteins and Music, PMC (2012) 488-495
Spider webs and silks. Fritz Vollrath, Scientific American. 52Vollrath, Fritz. Spider webs and silks. Scientific American, March 1992, p. 52.
Antifragile: Things that gain from disorder. N N Thaleb, Penguin Books LtdN.N. Thaleb, Antifragile: Things that gain from disorder, Penguin Books Ltd, 2013.
Spider webs and silks. F Vollrath, Sci. Am. 266F. Vollrath, Spider webs and silks, Sci. Am. 266 (1992) 70-76.
Mechanical properties of spider dragline silk: humidity, hysteresis, and relaxation. T Vehoff, A Glisović, H Schollmeyer, A Zippelius, T Salditt, Biophys. J. 93T. Vehoff, A. Glisović, H. Schollmeyer, A. Zippelius, T. Salditt, Me- chanical properties of spider dragline silk: humidity, hysteresis, and relaxation, Biophys. J. 93 (2007) 4425-4432.
New Secrets of Spider Silk: Exceptionally High Thermal Conductivity and Its Abnormal Change under Stretching. X Huang, G Liu, X Wang, DOI 10.1002/adma.201104668, Adv. Materials. X. Huang, G. Liu, X. Wang; New Secrets of Spider Silk: Exceptionally High Thermal Conductivity and Its Abnormal Change under Stretching, Adv. Materials, DOI 10.1002/adma.201104668 (2012)
The spider silk moves heat -New energy and fuel. X Wang, X. Wang, The spider silk moves heat -New energy and fuel, March 2012.
A history of thermodynamics: the doctrine of energy and entropy. I Muller, SpringerI. Muller, A history of thermodynamics: the doctrine of energy and entropy, Springer, 2006.
The elaborate structure of spider silk. L Rmer, T Scheibel, Prion. 2L. Rmer, T. Scheibel, The elaborate structure of spider silk, Prion. 2 (2008) 154-161.
The thermodynamics of protein unfolding. S Mulcahy, 4th Industrial Biochemistry Conference. S. Mulcahy, The thermodynamics of protein unfolding, 4th Industrial Biochemistry Conference, 2007.
R E Lyon, Thermodynamics of deformation, Doctoral Dissertations. 1896R.E. Lyon, Thermodynamics of deformation, Doctoral Dissertations 1896 -February 2014.
Physical Properties of Polymers Handbook. J E Mark, SpringerJ.E.Mark, Physical Properties of Polymers Handbook, Springer (2007)
Nonlinear material behaviour of spider silk yields robust webs. S W Cranford, A Tarakanova, N M Pugno, M J Buehler, Nature. 482S.W. Cranford, A. Tarakanova, N.M. Pugno, M.J. Buehler, Nonlin- ear material behaviour of spider silk yields robust webs, Nature 482 (2012) 72-76.
Hypotheses that correlate the sequence, structure, and mechanical properties of spider silk proteins. C Y Hayashi, N H Shipley, R V Lewis, Int. J. Biol. Macromol. 24C.Y. Hayashi, N.H. Shipley, R.V. Lewis, Hypotheses that correlate the sequence, structure, and mechanical properties of spider silk pro- teins, Int. J. Biol. Macromol. 24 (1999) 271-275.
Biology of spiders. R F Foelix, Oxford University Press3rd EdR.F. Foelix, Biology of spiders,3rd Ed., Oxford University Press, 2011.
B Wang, Study on corporate social responsibility based on the dissipative structure theory, International Conference on Education Technology and Information System (ICETIS). B. Wang, Study on corporate social responsibility based on the dissi- pative structure theory, International Conference on Education Tech- nology and Information System (ICETIS), 2013.
Dissipative structures, catastrophes, and pattern formation: a bifurcation analysis. G Nicolis, J F G Auchmuty, Proc. Nat. Acad. Sci. USA. 71G. Nicolis, J.F.G. Auchmuty, Dissipative structures, catastrophes, and pattern formation: a bifurcation analysis, Proc. Nat. Acad. Sci. USA 71 (1974) 2748-2751.
| []
|
[
"A Unified Approach to Configuration-based Dynamic Analysis of Quadcopters for Optimal Stability",
"A Unified Approach to Configuration-based Dynamic Analysis of Quadcopters for Optimal Stability"
]
| [
"Mojtaba Hedayatpour ",
"Mehran Mehrandezh ",
"Farrokh Janabi-Sharifi "
]
| []
| []
| A special type of rotary-wing Unmanned Aerial Vehicles (UAV), called Quadcopter have prevailed to the civilian use for the past decade. They have gained significant amount of attention within the UAV community for their redundancy and ease of control, despite the fact that they fall under an under-actuated system category. They come in a variety of configurations. The "+" and "x" configurations were introduced first. Literature pertinent to these two configurations is vast. However, in this paper, we define 6 additional possible configurations for a Quadcopter that can be built under either "+" or "x" setup. These configurations can be achieved by changing the angle that the axis of rotation for rotors make with the main body, i.e., fuselage. This would also change the location of the COM with respect to the propellers which can add to the overall stability. A comprehensive dynamic model for all these configurations is developed for the first time. | 10.1109/iros.2017.8206397 | [
"https://arxiv.org/pdf/1709.07936v1.pdf"
]
| 3,055,714 | 1709.07936 | 17d07942825a1e412fc1432f6df10bf473f0ae9b |
A Unified Approach to Configuration-based Dynamic Analysis of Quadcopters for Optimal Stability
22 Sep 2017
Mojtaba Hedayatpour
Mehran Mehrandezh
Farrokh Janabi-Sharifi
A Unified Approach to Configuration-based Dynamic Analysis of Quadcopters for Optimal Stability
22 Sep 2017
A special type of rotary-wing Unmanned Aerial Vehicles (UAV), called Quadcopter have prevailed to the civilian use for the past decade. They have gained significant amount of attention within the UAV community for their redundancy and ease of control, despite the fact that they fall under an under-actuated system category. They come in a variety of configurations. The "+" and "x" configurations were introduced first. Literature pertinent to these two configurations is vast. However, in this paper, we define 6 additional possible configurations for a Quadcopter that can be built under either "+" or "x" setup. These configurations can be achieved by changing the angle that the axis of rotation for rotors make with the main body, i.e., fuselage. This would also change the location of the COM with respect to the propellers which can add to the overall stability. A comprehensive dynamic model for all these configurations is developed for the first time.
I. INTRODUCTION
Multi-copter unmanned aerial vehicles (UAVs) with vertical take-off and landing (VTOL) capability are becoming more popular due to the ease of their operation. They come in a variety of shapes and configurations. Among them, quadcopters are gaining a lot of attention due to their simple structure and ease of control [1], quadcopters are used in application domains such as: aerial photogrammetry, aerial inspection of infrastructure, precision agriculture, immersive televising of sports events, and object delivery [1]- [3].
They usually come in two configurations, namely "+" and "x" configurations [4]. The main advantage of the "x" configuration is mainly due to its open frontal area that facilitates for employment of occlusion-free forwardlooking imaging sensors. Although, their dynamics would be different, not much attention has been given to their subtle differences within the research community.
Quadcopters with fixed rotors fall under the under-actuated and non-holonomic flying machine categories. Adoption of a larger number of rotors and/or adding the tilting effect on them for on-the-fly thrust vectoring can lead to fullyactuated holonomic machines at the cost of making them mechanically more complicated and less power efficient.
There have been some studies on: (i) building UAVs using variable-pitch blades [5]; (ii) configuring rotors to 1 yield non-parallel thrust vectors [6]- [8] and (iii) designing multi-copter UAVs with rotors that can tilt on the fly [9] (iv) building multi-copter UAVs with rotors fixedly mounted with an angle with respect to the fuselage [10]. However, very little attention has been given to calculating the optimal configuration in quadcopters with fixed rotors for highest static and dynamic stability. In this paper, we attempt to look at all possible controllable configurations for a quadcopter with fixed rotors and analyze their stability attributes in a quantitative fashion for the first time. We also provide a unified dynamic model for all the possible configurations from which special cases can be deducted. Literature pertinent to the mathematical modeling of quadcopters and their flight control is vast, [2]- [8]. In our derivation, we assume a full model of the gyroscopic moments for the first time. More specifically, we derive the dynamic model of quadcopters assuming that: (A1) the thrust vector for each rotor would make a non-zero angle with the vertical axis (i.e., the sagittal suture) of the quadcopter; and (A2) the center-of-mass (COM) of the quadcopter does not lie on the same plane where the center-of-mass of all motors lie on (blue plane shown in Fig. 1). However, we still assume that the quadcopter under study has two axes of congruency (see Fig. 1).
The angle between the thrust vector of each rotor and the vertical axis of the fuselage is further divided into: (1) the dihedral angle, and (2) twist (i.e., lateral tilting) angle ( Figs. 2 and 3). We assume that the central hub of all four blades lie on a flat horizontal plane (blue plane in Fig. 1), called Fig. 1. Quadcopter in "+" configuration. Body frame is shown in blue and is attached to the center-of-mass of the quadcopter. A frame, shown in blue, is attached to each motor in order to determine orientation of the motors with respect to body frame. Motors are located at distance L and d from z-axis and x-y plane of the body frame respectively. flat plane from this point on, from which the location of the COM is referenced (i.e., the COM can be either above, below, or right on this plane).
The dynamic model developed in this paper will, therefore, have three additional parameters in comparison to that of the flat quadcopter (the term we use for the original quadcopter, where the COM and the rotor hubs all lie on the same plane): the dihedral angle β_i, the twist angle α_i, and the distance d between the COM and the flat plane (note that d can take positive and negative values, measured along the z-direction of the body frame). In the existing flat model of quadcopters, β_i = α_i = d = 0.
We will show that the flat model of quadcopters is not the most statically and dynamically stable configuration. For instance, by adding a dihedral angle to the blades' thrust vectors, one can achieve better rolling stability in forward flight. Also, twist angles in the blades yield faster yaw dynamics without compromising the overall stability of the system. Furthermore, with a positive value of d (i.e., positioning the COM of the quadcopter below the flat plane), one can achieve an open-loop roll/pitch stable configuration.
We use Newton's method for deriving the dynamic model of the quadcopter. Also, without loss of generality, we assume a "+" configuration. The rest of the paper is organized as follows: in Section II, the derivation of the equations of motion is presented. In Section III, the effects of having dihedral and twist angles are given. In Section IV, a stability analysis for six different configurations is provided and compared with that of the flat-model quadcopter. Conclusions and future work are presented in Section V.
II. EQUATIONS OF MOTION
A. Notation & Parameters
Since there are many rotation matrices involved in this modeling, the straight boldface letter $\mathbf{R}$ is reserved exclusively for rotation matrices. The rotation from frame A to frame B is expressed as ${}^B\mathbf{R}_A$. Also, ${}^B\boldsymbol{\omega}_{P_i,I}$ indicates that $\boldsymbol{\omega}$ belongs to the i-th propeller with respect to an inertial frame I and is expressed in the body frame B. $\mathbf{R}_A(\theta)$ represents a rotation matrix about axis A by angle $\theta$.
B. Frames & Transformations
The body frame $B_O\!-\!B_xB_yB_z$ (red color in Fig. 1) is attached to the center of mass of the vehicle. Four frames named $M_{iO}\!-\!M_{ix}M_{iy}M_{iz}$ (blue color in Fig. 1) are attached to the motors. The motors turn with angular velocities $\dot{\gamma}_i$ ($i = 1, 2, \dots, 4$) about the axis $z_{M_i}$. The position of the vehicle is expressed in the inertial frame I.
The orientation of the body frame with respect to the inertial frame can be captured by the rotation matrix from the body frame to the inertial frame, ${}^I\mathbf{R}_B$. This rotation matrix is a function of time and its evolution in time can be obtained as follows [12]:

$${}^I\dot{\mathbf{R}}_B = {}^I\mathbf{R}_B\, S({}^B\boldsymbol{\omega}_{B,I}), \tag{1}$$

where $S({}^B\boldsymbol{\omega}_{B,I})$ is the skew-symmetric matrix of the angular velocity of the body with respect to the inertial frame expressed in the body frame, ${}^B\boldsymbol{\omega}_{B,I} = [p, q, r]^T$. Likewise, the orientation of each motor frame $M_i$ can be obtained with respect to the body frame. First, the position of the origin of frame $M_i$ with respect to the body frame can be written as:
$${}^BO_{M_i} = \mathbf{R}_{z_B}\!\left((i-1)\frac{\pi}{2}\right)\begin{bmatrix} L \\ 0 \\ d \end{bmatrix}, \quad (i = 1, 2, \dots, 4), \tag{2}$$

Since we are using a quadcopter in "+" configuration, we assume that the motors are evenly distributed by angle $\pi/2$ about the axis $z_B$. Finally, the transformation from frame $M_i$ to the body frame is obtained as follows:

$${}^B\mathbf{R}_{M_i} = \mathbf{R}_z\!\left((i-1)\frac{\pi}{2}\right)\mathbf{R}_y(\beta_i)\,\mathbf{R}_x(\alpha_i), \quad (i = 1, 2, \dots, 4), \tag{3}$$
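To make the frame conventions of Eqs. (2)-(3) concrete, the following is a small illustrative Python/NumPy sketch (not part of the original paper; all function and variable names are our own) that builds the motor positions and motor-to-body rotations for a "+" configuration.

```python
import numpy as np

def Rx(a):
    """Rotation about the x-axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):
    """Rotation about the y-axis by angle b (rad)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(g):
    """Rotation about the z-axis by angle g (rad)."""
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def motor_frames(L, d, alpha, beta):
    """Motor origins (Eq. 2) and motor-to-body rotations (Eq. 3).
    alpha, beta: length-4 sequences of twist and dihedral angles."""
    origins, rotations = [], []
    for i in range(4):                        # paper's i = 1..4 maps to 0..3 here
        phi = i * np.pi / 2                   # motors evenly spaced about z_B
        origins.append(Rz(phi) @ np.array([L, 0.0, d]))          # Eq. (2)
        rotations.append(Rz(phi) @ Ry(beta[i]) @ Rx(alpha[i]))   # Eq. (3)
    return origins, rotations
```

Calling, for example, motor_frames(0.2, 0.03, [0.1, -0.1, 0.1, -0.1], [-0.05]*4) would return the geometry of a tilted-rotor configuration, from which the thrust directions ${}^B\mathbf{R}_{M_i}[0,0,1]^T$ can be read off.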
C. Equations of Motion
The quadcopter consists of several rigid bodies and is considered to be symmetric about its axes of rotation. Because of this symmetry, the inertia tensor of the vehicle, $I_B$, is diagonal and is expressed in the body frame. We also assume that the moment of inertia of the propellers, $I_p$, is very small compared to $I_B$. We can neglect the drag force in the angular motion of the body by assuming very small angular velocities. Considering these simplifying assumptions, the rotational motion is governed by the following equation:
$$\boldsymbol{\tau} = I_B\,{}^B\dot{\boldsymbol{\omega}}_{B,I} + {}^B\boldsymbol{\omega}_{B,I} \times \left(I_B\,{}^B\boldsymbol{\omega}_{B,I} + \sum_{i=1}^{4} I_p\,\boldsymbol{\omega}_{p_i}\right), \tag{4}$$

where $\boldsymbol{\omega}_{p_i}$ is the angular velocity of the propeller with respect to the inertial frame as expressed in the body frame, and $\boldsymbol{\tau}$ is the torque generated by the thrust forces and the reaction from the motors, expressed in the body frame. The thrust force and reaction torque of each propeller $P_i$ in the frame $M_i$ can be approximated by the following formulas [13]:

$${}^{M_i}F_{P_i} = [0,\ 0,\ k_f\dot{\gamma}_i^2]^T, \tag{5}$$
$${}^{M_i}\boldsymbol{\tau}_{P_i} = (-1)^{i+1} k_t\,{}^{M_i}F_{P_i}, \tag{6}$$
Using (5) and (6), we have:
$$\boldsymbol{\tau} = \sum_{i=1}^{4}\left({}^BO_{M_i} \times {}^B\mathbf{R}_{M_i}\,{}^{M_i}F_{P_i} + {}^B\mathbf{R}_{M_i}\,{}^{M_i}\boldsymbol{\tau}_{P_i}\right), \tag{7}$$
The position of the vehicle in the inertial frame is given by the Cartesian coordinates $s = [s_1, s_2, s_3]^T$. Finally, the equation governing the translational motion can be written as follows:

$$m\ddot{s} = {}^I\mathbf{R}_B \sum_{i=1}^{4}\left({}^B\mathbf{R}_{M_i}\,{}^{M_i}F_{P_i}\right) + m\,g, \tag{8}$$
where m is total mass of the vehicle and g is gravitational acceleration vector expressed in the inertial frame.
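Continuing the illustrative sketch above (again our own naming and sign conventions, not the paper's code), the per-rotor thrusts and reaction torques of Eqs. (5)-(6) can be assembled into the total body torque of Eq. (7) and the translational acceleration of Eq. (8); the gravity vector below assumes a z-up inertial frame.

```python
import numpy as np

def wrench_and_accel(gamma_dot, k_f, k_t, m, I_R_B, origins, rotations):
    """Total body torque (Eq. 7) and inertial acceleration (Eq. 8).
    gamma_dot: length-4 rotor speeds; origins/rotations from motor_frames()."""
    g = np.array([0.0, 0.0, -9.81])    # assumed z-up inertial frame
    tau = np.zeros(3)
    thrust_body = np.zeros(3)
    for i in range(4):
        F_prop = np.array([0.0, 0.0, k_f * gamma_dot[i]**2])    # Eq. (5), in frame M_i
        tau_prop = ((-1)**i) * k_t * F_prop                     # Eq. (6): (-1)^(i+1), paper's i starts at 1
        F_i = rotations[i] @ F_prop                             # thrust expressed in the body frame
        tau += np.cross(origins[i], F_i) + rotations[i] @ tau_prop   # Eq. (7)
        thrust_body += F_i
    s_ddot = (I_R_B @ thrust_body) / m + g                      # Eq. (8)
    return tau, s_ddot
```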
III. EFFECTS OF DIHEDRAL AND TWIST ANGLES
In this part, an aerodynamic phenomenon called the dihedral effect, which is very common in fixed-wing aircraft, is introduced [14]. As shown in Fig. 4, when the quadcopter is hovering, the local air velocity relative to the blade is equal to $\dot{\gamma}_i C_{blade}(r)$, where $C_{blade}(r)$ is the distance from the blade element (airfoil) to the shaft of the motor. At hover it is assumed that this is the only relative velocity between the blade and the air. In this case, the angle of attack (AOA) of the blade, $\Theta_i$, is defined as the angle between the chordline of the blade element and the velocity vector of the airflow over the blade, shown in blue in Fig. 4.
Moving the motor up or down will generate an additional relative velocity between the blade and the airflow, which in this case is parallel to the angular velocity of the propeller and is shown in red in Fig. 4. It should be noted that in this figure, for visualization purposes and to save space, the dihedral effect is shown on the right side of the figure when the motor is moving down, and on the left side when the motor is moving up. The resultant of this additional airflow velocity (due to the translational movement of the motor, shown in red in Fig. 4) with the linear velocity of the airflow at each blade element due to the rotation of the propeller (shown in blue in Fig. 4) is the total airflow velocity relative to the blade, ${}^{M_i}V_{Resultant}$ (shown in green in Fig. 4).
Fig. 4. Dihedral effect. On top is a propeller and on the bottom is a front view of it. On the left is the case when the motor moves up; on the right is the case when the motor moves down.
If the motor is moving down (see the right side of Fig. 4), the AOA increases and, as a result, the thrust force ${}^{M_i}F_{P_i}$ increases [14]. On the other hand, if the motor is moving up (see the left side of Fig. 4), the resultant velocity makes a smaller AOA than when the motor was at rest, and as a result the thrust force decreases. This effect is called the "dihedral effect". In summary:
• Any airflow with positive (negative) z-component velocity in frame $M_i$ increases (decreases) the AOA, which increases (decreases) the thrust force.

Now consider a quadcopter in 2D motion. In Fig. 5, a configuration with no twist angle ($\alpha_i = 0$) and constant dihedral angle $\beta_i = b$ (with $b$ negative) is shown. Suppose the vehicle is pitching down and moving to the left, which is equivalent to having an airflow with horizontal velocity to the right, as shown in blue in Fig. 5. According to the dihedral effect, for the left motor there will be an airflow with positive z-component in the frame $M_i$, as shown in green, and similarly, for the right motor there will be an airflow with negative z-component velocity in the corresponding frame $M_i$. As a result, the AOA of the left motor increases and thus its thrust force increases, while the AOA of the right motor decreases and its thrust force decreases as well. This interesting effect can make the vehicle stable in translational motion. As the vehicle moves to the left, due to the difference between the thrusts of the left and right motors, a moment $q'$ is generated that acts like damping in the system, resisting the pitch motion and driving the pitch angle back to zero.
In order to derive equations for this force and moment, first we find the equation to calculate the thrust force as a function of AOA as follows [14]:
$${}^{M_i}F_{P_i} = \frac{1}{2}\int_{0}^{C_{blade}} \rho\, v^2(r)\, C_l(r)\, c(r)\, dr, \tag{9}$$
where $\rho$ is the air density, $v(r)$ is the linear velocity of the airflow due to the rotation of the blade at distance $r$ from the motor shaft, $C_l(r)$ is the lift coefficient of the blade element at distance $r$ from the motor shaft, and $c(r)$ is the chord of the blade element. In low-speed flight, $C_l$ changes linearly with the AOA [14], which can be written as:

$$\frac{\Delta C_l}{\Delta \Theta_i} = \sigma, \tag{10}$$
The linear velocity of the i-th motor with respect to the inertial frame, expressed in frame $M_i$, can be written as:

$${}^{M_i}\dot{O}_{M_i,I} = {}^{M_i}[\dot{O}_{M_i,x},\ \dot{O}_{M_i,y},\ \dot{O}_{M_i,z}]^T, \tag{11}$$

If $\dot{O}_{M_i,z}$ in (11) is positive, the AOA and thrust force will decrease, and if it is negative, we will have an increase in the AOA and thrust force accordingly. Using Fig. 5 and trigonometric relations, we can find the change in the AOA as follows:

$$\Delta\Theta_i = \Theta_i - \Theta'_i = \arctan\!\left(\frac{\dot{O}_{M_i,z}}{\dot{\gamma}_i\, r}\right), \tag{12}$$
Combining (9)-(12), assuming hover conditions, $\dot{O}_{M_i,z} \ll |\dot{\gamma}_i| r$, and a constant chord along the blade results in the following:

$${}^{M_i}\Delta F_{P_i} = \left[0,\ 0,\ -\frac{1}{4}\, c\,\sigma\,\rho\,\dot{O}_{M_i,z}\,|\dot{\gamma}_i|\,C_{blade}^2\right]^T, \tag{13}$$
where the negative sign in the equation, together with the sign of $\dot{O}_{M_i,z}$, determines whether the change in thrust is negative or positive. Near hover conditions, if we consider $\dot{\gamma}_i$ to be constant, then (13) can be simplified further as follows:

$${}^{M_i}\Delta F_{P_i} = [0,\ 0,\ -\zeta\,\dot{O}_{M_i,z}]^T, \tag{14}$$
where $\zeta$ is a constant and a function of the physical parameters of the blade and the airflow. Equation (14) is called the "pitch damper", and likewise we will have a "roll damper". The effect of $\alpha_i$ also falls in the category of dihedral effects. As shown in Fig. 6, to damp yaw motion we need to choose $\alpha_{1,3} > 0$ and $\alpha_{2,4} < 0$. This is an interesting case in which the dihedral effect damps yaw motion. To better visualize this effect, assume that the quadcopter has a positive rotation about the axis $z_B$. In this case, due to the dihedral effect, the AOA of propellers 2 and 4 decreases since there is an airflow with a negative z-component of its linear velocity in the corresponding frames $M_{2,4}$. On the other hand, there will be an airflow with a positive z-component of its linear velocity in the corresponding frames $M_{1,3}$. This phenomenon generates a yaw moment (shown in green) that resists the yaw motion r. Note that using $\alpha_{1,3} < 0$ and $\alpha_{2,4} > 0$ will have an adverse effect on yaw motion and could make it unstable.
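As a rough numerical illustration (all values below are arbitrary assumptions, not taken from the paper), the damper constant $\zeta$ of Eq. (14) can be read off from the near-hover linearisation in Eq. (13) as $\zeta = \tfrac{1}{4} c\,\sigma\,\rho\,|\dot{\gamma}_i|\,C_{blade}^2$:

```python
def damper_coefficient(c, sigma, rho, gamma_dot, C_blade):
    """zeta of Eq. (14), from the near-hover form of Eq. (13)."""
    return 0.25 * c * sigma * rho * abs(gamma_dot) * C_blade**2

# Illustrative (made-up) blade parameters near hover
zeta = damper_coefficient(c=0.02, sigma=5.7, rho=1.225,
                          gamma_dot=800.0, C_blade=0.12)
dF_z = -zeta * 0.5   # Eq. (14): thrust change for a 0.5 m/s upward motor velocity
print(zeta, dF_z)
```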
Finally, we will have changes in motor thrusts according to the following equations:
$${}^{M_i}\Delta F_{P_i,roll} = [0,\ 0,\ -\zeta_{roll}\,\dot{O}_{M_i,z}]^T, \tag{15}$$
$${}^{M_i}\Delta F_{P_i,pitch} = [0,\ 0,\ -\zeta_{pitch}\,\dot{O}_{M_i,z}]^T, \tag{16}$$
$${}^{M_i}\Delta F_{P_i,yaw} = [0,\ 0,\ -\zeta_{yaw}\,\dot{O}_{M_i,z}]^T, \tag{17}$$
$${}^{M_i}\Delta F_{P_i} = {}^{M_i}\Delta F_{P_i,roll} + {}^{M_i}\Delta F_{P_i,pitch} + {}^{M_i}\Delta F_{P_i,yaw}, \tag{18}$$
These changes in the thrust force of each motor will affect the translational motion as well as the rotational motion by generating a moment about the COM of the vehicle.
IV. STABILITY ANALYSIS
In this section, we expand the equations for yaw motion and present the effects of the twist angle on yaw stability, followed by a discussion of the effects of the dihedral angle on roll and pitch motion and of the effects of the location of the center of mass on the overall stability of the vehicle. For brevity, the cross-coupling of the angular momentum of the propellers is not presented in this section. At the end, six different configurations using the aforementioned parameters will be compared in terms of stability and maneuverability.
A. Effect of Twist Angle in Yaw Motion
For simplicity assume that β i = 0, α 1,3 > 0 and α 2,4 < 0. Also all of these angles are kept constant during the analysis. Here d is positive meaning that the center of mass is located below the flat plane. Using (4) and (7), the equations governing the rotational motion can be written as follows:
$$\boldsymbol{\tau} = \begin{bmatrix} I_{xx}\dot{p} \\ I_{yy}\dot{q} \\ I_{zz}\dot{r} \end{bmatrix} + \begin{bmatrix} (I_{zz}-I_{yy})\,qr \\ (I_{xx}-I_{zz})\,pr \\ 0 \end{bmatrix}, \tag{19}$$

where

$$\boldsymbol{\tau} = \begin{bmatrix} k_f d s_a(\dot{\gamma}_1^2 - \dot{\gamma}_3^2) + (k_f L c_a + k_t k_f s_a)(\dot{\gamma}_2^2 - \dot{\gamma}_4^2) \\ k_f L c_a(\dot{\gamma}_4^2 - \dot{\gamma}_2^2) + (k_t k_f s_a + k_f d s_a)(\dot{\gamma}_3^2 - \dot{\gamma}_1^2) \\ (k_t k_f c_a - k_f L s_a)(\dot{\gamma}_1^2 - \dot{\gamma}_2^2 + \dot{\gamma}_3^2 - \dot{\gamma}_4^2) \end{bmatrix}$$
where $\boldsymbol{\tau}$ is the external torque generated by the motors to control the attitude of the vehicle and $\dot{\gamma}_i$ is the rotational speed (RPM) of the motors. Also, s and c represent the sine and cosine functions. It can be shown that $\alpha_i = 0$ yields the equations of motion of a regular quadcopter without tilting angles (details are omitted for brevity). From (19), in a pure yaw motion we have the following equation:

$$\tau_{yaw} = I_{zz}\dot{r}, \tag{20}$$

Fig. 6. Quadcopter having only twist angles $\alpha_{1,3} > 0$ and $\alpha_{2,4} < 0$. The vehicle is undergoing a pure yaw motion r, and the dihedral effect generates a counteracting yaw moment that damps the yaw motion.
Assuming the motors' input for yaw motion to be $u = \dot{\gamma}_1^2 - \dot{\gamma}_2^2 + \dot{\gamma}_3^2 - \dot{\gamma}_4^2$, we can rewrite (19) as follows:

$$\dot{r} = \frac{(k_t k_f c_a - k_f L s_a)}{I_{zz}}\, u, \tag{21}$$
Taking the Laplace transform of (21), we can derive the yaw-motion transfer function as follows:

$$\frac{r(s)}{u(s)} = \frac{C_1}{s}, \tag{22}$$
where $C_1 = \frac{k_t k_f c_a - k_f L s_a}{I_{zz}}$. Using (17), we can add the effects of the twist angle into (21). Suppose the vehicle is going through a pure yaw motion r, as shown in Fig. 6. This yaw motion will generate a local airflow over each blade with linear velocity equal to:

$${}^Bv_{P_i} = [0,\ 0,\ r]^T \times {}^BO_{M_i}, \tag{23}$$
Using (17) and (23), one can calculate the change in thrust force for each motor:
$${}^{M_i}\Delta F_{P_i,twist} = -\zeta_{yaw}\,{}^B\mathbf{R}_{M_i}^T\,{}^Bv_{P_i}, \tag{24}$$
For all motors, torque due to (24) can be calculated as:
$$\boldsymbol{\tau}_{twist} = \sum_{i=1}^{4} {}^BO_{M_i} \times {}^B\mathbf{R}_{M_i}\,{}^{M_i}\Delta F_{P_i,twist}, \tag{25}$$
As shown in Fig. 6, any yaw motion r will generate an airflow with a negative z-component of its linear velocity in the frames $M_{2,4}$. Likewise, it will generate an airflow with a positive z-component of its linear velocity in the frames $M_{1,3}$. As a result, based on (25), a torque will be generated that counteracts the yaw motion r. Considering the simplifying assumptions made earlier in this section and using (23), we can calculate (24) as follows:
$${}^{M_i}\Delta F_{P_i,twist} = [0,\ 0,\ (-1)^{i+1}\zeta_{yaw} L s_a\, r]^T, \tag{26}$$
Using (25)-(26), we can write:
$$\boldsymbol{\tau}_{twist} = [0,\ 0,\ -4\zeta_{yaw} L^2 s_a^2\, r]^T, \tag{27}$$
Now, using (20) and the third component of (27) namely τ twist,yaw , we can add the effects of twist angle into equation (20) as follows:
$$\tau_{yaw} + \tau_{twist,yaw} = I_{zz}\dot{r}, \tag{28}$$
$$C_1 u = \dot{r} + \frac{\zeta'_{yaw}}{I_{zz}}\, r, \tag{29}$$
where $\zeta'_{yaw} = 4\zeta_{yaw} L^2 s_a^2 > 0$. Taking the Laplace transform of (29) and simplifying yields:

$$\frac{r(s)}{u(s)} = \frac{C_1}{s + \zeta'_{yaw}/I_{zz}}, \tag{30}$$
Comparing (30) with (22) shows that the vehicle has indeed become more stable: transfer function (30) has a single negative pole, indicating asymptotic stability in yaw motion. In addition to stability, this configuration helps the vehicle yaw faster. With a twist angle, a component of the thrust force is used to generate yaw motion, which can be larger and easier to generate than in regular quadcopters, which yaw using only the reaction torques of the motors. Note that $\alpha_{1,3} < 0$ and $\alpha_{2,4} > 0$ have an adverse effect on stability and will destabilize the system. Similarly, it can be shown that the same phenomenon exists in roll and pitch motion for negative values of the dihedral angle $\beta_i$, and similar transfer functions can be derived (details are not provided for brevity). The effect of the location of the center of mass is hidden in the value of $\zeta'$ for roll and pitch motion. It can be shown that for $d > 0$, as d increases, the pole of the transfer function moves to the left in the complex plane, which increases stability and decreases maneuverability. Similarly, as d decreases (even to negative values), the pole of the transfer function moves to the right in the complex plane, so stability decreases and maneuverability increases.
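To visualise the stabilising effect of the twist angles, the sketch below (illustrative only; the numerical values of $C_1$ and $\zeta'_{yaw}/I_{zz}$ are assumptions, not from the paper) compares the impulse response of the undamped yaw model of Eq. (22) with the damped model of Eq. (30).

```python
import numpy as np

C1 = 2.0        # assumed value of (k_t k_f c_a - k_f L s_a) / I_zz
zeta_p = 0.15   # assumed value of zeta'_yaw / I_zz (the pole added by the twist angles)

t = np.linspace(0.0, 10.0, 1000)
r_flat = C1 * np.ones_like(t)        # impulse response of C1/s: the yaw rate persists
r_twist = C1 * np.exp(-zeta_p * t)   # impulse response of C1/(s + zeta'_yaw/I_zz): it decays

# With twist angles the yaw rate excited by an impulsive input dies out
# (stable pole at s = -zeta'_yaw/I_zz); the flat configuration keeps it (pole at the origin).
print(r_flat[-1], r_twist[-1])
```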
B. Comparison of Six Specific Configurations Based on Dihedral and Twist Angles
In this section, six different configurations based on the dihedral and twist angles are proposed, followed by a comparison in terms of stability and maneuverability. A regular quadcopter with all motor angles set to zero is taken as the reference for comparison. The sign of the dihedral and twist angles of each motor determines the degree of stability or maneuverability of each configuration. The following list ranks these configurations from the most stable to the least stable (for simplicity, we assume that d is positive for all configurations):
1) $\beta_i < 0$, $\alpha_{1,3} > 0$ and $\alpha_{2,4} < 0$
2) $\beta_i < 0$, $\alpha_i = 0$
3) $\beta_i = 0$, $\alpha_{1,3} > 0$ and $\alpha_{2,4} < 0$
4) $\beta_i = 0$, $\alpha_i = 0$
5) $\beta_i = 0$, $\alpha_{1,3} < 0$ and $\alpha_{2,4} > 0$
6) $\beta_i > 0$, $\alpha_{1,3} < 0$ and $\alpha_{2,4} > 0$
In configuration (1), the dihedral and twist angles are in favor of stability, and the three dampers for roll, pitch and yaw motion are all active and help stabilize the rotational motion. In configuration (2), the twist angles are all set to zero, meaning that no damping (due to twist angles) exists in yaw motion and only the roll and pitch dampers are active, which results in a vehicle less stable than configuration (1). In configuration (3), only the yaw damper is active, and in configuration (4) all dihedral and twist angles are set to zero, representing a regular quadcopter without tilted motors. In configuration (5), the twist angles have an adverse effect compared to configuration (1), meaning that the twist angles destabilize the yaw motion of the quadcopter.
Note that having an adverse effect on stability means that the poles of the transfer function move rightward in the complex plane, and in some cases they may fall in the right half of the complex plane. Finally, in configuration (6), all dihedral and twist angles have an adverse effect with regard to the stability of the system. However, in configurations (5) and (6) the vehicle has the highest maneuverability compared to the other configurations. In summary, depending on the application and the environment in which the quadcopter operates, choosing the best configuration and optimized values for the dihedral and twist angles will be a trade-off between stability and maneuverability.
Fig. 7. Quadcopter having both dihedral and twist angles. In this configuration $\beta_i < 0$, $\alpha_{1,3} > 0$ and $\alpha_{2,4} < 0$, which renders the most stable configuration considering the tilting angles of the motors.
V. CONCLUSIONS
The equations of motion of a quadcopter with tilted motors and a center-of-mass offset in the z-direction of the body frame were derived. The effects of the tilting angles (dihedral and twist angles) on the thrust generated by the propellers, and consequently on the stability of the system, were then introduced. Transfer functions for pure yaw motion were derived, followed by a stability analysis and the formulation of a yaw damper produced by adding twist angles to the motors for a specific configuration. Six different configurations based on these angles were introduced and ranked based on stability and maneuverability. One of these configurations led to the most stable design (Fig. 7), with intrinsic damping in roll, pitch and yaw motion. The formulation of these dampers was presented, followed by a stability analysis of the yaw motion.
The dampers in the system are favorable for applications where the vehicle hovers, such as imaging, surveillance and monitoring. They are unfavorable when the vehicle is in motion and maneuverability is needed. As seen in Section IV, both stability and maneuverability can be achieved using different configurations. As future work, a reconfigurable system can be designed that transforms from the most stable configuration to the most maneuverable configuration and vice versa, as the situation requires. Such a vehicle would be able to change its dihedral and twist angles on the fly in order to transform into the required configuration.
Another possible direction for future work is to find optimized values for the dihedral and twist angles. Two different optimization problems can be defined: 1) optimizing the angles for the most stable configuration; and 2) optimizing the angles for the most maneuverable configuration. Finally, verifying the results of this paper experimentally is also left for future work.
Fig. 2. Twist angle $\alpha_1$ about the x-axis of the motor frame $M_1$.
Fig. 3. Dihedral angle $\beta_1$ about the y-axis of the motor frame $M_1$.
Fig. 5. Dihedral effect in 2D motion of the quadcopter. The quadcopter is pitching down and moving to the left; the dihedral effect generates the moment $q'$ and acts like damping in the system.
REFERENCES
[1] J. Thomas, J. Polin, K. Sreenath, and V. Kumar, "Avian-inspired grasping for quadcopter micro UAVs," Bioinspiration & Biomimetics, vol. 9, pp. 25010-25019, 2014.
[2] P. Tokekar, J. Vander Hook, D. Mulla, and V. Isler, "Sensor planning for a symbiotic UAV and UGV system for precision agriculture," IEEE Transactions on Robotics, vol. 32, pp. 1498-1511, 2016.
[3] D. Mellinger, N. Michael, and V. Kumar, "Trajectory generation and control for precise aggressive maneuvers with quadcopters," in Experimental Robotics (O. Khatib, V. Kumar, and G. Sukhatme, eds.), Springer Berlin Heidelberg, 2014, pp. 361-373.
[4] M. Muller, S. Lupashin, and R. D'Andrea, "Quadrocopter ball juggling," in Proc. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 2011, pp. 5113-5120.
[5] M. Cutler and J. How, "Actuator constrained trajectory generation and control for variable-pitch quadcopters," in Proc. AIAA Guidance, Navigation, and Control Conference, Minneapolis, MN, USA, 2012, pp. 1-15.
[6] D. Brescianini and R. D'Andrea, "Design, modeling and control of an omni-directional aerial vehicle," in Proc. 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016, pp. 3261-3266.
[7] D. Falanga, E. Mueggler, M. Faessler, and D. Scaramuzza, "Aggressive quadrotor flight through narrow gaps with onboard sensing and computing using active vision," in Proc. IEEE International Conference on Robotics and Automation, Singapore, 2017.
[8] S. Rajappa, M. Ryll, H. H. Bülthoff, and A. Franchi, "Modeling, control and design optimization for a fully-actuated hexarotor aerial vehicle with tilted propellers," in Proc. 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, 2015, pp. 4006-4013.
[9] M. Ryll, H. H. Bülthoff, and P. R. Giordano, "Modeling and control of a quadcopter UAV with tilting propellers," in Proc. 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 2012, pp. 4606-4613.
[10] CyPhy Works, https://www.cyphyworks.com/products/parc/
[11] S. Bouabdallah and R. Siegwart, "Full control of a quadcopter," in Proc. 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 2007, pp. 153-158.
[12] P. H. Zipfel, Modeling and Simulation of Aerospace Vehicle Dynamics, 3rd ed. Reston, VA: American Institute of Aeronautics and Astronautics (AIAA), 2014, pp. 87-127.
[13] P. Pounds, R. Mahony, P. Hynes, and J. Roberts, "Design of a four-rotor aerial robot," in Proc. Australasian Conference on Robotics and Automation, Wellington, New Zealand, 2002, pp. 145-150.
[14] J. D. Anderson, Fundamentals of Aerodynamics, 5th ed. United States: McGraw Hill Higher Education, 2016, pp. 194-228.
| []
|
[
"UNIFORMIZATION OF SIMPLY CONNECTED FINITE TYPE LOG-RIEMANN SURFACES",
"UNIFORMIZATION OF SIMPLY CONNECTED FINITE TYPE LOG-RIEMANN SURFACES"
]
| [
"Kingshook Biswas ",
"Ricardo Perez-Marco "
]
| []
| []
| We consider simply connected log-Riemann surfaces with a finite number of infinite order ramification points. We prove that these surfaces are parabolic with uniformizations given by entire functions of the form F (z) = Q(z)e P (z) dz where P, Q are polynomials of degrees equal to the number of infinite and finite order ramification points respectively. | 10.1090/conm/639/12827 | [
"https://arxiv.org/pdf/1011.0812v1.pdf"
]
| 119,133,406 | 1011.0812 | 020dda202d16c2d636dfbf359ef05b4cd5aae8ef |
UNIFORMIZATION OF SIMPLY CONNECTED FINITE TYPE LOG-RIEMANN SURFACES
3 Nov 2010
Kingshook Biswas
Ricardo Perez-Marco
We consider simply connected log-Riemann surfaces with a finite number of infinite order ramification points. We prove that these surfaces are parabolic with uniformizations given by entire functions of the form $F(z) = \int Q(z)e^{P(z)}\,dz$ where P, Q are polynomials of degrees equal to the number of infinite and finite order ramification points respectively.
Introduction
In [BPM10a] we defined the notion of log-Riemann surface as a Riemann surface S equipped with a local diffeomorphism $\pi : S \to \mathbb{C}$ such that the set of points R added in the completion $S^* = S \sqcup R$ of S with respect to the flat metric on S induced by π is discrete. The mapping π extends to the points $p \in R$, and is a covering of a punctured neighbourhood of p onto a punctured disk in $\mathbb{C}$; the point p is called a ramification point of S of order equal to the degree of the covering π near p. The finite order ramification points may be added to S to give a Riemann surface $S^\times$, called the finite completion of S. In this article we are interested in log-Riemann surfaces of finite type, i.e. those with finitely many ramification points and finitely generated fundamental group, in particular simply connected log-Riemann surfaces of finite type. We prove the following:
Theorem 1.1. Let S be a log-Riemann surface with $d_1 < +\infty$ infinite order ramification points and $d_2 < +\infty$ finite order ramification points (counted with multiplicity), such that the finite completion $S^\times$ is simply connected. Then S is biholomorphic to $\mathbb{C}$ and the uniformization $\tilde{F} : \mathbb{C} \to S^\times$ is given by an entire function $F = \pi \circ \tilde{F}$ of the form $F(z) = \int Q(z)e^{P(z)}\,dz$ where P, Q are polynomials of degrees $d_1$, $d_2$ respectively.
Conversely we have:
Theorem 1.2. Let $P, Q \in \mathbb{C}[z]$ be polynomials of degrees $d_1$, $d_2$ and F an entire function of the form $F(z) = \int Q(z)e^{P(z)}\,dz$. Then there exists a log-Riemann surface S with $d_1$ infinite order ramification points and $d_2$ finite order ramification points (counted with multiplicity) such that F lifts to a biholomorphism $\tilde{F} : \mathbb{C} \to S^\times$.
The entire functions of the above form were first studied by Nevanlinna [Nev32], who essentially proved Theorem 1.1, although his proof is in the classical language. The uniformization theorem was also rediscovered by M. Taniguchi [Tan01] in the form of a representation theorem for a class of entire functions defined by him called "structurally finite entire functions". The techniques we use are very different and adapted to the more general context of log-Riemann surfaces. In a forthcoming article [BPM10b] we use these techniques to generalize the above theorems to a correspondence between higher genus finite type log-Riemann surfaces and holomorphic differentials on punctured Riemann surfaces with isolated singularities of "exponential type" at the punctures (locally of the form ge h dz where g, h are germs meromorphic at the puncture).
The proof of Theorem 1.1 proceeds in outline as follows: we approximate S by simply connected log-Riemann surfaces $S_n^\times$ with finitely many ramification points, all of finite order, such that $d_1$ ramification points of $S_n^\times$ converge to infinite order ramification points. The surfaces $S_n$ converge to S in the sense of Caratheodory (as defined in [BPM10a]) and, by the Caratheodory convergence theorem proved in [BPM10a], the uniformizations $\tilde{F}_n$ of $S_n$ converge to the uniformization $\tilde{F}$ of S. The uniformizations $\tilde{F}_n$ are the lifts of polynomials $F_n = \pi_n \circ \tilde{F}_n$, such that the nonlinearities $G_n = F_n''/F_n'$ are rational functions of uniformly bounded degree with simple poles at the critical points of $F_n$. As these critical points go to infinity as $n \to \infty$, the nonlinearity of the function $F = \pi \circ \tilde{F}$ is a polynomial, from which it follows that F is of the form $\int Q(z)e^{P(z)}\,dz$.
To prove Theorem 1.2 we use the converse of the Caratheodory convergence theorem: we approximate $F = \int Q(z)e^{P(z)}\,dz$ by the polynomials $F_n = \int Q(z)\left(1 + \frac{P(z)}{n}\right)^n dz$. The polynomials $F_n$ define log-Riemann surfaces $S_n$ which then converge in the sense of Caratheodory to a log-Riemann surface S defined by F, and a study of the log-Riemann surfaces $S_n$ shows that the log-Riemann surface S has $d_1$ infinite order ramification points and $d_2$ finite order ramification points (counted with multiplicity).
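As a purely numerical illustration of the approximation used in this outline (the choices of P, Q and the grid below are ours, not the paper's), one can check that the integrands $Q(z)(1 + P(z)/n)^n$ of $F_n'$ converge uniformly on compacts to $Q(z)e^{P(z)} = F'$:

```python
import numpy as np

# Illustrative choice: P of degree d1 = 1, Q of degree d2 = 1
P = lambda z: z
Q = lambda z: z - 1.0

def Fn_prime(z, n):
    return Q(z) * (1.0 + P(z) / n) ** n   # derivative of the approximating polynomial F_n

def F_prime(z):
    return Q(z) * np.exp(P(z))            # derivative of F = \int Q e^P dz

z = np.linspace(-2.0, 2.0, 401) + 0.3j    # a compact piece of the plane
for n in (10, 100, 1000):
    err = np.max(np.abs(Fn_prime(z, n) - F_prime(z)))
    print(n, err)                          # the sup-norm error shrinks as n grows
```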
We develop the tools necessary for the proofs in the following sections. We first describe a "cell decomposition" for log-Riemann surfaces, which allows one to approximate finite type log-Riemann surfaces by log-Riemann surfaces with finitely many ramification points of finite order. The cell decomposition allows us to read the fundamental group of a log-Riemann surface from an associated graph, and to prove a parabolicity criterion for simply connected log-Riemann surfaces which in particular implies that the log-Riemann surfaces S and S n considered in the proof of Theorem 1.1 are parabolic.
Cell decompositions of log-Riemann surfaces
We recall that a log-Riemann surface (S, π) comes equipped with a path metric d induced by the flat metric |dπ|. Any simple arc $(\gamma(t))_{t\in I}$ in S which is the lift of a straight line segment in $\mathbb{C}$ is a geodesic segment in S; we call such arcs unbroken geodesic segments. Note that an unbroken geodesic segment is maximal if and only if, as t tends to an endpoint of I not in I, either γ(t) tends to infinity, or $\gamma(t) \to w^* \in R$.
2.1. Decomposition into stars. Let w 0 ∈ S. Given an angle θ ∈ R/2πZ, for some 0 < ρ(w 0 , θ) ≤ +∞, there is a unique maximal unbroken geodesic segment γ(w 0 , θ) : [0, ρ(w 0 , θ)) → S starting at w 0 which is the lift of the line segment
{π(w 0 )+te iθ : 0 ≤ t < ρ(w 0 , θ)}, such that γ(w 0 , θ)(t) → w * ∈ R if ρ(w 0 , θ) < +∞.
Definition 2.1. The star of w 0 ∈ S is the union of all maximal unbroken geodesics starting at w 0 ,
$$V(w_0) := \bigcup_{\theta \in \mathbb{R}/2\pi\mathbb{Z}} \gamma(w_0, \theta).$$
Similarly, for a ramification point $w^*$ of order $n \le +\infty$ we define the star $V(w^*)$ as the union of all maximal unbroken geodesics $\gamma(w^*, \theta)$ starting from $w^*$, where the angle $\theta \in [-n\pi, n\pi)$:

$$V(w^*) := \{\gamma(w^*, \theta)(t) : 0 \le t < \rho(w^*, \theta),\ -n\pi \le \theta \le n\pi\}$$

Proposition 2.2. For $w_0 \in S$ the star $V(w_0)$ is a simply connected open subset of S. The boundary $\partial V(w_0) \subset S$ is a disjoint union of maximal unbroken geodesic segments in S.

Proof: Since R is closed, the function $\rho(w_0, \theta)$ is upper semi-continuous in θ, from which it follows easily that $V(w_0)$ is open. Moreover π is injective on each $\gamma(w_0, \theta)$, hence is a diffeomorphism from $V(w_0)$ onto its image $\mathbb{C} - F$, where F is the disjoint union of closed line segments $\{\pi(w_0) + te^{i\theta} : \rho(w_0, \theta) < +\infty,\ t \ge \rho(w_0, \theta)\}$; clearly $\mathbb{C} - F$ is simply connected. By continuity of π, each component C of $\partial V(w_0)$ is contained in $\pi^{-1}(\gamma)$ for some segment γ in F, hence is an unbroken geodesic segment $(\alpha(t))_{t\in I}$. Since C is closed in S, C must be maximal. ⋄
The set of ramification points R is discrete, hence countable. Let $L \supset \pi(R)$ be the union in $\mathbb{C}$ of all straight lines joining points of π(R). Then $\mathbb{C} - L$ is dense in $\mathbb{C}$. By a generic fiber we mean a fiber $\pi^{-1}(z_0) = \{w_i\}$ of π such that $z_0 \in \mathbb{C} - L$.

Proposition 2.3. Let $\{w_i\}$ be a generic fiber. Then:
(1) The stars $\{V(w_i)\}$ are disjoint.
(2) The connected components of the stars $\partial V(w_i)$ are geodesic rays $\gamma : (0, +\infty) \to S$ such that $\gamma(t) \to w^* \in R$ as $t \to 0$, $\gamma(t) \to \infty$ as $t \to \infty$.
(3) The union of the stars is dense in S:
$$S = \overline{\bigcup_i V(w_i)} = \bigcup_i \overline{V(w_i)}$$

Proof: (1): If $w \in V(w_i) \cap V(w_j)$
then the geodesic segments from w to w i , w j are lifts of [π(w), z 0 ], so by uniqueness of lifts (π is a local diffeomorphism) w i = w j .
(2): By the previous Proposition, each component of ∂V (w i ) is a maximal unbroken geodesic segment γ : (0, r) → S with lim t→0 γ(t) = w * ∈ R where w * is a ramification point such that π(γ) is a straight line segment contained in the straight line through π(w i ) and π(w * ). If r < +∞ then γ(t) → w * 1 ∈ R as t → r, so π(w i ) must lie on the straight line through π(w * ), π(w * 1 ), contradicting the fact that {w i } is a generic fiber. Hence r = +∞.
(3): Given $p \in S$, if $\pi(p) \neq z_0$, take a path $(p(t))_{0<t<\epsilon} \subset S$ converging to p as $t \to 0$ such that the line segments $[\pi(p(t)), z_0]$ make distinct angles at $z_0$; then the discreteness of R implies that for t small enough these line segments admit lifts; again by discreteness of R, for some i we have $p(t) \in V(w_i)$ for all t small, and $p \in \overline{V(w_i)}$. ⋄ It is easy to see that for $w_i \neq w_j$, the components of $\partial V(w_i)$, $\partial V(w_j)$ are either disjoint or equal, and each component can belong to at most two such stars. The above Propositions hence give a cell decomposition of S into cells $V(w_i)$ glued along boundary arcs $\gamma \subset \partial V(w_i)$, $\partial V(w_j)$.
2.2. The skeleton and fundamental group. Let π −1 (z 0 ) = {w i } be a generic fiber. The 1-skeleton of the cell decomposition into stars gives an associated graph:
Definition 2.4. The skeleton Γ(S, z 0 ) is the graph with vertices given by the stars V (w i ), and an edge between V (w i ) and V (w j ) for each connected component γ of ∂V (w i ) ∩ ∂V (w j ). Each edge corresponds to a geodesic ray γ : (0, +∞) → S starting at a ramification point. This gives us a map from edges to ramification points, foot :
$$\gamma \mapsto \mathrm{foot}(\gamma) := \lim_{t\to 0}\gamma(t) \in R \cap \overline{V(w_i)} \cap \overline{V(w_j)}.$$
For w * ∈ R we let C(w * ) = {γ : foot(γ) = w * }.
We omit the proof of the following proposition which is straightforward:
Proposition 2.5. If w * is of finite order n then C(w * ) = (γ i ) 1≤i≤n is a cycle of edges in Γ of length n. If w * is of infinite order then C(w * ) = (γ i ) i∈Z is a bi-infinite path of edges in Γ.
We can compute the fundamental group of a log-Riemann surface from its skeleton:
Proposition 2.6. The log-Riemann surface S deformation retracts onto Γ(S, z 0 ). In particular π 1 (S) = π 1 (Γ(S, z 0 )).
Proof: Let $\partial V(w_i) = \bigsqcup_{k\in J_i} \gamma_{ik}$ be the decomposition of $\partial V(w_i)$ into its connected components. Choose points $v_{ik} \in \gamma_{ik}$, satisfying $v_{ik} = v_{jl}$ if $\gamma_{ik} = \gamma_{jl}$. Choose simple arcs $\alpha_{ik}$, $k \in J_i$, joining $w_i$ to $v_{ik}$ within $V(w_i)$, with $\alpha_{ik} \cap \alpha_{ik'} = \{w_i\}$.
Then $V(w_i)$ deformation retracts onto the union of the arcs $\alpha_{ik}$; moreover for $i, j \in I$ we can choose the retractions compatibly on arcs $\gamma \subset \partial V(w_i) \cap \partial V(w_j)$, giving a retraction of S onto the union of all arcs $\alpha_{ik}$, $i \in I$, $k \in J_i$, which is homeomorphic to $\Gamma(S, z_0)$. ⋄ The relation of $\Gamma(S, z_0)$ to the finitely completed log-Riemann surface $S^\times$ is as follows:
Definition 2.7. The finitely completed skeleton Γ × (S, z 0 ) is the graph obtained from Γ(S, z 0 ) as follows: for each finite order ramification point w * , add a vertex v = v(w * ) to Γ(S, z 0 ), remove all edges in the cycle C(w * ) and add an edge from v i to v for each vertex v i in the cycle C(w * ).
Then as above we have:
Proposition 2.8. The finitely completed log-Riemann surface S × deformation retracts onto the finitely completed skeleton Γ × (S, z 0 ).
Proof: Let $w^*$ be a finite order ramification point. Observe that in the proof of the previous Proposition, for $\gamma = \gamma_{ik}$ an edge in $C(w^*)$, in the finitely completed log-Riemann surface the arc $\alpha_{ik}$ can be homotoped to an arc $\tilde{\alpha}_{ik}$ from $w_i$ to $w^*$. Then $S^\times$ deformation retracts onto the union of the arcs $\alpha_{ik}$, $\tilde{\alpha}_{ik}$, which is homeomorphic to $\Gamma^\times(S, z_0)$. ⋄ Given a graph Γ satisfying certain compatibility conditions along with the information of the locations of the ramification points, we can also construct an associated log-Riemann surface S with skeleton Γ: Proposition 2.9. Let Γ = (V, E) be a connected graph with countable vertex and edge sets and a map $\mathrm{foot} : E \to \mathbb{C}$. For each vertex v let $E_v$ be the set of edges with a vertex at v and let $R_v = \mathrm{foot}(E_v)$. Assume that the following hold:
(1) The image foot(E) ⊂ C is discrete.
(2) For all vertices v and points z ∈ R v , the intersection foot −1 (z) ∩ E v has exactly two edges, labelled {e z (v, +), e z (v, −)}.
(3) For an edge e between vertices v, v′ with foot(e) = z, either $e = e_z(v, +) = e_z(v', -)$ or $e = e_z(v, -) = e_z(v', +)$.
Then there exists a log-Riemann surface S with skeleton Γ(S, z 0 ) = Γ for some z 0 ∈ C.
Proof: Let L ⊂ C be the union of all straight lines through pairs of points in foot(E), and let z 0 ∈ C − L. For each vertex v of Γ, let L v be the union of the half-lines l z starting at points z ∈ R v with direction z − z 0 . By assumption (1) this collection of half-lines is locally finite. Let U v be the domain C − L v . Equip U v with the path metric d(a, b) = inf β β |dz| (infimum taken over all rectifiable paths β joining a and b). Then the metric completion U * v of U v is given by adjoining for each z ∈ R v two copies of l z (the two 'sides' of the slit l z ) intersecting at a point z v , which we denote by
U * v = U v z∈Rv (l z (v, +) ∪ l z (v, −))
where we take l z (v, +) to be the 'upper side' and
l z (v, −) the 'lower' side (so z → l z (v, +) if z → l z in U v with arg(z − z 0 ) increasing and z → l z (v, −) if z → l z in U v with arg(z − z 0 ) decreasing). The inclusion of U v in C extends to a local isometry π v : U * v → C with π v (l z (v, +)) = π v (l z (v, −)) = l z . Let S * be S * = v∈V U * v / ∼ with the following identifications: for each edge e with vertices v, v ′ and foot(γ) = z, if e = e z (v, +) = e z (v ′ , −) we paste isometrically the half-lines l z (v, +), l z (v ′ , −), otherwise we paste isometrically l z (v, −), l z (v ′ , +)
. The identifications are compatible with the maps π v , giving a a map π : S * → C. We let R ⊂ S * be the subset corresponding to the points {z v } and S = S * − R.
Since π(R) = foot(E) is discrete, the set R is discrete. Moreover π restricted to S is a local isometry, and the completion of S with respect to the induced path metric is precisely S * , hence S is a log-Riemann surface. The fiber π −1 (z 0 ) is generic since z 0 ∈ C − L. The stars with respect to this fiber are precisely the open subsets U v ⊂ S. For any star U v its closure in S * is the image of U * v in S * . For vertices v, v ′ , according to the above identifications between U
* v , U * v ′ in S * , each component of ∂U v ∩ ∂U v ′ (if non-empty) is a half-line l arising from an edge e between v 1 , v 2 , of either the form l = l z (v, +) = l z (v ′ , −) or l = l z (v, −) = l z (v ′ , +).
It follows that Γ(S, z 0 ) = Γ. ⋄ 2.3. Truncation and approximation by finite sheeted surfaces. We can use the decomposition into stars to approximate any log-Riemann surface by finite sheeted log-Riemann surfaces by "truncating" infinite order ramification points to finite order ramification points. More precisely we have:
Theorem 2.10. Let (S, p) be a pointed log-Riemann surface. Then: (1) There exists a sequence of pointed log-Riemann surfaces (S n , p n ) converging to (S, p) in the Caratheodory topology such that each S n has only finitely many ramification points all of finite order.
(2) If S × is simply connected then all the surfaces S × n are simply connected.
We recall the definition of convergence of log-Riemann surfaces in the Caratheodory topology from [BPM10a]: (S n , p n ) → (S, p) if for any compact K ⊂ S containing p there exists N = N (K) ≥ 1 such that for all n ≥ N there is an isometric embedding ι n,K of K into S n mapping p to p n which is a translation in the charts π, π n on S, S n .
Proof of Theorem 2.10: (1): Since the generic fibers are dense in S we may assume without loss of generality that p = w 0 lies in a generic fiber {w i } = π −1 (z 0 ). Let V i = V (w i ) be the corresponding stars and Γ = Γ(S, z 0 ) the associated skeleton, equipped with the graph metric d Γ (where each edge has length 1). For any star V i and R > 0, the set V i ∩ B(w i , R) is compact, so it contains at most finitely many ramification points. It follows that the collection of edges
E(V i , R) := {γ : γ is an edge with a vertex at V i , foot(γ) ∈ B(w i , R)}
is finite, and hence so is the corresponding collection of vertices
V(V i , R) := {V j : γ ∈ E(V i , R) is an edge between V i , V j }.
For n ≥ 1 we define collections of edges and vertices (E n,k ) 1≤k≤n , (V n,k ) 1≤k≤n as follows:
We let E n,1 = E(V 0 , n), V n,1 = V(V 0 , n) and for 1 < k ≤ n,
$$E_{n,k} := \bigcup_{V_i\in V_{n,k-1}} E(V_i, n), \qquad V_{n,k} := \bigcup_{V_i\in V_{n,k-1}} V(V_i, n)$$
This gives us finite connected subgraphs $\Gamma_n = (V_{n,n}, E_{n,n})$ of Γ increasing to Γ. Let $\hat{S}_n = \bigcup_{V\in V_{n,n}} \overline{V} \subset S^*$ be the corresponding union of stars in $S^*$. It is a Riemann surface with boundary, each boundary component being an edge γ of $\Gamma_n$. We paste appropriate boundary components isometrically to obtain a Riemann surface without boundary $S_n \subset \hat{S}_n/\sim$ as follows:
We let R n be the set of ramification points {foot(γ) : γ ∈ E n,n }. For w * ∈ R n we let Γ n (w * ) be the subgraph of Γ n consisting of vertices V i and edges γ such that w * = foot(γ) ∈ V i . Two cases arise:
(i) The ramification point w * is of finite order: Then there are finitely many stars V i such that w * ∈ V i . If Γ n (w * ) does not contain all of them, then the union of stars V i , V i ∈ Γ n (w * ) has two boundary components, both of which are lifts of a half-line in C starting at π(w * ); in this case we can paste the two components by an isometry which is the identity in charts.
(ii) The ramification point w * is of infinite order: Then the union of stars V i , V i ∈ Γ n (w * ) always has two boundary components, both of which are lifts of a half-line in C starting at π(w * ); we paste the two components by an isometry which is the identity in charts.
Let q n :Ŝ n ։Ŝ n / ∼ denote the quotient ofŜ n under the identifications made in (i), (ii). The subset S n := (Ŝ n / ∼) − q n (R n ) is a Riemann surface without boundary. Since the identifications are compatible with the map π, π induces a map π n : S n → C which is a local diffeomorphism. The completion of S n with respect to the flat metric induced by π n is isometric toŜ n / ∼, so that S n is a log-Riemann surface with finite ramification set q n (R n ); it is clear from the construction in (i), (ii) above that these ramification points are all of finite order. We let p n = q n (p).
Any compact K ⊂ S containing p can only intersect finitely many stars V i and hence K ⊂Ŝ n for n large enough. Moreover for n large K does not intersect the boundary ofŜ n (which is contained in stars going to infinity in Γ as n goes to infinity), hence the quotient map q n isometrically embeds K in S n . Thus (S n , p n ) converges to (S, p) as required.
(2): The graph Γ(S n , z 0 ) can be obtained by adding edges to the finite graph Γ n between certain vertices corresponding to edges in the sets C(w * ), w * ∈ R n , to give cycles C(q n (w * )) in Γ(S n , z 0 ). If S × is simply connected then by Proposition 2.8 the graph Γ × (S, z 0 ) is a tree. It follows from the construction of Γ × (S, z 0 ) that π 1 (Γ(S, z 0 )) is generated by cycles corresponding to finite order ramification points and hence π 1 (Γ(S n , z 0 )) is generated by the cycles C(q n (w * )). In constructing Γ × (S n , z 0 ) from Γ(S n , z 0 ) these cycles become trivial so π 1 (Γ × (S n , z 0 )) is trivial. ⋄ 2.4. Compactness for uniformly finite type log-Riemann surfaces. The family of finite type log-Riemann surfaces with a given uniform bound on the number of ramification points is compact, in the following sense:
Theorem 2.11. Let (S n , p n ) be a sequence of pointed log-Riemann surfaces with ramification sets R n . If for some M, ǫ > 0 we have #R n ≤ M, d(p n , R n ) > ǫ for all n then there is a pointed log-Riemann surface (S, p) with ramification set R such that #R ≤ M and (S n , p n ) converges to (S, p) along a subsequence.
Proof: Composing $\pi_n$ with a translation if necessary we may assume $\pi_n(p_n) = 0$ for all n. Since $d(p_n, R_n) > \epsilon$ we can change $p_n$ slightly (within the ball $B(p_n, \epsilon)$) to assume without loss of generality that the fiber $\pi_n^{-1}(0)$ containing $p_n$ is generic. Let $\Gamma_n$ be the corresponding skeleton and $v_{n,0}$ the vertex containing $p_n$. Passing to a subsequence we may assume the projections $\pi_n(R_n)$ converge (in the Hausdorff topology) to a finite set $\{w_1^*, \dots, w_N^*\} \cup \{\infty\} \subset \hat{\mathbb{C}} - B(0, \epsilon)$ (where $N \le M$), and for all n lie in small disjoint neighbourhoods $B_1, \dots, B_N$ and B of the points of $R = \{w_1^*, \dots, w_N^*\}$ and ∞ respectively. Let $\gamma_1, \dots, \gamma_N$ be generators for the group $G = \pi_1(\mathbb{C} - R)$ where each $\gamma_i$ is a simple closed curve in $\mathbb{C} - (B \cup \bigcup_i B_i)$ starting at the origin with winding number one around $B_i$ and zero around $B_j$, $j \neq i$. There is a natural action of G on the vertices of $\Gamma_n$: given a vertex v, let w be the point of the fiber $\pi_n^{-1}(0)$ in v. Then any $g \in G$ has a unique lift $\tilde{g}$ to $S_n$ starting at w. Let $g \cdot v$ be the vertex of $\Gamma_n$ containing the endpoint of $\tilde{g}$.
We define a graph Γ ′ n = (V n , E n ) as follows: the vertex set V n is the orbit of v n,0 under G. We put an edge e between distinct vertices v, v ′ of Γ ′ n for each generator γ ∈ {γ ± i , i = 1, . . . , N } such that v ′ = γ · v. We define foot n (e) = w * i if the edge e corresponds to either of the generators γ i , γ −1 i . This defines a map foot n : E n → R ⊂ C.
For v ∈ V n let E v be the set of edges with a vertex at v and
R v = foot n (E v ) ⊂ R. Since γ i · v = v if and only if γ −1 i · v = v, it follows that for z = w * i ∈ R v , the intersection foot −1 n (z) ∩ E v
consists of precisely the two edges corresponding to the generators γ i , γ −1 i ; we label these edges as e z (v, +), e z (v, −). It is easy to see that the graphs Γ ′ n satisfy the hypotheses of Proposition 2.9. Since each vertex has valence at most 2N , the balls B(v n,0 , k) are finite, so we can pass to a subsequence such that the pointed graphs (Γ ′ n , v n,0 ) converge to a limit pointed graph (Γ = (V, E), v 0 ), in the sense that for any k ≥ 1, for all n large enough there is an isomorphism i n of the ball B(v 0 , k) with B(v n,0 , k) taking v 0 to v n,0 . We may also assume that the isomorphisms i n for different n are compatible with the mappings foot n and the labeled edges e z (v, +), e z (v, −), thus inducing a corresponding mapping foot : E → R ⊂ C and a labeling of the edges of Γ. Then the limit graph Γ satisfies the hypotheses of Proposition 2.9 and we obtain a corresponding pointed log-Riemann surface (S, p) ramified over the points of R such that Γ(S, 0) = Γ, with p in a generic fiber π −1 (0), and the star containing p corresponding to the vertex v 0 of Γ. Moreover S has at most N ramification points. It is easy to see that any compact K ⊂ S containing p embeds isometrically in all the log-Riemann surfaces S n via an isometry ι n such that ι n (p) = p n , ι ′ n (p) = 1, hence (S n , p n ) converges to (S, p). ⋄ 2.5. Decomposition into Kobayashi-Nevanlinna cells. Let S be a log-Riemann surface with R = ∅. We define a cellular decomposition of S due to Kobayashi [Kob35] and Nevanlinna ([Nev53] which is useful in determining the type (parabolic or hyperbolic) of simply connected log-Riemann surfaces.
Definition 2.12. Let w * ∈ R. The Kobayashi-Nevanlinna cell of w * is defined to be the set
W (w * ) := {w ∈ S * |d(w, w * ) < d(w, R − {w * })}
Proposition 2.13. The Kobayashi-Nevanlinna cells satisfy:
(1) Any w ∈ W (w * ) lies on an unbroken geodesic [w * , w] ⊂ W (w * ). In particular W (w * ) ⊂ V (w * ) is open and path-connected.
(2) The boundary of W (w * ) is a locally finite union of geodesic segments.
(3) $S = \bigcup_{w^*\in R} \overline{W(w^*)}$

Proof: (1): For any $w \in W(w^*)$, $w \neq w^*$, since $R \neq \emptyset$ there is a maximal unbroken geodesic $\gamma(w, \theta)$ converging to a point of R at one end, and since $w^*$ is the point in R closest to w, there must be such a geodesic $[w, w^*]$ converging to $w^*$. Moreover for any $w' \in [w, w^*]$ and $w_1^* \in R - \{w^*\}$, we have
$$d(w^*, w') = d(w^*, w) - d(w, w') < d(w_1^*, w) - d(w, w') \le d(w_1^*, w'),$$
hence $[w, w^*] \subset W(w^*)$.
(2): Let $w \in \partial W(w^*)$. By discreteness of R there are finitely many ramification points $w^* = w_1^*, \dots, w_n^*$ at minimal distance $r > 0$ from w, and $n \ge 2$. The disc B(w, r) is a euclidean disk, with the points $w_i^*$ lying on its boundary; the angular bisectors of the sectors formed by $[w, w_i^*]$, $[w, w_{i+1}^*]$ are then equidistant from $w_i^*$, $w_{i+1}^*$ and lie in $\partial W(w_i^*) \cap \partial W(w_{i+1}^*)$, while all other points in the disk lie in $W(w_i^*)$ for some i. Hence a neighbourhood of w in $\partial W(w^*)$ is given either by a geodesic segment passing through w (if n = 2) or by two geodesic segments meeting at w (if n > 2).
(3): Any $w \in S$ belongs to $\overline{W(w^*)}$ for any ramification point $w^*$ at minimal distance from w. ⋄ 2.6. Kobayashi-Nevanlinna parabolicity criterion. We consider a log-Riemann surface S such that the finite completion $S^\times$ is simply connected. We will use the following theorem of Nevanlinna ([Nev53], p. 317):
Theorem 2.14. Let F ⊂ S × be a discrete set and U : S × − F → [0, +∞) be a continuous function such that:
(1) U is C 1 except on at most a family of locally finite piecewise smooth curves.
(2) U has isolated critical points.
(3) U → +∞ as z → F or as z → ∞.
For ρ > 0 let Γ ρ be the union of the curves where U = ρ, and let
$$L(\rho) = \int_{\Gamma_\rho} |\mathrm{grad}_z U|\,|dz|,$$
where $|\mathrm{grad}_z U|\,|dz|$ is the conformally invariant differential given by $\sqrt{(\partial U/\partial x)^2 + (\partial U/\partial y)^2}\,|dz|$ in a local coordinate $z = x + iy$. If the integral $\int^{+\infty} \frac{d\rho}{L(\rho)}$ is divergent then the surface $S^\times$ is parabolic.
We now define a function U on S as follows:
Let ω be the continuous differential ω := |d arg(w − w * )|, where for each w ∈ S, w * is a ramification point such that w ∈ W (w * ). Fix a base point w 0 ∈ S and define τ : S → [0, +∞) by
$$\tau(w) := \inf \int_{w_0}^{w} \omega$$
where the infimum is taken over all paths from w 0 to w. We define another nonnegative continuous function σ : S → [0, +∞) by σ(w) := | log |w − w * || where as before for each w ∈ S the point w * is a ramification point such that w ∈ W (w * ).
Then the sum U = τ + σ : S → R is a function satisfying the conditions (1)-(3) of the above theorem. The map t = σ + iτ gives a local holomorphic coordinate away from the boundaries of the Kobayashi-Nevanlinna cells, for which we have |grad t U ||dt| = √ 2|dt|. On a level set Γ ρ = {U = ρ} we have 0 ≤ τ ≤ ρ, t = (ρ − τ ) + iτ , so |grad t U ||dt| = √ 2|dt| = 2|dτ |. For a given θ > 0, the connected components of the level set {τ (w) = θ} are Euclidean line segments which are halflines or intervals; let 0 ≤ n(θ) ≤ ∞ denote the number of such line segments. Each such segment intersects Γ ρ in at most one point; hence we obtain
$$L(\rho) = \int_{\Gamma_\rho} |\mathrm{grad}_t U|\,|dt| = 2\int_{\Gamma_\rho} |d\tau| \le \int_0^{\rho} n(\theta)\,d\theta$$
Using Theorem 2.14 above, we obtain the following:
Theorem 2.15. Let S be a log-Riemann surface such that $S^\times$ is simply connected. For $\theta > 0$ let $0 \le n(\theta) \le \infty$ denote the number of connected components of the level set $\{\tau(w) = \theta\}$. If the integral $$\int_0^{\infty} \frac{d\rho}{\int_0^{\rho} n(\theta)\,d\theta}$$ is divergent then $S^\times$ is biholomorphic to $\mathbb{C}$.
This implies:
Corollary 2.16. Let S be a log-Riemann surface with a finite number of ramification points such that S × is simply connected. Then S is biholomorphic to C.
Proof: In this case the function n(θ) is bounded above by twice the number of ramification points of S, so $\int_0^{\rho} n(\theta)\,d\theta \le C\rho$ and hence the integral in Theorem 2.15 diverges. ⋄
Uniformization theorems
We can now prove Theorem 1.1 as follows:
Proof of Theorem 1.1: Let $p \in S$. Let $D_1$, $D_2$ be the numbers of infinite and finite order ramification points respectively of S. By Corollary 2.16 the log-Riemann surface $S^\times$ is biholomorphic to $\mathbb{C}$. The approximating finitely completed log-Riemann surfaces $S_n^\times$ given by Theorem 2.10 are also biholomorphic to $\mathbb{C}$ and for n large all have $D_1 + D_2$ ramification points. Let $\tilde{F} : \mathbb{C} \to S^\times$ and $\tilde{F}_n : \mathbb{C} \to S_n^\times$ be corresponding normalized uniformizations such that $\tilde{F}(0) = p$, $\tilde{F}'(0) = 1$, $\tilde{F}_n(0) = p_n$, $\tilde{F}_n'(0) = 1$, with inverses $G = \tilde{F}^{-1}$, $G_n = \tilde{F}_n^{-1}$. By Theorem 1.2 of [BPM10a] the entire functions $F_n = \pi_n \circ \tilde{F}_n$ converge uniformly on compacts to the entire function $F = \pi \circ \tilde{F}$. Since $\pi_n : S_n^\times \to \mathbb{C}$ is finite to one, the entire function $F_n$ has a pole at ∞ of order equal to the degree of $\pi_n$, and is hence a polynomial. The nonlinearities $R_n = F_n''/F_n'$ are rational functions whose poles are simple poles with integer residues at the critical points of $F_n$, which are the images of the ramification points of $S_n$ under $G_n$. Thus the rational functions $R_n$ are all of degree $D_1 + D_2$, converging normally to $F''/F'$, so $R = F''/F'$ is a rational function of degree at most $D_1 + D_2$.
Each ramification point w * of S corresponds to a ramification point w * n of S n of order converging to that of w * . We note that for n large any compact K ⊂ S × containing p embeds into the approximating surfaces S × n . Since the maps G n converge to G uniformly on compacts of S × by Theorem 1.1 [BPM10a], the images under G n of ramification points in S × n corresponding to finite ramification points in S converge to their images under G, giving in the limit D 2 simple poles of R, with residue at each equal to the order of the corresponding finite ramification point of S minus one.
On the other hand the infinite order ramification points of S are not contained in S × , so the images of the corresponding ramification points in S × n under G n cannot be contained in any compact in C and hence converge to infinity. The rational functions R n have a simple zero at infinity, and have D 1 simple poles converging to infinity. Applying the Argument Principle to a small circle around infinity it follows that R has a pole of order D 1 − 1 at infinity.
Thus R is of the form
$$\frac{F''}{F'} = \sum_{i=1}^{D_2} \frac{m_i - 1}{z - z_i} + P'(z)$$
where m 1 , . . . , m D2 are the orders of the finite ramification points of S and P is a polynomial of degree D 1 . Integrating the above equation gives
$$F(z) = \pi(p) + \int_0^{z} (t - z_1)^{m_1-1}\cdots(t - z_{D_2})^{m_{D_2}-1}\, e^{P(t)}\, dt$$
as required. ⋄
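The final formula can be sanity-checked symbolically; the SymPy sketch below (an illustrative example with arbitrary small degrees, not part of the proof) verifies that for $F'(z) = (z - z_1)^{m_1-1} e^{P(z)}$ the nonlinearity $F''/F'$ is $(m_1 - 1)/(z - z_1) + P'(z)$.

```python
import sympy as sp

z, z1 = sp.symbols('z z1')
m1 = 3                                  # an arbitrary finite ramification order
P = z**2 + 2*z                          # an arbitrary polynomial (degree d1 = 2)

Fprime = (z - z1)**(m1 - 1) * sp.exp(P)              # F' = Q e^P with Q = (z - z1)^(m1-1)
nonlinearity = sp.simplify(sp.diff(Fprime, z) / Fprime)
expected = (m1 - 1) / (z - z1) + sp.diff(P, z)

print(sp.simplify(nonlinearity - expected))           # prints 0
```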
We can prove the converse using the above Theorem and the compactness Theorem. We need a lemma:
Lemma 3.1. Let (S n , p n ) converge to (S, p). If all the surfaces S × n are simply connected then S × is simply connected.
Proof: We may assume the points p n , p belong to generic fibers. Let Γ n , Γ denote the corresponding skeletons. Let γ be a loop in S × based at p. We may homotope γ away from the finite ramification points to assume that γ ⊂ S. By Proposition 2.6, γ corresponds to a path of edges α = {e 1 , . . . , e n }. By induction on the number of edges we may assume that α is simple. If foot(α) = {w * } is a singleton then w * is a finite ramification point and γ is trivial in S × . Otherwise there are distinct ramification points w * 1 , w * 2 ∈ foot(α). Considering the isometric embedding of γ in S n for n large gives a path γ n and a corresponding path of edges α n ; for n large, it follows that there are distinct ramification points in foot n (α n ), hence γ n is non-trivial in S × n , a contradiction. ⋄ Proof of Theorem 1.2: Given an entire function F with F ′ (z) = Q(z)e P (z) we can approximate it by polynomials F n such that F ′ n (z) = Q(z)(1 + P (z)/n) n . Let Z n = {P = −n} ∪ {Q = 0}∪ ⊂ C be the zeroes of F ′ n . The pair (S n = C − Z n , π n = F n : C − Z n → C) is a log-Riemann surface with finite ramification set R n which can be naturally identified with Z n , the order of a ramification point being the local degree of F n at the corresponding point of Z n .
For n large the surfaces S n all have the same number of ramification points D = D 1 + D 2 where D 1 is the degree of P and D 2 the number of distinct zeroes of Q. Moreover since F ′ n converge uniformly on compacts, choosing a point z 0 such that Q(z 0 ) = 0, for all n large |F ′ n | is uniformly bounded away from 0 on a fixed neighbourhood of z 0 , so d(z 0 , R n ) is uniformly bounded away from 0. It follows from Theorem 2.11 that (S n , p n = z 0 ) converge along a subsequence to a limit log-Riemann surface (S, p) with finitely many ramification points such that π(p) = z 0 . Since S × n is simply connected for all n, by the previous Lemma S × is simply connected. By Theorem 2.16, S × is biholomorphic to C. LetF : C → S × be a normalized uniformization such thatF (z 0 ) = p,F ′ (z 0 ) = F ′ (z 0 ). It follows from Theorem 1.2 of [BPM10a] that the maps F n converge normally to π •F , so F = π •F . Thus F defines the uniformization of a simply connected log-Riemann surface with finitely many ramification points. The degrees of Q, P relate to the numbers of finite poles and poles at infinity respectively of the nonlinearity F ′′ /F ′ ; the relations between the degrees of Q, P and the numbers of finite and infinite order ramification points of S then follow from the previous Theorem. ⋄
Proposition 2.2. For w_0 ∈ S the star V(w_0) is a simply connected open subset of S. The boundary ∂V(w_0) ⊂ S is a disjoint union of maximal unbroken geodesic segments in S.
Proposition 2.3. Let {w_i} be a generic fiber. Then: (1) The stars {V(w_i)} are disjoint.
K. Biswas and R. Perez-Marco. Log-Riemann surfaces, Caratheodory convergence and Euler's formula. Preprint, 2010.
K. Biswas and R. Perez-Marco. Uniformization of higher genus finite type log-Riemann surfaces. Preprint, 2010.
Z. Kobayashi. Theorems on the conformal representation of Riemann surfaces. Sci. Rep. Tokyo Bunrika Daigaku, Sect. A, 39, 1935.
R. Nevanlinna. Über Riemannsche Flächen mit endlich vielen Windungspunkten. Acta Mathematica, 58, 1932.
R. Nevanlinna. Analytic Functions. Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen 162, 2nd Edition, Springer Verlag, 1953.
M. Taniguchi. Explicit representation of structurally finite entire functions. Proc. Japan Acad., 77, pages 69-71, 2001.
RKM Vivekananda University, Belur Math, WB-711 202, India; CNRS, LAGA, UMR 7539, Université Paris 13, Villetaneuse, France
| []
|
[
"A PHASE TRANSITION IN THE COMING DOWN FROM INFINITY OF SIMPLE EXCHANGEABLE FRAGMENTATION-COAGULATION PROCESSES",
"A PHASE TRANSITION IN THE COMING DOWN FROM INFINITY OF SIMPLE EXCHANGEABLE FRAGMENTATION-COAGULATION PROCESSES"
]
| [
"Clément Foucart [email protected] \nInstitut Galilée\nLAGA\nUniversité Sorbonne Paris Nord\n\n"
]
| [
"Institut Galilée\nLAGA\nUniversité Sorbonne Paris Nord\n"
]
| []
| We consider the class of exchangeable fragmentation-coagulation (EFC) processes where coagulations are multiple and not simultaneous, as in a Λ-coalescent, and fragmentation dislocates at finite rate an individual block into sub-blocks of infinite size. We call these partition-valued processes simple EFC processes, and study the question whether such a process, when started with infinitely many blocks, can visit partitions with a finite number of blocks or not. When this occurs, one says that the process comes down from infinity. We introduce two sharp parameters θ_⋆ ≤ θ^⋆ ∈ [0, ∞], so that if θ^⋆ < 1, the process comes down from infinity and if θ_⋆ > 1, then it stays infinite. We illustrate our result with regularly varying coagulation and fragmentation measures. In this case, the parameters θ_⋆, θ^⋆ coincide and are explicit. MSC 2010 subject classifications: Primary 60J25, 60J50, 60J80, 60J90; secondary 60G09 | 10.1214/21-aap1691 | [
"https://arxiv.org/pdf/1605.07039v8.pdf"
]
| 174,798,281 | 1605.07039 | cf83be6144e86d6fd39bcece2e2f9805c7bd3440 |
A PHASE TRANSITION IN THE COMING DOWN FROM INFINITY OF SIMPLE EXCHANGEABLE FRAGMENTATION-COAGULATION PROCESSES
Clément Foucart [email protected]
Institut Galilée
LAGA
Université Sorbonne Paris Nord
A PHASE TRANSITION IN THE COMING DOWN FROM INFINITY OF SIMPLE EXCHANGEABLE FRAGMENTATION-COAGULATION PROCESSES
We consider the class of exchangeable fragmentation-coagulation (EFC) processes where coagulations are multiple and not simultaneous, as in a Λcoalescent, and fragmentation dislocates at finite rate an individual block into sub-blocks of infinite size. We call these partition-valued processes, simple EFC processes, and study the question whether such a process, when started with infinitely many blocks, can visit partitions with a finite number of blocks or not. When this occurs, one says that the process comes down from infinity. We introduce two sharp parameters θ ≤ θ ∈ [0, ∞], so that if θ < 1, the process comes down from infinity and if θ > 1, then it stays infinite. We illustrate our result with regularly varying coagulation and fragmentation measures. In this case, the parameters θ , θ coincide and are explicit.MSC 2010 subject classifications: Primary 60J25, 60J50, 60J80, 60J90; secondary 60G09
1. Introduction and main results. Fragmentation and coagulation are natural phenomena that can be observed in many different contexts. We refer to Bertoin [Ber06] and Pitman [Pit06] for an introduction to exchangeable fragmentations and coalescents. These processes form random systems of disjoint subsets, so-called blocks, covering N := {1, 2...}, evolving either by fragmentations of blocks or by coagulations of two or more blocks. By exchangeability, it is meant that the rate of coalescence only depends on the number of subsets that are merging and not on their constituent elements. Similarly, blocks fragmentate into sub-blocks independently of each other, with a same rate.
One striking feature of pure exchangeable coalescents lies in the so-called "coming down from infinity". This phenomenon states that, although started from a partition with infinitely many blocks, the coalescent process reaches a partition with only a finite number of blocks. This phenomenon has received a great deal of attention in the last two decades. The most important results in this respect are certainly Schweinsberg's necessary and sufficient condition, [Sch00b], for the coming down from infinity of coalescents with no simultaneous coagulations, and the study of their speed of coming down by Berestycki et al. [BBL10] and Limic and Talarczik [LT15].
Most studies have been carried out for processes of pure fragmentation or pure coagulation. However many natural stochastic particle models, ranging from physics to mathematical genetics, evolve in time by both fragmentation and coalescence. We refer for instance to Aldous's review [Ald99, Sections 1.4 and 1.5] for a list of models. This led Berestycki to define in [Beres04] a class of partition-valued processes called exchangeable fragmentationcoagulation (EFC) processes. Some examples of EFC processes have been recently studied by Bertoin and Kortchemski [BK16], Kyprianou et al. [KPRS17] and Foutel-Rodier et al. [FRLS20]. It is also noteworthy that EFC processes arise in the background of several mathematical population models involving interactions, see for instance [Lam05], [Fou13] and [Fou19], as well as González-Casanova and Spanò [GS18], González-Casanova et al. [GPP21] and Foucart and Zhou [FZ20+b].
The purpose of this article is to consider the coming down from infinity phenomenon for EFC processes. We stress that in the literature the terminology "coming down from infinity" has been used in different contexts and often includes the assumption that the boundary ∞ is inaccessible for the process under study. We do not assume this here and when an EFC process comes down from infinity, it may also return to a partition with infinitely many blocks at some other positive time.
In his seminal paper, Berestycki has shown that EFC processes are characterized in law by two exchangeable σ-finite measures, µ_Frag and µ_Coag on P_∞, the space of partitions of N, governing respectively the fragmentation and the coagulation in the system. Among other results, he established in [Beres04, Theorem 12] that when the fragmentation occurs at infinite rate, namely µ_Frag(P_∞) = ∞, the EFC process may have finitely many blocks only at times of exceptional coalescence in which, instantaneously, infinitely many blocks are merged into a finite number. This leads naturally to discard these cases for a further study of the block-counting process. In this direction, Kyprianou et al. [KPRS17] have studied a particular extreme example, the so-called "fast" EFC process, where pairwise coagulations occur at rate c_k > 0, as in the Kingman coalescent, and fragmentation splits any individual block into its constituent elements at finite rate λ, creating thus infinitely many singleton blocks. They establish a nice phase transition phenomenon, see [KPRS17, Theorem 1.1], stating that the "fast" EFC process comes down from infinity if and only if θ := 2λ/c_k < 1. We will investigate such properties for a class of EFC processes with less extreme fragmentation and coagulation mechanisms. We call them simple EFC processes and describe them now briefly. We assume that the fragmentation measure is finite, i.e. µ_Frag(P_∞) < ∞, and is supported by the partitions with no singleton blocks. As we shall notice in the sequel, see Section 2.1, since the measure µ_Frag is exchangeable, this latter condition is equivalent to the fact that the measure µ_Frag has for support the set of partitions whose blocks are infinite, that is to say
(1.1) µ_Frag({π; #π_i < ∞ for some i ≤ #π}) = 0,
where we have denoted by #π_i the number of elements in the block π_i and by #π the number of blocks in the partition π. We assume furthermore that there are no simultaneous multiple coagulations of blocks, nor coagulations of all blocks at once. Under this latter assumption, coalescences occur as in a Λ-coalescent. The measure Λ, governing coalescences, stands for a finite Borel measure on [0, 1) of the form Λ(dx) := x²ν_Coag(dx) + c_kδ_0, where c_k ≥ 0 is the Kingman parameter, driving pairwise coagulations, and ν_Coag is a Borel measure on (0, 1), driving multiple coagulations and satisfying $\int_0^1 x^2\,\nu_{Coag}(dx) < \infty$. While it has finitely many blocks, the block-counting process of a simple EFC process evolves by (i) upward jumps due to fragmentations and (ii) downward jumps due to coalescences; its jump rates are made explicit in Proposition 2.11 below.
The measure µ in (i), called splitting measure, is by definition the image of µ Frag by the map π → #π − 1. Note that µ(∞) can be positive. A simple use of the exchangeability property, see the forthcoming background section, ensures that µ can be any finite measure on N ∪ {∞}. Some simple EFC processes have already been studied in the literature. When there are no multiple coagulations (namely ν Coag ≡ 0) but only binary coalescences at rate c k > 0, and under the additional assumption that µ(∞) = 0, Berestycki [Beres04, Section 5] and Lambert [Lam05,Section 2.3] have observed that the process (#Π(t), t ≥ 0) has the same transitions as a discrete logistic branching process (defined in Section 2 of [Lam05]), when Π starts from a partition with blocks of infinite size. A sufficient condition over µ (entailing µ(∞) = 0) for coming down from infinity of the EFC process, see [Beres04,Proposition 15], was derived from this observation. We also wish to mention that continuous-time Markov chains with jump rates (i) and (ii) have been studied in [GPP21], under some assumptions on Λ and µ.
Our main aim is to study the coming down from infinity for the whole class of simple EFC processes. In particular, we shall not make any assumption on the measure µ. We will find sharp parameters measuring, in some sense, how fragmentation interplays with the coagulations and obtain a general phase transition phenomenon for coming down from infinity.
Plainly, if the pure Λ-coalescent process stays infinite, then any EFC process with coalescences driven by Λ stays infinite. We work therefore, without loss of generality, under the assumption that the pure coalescent comes down from infinity. Recall Schweinsberg's condition. For any n ≥ 2, set
\[
(1.2)\qquad \Phi(n) := \sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}\,(k-1),
\]
where $\lambda_{n,k}$ denotes the rate at which a given group of k blocks merges when the coalescent has n blocks (see (2.11) below); Φ(n) is the instantaneous rate of decrease of the number of blocks when there are n of them. Schweinsberg's criterion states that the Λ-coalescent comes down from infinity if and only if
\[
(1.3)\qquad \sum_{n\ge 2}\frac{1}{\Phi(n)} < \infty.
\]
Fundamental properties of Λ-coalescents and of the function Φ are recalled in Section 2.2.
Recall the definition of the splitting measure µ and, for any k ≥ 1, let $\bar\mu(k)$ be its tail, $\bar\mu(k) := \mu(\{k, k+1, \cdots, \infty\})$.

THEOREM 1.1. Let (Π(t), t ≥ 0) be a simple EFC process started from an exchangeable partition such that #Π(0) = ∞. Assume (1.3) and set
\[
(1.4)\qquad \theta_\star := \liminf_{n\to\infty}\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(k+n)} \in [0,\infty]
\quad\text{and}\quad
\theta^\star := \limsup_{n\to\infty}\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(k+n)} \in [0,\infty].
\]
If $\theta^\star < 1$ then the process comes down from infinity. If $\theta_\star > 1$ then the process stays infinite.
Note that if $\theta^\star = 0$, the process comes down from infinity and if $\theta_\star = \infty$, it stays infinite. When both parameters agree, namely $\theta_\star = \theta^\star$, we shall denote their common value simply by θ. A phase transition then occurs according to whether θ is smaller or larger than 1, between the regime where the process stays infinite and the regime where it visits partitions with a finite number of blocks. The parameters $\theta_\star$ and $\theta^\star$ measure how fragmentations and coagulations combine when there is a large number of blocks. Indeed, $n\bar\mu(k)$ is the rate at which the process, started from a partition with n blocks, jumps to a partition with more than n + k blocks, and Φ(n + k) is the rate of decrease when there are n + k blocks. A more precise heuristic for Theorem 1.1 is given in Section 3.
The question whether or not the EFC process can reach partitions with infinitely many blocks when it starts from a finite partition is not addressed in this work. Clearly µ(∞) > 0 is a sufficient condition for ∞ to be accessible for the process (#Π(t), t ≥ 0). The case µ(∞) = 0 is more involved and is studied in Foucart and Zhou [FZ20+a].
The proof of Theorem 1.1 is based on two different couplings of the partition-valued process (Π(t), t ≥ 0) and is differed in Section 3.
As a first application of Theorem 1.1, we study the case where the fragmentation can split blocks into infinitely many sub-blocks. Set λ := µ(∞), the rate at which any given block is shattered into infinitely many sub-blocks.

COROLLARY 1.2.
(1) If $c_k > 0$ then $\theta = 2\lambda/c_k$. In particular, if λ = 0 then θ = 0.
(2) If λ > 0 and $c_k = 0$, then θ = ∞.
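To see where the value 2λ/c_k comes from, here is a short consistency check (not part of the proof) in the special case Λ = c_kδ_0 and µ = λδ_∞, i.e. pure Kingman coagulation and fragmentation into infinitely many sub-blocks only, the case mentioned again in the heuristic below. There, (2.12) gives $\Phi(n) = c_k\binom{n}{2}$ and $\bar\mu(k) = \lambda$ for every k ≥ 1, so that for every n ≥ 2,
\[
\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(n+k)}
= n\lambda\sum_{j=n+1}^{\infty}\frac{2}{c_k\, j(j-1)}
= \frac{2\lambda}{c_k}\, n\sum_{j=n+1}^{\infty}\Big(\frac{1}{j-1}-\frac{1}{j}\Big)
= \frac{2\lambda}{c_k},
\]
independently of n, hence $\theta_\star = \theta^\star = \theta = 2\lambda/c_k$, in agreement with case (1).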
The corollary above ensures that when there are binary coagulations, namely $c_k > 0$, a fragmentation measure with no mass on the partitions with infinitely many blocks, i.e. µ(∞) = 0, will never prevent the EFC process from coming down from infinity. Moreover, when µ(∞) > 0, only coalescences with a Kingman component can make the process come down from infinity.

REMARK 1.3. The phase transition in (1), occurring at θ = 2λ/c_k, is similar to the one observed in the "fast" EFC process in [KPRS17, Theorem 1.1]. We shall see later, in Proposition 3.19, that when only binary coagulations are allowed, namely Λ = c_kδ_0, the process stays infinite in the critical case θ = 1.

COROLLARY 1.4.
(1.5) If $\sum_{k=2}^{\infty} \frac{k}{\Phi(k)}\,\bar\mu(k) < \infty$, then θ = 0.
REMARK 1.5. The series convergence in Corollary 1.4 is a tractable sufficient condition; however, it is far from being necessary. For instance, when $\Lambda = c_k\delta_0$, $\Phi(k) = c_k\binom{k}{2}$ for all k ≥ 2 and one can check that the convergence of the series in (1.5) coincides with a log-moment condition on µ. We know however by Corollary 1.2 that when $c_k > 0$, such a moment assumption is not necessary in order to have θ = 0. Note also that $(k/\Phi(k), k \ge 2)$ is always bounded, so that if µ admits a first moment, then the condition in (1.5) is fulfilled and the process comes down from infinity as soon as (1.3) holds. We mention that González-Casanova et al. [GPP21] have shown that if µ admits a first moment and $\frac{c_k}{2} > \sum_{k=1}^{\infty} k\mu(k)$, then the minimal process with jumps (i) and (ii) has ∞ as an entrance boundary.
The parameters $\theta_\star$ and $\theta^\star$, in their very definition (1.4), are rather intricate. We will provide in Section 4 sufficient conditions entailing either $\theta^\star = 0$, $\theta_\star = \infty$, or $\theta_\star, \theta^\star \in (0, \infty)$; see the forthcoming Lemma 4.1. This enables us in particular to find classes of EFC processes with $\theta_\star = \theta^\star = \theta \in (0, \infty)$.
PROPOSITION 1.6. Let d > 0 and λ > 0. Assume $\Phi(n) \underset{n\to\infty}{\sim} dn^{1+\beta}$ and $\bar\mu(n) \underset{n\to\infty}{\sim} \lambda n^{-\alpha}$, with α > 0, β ∈ (0, 1). We have the following three cases:
(1) if β < 1 − α then θ = ∞ and the process stays infinite,
(2) if β > 1 − α then θ = 0 and the process comes down from infinity,
(3) if β = 1 − α then $\Phi(n) \underset{n\to\infty}{\sim} dn^{2-\alpha}$ and one has
\[
\theta = \frac{\lambda}{d(1-\alpha)} \in (0, \infty).
\]
Examples of coagulation measures for which $\Phi(n) \underset{n\to\infty}{\sim} dn^{1+\beta}$ for some d > 0 and β ∈ (0, 1) are measures Λ of the Beta form
\[
(1.6)\qquad \nu_{Coag}(dx) = x^{-2}\Lambda(dx) = \frac{c}{\mathrm{Beta}(1-\beta, a)}\, x^{-\beta-2}(1-x)^{a-1}\,dx
\]
with a > 0 and c > 0. In this case, the constant d is $c\,\frac{\Gamma(a-\beta+1)}{\Gamma(a)\beta(\beta+1)}$ and, when β = 1 − α, the phase transition occurs at
\[
\theta = \frac{(2-\alpha)\lambda}{c}\,\frac{\Gamma(a)}{\Gamma(a+\alpha)} \in (0, \infty).
\]
Heuristically, by letting α tend to 0, one recovers θ = 2λ/c, the parameter of the phase transition in the case µ = λδ_∞ and c_k = c.
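As a purely illustrative numerical sanity check (not code from the paper, and with arbitrary parameter values), the Python sketch below evaluates Φ(n) for the Beta measure (1.6) through the closed form $\lambda_{n,k} = \frac{c}{\mathrm{Beta}(1-\beta,a)}\,\mathrm{Beta}(k-\beta-1,\, n-k+a)$ obtained from (2.11), compares Φ(n)/n^{1+β} with the constant d stated above, and checks that the two expressions of θ in the critical case β = 1 − α agree.

import numpy as np
from scipy.special import betaln, gammaln, gamma

# Beta-type coagulation measure as in (1.6) and splitting tail mu_bar(n) ~ lam * n^(-alpha),
# in the critical regime beta = 1 - alpha of Proposition 1.6(3).
a, beta, c, lam = 2.0, 0.4, 1.0, 0.3
alpha = 1.0 - beta
log_C = np.log(c) - betaln(1.0 - beta, a)        # log normalising constant of nu_Coag

def Phi(n):
    """Phi(n) = sum_{k=2}^n binom(n,k) (k-1) lambda_{n,k}, with
    lambda_{n,k} = C * Beta(k - beta - 1, n - k + a) (exact for the density in (1.6))."""
    k = np.arange(2, n + 1, dtype=float)
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_lam = log_C + betaln(k - beta - 1.0, n - k + a)
    return np.sum(np.exp(log_binom + log_lam) * (k - 1))

d = c * gamma(a - beta + 1.0) / (gamma(a) * beta * (beta + 1.0))
for n in (50, 200, 800, 3200):
    print(n, Phi(n) / n**(1.0 + beta))            # should approach d as n grows
print("d =", d)
print("theta =", lam / (d * (1.0 - alpha)),
      "=", (2.0 - alpha) * lam * gamma(a) / (c * gamma(a + alpha)))

The last line prints the two equivalent expressions of θ from Proposition 1.6(3); they coincide exactly, which is a small consistency check of the constant d.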
We also consider the case of EFC processes with "slower" coalescences.
PROPOSITION 1.8. Let d > 0 and λ > 0. Assume $\Phi(n) \underset{n\to\infty}{\sim} dn(\log n)^{\beta}$ with β > 1, and $\bar\mu(n) \underset{n\to\infty}{\sim} \lambda\,\frac{(\log n)^{\alpha}}{n}$ with α ∈ R. We have the following three cases:
(1) if β < 1 + α then θ = ∞ and the process stays infinite,
(2) if β > 1 + α then θ = 0 and the process comes down from infinity,
(3) if β = 1 + α then $\Phi(n) \underset{n\to\infty}{\sim} dn(\log n)^{1+\alpha}$ and one has
\[
\theta = \frac{\lambda}{d(1+\alpha)} \in (0, \infty).
\]
We will explain in Section 2.2 how to construct coagulation measures in order to have the equivalences Φ(n) ∼ n→∞ dn β+1 with β > 0 or Φ(n) ∼ n→∞ dn(log n) β for β ∈ (1, ∞).
The paper is organized as follows. In Section 2, we provide some background on exchangeable random partitions, recall the definition of an EFC process, and in particular explain its Poissonian construction. We focus then on simple exchangeable coalescents and simple EFC processes. We show in Section 2.3 that the number of blocks (#Π(t), t ≥ 0) has the same dynamics as explained in the introduction. Section 3 is devoted to the proof of Theorem 1.1. The proof is based on several couplings on the space of partitions. We show Corollary 1.2, Corollary 1.4, Proposition 1.6, Proposition 1.8 in Section 4.
2. Background on exchangeable fragmentation-coalescence processes.
2.1. Exchangeable random partitions and EFC processes. We refer to Bertoin's book [Ber06, Section 2.3 in Chapter 2] for background on exchangeable random partitions. For any n, m ∈ N such that n ≤ m, the integer interval between n and m is denoted by [|n, m|]. For any n ∈ N ∪ {∞}, we set [n] = [|1, n|] and call partition of [n], a collection π = {π 1 , π 2 , · · · } of subsets of N satisfying π i ∩ π j = ∅ when i = j and ∪ ∞ i=1 π i = [n]. The blocks of the partition π are listed in the order of their least element. Namely, if π j is the j-th block of π, then for any i ≤ j, min π i ≤ min π j . Recall that #π denotes the number of non-empty blocks of π. By convention, if #π = m < ∞ then we set π = (π 1 , · · · , π m , ∅, · · · ) where (∅, · · · ) denotes a countably infinite collection of empty sets. The space of partitions of [n] is denoted by P n . In particular, P ∞ is the set of partitions of [∞] = N. Any partition π ∈ P n can also be represented as an equivalence relation ∼ For any m ≥ n and π ∈ P m , we denote by π |[n] the restricted partition (π i ∩ [n], i ≥ 1). Note that for any partition π, (#π |[n] , n ≥ 1) increases towards #π as n goes to ∞. We endow P ∞ , with the compact metric
(2.7) $d(\pi, \pi') = \left(\max\{n \ge 1,\ \pi_{|[n]} = \pi'_{|[n]}\}\right)^{-1}$.
For any n ∈ N ∪ {∞}, set 0 [n] := {{1}, · · · , {n}, ∅, · · · } and 1 [n] := {[n], ∅, · · · } where we have denoted by ∅, · · · a countable collection of empty sets. We introduce now the operations of coagulation and fragmentation.
DEFINITION 2.1. Let n ∈ N ∪ {∞} and m ∈ N ∪ {∞}, π ∈ P n and π ∈ P m and k ∈ N.
• If #π ≤ m, a coagulation of π by π , denoted by Coag(π, π ), is a partition of [n] defined by
$\mathrm{Coag}(\pi, \pi') := \{\,\cup_{j \in \pi'_i}\, \pi_j\,;\ i \ge 1\}$.
• If #π ≥ k, a fragmentation of the k-th block of π by π , denoted by Frag(π, π , k), is the collection of sets
$\mathrm{Frag}(\pi, \pi', k) := \left(\{\pi_i;\ i \in [|1, \#\pi|]\setminus\{k\}\} \cup \{\pi_k \cap \pi'_j,\ j \ge 1\}\right)^{\downarrow}$
where the notation (. . . ) ↓ means that we are reindexing by their least element the collection of sets formed by the sub-blocks of π k according to π and all π i for i = k. For any π ∈ P n , π ∈ P m with m ≥ #π, the partition Coag(π, π ) is coarser than π. For any
k ≤ #π, when m ≥ max π k , π k ∩[m] = π k and ∪ i≥1 Frag(π, π , k) i = ∪ #π i=1 i =k π i ∪(π k ∩ [m]) =
[n], so that Frag(π, π , k) is also a partition of [n], which is finer than π. Lastly, for any partition π, for which it makes sense, one has Coag(π, 0 [n] ) = Coag(0 [n] , π) = π and Frag(π, 1 [n] , j) = π for any j ∈ [|1, #π|].
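The two operators of Definition 2.1 can be made concrete on small examples. Below is a minimal Python sketch (illustrative only, not code from the paper) of Coag and Frag acting on partitions of [n] represented as lists of blocks ordered by their least element; block indices are 1-based as in the paper.

def order_blocks(blocks):
    """Drop empty blocks and list the remaining ones in the order of their least element."""
    blocks = [sorted(b) for b in blocks if b]
    return sorted(blocks, key=lambda b: b[0])

def coag(pi, pi_prime):
    """Coag(pi, pi'): merge the blocks of pi whose indices lie in a same block of pi'."""
    merged = []
    for block_of_indices in pi_prime:
        merged.append([x for j in block_of_indices if j <= len(pi) for x in pi[j - 1]])
    return order_blocks(merged)

def frag(pi, pi_prime, k):
    """Frag(pi, pi', k): split the k-th block of pi (1-based) along the partition pi'."""
    target = set(pi[k - 1])
    pieces = [[x for x in block if x in target] for block in pi_prime]
    untouched = [b for i, b in enumerate(pi, start=1) if i != k]
    return order_blocks(untouched + pieces)

# Example on [6], pi = {{1,3,5},{2,4},{6}}:
pi = [[1, 3, 5], [2, 4], [6]]
print(coag(pi, [[1, 3], [2]]))                 # [[1, 3, 5, 6], [2, 4]]: blocks 1 and 3 merge
print(frag(pi, [[1, 2], [3, 4, 5, 6]], 1))     # [[1], [2, 4], [3, 5], [6]]: block 1 is split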
Let σ be a permutation of N with finite support. Namely there is n ∈ N such that for any m ≥ n, σ(m) = m. The permutation σ acts on P ∞ as follows: we define the partition σπ by the equivalence relation i ∼ σπ j if and only if σ(i) ∼ π σ(j). We now recall some elements about exchangeable random partitions. From now on, we shall work on the space P ∞ equipped with the Borelian σ-field generated by d.
DEFINITION 2.2. A random partition π of N is said to be exchangeable if for any permutation σ with finite support the random partitions σπ and π have the same law.
A generic example of exchangeable random partition is the so-called paint-box. Define the space of mass-partitions
\[
\mathcal{P}_m := \Big\{(s_1, s_2, \ldots);\ s_1 \ge s_2 \ge \cdots \ge 0,\ \sum_{i=1}^{\infty} s_i \le 1\Big\}.
\]
Let s ∈ P m and set
s 0 = 1 − ∞ i=1 s i . Partition the interval [0, 1 − s 0 ] into subintervals of length (s i , i ≥ 1). Let (U i , i ≥ 1) be an i.i.d sequence of uniform random variables over [0, 1].
The s-paintbox is the random partition π defined by letting i and j in the same block if and only if U i and U j fall into a same subinterval of [0, 1 − s 0 ]. When U i falls into the dust, namely [1 − s 0 , 1], the integer i forms a singleton block of the partition π (see Figure 1).
FIG 1. The paint-box associated with s: here the uniform variables give $\pi_{|[5]} = \{\{1, 3\}, \{2, 5\}, \{4\}\}$.
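A minimal Python sketch of the s-paint-box of Figure 1 (illustrative only, not code from the paper): given a mass-partition s with finitely many positive terms, it samples the restriction $\pi_{|[n]}$ by throwing n i.i.d. uniforms, points falling in the dust interval $[1-s_0, 1]$ becoming singletons.

import numpy as np

def paintbox(s, n, seed=None):
    """Sample the restriction to [n] of an s-paint-box partition."""
    rng = np.random.default_rng(seed)
    s = np.asarray(s, dtype=float)
    edges = np.concatenate(([0.0], np.cumsum(s)))   # subintervals of [0, 1 - s_0]
    u = rng.uniform(size=n)                          # U_1, ..., U_n
    labels = np.searchsorted(edges, u, side='right') - 1
    labels[u >= edges[-1]] = -1                      # dust: U_i falls in [1 - s_0, 1]
    blocks = {}
    for i, lab in enumerate(labels, start=1):
        key = lab if lab >= 0 else ('dust', i)       # each dust point is its own singleton
        blocks.setdefault(key, []).append(i)
    return sorted(blocks.values(), key=lambda b: b[0])

print(paintbox([0.4, 0.3], n=5))   # e.g. [[1, 3], [2, 5], [4]] (output is random)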
We denote by ρ s the law of the random partition π. Clearly this random partition is exchangeable, and its law ρ s does not depend on the locations of the subintervals of [0, 1], (including the subinterval [1 − s 0 , 1]). Reciprocally, Kingman has shown in [Kin78] that any exchangeable random partition has the same law as a mixture of paint-boxes. Namely if π is a random exchangeable partition then P(π ∈ ·) = Pm ρ s (·)ν(ds)
where ν is a probability measure over P m . The probability measure ν corresponds to the law of the ranked asymptotic frequencies of π: |π| ↓ = (|π i |, i ≥ 1) ↓ with
\[
|\pi_i| = \lim_{n\to\infty}\frac{\#(\pi_i \cap [n])}{n} \quad \text{a.s.}
\]
We refer to [Ber06, Proposition 2.8 and Theorem 2.1 page 100] for fundamental properties of random exchangeable partitions. We shall remind us the following generic properties of an s-paint box. If s 0 = 0, then π has no singleton block and each block has infinitely many elements. If moreover, s i > 0 for all i ≥ 1, then π has infinitely many blocks. Lastly, if there is k such that for all i ≥ k + 1, s i = 0, then the partition has at most k blocks. If s 0 > 0, infinitely many random variables U i will fall in [1 − s 0 , 1] almost surely and there are infinitely many singletons (the so-called dust). Last, recall that if π and π are two independent exchangeable random partitions then Coag(π, π ) is also exchangeable. Similarly, if one chooses uniformly at random a block k among those of π (in a loose sense, since the partition π might have infinitely many blocks) and splits it with π , the random partition Frag(π, π , k) is exchangeable. This preservation of the exchangeability property naturally leads to consider the following class of processes. DEFINITION 2.3 (Definition 1 in [Beres04]). An exchangeable Fragmentation-Coagulation process is a process (Π(t), t ≥ 0) valued in P ∞ satisfying the properties:
• For any t ≥ 0, Π(t) is exchangeable.
• For any n ∈ N, the process (Π |[n] (t), t ≥ 0) is a càdlàg Markov chain valued in P n evolving by fragmentation of one of its block or by coagulation.
More precisely, when the process Π |[n] is in state π ∈ P n , it can only jump to a partition π such that either π = Coag(π, π ) for some partition π or π = Frag(π, π , k) for some partition π and some k ≥ 1.
It is shown in [Beres04, Proposition 4] that any EFC process is characterized in law by four parameters (c k , c e , ν Coag , ν Disl ) where c k ≥ 0, c e ≥ 0 and ν Coag , ν Disl are positive σfinite measures on P m , respectively called the coagulation and dislocation measures. Those measures satisfy the following conditions: ν Coag ({0, 0, · · · }) = 0, ν Disl ({1, 0, · · · }) = 0 and
\[
\int_{\mathcal{P}_m}\sum_{i\ge 1}s_i^2\,\nu_{Coag}(ds) < \infty \quad\text{and}\quad \int_{\mathcal{P}_m}\Big(1 - \sum_{i\ge 1}s_i^2\Big)\,\nu_{Disl}(ds) < \infty.
\]
The coefficients c k and c e are called the Kingman coefficient and erosion coefficient.
We now briefly explain the construction of EFC processes. We refer the reader to [Beres04, Section 3.2] for more details.
Poisson construction. For every pair (i, j) ∈ N 2 with i < j, we write K(i, j) for the partition of N whose blocks consist of the pair {i, j} and the singletons {k} for k = i, j. Denote by # the counting measure over N. Consider two independent Poisson point processes PPP C = t>0 δ (t,π c ) and PPP F = t>0 δ (t,π f ,k) respectively on R + × P ∞ and R + × P ∞ × N with intensity dt ⊗ µ Coag (dπ) and dt ⊗ µ Frag (dπ) ⊗ #(dk) respectively. Let π be an exchangeable random partition independent of PPP C and PPP F . For any n ≥ 1, set Π n (0) = π |[n] and construct the process (Π n (t), t ≥ 0) as follows:
• Coalescence: at an atom (t, π c ) of PPP C such that π c
|[n] = 0 [n] : Π n (t) = Coag(Π n (t−), π c |[n] ).
• Fragmentation: at an atom (t, π f , k) of PPP F , such that π f |[n] = 1 [n] and k ≤ n − 1,
Π n (t) = Frag(Π n (t−), π f |[n]
, k). The sequence of Markov chains (Π n (t), t ≥ 0, n ≥ 1) is compatible in the sense that for any t ≥ 0 and any m ≥ n,
(Π m (t) |[n] , t ≥ 0) = (Π n (t), t ≥ 0) almost surely.
This compatibility property entails the existence of a process (Π(t), t ≥ 0), taking values in the uncountable state space P ∞ , such that almost surely for any n ≥ 1,
Π |[n] (t) = Π n (t) for all t ≥ 0.
Among other results, Berestycki [Beres04, Corollary 6, Theorem 8] has established that the process (Π(t), t ≥ 0) is a càdlàg Feller process satisfying Definition 2.3.
The jump rates of the EFC process (Π(t), t ≥ 0) are prescribed by those of its restrictions (Π |[n] , n ≥ 1). They are easily derived from the Poisson construction in terms of µ Coag and µ Frag as follows. Let n ∈ N and π ∈ P n . Let π c , π f be such that π c |[n] = 0 [n] and π f |[n] = 1 [n] . Let k ≤ #π.
• If π = 1 [n] , the process Π |[n] jumps from π to Coag(π, π c ) at rate:
µ Coag ({π ∈ P ∞ ; π |[n] = π c |[n] }). • If π = 0 [n]
, the process Π |[n] jumps from π to Frag(π, π f , k) at rate:
µ Frag ({π ∈ P ∞ ; π |[n] = π f |[n] })
. Note that the jump rates above do no depend on the partition π.
The main objective of this article is to study the block-counting process (#Π(t), t ≥ 0) and the possibility for the process to leave the boundary ∞. Two behaviors at ∞ are possible.
DEFINITION 2.4. Assume #Π(0) = ∞ a.s. We say that • the process stays infinite if ∀t ≥ 0; #Π(t) = ∞ almost surely,
• the process comes down from infinity if
∃t > 0; #Π(t) < ∞ almost surely.
Similarly as for pure coalescent processes, the following zero-one law holds.
LEMMA 2.5 (Zero-one law). Assume #Π(0) = ∞. Set τ ∞ := inf{t > 0; #Π(t) < ∞}. If µ Coag ({π, #π < ∞}) = 0, then either P(τ ∞ = 0) = 1 or P(τ ∞ = ∞) = 1.
REMARK 2.6. The assumption µ Coag ({π, #π < ∞}) = 0 ensures that there are no coagulation events merging infinitely many blocks into finitely many.
PROOF. The proof is similar as that given by Schweinsberg for pure exchangeable coalescents, see [Sch00a, Lemma 31, p39-40]. We provide some details as Lemma 2.5 will play a crucial role later. The random time τ ∞ is a stopping time for the completed natural filtration of (Π(t), t ≥ 0). Since (Π(t), t ≥ 0) is a Feller process, then by Blumenthal's zero-one law, one has P(τ ∞ = 0) ∈ {0, 1}. It remains to show that the event {τ ∞ ∈ (0, ∞)} has probability zero. Consider first the event
{0 < τ ∞ < ∞, #Π(τ ∞ −) = ∞ and #Π(τ ∞ ) < ∞}.
On this event, τ ∞ must be an atom of PPP C at which infinitely many blocks merge into finitely many. Since by assumption µ Coag ({π ∈ P ∞ , #π < ∞}) = 0 and all partitions atoms of PPP C have infinitely many blocks, the latter event has probability zero. We now show that the event
{τ ∞ ∈ (0, ∞), #Π(τ ∞ −) < ∞}
has also probability zero. For any b ∈ N, set λ b+1 := µ Coag ({π; π |[b+1] = 0 [b+1] }) < ∞, the rate at which a coalescence involving b + 1 blocks occurs. Fix b and n 1 < n 2 < · · · < n b in N. Consider the event
I(b, n 1 , · · · , n b ) := {τ ∞ ∈ (0, ∞), #Π(τ ∞ −) = b, min Π i (τ ∞ −) = n i , for all i ∈ [b]}.
Let $T_0 = 0$, and choose an integer $p_1 \ge 2$ such that $p_1 \not\sim_{\Pi(0)} n_i$ for all $i \in [b]$ (such a $p_1$ exists since #Π(0) = ∞). Let $T_1 := \inf\{t > T_0;\ p_1 \sim_{\Pi(t)} n_i \text{ for some } i \in [b]\}$; this is a coalescence time between the particular block containing $p_1$ and at most b other possible blocks. The rate of $T_1$ is thus at most $\lambda_{b+1}$. Since, by assumption, infinitely many blocks cannot coagulate into finitely many by a single jump, $T_1 < \tau_\infty$ a.s. and thus $\#\Pi(T_1) = \infty$ a.s. Recursively, we can choose an integer $p_m$ such that $p_m \not\sim_{\Pi(T_{m-1})} n_i$ for all $i \in [b]$ and define $T_m := \inf\{t > T_{m-1};\ p_m \sim_{\Pi(t)} n_i \text{ for some } i \in [b]\}$. By construction $T_{m-1} \le T_m < \tau_\infty$ a.s. for any m ≥ 1 and $T_m - T_{m-1}$ is stochastically greater than an exponential random variable with parameter $\lambda_{b+1}$. We deduce that on $I(b, n_1, \cdots, n_b)$,
\[
\tau_\infty \ge \sum_{m=1}^{\infty}(T_m - T_{m-1}) = \lim_{m\to\infty} T_m = \infty \quad \text{a.s.}
\]
Therefore, $I(b, n_1, \cdots, n_b)$ has probability zero, as well as the event
\[
\{\tau_\infty \in (0, \infty),\ \#\Pi(\tau_\infty-) < \infty\} = \bigcup_{b \in \mathbb{N},\ n_1 < n_2 < \cdots < n_b} I(b, n_1, \cdots, n_b).
\]
We conclude that P(τ ∞ ∈ (0, ∞)) = 0. The remark above indicates that it will be necessary to shed some light on the Markov property of the process (#Π(t), t ≥ 0), as well as on the regularity of its paths. We postpone this discussion for simple EFC processes to Section 2.3, see the forthcoming Proposition 2.11 and Remark 2.14.
2.2.
Exchangeable coalescent processes and their number of blocks. Pure exchangeable coalescent processes, namely EFC processes with µ Frag ≡ 0, have received a lot of attention. We refer to the seminal papers of Pitman [Pit99], Schweinsberg [Sch00a], Sagitov [Sag99] and Möhle and Sagitov [MS01]. See also Berestycki's book [Beres09, Chapters 3 and 4] for a recent account on fine properties of the so-called Λ-coalescents.
It is worth noticing that the number of blocks in any pure exchangeable coalescent has decreasing sample paths. If the coalescent comes down from infinity, in the sense of Definition 2.4, then it stays finite a.s. after it has comed down. This is a striking difference with the block counting process of an EFC process whose sample paths are not monotone.
Only sufficient conditions entailing coming down from infinity are known for a general coagulation measure ν Coag carried over P m (Ξ-coalescents), see Herriger and Möhle [MH12].
For the sake of simplicity, we focus now on coalescences in which there are no simultaneous multiple collisions and no coagulation of all blocks at once. Namely those satisfying
(2.10) c k ≥ 0, ν Coag ({s ∈ P m ; s 2 > 0}) = 0, ν Coag ({s ∈ P m ; s 1 = 1}) = 0.
Since ν Coag is carried over {s ∈ P m ; s 2 = 0}, the measure ν Coag can be considered as a measure on [0, 1] and the atoms of the Poisson point process PPP C have partitions with only one non-singleton block. When moreover c k = 0, it is often useful to describe the coalescent part of Section 2.1 as follows. Associate to each atom (t, π c ) of PPP C , the sequence of random variables (X k ) k≥1 defined by
$X_k = 1$ if $\{k\}$ is not a singleton block of $\pi^c$, and $X_k = 0$ otherwise. By assumption (2.10) and by definition, we have
\[
k \sim_{\pi^c} \ell \iff X_k = X_\ell = 1.
\]
Given |π c | ↓ = x ∈ (0, 1), (X k ) k≥1 is a sequence of i.i.d Bernoulli random variables with parameter x. The coalescence event occurring at time t:
Π |[n] (t) = Coag(Π |[n] (t−), π c |[n] )
can now be described as follows: all blocks of Π |[n] (t−) whose index k ≤ #Π |[n] (t−) satisfies X k = 1, merge together. We refer for instance to [Beres09, Theorem 3.2 and Corollary 3.1], in particular to see how to incorporate binary coalescences when c k > 0.
A coalescent process whose coagulation measure satisfied (2.10) is often called in the literature a Λ-coalescent. The prefix Λ stands for the finite measure Λ := c k δ 0 + x 2 ν Coag (dx), which characterizes the law of the process. More precisely, the process (Π(t), t ≥ 0) is characterized in law by the jump rates of its restrictions, namely by the sequence (λ n,k , 2 ≤ k ≤ n) n≥2 defined by λ n,k := µ Coag ({π; the non-singleton block of π |[n] has k elements})
$= c_k \mathbf{1}_{\{k=2\}} + \int_0^1 x^k (1-x)^{n-k}\,\nu_{Coag}(dx)$. (2.11)
As recalled in the Introduction, Schweinsberg [Sch00b] has established a necessary and sufficient condition for coming down from infinity of Λ-coalescents. Recall Φ(n) defined in (1.2). Some binomial calculations, see [Sch00b], yield the following other expression of Φ(n). For any n ≥ 2,
\[
(2.12)\qquad \Phi(n) = c_k\binom{n}{2} + \int_0^1 \left(nx - 1 + (1-x)^n\right)\nu_{Coag}(dx).
\]
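For the reader's convenience, the binomial calculation behind the equivalence of (1.2) and (2.12) can be spelled out. Writing $\lambda_{n,k}$ as in (2.11) and letting $B$ denote a binomial random variable with parameters $(n, x)$,
\[
\sum_{k=2}^{n}\binom{n}{k}(k-1)\,x^k(1-x)^{n-k}
= \mathbb{E}[B-1] - (0-1)\,\mathbb{P}(B=0) - 0\cdot\mathbb{P}(B=1)
= nx - 1 + (1-x)^n,
\]
and integrating this identity against $\nu_{Coag}(dx)$, then adding the Kingman term $c_k\binom{n}{2}$ coming from pairwise mergers, yields (2.12).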
It is not difficult to verify, from this identity, that (Φ(n)/n, n ≥ 1) is non-decreasing. One can also check analytically that $\Phi(n) \underset{n\to\infty}{\sim} \Psi(n)$, with Ψ the function of the Lévy-Khintchine form
\[
(2.13)\qquad \Psi(u) = \frac{c_k}{2}u^2 + \int_0^1\left(e^{-xu} - 1 + ux\right)\nu_{Coag}(dx).
\]
One other interest in using Ψ instead of Φ is that we can easily apply Tauberian theorems to find an equivalent of Ψ (and then of Φ) when the measure $\nu_{Coag}$ has some properties of regular variation. The following is a direct application of the Tauberian theorem (see e.g. [Ber96, Page 10]). For any x ∈ (0, 1], set $\bar\nu_{Coag}(x) := \nu_{Coag}([x, 1])$ and $\bar{\bar\nu}_{Coag}(x) := \int_x^1 \nu_{Coag}([v, 1])\,dv$. If $\bar{\bar\nu}_{Coag}(x) \underset{x\to 0}{\sim} \frac{x^{\rho-1}L(x)}{\Gamma(\rho)}$ for some ρ ∈ (0, ∞) and L a slowly varying function, then
\[
(2.14)\qquad \Psi(u) \underset{u\to\infty}{\sim} u^{2-\rho}L(1/u).
\]
For instance, assume that $\nu_{Coag}(dx) = f(x)dx$ with f such that $f(x)x^{2+\beta} \underset{x\to 0}{\longrightarrow} c > 0$ with β ∈ (0, 1); then $\bar{\bar\nu}_{Coag}(x) \underset{x\to 0}{\sim} \frac{c}{\beta(\beta+1)}x^{-\beta}$ and, by taking ρ = 1 − β and $L(x) = \frac{c\,\Gamma(1-\beta)}{\beta(\beta+1)}$ for all x ∈ [0, 1] in (2.14), one gets $\Psi(n) \underset{n\to\infty}{\sim} \frac{c\,\Gamma(1-\beta)}{\beta(\beta+1)}n^{\beta+1}$ and therefore $\Phi(n) \underset{n\to\infty}{\sim} dn^{\beta+1}$ with $d = \frac{c\,\Gamma(1-\beta)}{\beta(\beta+1)}$. Applying now the Tauberian theorem with ρ = 1 and $L(x) = c\log(1/x)^{\beta}$ gives that any coagulation measure $\nu_{Coag}$ for which $\bar{\bar\nu}_{Coag}(x) \underset{x\to 0}{\sim} c\log(1/x)^{\beta}$ satisfies $\Phi(n) \underset{n\to\infty}{\sim} cn(\log n)^{\beta}$.
The conditions that bear on the function Φ of Proposition 1.6 and Proposition 1.8 are therefore satisfied for the coagulation measures constructed above.
REMARK 2.8. In general, the Lévy-Khintchine function Ψ may have different upper and lower indices at ∞. We refer to Bertoin's lecture notes [Ber99,Chapter 5] for the definition of these indices and for seeing how to construct a Lévy measure ν Coag providing such a function Ψ. In this case, the parameters θ and θ might not coincide, but we shall not consider further this question here.
2.3. Simple EFC processes. Recall (2.10) and its meaning in terms of coalescence. We now introduce the so-called simple EFC processes. Since any block in an exchangeable random partition of N is either singleton or infinite, the condition (1.1) is equivalent to the assumption that µ Frag is supported by partitions with no singletons. DEFINITION 2.9. An EFC process is called simple if its coagulation measure satisfies (2.10) and if its fragmentation measure has finite total mass and is supported by partitions with no singletons.
REMARK 2.10. The name coined "simple" follows Bertoin's terminology for the Λcoalescents, see [Ber06,Section 4.4]. Beside the fact that there is no formation of dust, a simple EFC process has no simultaneous multiple coagulations and can only fragmentate a single block at a time.
According to (2.9), the first assumption on the fragmentation measure, µ Frag (P ∞ ) < ∞, is equivalent to c e = 0 (no erosion coefficient) and ν Disl (P m ) < ∞. The second assumption on its support (1.1) is equivalent to having a dislocation measure supported by ∪ k∈N∪{∞} P k m where for any k ∈ N ∪ {∞}
P k m := s ∈ P m ; s i > 0, ∀i ∈ [k + 1], k+1 i=1 s i = 1 .
By Kingman's paint-box representation, see Figure 1, if π f is an atom of PPP F then, on the event {|π f | ↓ ∈ P k m }, one has #π f = k + 1 almost surely. We stress that we allow the value k = ∞, so that fragmentation into infinitely many pieces are possible.
Since µ Frag is assumed to be finite, the process (#Π(t), t ≥ 0) restricted to N evolves as a classical continuous-time process with no instantaneous integer state. In order to take into account instantaneous coming down from infinity and possible explosion, we consider as state-space, the one-point compactification of N, which we denote byN. The restricted state-space N is thus endowed with the discrete topology, and for any m ≥ 1, {∞} ∪ [|1, m|] c forms a neighborhood of ∞.
PROPOSITION 2.11. Let (Π(t), t ≥ 0) be a simple EFC process. The block-counting pro- cess (#Π(t), t ≥ 0) is a right-continuous process valued inN. Moreover, at any time t > 0 such that #Π(t−) < ∞, lim h→0 + #Π(t − h) = #Π(t−) a.
s. The process (#Π(t), t < ζ) started from n and stopped at its first explosion time ζ := inf{t > 0; #Π(t−) = ∞ or #Π(t) = ∞} is Markov and its generator L acts on
(2.15) D := g :N → R; ∀n ∈N, k∈N∪{∞} |g(n + k)|µ(k) < ∞ ,
as follows: for any g ∈ D
$Lg := L_c g + L_f g$ with, for any $n \in \mathbb{N}$,
\[
(2.16)\qquad L_c g(n) := \sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}\left[g(n-k+1) - g(n)\right]
\]
and
\[
(2.17)\qquad L_f g(n) := n\sum_{k=1}^{\infty}\mu(k)\left[g(n+k) - g(n)\right] + n\mu(\infty)\left[g(\infty) - g(n)\right].
\]
REMARK 2.12. Since the splitting measure µ is finite, D contains all bounded functions onN. In particular, any continuous function onN, i.e. any function g defined on N ∪ {∞} such that lim n→∞ g(n) = g(∞) < ∞ belongs to D. Moreover, if µ(∞) = 0 then D contains all function g defined over N such that the series ∞ k=1 |g(n + k)|µ(k) converges for all n ∈ N.
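To make the dynamics encoded by (2.16)-(2.17) concrete, here is a small Gillespie-type simulation sketch in Python (illustrative only, not code from the paper). It simulates the block-counting chain started from n in the simplified case $\Lambda = c_k\delta_0$, so that the only coalescence move is a pairwise merger at total rate $\binom{n}{2}c_k$, with a finitely supported splitting measure µ; all numerical values are arbitrary.

import numpy as np

def simulate_block_count(n0, ck, mu, t_max, seed=0):
    """Simulate the chain with generator (2.16)-(2.17) when Lambda = ck*delta_0 and
    mu is supported on {1, ..., len(mu)} (mu[k-1] = mu({k})). Returns jump times and states."""
    rng = np.random.default_rng(seed)
    sizes = np.arange(1, len(mu) + 1)
    t, n = 0.0, n0
    times, states = [0.0], [n0]
    while t < t_max:
        down_rate = ck * n * (n - 1) / 2.0          # rate of n -> n - 1 (pairwise merger)
        up_rates = n * np.asarray(mu, dtype=float)  # rate of n -> n + k is n * mu({k})
        total = down_rate + up_rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        if rng.uniform() < down_rate / total:
            n -= 1                                  # a coalescence of two blocks
        else:
            k = rng.choice(sizes, p=up_rates / up_rates.sum())
            n += k                                  # a fragmentation creating k new blocks
        times.append(t); states.append(n)
    return np.array(times), np.array(states)

# Example: ck = 1 and mu = 0.3*delta_1 (each fragmentation adds one block), started from 50 blocks.
times, states = simulate_block_count(n0=50, ck=1.0, mu=[0.3], t_max=5.0)
print(states[:10])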
PROOF. By assumption, µ Frag (P ∞ ) < ∞, and the fragmentations occur at finite rate. Moreover starting from a partition π with finitely many blocks, there are only finitely many possibilities of coagulations, each with a finite rate. Partitions with finitely many blocks are therefore holding points for the process (Π(t), t ≥ 0). We now study the left-limits. Let t > 0 and assume that #Π(t−) < ∞. By the assumption (2.10), blocks cannot all coagulate at once. Since #Π(t−) < ∞, the partition reached by the process before the state Π(t−) has also finitely many blocks. Hence, almost surely, for h small enough Π(t − h) = Π(t−) and therefore, lim h→0 #Π(t − h) = #Π(t−). The right-continuity at any time t such that #Π(t) < ∞ is obtained along similar arguments, using the fact that the process stays an exponential time at the partition Π(t). Assume now
#Π(t) = ∞, since Π is right-continuous, d Π(t + h), Π(t) −→ h→0 0, and one has #Π(t + h) −→ h→0 #Π(t) = ∞, see Remark 2.7.
The form of the generator will be deduced from the Poisson construction. The part with negative jumps corresponds to the generator of the number of blocks in a Λ-coalescent started from a partition with n blocks. We refer for instance to [Ber06,page 203] and focus on the positive jumps. Let n ∈ N and assume #Π(0) = n. Let m ∈N. Consider an atom (t, π f , j) of PPP F . The atom (t, π f , j) is seen by
Π |[m] (t−) if π f |[m] = 1 [m] and j ≤ #Π |[m] (t−)
. By definition of the fragmentation operator, see Definition 2.1, we have
(2.18) #Π |[m] (t) = #Π |[m] (t−) − 1 + #{Π j (t−) ∩ π f i ∩ [m], 1 ≤ i ≤ #π f }.
Let m = ∞, first observe that if #Π(t−) = ∞ then #Π(t) = ∞ and t is not a jump time (by independence of the Poisson point processes it cannot be a coalescence time). Note that when #Π(t−) < ∞, each block of the partition Π(t−) is infinite a.s, otherwise the partition Π(t), just after the fragmentation, would contain blocks of finite size, which is not possible, since Π(t) is exchangeable and has no dust.
Notice that by the Poisson construction, Π j (t−) is independent of π f . Assume first #π f = k + 1 with k ∈ N. So that, for any i ∈ [k + 1], |π f i | > 0 almost surely. Let (U , ≥ 1) be a sequence of i.i.d uniform random variables on [0, 1], independent of Π j (t−). By the paintbox representation of π f , and since #Π j (t−) = ∞, one has for any
1 ≤ i ≤ #π f P Π j (t−) ∩ π f i = ∅ = P ∀ ∈ Π j (t−), U is not in a subinterval of length |π f i | = 0. Therefore #{Π j (t−) ∩ π f i , 1 ≤ i ≤ #π f } = #π f = k + 1
, and the right hand side of (2.18) equals n + k a.s. Recall the intensity of PPP F , µ Frag (dπ) ⊗ #(dj). When #Π(t−) = n, the process jumps to n + k at rate
nµ Frag ({π, #π f = k + 1}) = nν Disl (P k m ) = nµ(k)
.
Suppose now #π f = ∞. By assumption 2.3, |π f i | > 0 for all i ≥ 1, and the same argument shows that the r.h.s in (2.18) is infinite. In this case, the process jumps from n to ∞. Finally, one sees that the rate of jump from n to ∞ is given by nν Disl (P ∞ m ) where we recall P ∞ m := {s ∈ P m ; s i > 0 for all i ≥ 1 and ∞ i=1 s i = 1}. REMARK 2.14. Let t > 0. Conditionally on #Π(t) < ∞, if one denote by ζ • θ t the first explosion time (possibly infinite) of the process (#Π(t + s), s > 0), then Proposition 2.11 entails that (#Π(t + s), s < ζ • θ t ) is Markov and has the same law as (#Π(s), s < ζ) when started from n := #Π(t) < ∞. The Markov property of (#Π(t), t ≥ 0) is however more involved at the times t such that #Π(t) = ∞. Indeed, if by the Markov property of the EFC process, for any such time t, conditionally on F t (the natural filtration), (Π(s + t), s ≥ 0) has a law only depending on Π(t), a priori the law of the process (#Π(t + s), s ≥ 0) could depend on the shape of the blocks of Π(t). We shall circumvent this difficulty by only making use of the Markov properties of the processes (Π(t), t ≥ 0) and (#Π(t), t < ζ). We refer the reader interested in this question to Theorem 3.7 in Foucart and Zhou [FZ20+b], where it is established that (#Π(t), t ≥ 0) is actually a Feller Markov process onN, the one-point compactification of N. Heuristic. Recall the definition of Φ in (1.2), its meaning and the definitions of θ and θ in (1.4). Although typically the process (#Π(t), t ≥ 0) does not only move down one level at a time (except in the pure Kingman case), f (n) turns out to be a rather sharp measure of the time needed for the pure coalescent process to go from infinity to level n when n is large. Indeed, it is actually established in [BBL10] that the speed of coming down from infinity of the Λ-coalescent is precisely the inverse function of f . Keeping this in mind, the conditions θ < 1, θ > 1 for coming down from infinity and staying infinite, can be understood as follows: let Z be the number of new blocks after a fragmentation, this is a random variable with law µ(·)/µ(N). A simple application of Fubini's theorem ensures that for any n ∈ N,
\[
\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(n+k)} = n\mu(\mathbb{N})\,\mathbb{E}\left[\sum_{k=n+1}^{n+Z}\frac{1}{\Phi(k)}\right] = \frac{\mathbb{E}\left[f(n) - f(n+Z)\right]}{1/(\mu(\mathbb{N})\,n)}.
\]
Therefore, the condition $\theta^\star < 1$ (respectively $\theta_\star > 1$) can be seen as the condition under which, for all large enough n, the mean time for the Λ-coalescent to go from n + Z down to n is strictly smaller (respectively larger) than $1/(\mu(\mathbb{N})n)$, the mean arrival time of a typical fragmentation (whose size is Z) when there are n blocks.
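The Fubini step invoked above can be spelled out; assuming for simplicity µ(∞) = 0 (the general case is identical, with Z possibly infinite), and recalling that Z has law µ(·)/µ(ℕ),
\[
\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(n+k)}
= n\sum_{k=1}^{\infty}\frac{1}{\Phi(n+k)}\sum_{j\ge k}\mu(j)
= n\sum_{j=1}^{\infty}\mu(j)\sum_{k=1}^{j}\frac{1}{\Phi(n+k)}
= n\mu(\mathbb{N})\,\mathbb{E}\left[\sum_{i=n+1}^{n+Z}\frac{1}{\Phi(i)}\right].
\]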
The proof of Theorem 1.1 will not make use of the speed of coming down from infinity previously mentioned. We establish the first part of Theorem 1.1, namely that (Π(t), t ≥ 0) comes down from infinity when θ < 1, in Subsection 3.2. The second part of Theorem 1.1 is shown in Subsection 3.3. We start by preliminaries on the process (#Π(t), t ≥ 0).
Preliminaries.
We first explain why we can restrict our study to simple EFC processes starting from partitions with blocks of infinite size.
LEMMA 3.1. Let (Π(t), t ≥ 0) be a simple EFC process whose coalescence measure Λ satisfies 1 0 x −1 Λ(dx) = ∞. Then, the process (Π(t), t ≥ 0) started from any exchangeable partition Π(0), has no dust i.e. no singleton blocks, at any strictly positive time, almost surely.
PROOF. The assumption
1 0 x −1 Λ(dx) = ∞ ensures that the pure Λ-coalescent process (Π C (t), t ≥ 0) has no dust. Namely at any time t > 0, Π C (t) has no singleton blocks, even though Π C (0) has some. We refer to [Beres09, Theorem 3.5]. Note also that since the fragmentation measure has finite mass and is carried over partitions with no singleton blocks, the pure fragmentation process (Π F (t), t ≥ 0), started from a partition with no singleton blocks, has no singleton blocks at any time t ≥ 0.
We now follow the arguments of the proof of [Beres04, Proposition 16], see Section 5.2, page 794-795. Consider the partition-valued process (Π(t), t ≥ 0) constructed from PPP C and PPP F as follows: at any fragmentation time (t, π f , k), the partition Π(t) is obtained from Π(t−) as for a classical EFC process; and at any coalescence time (t, π c ), the partition Π(t) is obtained by merging the blocks Π i (t−) and Π j (t−) if and only if min Π i (t) and min Π j (t−) (instead of i and j) belong to the same block of π c . In other words, recalling the definition of the i.i.d sequence (X k , k ≥ 1), see Section 2.2, at any coalescence time, the block with index k takes part to the coalescence if and only if X k := X min Πk(t−) = 1.
By independence of Π |[n] (t−) and π c |[n]
, and exchangeability of π c |[n] , for any n ≥ 1, we see that (X k , 1 ≤ k ≤ n) has the same law as (X k , 1 ≤ k ≤ n). The process (Π(t), t ≥ 0) is thus a simple EFC process with fragmentation measure µ Frag and coagulation measure Λ.
Denote by (Π C (t), t ≥ 0) the pure coalescent process obtained from PPP C by the modified Poisson construction as above. By definition the singleton block {i} takes part to a coalescence event at time t in Π c if and only ifX j = 1 where j is such that Π C j (t−) = {i}. By definitionX j = X i , thus X i = 1. Similarly the singleton block {i} takes part to a coalescence event at time t in the EFC process Π if and only ifX j = 1 where j is such that {i} = Π j (t−). HenceX j = X i and X i = 1. The coalescences of singletons are therefore coupled for both processes (Π(t), t ≥ 0) and (Π C (t), t ≥ 0). Since there is no singletons in Π C , there is also none in Π.
Recall that we work under the assumptions (1.3) and Λ({1}) = 0. The pure Λ-coalescent comes down from infinity instantaneously and has no singleton blocks. By Lemma 3.1, for any t > 0, the blocks of the EFC process at time t, Π(t), are also of infinite size. Moreover Lemma 2.5 ensures that the process Π stays infinite if and only if the EFC (Π(t + s), s ≥ 0) stays infinite for all t > 0. We can therefore suppose without loss of generality that #Π(0) = ∞ and that for any i ∈ N, #Π i (0) = ∞ a.s. From now on, we work with such an initial partition.
We will study the process (#Π(t), t ≥ 0) from a monotone coupling (Π (n) (t), t ≥ 0) of (Π(t), t ≥ 0) satisfying #Π (n) (0) = n and #Π (n) (t) ≤ #Π (n+1) (t) for all t a.s. See the forthcoming Lemma 3.4. Note that it is not so obvious, at a first glance, that such a monotone coupling exists. Firstly observe that for any partitions π, π ∈ P ∞ and any k ≥ 1, the operators Coag(·, π) and Frag(·, π, k), see Definition 2.1, still make sense when acting on a collection of disjoint subsets that are indexed in the order of their least elements: if B = (B 1 , · · · , B n ) , for some n ∈N, are disjoint subsets, with for any i ≤ j, min B i ≤ min B j , then the following are well-defined
Coag(B, π c ) := ∪ j∈π c i B j , i ≥ 1 and Frag(B, π f , k) := B j , j = k, B k ∩ π f i , i ≥ 1 ↓ .
As for partitions, #B denotes the number of non-empty subsets of any collection B and by convention when #B < ∞, B i := ∅ for any i > #B. We also stress that the space of ordered collections of disjoint subsets of N, call it S, is embedded in the space of partitions P ∞ . Indeed, if B ∈ S, then π :
= {B 1 , · · · , B n , (∪ n i=1 B i ) c } ↓ ∈ P ∞ .
The following simple lemma will play a role in our first coupling argument.
LEMMA 3.2. Let π c ∈ P ∞ . Let B 1 and B 2 be two finite collections of disjoint subsets of N ordered by their least element. If #B 1 ≤ #B 2 then #Coag(B 1 , π c ) ≤ #Coag(B 2 , π c ).
PROOF. Assume by contradiction that #Coag(B 1 , π c ) > #Coag(B 2 , π c ). Let i := #Coag(B 1 , π c ). On the one hand, ∪ j∈π c i B 2 j = ∅ and thus for all j ∈ π c i , B 2 j = ∅. Therefore #B 2 < min π c i . On the other hand, by definition ∪ j∈π c i B 1 j = ∅ and there exists j ∈ π c i such that B 1 j = ∅. This entails #B 1 ≥ j ≥ min π c i and leads to the contradiction #B 1 > #B 2 .
Recall the Poisson point processes PPP F and PPP C . Let n ≥ 1. We now construct a process (Π (n) (t), t ≥ 0) valued in S, started from (Π 1 (0), · · · , Π n (0)), which follows all fragmentations and coagulations involving integers belonging to ∪ n i=1 Π i (0).
For any m ∈ N, set Π (n),m (0) := (Π 1 (0) ∩ [m], · · · , Π n (0) ∩ [m]) and
• if (t, π c ) is an atom of PPP C such that π c |[m] = 0 [m] , then Π (n),m (t) = Coag(Π (n),m (t−), π c |[m] ),
• if (t, π f , k) is an atom of PPP F such that π f |[m] = 1 [m] , and k ≤ m − 1, then
Π (n),m (t) = Frag(Π (n),m (t−), π f |[m] , k).
We verify now the compatibility property of the processes (Π (n),m (·), m ≥ 1) for fixed n. and whose index k satisfies k ≤ m. Recall that by convention for any partition π, if #π < ∞ and i > #π then π i = ∅.
Assume first that t
f,(m+1) 1 < t c,(m+1) 1 . When t < t f,(m+1) 1 , we have Π (n),m+1 |[m] (t−) = Π (n),m+1 |[m] (0) = Π (n),m (0) and if t = t f,(m+1) 1 then Π (n),m+1 (t) |[m] = Frag(Π (n),m+1 (t−), π f , k) |[m] = {Π (n),m+1 i (t−) ∩ [m], i = k, Π (n),m+1 k (t−) ∩ π f j ∩ [m], j ≥ 1} ↓ = {Π (n),m i (t−), i = k, Π (n),m k (t−) ∩ π f j , j ≥ 1} ↓ = Frag(Π (n),m (t−), π f , k) = Π (n),m (t)
.
Π (n),m+1 (t) |[m] = Coag(Π (n),m+1 (t−), π c ) |[m] = {Π (n),m+1 i (t−) ∩ [m], i = j, ∪ i∈π c j ∩[m+1] Π (n),m+1 i (t−) ∩ [m]} = {Π (n),m i (t−), i = k, ∪ i∈π c j ∩[m] Π (n),m i (t−)} = Coag(Π (n),m (t−), π c |[m] ) = Π (n),m (t)
where the third equality holds since by the ordering of the subsets Π The process (N (n) t , 0 ≤ t < ζ (n) ) has the same law as (#Π(t), 0 ≤ t < ζ) started from n. Moreover almost surely, for all n ∈ N and all t ≥ 0, N PROOF. By the Poisson construction of (Π (n) (t), t ≥ 0), at an atom (t,
π f , j) of PPP F , if j ≤ #Π (n) (t−) = N (n) t− , the process (N (n) t , t ≥ 0) jumps from N (n) t− to (3.19) # Frag(Π (n) (t−), π f , j) = N (n) t− − 1 + #{Π (n) j (t−) ∩ π f , ≤ #π f }.
Since Π 1 (0), · · · , Π n (0) are assumed to be infinite, and the fragmentation measure is supported by partitions with no singletons, the set Π
1 (t−), · · · , Π (n) m (t−)}, π c ) = #{∪ i∈π c j ∩[m] Π (n) i (t−), Π (n) (t−), / ∈ π c j } and we see that N (n) t = N (n) t− − k + 1 with k := #(π c j ∩ [m])
. This occurs at rate m k λ m,k . We deduce that (N (n) t , 0 ≤ t < ζ (n) ) is Markov and has the same dynamics as (#Π(t), 0 ≤ t < ζ) started from n, stopped at its first explosion time.
Recall N (n) t = #Π (n) (t) for all t ≥ 0 and n ∈ N. We now show that for all n ≥ 1 and all
t ≥ 0, N (n+1) t ≥ N (n) t a.s. Let m ∈ N. We check first that #Π (n) |[m] (t) ≤ #Π (n+1) (t) for any t ≥ 0 a.s. Let t > 0. If #Π (n) |[m] (t−) ≤ #Π (n+1) (t−) then (i) if (t, π c )
is an atom of PPP C , by applying Lemma 3.2, we have
#Π (n) |[m] (t) = #Coal(Π (n) |[m] (t−), π c ) ≤ #Coal(Π (n+1) (t−), π c ) = #Π (n+1) (t), (ii) if (t, π f , j) is an atom of PPP F and further j ≤ #Π (n) |[m] (t−) then #Π (n) |[m] (t) = # Frag(Π (n) |[m] (t−), π f , j) = #Π (n) |[m] (t−) − 1 + #{Π (n) j (t−) ∩ π f i ∩ [m], i ≥ 1} ≤ #Π (n+1) |[m] (t−) − 1 + #π f ≤ #Π (n+1) (t). If j ≥ #Π (n) |[m] (t−) + 1, then #Π (n) |[m] (t) = #Π (n) |[m] (t−) ≤ #Π (n+1) (t−) ≤ #Π (n+1) (t).
Clearly #Π
(n+1) t , t ≥ 0), we get that #Π (n) |[m] (t) ≤ #Π (n+1) (t) = N (n+1) t for all t ≤ ζ (n+1)
a.s. Assume by contradiction that #Π (n+1) (t) < #Π 4). In all this section, we assume that θ < 1.
We outline here the scheme of the proof. Denote by τ (n) n0 and ζ (n) , respectively the first passage time below n 0 and the first explosion time of (N (n) t , t ≥ 0). We obtain in Lemma 3.5, an upper bound of the mean of τ (n) n0 ∧ ζ (n) , which is uniform in the initial value n. We shall also see in the proof of Lemma 3.5 from where the parameter θ comes from. Next, we define in Lemma 3.8, a sequence of processes (#Π m (t), t ≥ 0) m≥1 approaching from below (#Π(t), t ≥ 0). Those processes are not explosive and have the same dynamics as our initial process for a certain splitting measure µ m . We establish in Lemma 3.11, using the calculations in Lemma 3.5, that these processes are coming down from infinity, and get a bound for the mean of their first passage time below a certain state n 0 . The latter being uniform in m, we will be able to conclude that the process (#Π(t), t ≥ 0) itself goes below the level n 0 a.s. LEMMA 3.5. Let n ∈ N and let ζ (n) be the first explosion time of (N
(3.20) $\mathbb{E}\big[\tau^{(n)}_{n_0} \wedge \zeta^{(n)}\big] \le \frac{2}{1-\theta}\sum_{k=2}^{\infty}\frac{1}{\Phi(k)}$.
REMARK 3.6. The right-hand side in (3.20) is bounded uniformly in the initial state n.
PROOF. Recall D in (2.15) and that we work under the assumption (1.3). Define the function g onN by g(1) = 0 and g(n) = n j=2 1 Φ(j) when 2 ≤ n ≤ ∞. Note that g is bounded and thus belongs to D. Moreover g(n) −→ n→∞ g(∞) := ∞ j=2 1 Φ(j) < ∞. On the one hand we have for any n ≥ 2, g(n − k + 1) − g(n) = − n j=n−k+2 1 Φ(j) , and since Φ is non-decreasing, for all 2 ≤ j ≤ n, 1/Φ(j) ≥ 1/Φ(n). Therefore
(3.21) $g(n-k+1) - g(n) \le -\frac{k-1}{\Phi(n)}$.
On the other hand we have for all n ≥ 1 and k ∈ N, (3.22)
$g(n+k) - g(n) = \sum_{j=n+1}^{n+k}\frac{1}{\Phi(j)} = \sum_{j=n+1}^{\infty}\mathbf{1}_{\{j\le n+k\}}\frac{1}{\Phi(j)}$ and $g(\infty) - g(n) = \sum_{j=n+1}^{\infty}\frac{1}{\Phi(j)}$.
Plugging (3.21) and (3.22) in the generator L := L c + L f defined in Proposition 2.11 yields
\[
Lg(n) \le -\frac{1}{\Phi(n)}\underbrace{\sum_{k=2}^{n}\binom{n}{k}\lambda_{n,k}(k-1)}_{=\Phi(n)} + n\sum_{j=n+1}^{\infty}\frac{1}{\Phi(j)}\left(\sum_{k=j-n}^{\infty}\mu(k) + \mu(\infty)\right). \qquad (3.23)
\]
Hence, setting for any k ∈ N,μ(k) := µ({k, k + 1, · · · , ∞}), one has for all n ∈ N
(3.24) $Lg(n) \le -1 + \sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(k+n)}$.
By assumption, $\theta := \limsup_{n\to\infty}\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(k+n)} < 1$. Let $\varepsilon := \frac{1-\theta}{2} > 0$. There exists a large enough integer $n_0$ such that for all $n \ge n_0$, $\sum_{k=1}^{\infty}\frac{n\bar\mu(k)}{\Phi(k+n)} \le \theta + \varepsilon = \frac{\theta+1}{2}$ and therefore $Lg(n) \le -1 + \frac{\theta+1}{2} = \frac{\theta-1}{2} < 0$. For any $N > n$, let $\tau_N^+ := \inf\{t \ge 0;\ N^{(n)}_t > N\}$; the stopped process $(N^{(n)}_{t\wedge\tau_N^+}, t \ge 0)$ has generator $L_N g(n) := Lg(n)\mathbf{1}_{\{n\le N\}}$. Since $g$ and $L_N g$ are bounded, by Dynkin's formula for continuous-time Markov chains, for any fixed $k > 0$ and any $n \ge n_0$,
\[
\mathbb{E}\Big[g\big(N^{(n)}_{\tau^{(n)}_{n_0}\wedge k\wedge\tau_N^+}\big)\Big] = g(n) + \mathbb{E}\left[\int_0^{\tau^{(n)}_{n_0}\wedge k\wedge\tau_N^+} Lg(N^{(n)}_s)\,ds\right] \le g(n) + \frac{\theta-1}{2}\,\mathbb{E}\big[\tau^{(n)}_{n_0}\wedge k\wedge\tau_N^+\big].
\]
Hence
\[
(3.25)\qquad \mathbb{E}\big[\tau^{(n)}_{n_0}\wedge k\wedge\tau_N^+\big] \le \frac{2}{1-\theta}\Big(g(n) - \mathbb{E}\Big[g\big(N^{(n)}_{\tau^{(n)}_{n_0}\wedge k\wedge\tau_N^+}\big)\Big]\Big) \le \frac{2}{1-\theta}\,g(n).
\]
For any n ≥ n 0 , since τ + N increases towards the explosion time of (N (n) t , t ≥ 0), ζ (n) as N goes to ∞ almost surely, we obtain by letting k to ∞ and N to ∞ in (3.25)
(3.26) $\mathbb{E}\big[\tau^{(n)}_{n_0} \wedge \zeta^{(n)}\big] \le \frac{2}{1-\theta}\sum_{k=2}^{n}\frac{1}{\Phi(k)} \le \frac{2}{1-\theta}\sum_{k=2}^{\infty}\frac{1}{\Phi(k)} < \infty$.
We now build a monotone coupling on the space of partitions. The main idea is to introduce a partition-valued process (Π m (t), t ≥ 0), in which every fragmentations creating more than m + 1 new blocks in the process (Π(t), t ≥ 0), are creating at most m new blocks in (Π m (t), t ≥ 0). For any m ∈ N, define the map r m : π → (π 1 , ..., π m , ∪ ∞ i=m+1 π i ). By definition r m maps P ∞ into partitions with at most m+1 blocks. Set µ m Frag := µ Frag •r −1 m . Let n ≥ 1. We call respectively (Π m (t), t ≥ 0) and (Π m,(n) (t), t ≥ 0), the P ∞ -valued process, started from Π(0), and the S-valued Markov process, started from (Π 1 (0), · · · , Π n (0)), that are constructed in a Poisson way, as (Π(t), t ≥ 0) and (Π (n) (t), t ≥ 0), but with PPP C and the image of PPP F by r m . The hypothesis θ < 1 is not needed for the next two Lemmas 3.7 and 3.8 to hold true. They are included in this section as they will be used only for the coming down from infinity. LEMMA 3.7. For any m ≥ 1, (Π m (t), t ≥ 0) and (Π(t), t ≥ 0) jump simultaneously.
PROOF. By construction, the atoms of coalescence are exactly those of PPP C and those of fragmentation are the images of the atoms of PPP F by r m , that is to say,
(3.27) r m (π f ) |[n] = (π f 1 ∩ [n], ..., π f m ∩ [n], ∪ ∞ i=m+1 π f i ∩ [n])
, for any n ∈N.
On the one hand, if #π f ≤ m then r m (π f ) = π f and #r m (π f ) = #π f . On the other, if #π f ≥ m + 1, then #r m (π f ) = m + 1. One also easily checks from (3.27) that for any m ∈ N and any n ∈ N, r m (π f ) |[n] = 1 [n] if and only if π f |[n] = 1 [n] . Therefore, the processes (Π m (t), t ≥ 0) and (Π(t), t ≥ 0) jump simultaneously.
Recall μ̄(m) = μ({m, ..., ∞}) and denote by L_c the coalescent part of the generator L defined in (2.16).

LEMMA 3.8. For any n ∈ N ∪ {∞} and m ≥ 1, set (N^{(n)}_m(t), t ≥ 0) := (#Π^{m,(n)}(t), t ≥ 0). The process (N^{(n)}_m(t), t ≥ 0) is a non-explosive Markov process started from n with generator the operator L_m acting on any function g : N → R_+ by
L_m g(ℓ) := L_c g(ℓ) + Σ_{k=1}^{m} μ_m(k)(g(ℓ+k) − g(ℓ)), for all ℓ ∈ N,
where μ_m(k) := μ(k) if k ≤ m−1 and μ_m(m) := μ̄(m). Moreover, almost surely for any n, m ∈ N and all t ≥ 0, N^{(n)}_m(t) ≤ N^{(n)}_{m+1}(t), #Π^m(t) ≤ #Π^{m+1}(t) and lim_{m→∞} #Π^m(t) = #Π(t).

REMARK 3.9. The process (N^{(∞)}_m(t), t ≥ 0) := (#Π^m(t), t ≥ 0) does not explode and has the same law (since it has the same generator) as the block-counting process of any simple EFC process whose coagulation measure is Λ and whose dislocation measure satisfies ν_Disl(P^m_k) = μ_m(k) for all k ∈ [m].
PROOF. Since by assumption μ_Frag is supported by partitions whose blocks have infinite size, the blocks of any atom π^f of PPP_F are of infinite size, and the partition r_m(π^f) thus also has blocks of infinite size. Similarly as in Lemma 3.4, replacing π^f by r_m(π^f) in (3.19), this guarantees that the process (N^{(n)}_m(t), t ≥ 0) is Markov. One also plainly checks that it has the same negative jump rates as (N^{(n)}_t, t ≥ 0). At any fragmentation event, the block of Π^m that is involved can be split into at most m+1 sub-blocks, so the positive jumps are driven by the measure μ_m defined on {1, ..., m} by μ_m(k) := μ(k) if k ≤ m−1 and μ_m(m) := μ̄(m). In particular, since the process (N^{(n)}_m(t), t ≥ 0) stays below a discrete branching process whose reproduction measure μ_m has finite support, it cannot explode. Lemma 3.2 entails that for any fixed m ∈ N, N^{(n)}_m(t) ≤ N^{(n+1)}_m(t) for any t ≥ 0 and any n ∈ N a.s. We now justify that for all n ∈ N and all m ∈ N,
(3.28) N^{(n)}_m(t) ≤ N^{(n)}_{m+1}(t), for all t ≥ 0.
Both processes start from n and, by Lemma 3.7, make a positive jump at the same atoms of time of PPP_F. Let t be such an atom of time, with fragmentation partition π^f. The jump of N^{(n)}_m at time t has size #r_m(π^f) − 1, which is at most #r_{m+1}(π^f) − 1, the size of the jump of N^{(n)}_{m+1}. On the other hand, at any atom of coalescence (t, π^c), if N^{(n)}_m(t−) ≤ N^{(n)}_{m+1}(t−), then
#Coag(Π^{m,(n)}(t−), π^c) ≤ #Coag(Π^{m+1,(n)}(t−), π^c) and N^{(n)}_m(t) ≤ N^{(n)}_{m+1}(t).
At all jumps the order is preserved, and (3.28) is true for all t almost surely. One can check, similarly as in the proof of Lemma 3.4, that N^{(n)}_m(t) increases towards #Π^m(t) for any t ≥ 0, as n goes to ∞ almost surely. Letting n go to ∞ in (3.28) also provides #Π^m(t) ≤ #Π^{m+1}(t) for any t ≥ 0 and any m ≥ 1.
Last, we show now that lim_{m→∞} #Π^m(t) = #Π(t). Plainly, by construction, #Π^m(t) ≤ #Π(t) for any t ≥ 0 a.s. By definition of the map r_m, one can check that for any partition π and any n ≥ 1, if m ≥ #π_{|[n]} then r_m(π)_{|[n]} = π_{|[n]}. Since there are only finitely many atoms of PPP_C and PPP_F on the time interval [0, t] that are seen by the process Π^m_{|[n]}, one can define
m_n(t) := max{#π^f_{|[n]} : π^f atom of PPP_F in [0, t] such that π^f_{|[n]} ≠ 1_{[n]}} < ∞.
By construction, for any t ≥ 0, Π^{m_n(t)}_{|[n]}(t) = Π_{|[n]}(t) almost surely, and thus #Π_{|[n]}(t) = #Π^{m_n(t)}_{|[n]}(t) ≤ #Π^{m_n(t)}(t) for any t a.s. By monotonicity, #Π^{m_n(t)}(t) ≤ #Π^∞(t) := lim_{m→∞} #Π^m(t) for any t ≥ 0, so that #Π_{|[n]}(t) ≤ #Π^∞(t) a.s. for any t ≥ 0. Letting n go to ∞ in this inequality yields #Π(t) ≤ #Π^∞(t) a.s., which entails
(3.29) #Π(t) = #Π^∞(t) for any t a.s.
For any m, n_0 ∈ N, consider the first entrance times τ^{(n)}_{n_0,m} := inf{t > 0; N^{(n)}_m(t) ≤ n_0} and τ_{n_0,m} := inf{t > 0; #Π^m(t) ≤ n_0}. We study their limits as n and m go to infinity respectively.

LEMMA 3.10. For any n_0 ∈ N and any m ∈ N, lim_{n→∞} τ^{(n)}_{n_0,m} = τ_{n_0,m} a.s. If moreover #Π^m(s) < ∞ for any m ∈ N and any s > 0, then lim_{m→∞} τ_{n_0,m} = τ_{n_0} a.s.

PROOF. Let n_0 ∈ N. Recall that for all t ≥ 0 and all m ∈ N, N^{(n)}_m(t) increases towards #Π^m(t) as n goes to ∞ a.s., so that lim_{n→∞} τ^{(n)}_{n_0,m} ≤ τ_{n_0,m}. Assume lim_{n→∞} τ^{(n)}_{n_0,m} < t < τ_{n_0,m} for some t; then there exists a time s_n ∈ (0, t) such that N^{(n)}_m(s_n) ≤ n_0 a.s. The sequence (s_n)_{n≥1} is bounded by t and thus converges, up to a subsequence, to some s ∈ [0, t]. Since the process (N^{(n)}_m(u), u > 0) lies in N and has piecewise constant paths, there exists η_n > 0 such that N^{(n)}_m(u) = N^{(n)}_m(s) for any u ∈ (s − η_n, s + η_n) ∩ [0, t]. Let (s_{ϕ(n)}, n ≥ 1) be a subsequence such that for all n ≥ 1, s_{ϕ(n)} ∈ (s − η_n, s + η_n) ∩ [0, t]. Then, since ϕ(n) ≥ n,
N^{(n)}_m(s) = N^{(n)}_m(s_{ϕ(n)}) ≤ N^{(ϕ(n))}_m(s_{ϕ(n)}) ≤ n_0.
Letting n go to ∞ gives #Π^m(s) ≤ n_0, thus τ_{n_0,m} ≤ s, which contradicts the fact that τ_{n_0,m} > t. Hence lim_{n→∞} τ^{(n)}_{n_0,m} = τ_{n_0,m} a.s. The convergence lim_{m→∞} τ_{n_0,m} = τ_{n_0} will follow from similar arguments. Note first that lim_{m→∞} τ_{n_0,m} ≤ τ_{n_0}. As previously, assume that lim_{m→∞} τ_{n_0,m} < t < τ_{n_0}. One can find a convergent sequence (s_m)_{m≥1} such that 0 < τ_{n_0,2} ≤ s_m < t for any m ∈ N and #Π^m(s_m) ≤ n_0. Let s := lim_{m→∞} s_m; we have s ∈ [τ_{n_0,2}, t] and, by the assumption, #Π^m(s) < ∞. Therefore, there exists η_m > 0 such that #Π^m(u) = #Π^m(s) for all u ∈ (s − η_m, s + η_m). By choosing a subsequence (s_{ϕ(m)}, m ≥ 1) such that s_{ϕ(m)} ∈ (s − η_m, s + η_m) ∩ [τ_{n_0,2}, t], we see that
#Π^m(s) = #Π^m(s_{ϕ(m)}) ≤ #Π^{ϕ(m)}(s_{ϕ(m)}) ≤ n_0.
Recall Lemma 3.8 and that #Π^m(s) → #Π(s) a.s. as m → ∞. We conclude as before by the contradiction τ_{n_0} ≤ s and τ_{n_0} > t.
We are now ready to finish the proof. Recall θ := lim sup_{n→∞} Σ_{k=1}^{∞} n μ̄(k)/Φ(n+k) and the assumption θ < 1.
LEMMA 3.11. There exists a large enough integer n_0 such that
E(τ_{n_0}) ≤ (2/(1−θ)) Σ_{k=2}^{∞} 1/Φ(k) < ∞.
PROOF. As in the proof of Lemma 3.5, consider n_0 large enough such that for all n ≥ n_0, Σ_{k=1}^{∞} n μ̄(k)/Φ(k+n) ≤ θ + (1−θ)/2 = (θ+1)/2. According to Lemma 3.8, (N^{(n)}_m(t), t ≥ 0) has for generator L_m. Equation (3.23) applied to the process (N^{(n)}_m(t), t ≥ 0) gives for all n ≥ n_0
L_m g(n) ≤ −1 + Σ_{k=1}^{∞} n μ̄_m(k)/Φ(n+k) = −1 + Σ_{k=1}^{m} n μ̄(k)/Φ(n+k) ≤ −1 + Σ_{k=1}^{∞} n μ̄(k)/Φ(n+k) ≤ (θ−1)/2 < 0.
Therefore, for any m ≥ 1 and n ≥ n_0, E[τ^{(n)}_{n_0,m} ∧ ζ^{(n)}_m] ≤ (2/(1−θ)) Σ_{k=2}^{∞} 1/Φ(k), with ζ^{(n)}_m := inf{t > 0; N^{(n)}_m(t−) = ∞}. By Lemma 3.8, the process (N^{(n)}_m(t), t ≥ 0) does not explode and therefore ζ^{(n)}_m = ∞ a.s. Hence, we get
E[τ^{(n)}_{n_0,m}] ≤ (2/(1−θ)) Σ_{k=2}^{∞} 1/Φ(k).
By Lemma 3.10, τ^{(n)}_{n_0,m} increases towards τ_{n_0,m} := inf{t ≥ 0; #Π^m(t) ≤ n_0} as n goes to ∞. By monotone convergence, we see that for any m ≥ 1,
(3.30) E[τ_{n_0,m}] ≤ (2/(1−θ)) Σ_{k=2}^{∞} 1/Φ(k).
Therefore the process (#Π^m(t), t ≥ 0) comes down from infinity and, since it does not explode, #Π^m(s) < ∞ for any m ≥ 1 and all s > 0 a.s. By Lemma 3.10, lim_{m→∞} τ_{n_0,m} = τ_{n_0} a.s., and we obtain by letting m go to ∞ in (3.30)
E[τ_{n_0}] ≤ (2/(1−θ)) Σ_{k=2}^{∞} 1/Φ(k),
where we recall τ_{n_0} = inf{t ≥ 0; #Π(t) ≤ n_0}. This completes the proof.
REMARK 3.12. If one drops the assumption that the fragmentation measure is supported by partitions with no singleton blocks, then the process (#Π^m(t), t ≥ 0) defined in Lemma 3.8 is not Markov. Indeed, at an atom (t, π^f, k) of fragmentation, the number of blocks in Π^m evolves as follows:
#Π^m(t) − #Π^m(t−) = −1 + #{1 ≤ j ≤ #r_m(π^f) : Π^m_k(t−) ∩ r_m(π^f)_j ≠ ∅}.
If π^f has singletons, then the partition r_m(π^f) would have (finitely many) singletons with positive probability. Thus, on the event {r_m(π^f)_j = {i} and i ∉ Π^m_k(t−)}, the set Π^m_k(t−) ∩ r_m(π^f)_j is empty and the jump size of #Π^m is not #r_m(π^f) − 1 but depends on the constituent elements of the blocks of Π^m(t−).
REMARK 3.13. The arguments involving the non-explosive processes (N^{(n)}_m(t), t ≥ 0, n ≥ 1), approaching (N^{(n)}_t, t ≥ 0) from below in a monotone way, are reminiscent of those used in [Fou19, Section 7] for constructing logistic continuous-state branching processes reflected at ∞.

3.3. Staying infinite. Recall θ in (1.4) and the assumption (1.3): Σ_{k=2}^{∞} 1/Φ(k) < ∞. In all this section, we assume that θ > 1. We shall establish the second part of Theorem 1.1, namely that the process (Π(t), t ≥ 0) stays infinite. We argue by contradiction and assume from now on that the process does come down from infinity.

We outline here the scheme of the proof. We shall see that when the process comes down from infinity, the first jump time at which the process loses a proportion p ∈ (0, 1) of its blocks is strictly positive a.s. (Lemma 3.15). Next, we make use of the function f(n) := Σ_{j=n+1}^{∞} 1/Φ(j) and find a martingale argument entailing that before this jump time the process actually has infinitely many blocks (Lemmas 3.17 and 3.18). The contradiction will lie in the fact that the coming down from infinity is instantaneous (Lemma 2.5).
We need first the following lemmas (lifted from [Fou11, Lemmas 6.2 and 6.3]).
LEMMA 3.14. Let p ∈ (0, 1). There exists x_p ∈ (0, 1) such that if x ∈ (0, x_p) and (X_k, k ≥ 1) is a sequence of i.i.d. Bernoulli random variables with parameter x, then for any n_0 ≥ 1 there is a positive constant C_{p,n_0} such that
P( there exists n ≥ n_0 such that Σ_{k=1}^{n} X_k ≥ np ) ≤ C_{p,n_0} x^{n_0 p}.

PROOF. By the Markov inequality, for any t > 0,
P( Σ_{k=1}^{n} X_k ≥ np ) ≤ e^{−npt} E[e^{t Σ_{k=1}^{n} X_k}] = e^{−n(pt − log(e^t x + 1 − x))}.
Choosing t = log(1/x), we get the bound P( Σ_{k=1}^{n} X_k ≥ np ) ≤ e^{−n h(x)}, with h(x) := p log(1/x) − log(2 − x). In particular, since h(x) → ∞ as x → 0, there exists x_p ∈ (0, 1) such that h(x) > 0 for any x ∈ (0, x_p), and we get
P( ∃ n ≥ n_0, Σ_{k=1}^{n} X_k ≥ np ) ≤ e^{−n_0 h(x)}/(1 − e^{−h(x)}) ≤ C_{p,n_0} x^{n_0 p},
with C_{p,n_0} = 2^{n_0} sup_{x ∈ (0,x_p)} 1/(1 − e^{−h(x)}) ∈ (0, ∞).
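The exponential bound used in this proof is easy to check numerically; the short Python sketch below compares the empirical frequency of {Σ_{k≤n} X_k ≥ np} with e^{−nh(x)} for one arbitrary illustrative choice of the parameters (p = 0.5, x = 0.3, n = 20), which are not values appearing in the text.

# Monte Carlo check of P(sum X_k >= n*p) <= exp(-n*h(x)),
# h(x) = p*log(1/x) - log(2 - x), for the arbitrary choice p = 0.5, x = 0.3, n = 20.
import math, random

random.seed(0)
p, x, n, trials = 0.5, 0.3, 20, 100000
hits = sum(
    1 for _ in range(trials)
    if sum(random.random() < x for _ in range(n)) >= n * p
)
empirical = hits / trials
bound = math.exp(-n * (p * math.log(1.0 / x) - math.log(2.0 - x)))
print(empirical, bound)   # the empirical frequency stays below the bound
assert empirical <= bound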
LEMMA 3.15. Assume that the process (Π(t), t ≥ 0) comes down from infinity. For any p ∈ (0, 1), the first jump time at which (#Π(t), t ≥ 0) decreases by a proportion of size at least p is strictly positive a.s. Namely,
σ_p := inf{t > 0; #Π(t) ≤ (1 − p)#Π(t−)} > 0 a.s. Moreover, setting σ^{(n)}_p := inf{t ≥ 0; N^{(n)}_t ≤ (1 − p)N^{(n)}_{t−}}, we have that σ^{(n)}_p → σ_p a.s. as n → ∞.

PROOF. Obviously, only coalescence times can decrease the number of blocks. Since the process Π is càdlàg, Π(0) = Π(0+) and 0 is not a jump time of (#Π(t), t ≥ 0). It remains to explain why σ_p is not an accumulation point of coalescence times near 0. The jumps that decrease the number of blocks by a fraction p are atoms (t, π^c) of PPP_C satisfying #Π(t−) < ∞ and
(3.31) #Π(t) = #Coag(Π(t−), π^c) ≤ #Π(t−)(1 − p).
Recall the i.i.d. random variables (X_k, k ≥ 1) defined in Section 2.2. By definition, for any n ∈ N, Σ_{k=1}^{n} X_k equals #(π^c_i ∩ [n]), where π^c_i is the non-singleton block of π^c. We see that jumps satisfying (3.31) occurring before 1 and τ_{n_0} are elements of
J_p := {(t, π^c); t ≤ 1 and ∃ n ≥ n_0, Σ_{k=1}^{n} X_k ≥ np}.
Recall Lemma 3.14 and choose n_0 ≥ 2/p. By the compensation formula for Poisson point processes,
E[PPP_C(J_p)] = ∫_0^1 P( ∃ n ≥ n_0; Σ_{k=1}^{n} X_k ≥ np ) ν_Coag(dx) ≤ C_{p,n_0} ∫_0^{x_p} x^{n_0 p} ν_Coag(dx) + ∫_{x_p}^1 ν_Coag(dx) ≤ C_{p,n_0} ∫_0^{x_p} x^2 ν_Coag(dx) + (1/x_p^2) ∫_{x_p}^1 x^2 ν_Coag(dx) < ∞.
Finally, PPP_C(J_p) < ∞ a.s. and there is only a finite number of jumps satisfying (3.31) before τ_{n_0}. Since 0 is not one of them, we have σ_p ∧ τ_{n_0} > 0 a.s., which entails σ_p > 0 a.s. Since by the assumption (2.3) there are no coagulations of all blocks at once (namely Λ has no mass at 1), necessarily #Π(σ_p−) < ∞ a.s. Therefore, there exists ε > 0 such that #Π(u) = #Π(σ_p−) for all u ∈ (σ_p − ε, σ_p) and #Π(u) = #Π(σ_p) < ∞ for all u ∈ (σ_p, σ_p + ε). Recall Lemma 3.4. As N^{(n)}_t increases towards #Π(t) a.s. when n goes to ∞, there is a large enough n_0 such that, for all n ≥ n_0 and all t ∈ (σ_p − ε, σ_p + ε), N^{(n)}_t = #Π(t). Thus σ_p = σ^{(n)}_p for all n ≥ n_0, and the last convergence statement is established.
LEMMA 3.16. For any p ∈ (0, 1) and large enough x,
Φ(x)/Φ((1 − p)x) ≤ 1/(1 − p)^3.

PROOF. Recall Ψ defined in (2.13). The function ϕ : x ↦ Ψ(x)/x is the Laplace exponent of a driftless subordinator and is therefore a concave function satisfying ϕ(0) = 0. Therefore
ϕ((1 − p)x + p·0) = Ψ((1 − p)x)/((1 − p)x) ≥ (1 − p)ϕ(x) + pϕ(0) = (1 − p)Ψ(x)/x.
Thus Ψ(x)/Ψ((1 − p)x) ≤ 1/(1 − p)^2 and
(3.32) [Ψ(x)/Ψ((1 − p)x)] · [Φ((1 − p)x)/Φ(x)] ≤ [1/(1 − p)^2] · [Φ((1 − p)x)/Φ(x)].
Recall that Ψ(x) ∼ Φ(x) as x → ∞. Therefore, the left-hand side in (3.32) goes to 1 as x goes to ∞, and for large enough x,
1 − p ≤ [1/(1 − p)^2] · [Φ((1 − p)x)/Φ(x)].
This enables us to conclude.
For any n ∈ N, define the process (N^{(n),p}_t, t ≥ 0) as the process constructed in the same way as (N^{(n)}_t, t ≥ 0), but in which the coalescence events that would remove at least a proportion p of the blocks are discarded. Recall f(n) := Σ_{j=n+1}^{∞} 1/Φ(j) for any n ≥ 1 and set f(∞) := 0; note that f(n) decreases towards f(∞) = 0 as n goes to ∞.

LEMMA 3.17. There exist p ∈ (0, 1) and n_0 ∈ N such that for all n ≥ m ≥ n_0 and all t ≥ 0,
E[ f(N^{(n),p}_{t ∧ ζ^{(n)} ∧ τ^{(n)}_m}) ] ≤ f(n),
where ζ^{(n)} is the first explosion time of (N^{(n)}_t, t ≥ 0).

PROOF. Let ζ^{(n),p} be the first explosion time of (N^{(n),p}_t, t ≥ 0). The process (N^{(n),p}_t, 0 ≤ t ≤ ζ^{(n),p}) is Markov and has for generator L^p f := L^{c,p} f + L^f f, with
L^{c,p} f(n) := Σ_{k=2}^{⌊pn⌋} \binom{n}{k} λ_{n,k} (f(n−k+1) − f(n)).
Notice that f is bounded and thus belongs to the domain of the generator L^p (which matches with D in (2.15)). For any 2 ≤ k ≤ pn and j ≥ n−k+2, since Φ is non-decreasing,
Φ(j) ≥ Φ(n−k+2) ≥ Φ((1−p)n).
We obtain, for large enough n,
(3.33) L^{c,p} f(n) = Σ_{k=2}^{⌊pn⌋} \binom{n}{k} λ_{n,k} Σ_{j=n−k+2}^{n} 1/Φ(j) ≤ Σ_{k=2}^{⌊pn⌋} \binom{n}{k} λ_{n,k} (k−1)/Φ((1−p)n) ≤ Φ(n)/Φ((1−p)n).
Applying Lemma 3.16 in the last inequality of (3.33) provides, for large enough n,
(3.34) L^{c,p} f(n) ≤ 1/(1−p)^3.
We now apply the second part of the generator, L^f, to the map f. Recall μ̄(j) = μ({j, j+1, ..., ∞}) for all j ∈ N. For any n ∈ N, one has
(3.35) L^f f(n) = Σ_{k=1}^{∞} nμ(k)(f(n+k) − f(n)) + nμ(∞)(f(∞) − f(n))
= − Σ_{k=1}^{∞} nμ(k) Σ_{j=n+1}^{n+k} 1/Φ(j) − nμ(∞) Σ_{j=n+1}^{∞} 1/Φ(j)
= − Σ_{k ∈ N ∪ {∞}} Σ_{j} nμ(k) (1/Φ(j)) 1_{{n+1 ≤ j ≤ n+k}}
= − Σ_{j ≥ n+1} (1/Φ(j)) Σ_{k ∈ N ∪ {∞}, k ≥ j−n} nμ(k)
= − Σ_{j=n+1}^{∞} n μ̄(j−n)/Φ(j) = − Σ_{j=1}^{∞} n μ̄(j)/Φ(j+n).
From the last equality and the definition of θ in (1.4), we see that lim sup_{n→∞} L^f f(n) = −θ. Recall that by assumption θ > 1. Assume first θ < ∞. Let ε > 0 be small enough such that θ − ε > 1; there is n_0 such that for all n ≥ n_0,
L^p f(n) = L^{c,p} f(n) + L^f f(n) ≤ 1/(1−p)^3 − θ + ε.
Since 1/(1−p)^3 → 1 as p → 0+, one can choose a small enough p ∈ (0, 1) such that 1/(1−p)^3 ≤ θ − ε. Finally, one gets for all n ≥ n_0
(3.36) L^p f(n) ≤ 0.
Plainly, when θ = ∞, the inequality (3.36) also holds true for large enough n. By Dynkin's formula, for any n ≥ m ≥ n_0,
E[ f(N^{(n),p}_{t ∧ τ^{(n)}_m ∧ ζ^{(n),p}}) ] − f(n) = E[ ∫_0^{t ∧ τ^{(n)}_m ∧ ζ^{(n),p}} L^p f(N^{(n),p}_s) ds ] ≤ 0.
It remains to see that E[ f(N^{(n),p}_{t ∧ ζ^{(n),p} ∧ τ^{(n)}_m}) ] = E[ f(N^{(n),p}_{t ∧ ζ^{(n)} ∧ τ^{(n)}_m}) ]. It suffices to check that ζ^{(n),p} ∧ σ^{(n)}_p = ζ^{(n)} ∧ σ^{(n)}_p a.s. On the one hand, on the event {ζ^{(n),p} < σ^{(n)}_p}, ζ^{(n)} = ζ^{(n),p} a.s., thus ζ^{(n),p} ∧ σ^{(n)}_p = ζ^{(n)} ∧ σ^{(n)}_p. On the other hand, on {ζ^{(n),p} > σ^{(n)}_p}, ζ^{(n),p} = ∞ a.s., thus ζ^{(n),p} ∧ σ^{(n)}_p = ζ^{(n)} ∧ σ^{(n)}_p. This ends the proof.

We are now able to finish the proof by finding a contradiction.
LEMMA 3.18. If θ > 1, the process stays infinite.
PROOF. Recall that we assume that Π(0) has infinitely many blocks of infinite size. Since the process is assumed to come down from infinity, according to Lemma 2.5 the process (N_t, t ≥ 0) := (#Π(t), t ≥ 0) leaves infinity instantaneously. Moreover, by Proposition 2.11, the process (N_t, t ≥ 0) is Markov when lying in N. Consider an excursion from ∞ with length ζ (possibly infinite), such that ζ > τ_m > t for some t > 0 and m ≥ n_0. By the Markov property at time t, conditionally on N_t, the process (N_{t+s}, 0 ≤ s ≤ ζ − t) has the same law as the process started from N_t and stopped at its first explosion time. According to Lemma 3.4, the latter has the same law as (N^{(N_t)}_s, s ≤ ζ^{(N_t)}), and by applying Lemma 3.17 we get
E[ f(N_{(t+s) ∧ τ_m ∧ ζ ∧ σ_p}) 1_{{s+t < τ_m < ζ}} ] = E[ f(N^{(N_t)}_{s ∧ τ_m ∧ σ_p}) 1_{{s < τ^{(N_t)}_m < ζ^{(N_t)}}} ] ≤ E[f(N_t)].
By the right-continuity of the process (N_t, t ≥ 0), see Proposition 2.11, N_t → ∞ a.s. as t → 0+. Since f is bounded and has limit 0 at ∞, by using Lebesgue's theorem we get that E[f(N_t)] → 0 as t → 0+. Hence
lim_{t→0+} E[ f(N_{(s+t) ∧ τ_m ∧ ζ ∧ σ_p}) 1_{{s+t < τ_m < ζ}} ] = 0.
A second application of Lebesgue's theorem yields
E[ f(N_{s ∧ τ_m ∧ σ_p}) 1_{{s ≤ τ_m < ζ}} ] = 0.
Since f is positive, f(N_{s ∧ τ_m ∧ σ_p}) = 0 a.s. on the event {s ≤ τ_m < ζ}. This entails that if s ≤ τ_m < ζ then N_{s ∧ τ_m ∧ σ_p} = ∞ a.s. Recall that σ_p > 0 a.s. One has therefore, for s ∈ (0, σ_p ∧ τ_m), N_s = ∞ a.s.; this is a contradiction since, according to the zero-one law stated in Lemma 2.5, if the process Π does not stay infinite then it leaves ∞ instantaneously a.s.
We end this section by dealing with the critical boundary case θ = 1, in the particular case where only binary coagulations are allowed. Kyprianou et al.'s result [KPRS17, Theorem 1.1] is thus recovered in our framework and generalized to cases where the measure μ gives mass to N.

PROPOSITION 3.19. Let c_k > 0 and λ > 0. Assume Λ = c_k δ_0 and μ(∞) = λ. If θ := 2λ/c_k ≥ 1, then the process stays infinite. In particular, the process stays infinite in the critical case θ = 1.

PROOF. Since Λ = c_k δ_0, Φ(k) = c_k \binom{k}{2} for all k ≥ 2. Set f(n) := Σ_{k=n+1}^{∞} 1/Φ(k) for any n ≥ 1; one has f(n) = (2/c_k)(1/n) and L_c f(n) = 1 for all n ≥ 1. Moreover, recalling (3.35), for any n ≥ 1, L^f f(n) = − Σ_{j=1}^{∞} n μ̄(j)/Φ(j+n), with μ̄(j) = μ({j, j+1, ..., ∞}) ≥ μ(∞) = λ. Hence, for any n ≥ 1,
L^f f(n) ≤ −(2λ/c_k) n Σ_{j=n+1}^{∞} 1/(j(j−1)) = −2λ/c_k = −θ.
Therefore Lf(n) ≤ 1 − θ ≤ 0 for any n ≥ 1 and, assuming that the process comes down from infinity, the same reasoning as in the proof of Lemma 3.18 yields a contradiction. We conclude that the process stays infinite.
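The two elementary identities used above (the telescoping sum Σ_{j>n} 1/(j(j−1)) = 1/n and f(n) = 2/(c_k n) when Φ(k) = c_k k(k−1)/2) can be checked with the following minimal Python sketch; the value c_k = 1 is an arbitrary illustrative choice.

# Numerical check of the telescoping identities used in the proof of Proposition 3.19,
# with the arbitrary illustrative value c_k = 1.
from math import isclose

c_k = 1.0
def Phi(k):                      # Kingman-type rate: Phi(k) = c_k * k*(k-1)/2
    return c_k * k * (k - 1) / 2.0

for n in (5, 50, 500):
    tail = sum(1.0 / (j * (j - 1)) for j in range(n + 1, 10**6))
    f_n = sum(1.0 / Phi(k) for k in range(n + 1, 10**6))
    # telescoping: sum_{j>n} 1/(j(j-1)) = 1/n, hence f(n) = 2/(c_k * n)
    assert isclose(tail, 1.0 / n, rel_tol=1e-2)
    assert isclose(f_n, 2.0 / (c_k * n), rel_tol=1e-2)
    print(n, tail, f_n)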
REMARK 3.20. We stress that in the proof of Proposition 3.19 the coupling between (Π(t), t ≥ 0) and (Π(t ∧ σ p ), t ≥ 0) is not used.
4. Examples.

We will establish in this section Corollary 1.2, Corollary 1.4 and Proposition 1.6. We start with Corollary 1.2, which is easily derived from Theorem 1.1.

A difficulty in dealing with the parameters θ and θ′ lies in the fact that the variables n and k are not separated in the formulas (1.4). We give some technical lemmas providing a general recipe for studying θ and θ′ and deciding whether they are 0, ∞ or in (0, ∞).
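Because n and k are not separated in (1.4), a direct numerical evaluation is often the quickest way to guess whether θ is 0, finite or infinite before applying the lemmas below. The following short Python sketch does this for user-supplied μ̄ and Φ; the example measures at the bottom (μ̄(k) = k^{−1/2} and Φ(n) = n^{3/2}) are arbitrary illustrative choices, not those of any statement in the text.

# Crude numerical probe of theta = lim sup_n  sum_{k>=1} n*mubar(k)/Phi(n+k):
# evaluate the (truncated) series along a grid of n and inspect its growth.
def theta_profile(mubar, Phi, ns=(10, 100, 1000, 10000), kmax=10**6):
    out = []
    for n in ns:
        s = sum(n * mubar(k) / Phi(n + k) for k in range(1, kmax))
        out.append((n, s))
    return out

# illustrative choices only: mubar(k) = k**-0.5 and Phi(n) = n**1.5
for n, s in theta_profile(lambda k: k ** -0.5, lambda x: x ** 1.5):
    print(n, s)
# here the values stabilise near a finite constant, suggesting theta in (0, infinity)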
4.1. Analysis of the parameters.

LEMMA 4.1. For all n ∈ N, set ℓ(n) := Σ_{k=1}^{n} μ̄(k). (1) If nℓ(n)/Φ(n) → ∞ as n → ∞, then θ = θ′ = ∞.

PROOF. We focus on θ; the arguments for θ′ are the same, replacing lim sup by lim inf. We show that in general
(4.37) lim sup_{n→∞} [ nℓ(n)/Φ(2n) + n Σ_{k=2n+1}^{∞} μ̄(k)/Φ(k) ] ≤ θ ≤ lim sup_{n→∞} [ nℓ(n)/Φ(n) + n Σ_{k=n}^{∞} μ̄(k)/Φ(k) ].
We have
Σ_{k=1}^{∞} n μ̄(k)/Φ(n+k) = Σ_{k=1}^{n} n μ̄(k)/Φ(n+k) + n Σ_{k=n+1}^{∞} μ̄(k)/Φ(n+k).
Since Φ is non-decreasing, one has 1/Φ(n+k) ≤ 1/Φ(n) for all 1 ≤ k ≤ n and 1/Φ(n+k) ≤ 1/Φ(k) for all k ≥ n+1. Therefore
θ ≤ lim sup_{n→∞} [ nℓ(n)/Φ(n) + n Σ_{k=n}^{∞} μ̄(k)/Φ(k) ].
On the other hand,
Σ_{k=1}^{n} n μ̄(k)/Φ(n+k) + n Σ_{k=n+1}^{∞} μ̄(k)/Φ(n+k) = Σ_{k=n+1}^{2n} n μ̄(k−n)/Φ(k) + n Σ_{k=2n+1}^{∞} μ̄(k−n)/Φ(k) ≥ Σ_{k=1}^{n} n μ̄(k)/Φ(2n) + n Σ_{k=2n+1}^{∞} μ̄(k)/Φ(k) ≥ nℓ(n)/Φ(2n) + n Σ_{k=2n+1}^{∞} μ̄(k)/Φ(k),
and we obtain (4.37). As a first consequence, by replacing lim sup by lim inf in (4.37), we see that, if nℓ(n)/Φ(2n) → ∞ as n → ∞, then θ′ = ∞. Recall Ψ in (2.13) and that Ψ(n) ∼ Φ(n) as n → ∞. Since ϕ(x) := Ψ(x)/x is concave, ϕ(x/2) ≥ ϕ(x)/2 for all x ≥ 0, and we obtain with x = 2n that Ψ(2n) ≤ 4Ψ(n). Thus, for large enough n, Φ(2n) ≤ 4Φ(n), and we see that if nℓ(n)/Φ(n) → ∞, then nℓ(n)/Φ(2n) → ∞ and θ = θ′ = ∞. The first statement (1) is thus established. Since Φ(2n) ≤ 4Φ(n) and Φ(2n)/(2n) ≥ Φ(n)/n for large n, the first summand in the lower bound of (4.37) satisfies (1/4) nℓ(n)/Φ(n) ≤ nℓ(n)/Φ(2n) ≤ (1/2) nℓ(n)/Φ(n). The inequality (4.37) readily yields the lower bound on θ, and we see that if lim sup_{n→∞} n Σ_{k=n}^{∞} μ̄(k)/Φ(k) = 0, then θ ≤ lim sup_{n→∞} nℓ(n)/Φ(n).
We now establish (4). Assume that nℓ(n)/Φ(n) converges to 0. Since Φ(2n) ≥ Φ(n), nℓ(n)/Φ(2n) converges to 0 as well, and we get an equality in (4.37).
As a first application of Lemma 4.1, we get Lemma 4.2 below. In the critical case α + β = 1 of Proposition 1.6, one uses that for any constants c_1 < 1 < c_2 there exists k_0 large enough such that, for all k ≥ k_0, c_2 λ/k^α ≥ μ̄(k) ≥ c_1 λ/k^α and c_1 d(n+k)^{2−α} ≤ Φ(n+k) ≤ c_2 d(n+k)^{2−α} for all n ≥ 1. The case where only the equivalence μ̄(n) ∼ λ(log n)^α/n holds follows from an adaptation of the previous calculations. This ends the proof of Proposition 1.8.
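For the critical case α + β = 1 mentioned just above, the limit of n Σ_k μ̄(k)/Φ(n+k) can be checked numerically against the closed form λ/(d(1−α)), which comes from the Beta integral ∫_0^∞ u^{−α}(1+u)^{α−2} du = 1/(1−α). The sketch below uses the model inputs μ̄(k) = λ/k^α and Φ(x) = d x^{2−α}; the values λ = 1, d = 2, α = 0.4 are arbitrary illustrative choices.

# Numerical check, in the critical case alpha + beta = 1, that
# n * sum_k mubar(k)/Phi(n+k) approaches lambda/(d*(1-alpha)) as n grows,
# with mubar(k) = lam/k**alpha, Phi(x) = d*x**(2-alpha) and arbitrary lam, d, alpha.
lam, d, alpha = 1.0, 2.0, 0.4

def series(n, kmax=10**6):
    return sum(n * (lam / k ** alpha) / (d * (n + k) ** (2 - alpha))
               for k in range(1, kmax))

for n in (100, 1000, 3000):
    print(n, series(n), lam / (d * (1 - alpha)))
# the series slowly approaches lam/(d*(1-alpha)) = 0.8333... as n grows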
We conclude this article with a few comments. When there are no sudden fragmentations into infinitely many blocks, i.e. μ(∞) = 0, the question whether (#Π(t), t ≥ 0) starting from a finite state n can reach ∞ has not been addressed. In particular, we mention that the condition θ > 0 does not imply explosion in general. Explosion requires a study in its own right, by designing other tailor-made criteria. This is investigated in the work of Foucart and Zhou [FZ20+a]. We mention also the work [FZ20+b], where certain Markov processes in duality with simple EFCs, called Wright-Fisher processes with selection, are studied. Other properties of the block-counting process, such as its Feller property, are stated in this latter work.
This is the rate at which the number of blocks decreases when the pure Λ-coalescent process starts from n blocks. The pure Λ-coalescent comes down from infinity if and only if Σ_{k=2}^{∞} 1/Φ(k) < ∞.

COROLLARY 1.2. Assume that the measure Λ satisfies (1.3). Recall c_k = Λ({0}) ≥ 0 and set λ := μ(∞) ≥ 0.

COROLLARY 1.4. Assume μ(∞) = 0 and that the measure Λ satisfies (1.3).

For i, j ∈ N, one writes i ∼ j (with respect to π) if and only if i and j belong to the same block of π. For instance, let π = {{1, 3}, {2, 5}, {4}}, π′ = {{1}, {2, 3}} and k = 1. Then Coag(π, π′) = {{1, 3}, {2, 4, 5}}, and Frag(π, π′, 1) = {{1, 3} ∩ {1}, {1, 3} ∩ {2, 3}, {2, 5}, {4}}↓ = {{1}, {2, 5}, {3}, {4}}.

For any i ∈ N, let also e(i) be the partition {N \ {i}, {i}}. The σ-finite exchangeable measures μ_Coag(dπ) and μ_Frag(dπ) are defined in (2.8) from ν_Coag and the dislocation measure ν_Disl(ds).

REMARK 2.7. We highlight that the map # : P_∞ → N ∪ {∞} is not continuous with respect to d. For instance, if for any k ∈ N, π^{(k)} := {[k], {k+1}, {k+2}, ...}, then lim_{k→∞} d(π^{(k)}, 1_N) = lim_{k→∞} 1/k = 0 and π^{(k)} converges to 1_N, even though #π^{(k)} = ∞ and #1_N = 1. However, it is important to note that if #π = ∞ and lim_{k→∞} d(π^{(k)}, π) = 0, then (#π^{(k)}, k ≥ 1) converges towards #π = ∞ as k goes to ∞. Indeed, recalling (2.7), if lim_{k→∞} d(π^{(k)}, π) = 0, then n_k := max{n ≥ 1 : π^{(k)}_{|[n]} = π_{|[n]}} → ∞ as k → ∞, and the result follows since for any k, #π^{(k)} ≥ #π_{|[n_k]}.

REMARK 2.13. When m < ∞, the process (#Π_{|[m]}(t), t ≥ 0) is not Markov in general. Indeed, it may occur that for some index j, Π_j(t−) ∩ π^f_i ∩ [m] = ∅ for some i, and therefore #Π_{|[m]}(t) depends on the constituent elements of the blocks of Π_{|[m]}(t−) and not only on its number of blocks.

3. Proof of Theorem 1.1. We start by presenting a heuristic argument enlightening Theorem 1.1. Recall that we work with a coalescence measure Λ satisfying Λ({1}) = 0 and Schweinsberg's condition (1.3): Σ_{k=2}^{∞} 1/Φ(k) < ∞. This entails that the pure Λ-coalescent comes down from infinity instantaneously. This function appears in several previous works; we refer the reader to Bertoin's book [Ber06, Proposition 4.9, page 202] and Berestycki's book [Beres09, Chapter 3, page 70].
LEMMA 3.3. Let n ∈ N. For any m ≥ 1 and any t ≥ 0, Π^{(n),m+1}(t)_{|[m]} = Π^{(n),m}(t) a.s.

PROOF. Let m ≥ 1 and let (t^{c,(m+1)}_i, i ≥ 1) be the atoms of time of PPP_C whose partitions verify π^c_{|[m+1]} ≠ 0_{[m+1]}. Similarly, denote by (t^{f,(m+1)}_i, i ≥ 1) the atoms of time of PPP_F whose partitions are such that π^f_{|[m+1]} ≠ 1_{[m+1]}. Then Π^{(n),m+1}(t)_{|[m]} = Π^{(n),m+1}(t^{f,(m+1)}_1)_{|[m]} = Π^{(n),m}(t). Denote by j the index such that j ≤ m and #(π^c_j ∩ [m+1]) ≥ 2. We have for t = t^{c,(m+1)}_1, and similarly by induction, that Π^{(n),m+1}(t)_{|[m]} = Π^{(n),m}(t) holds for any time t ≥ 0 a.s.

The compatibility property established in Lemma 3.3 allows us to construct an S-valued process (Π^{(n)}(t), t ≥ 0) by setting Π^{(n)}(t) := ∪_{m≥1} Π^{(n),m}(t) for all t ≥ 0. Note that by definition, when n = ∞, (Π^{(∞)}(t), t ≥ 0) = (Π(t), t ≥ 0) a.s.

LEMMA 3.4. Assume that the initial partition Π(0) has blocks with infinite sizes. For any n ∈ N, set (N^{(n)}_t, t ≥ 0) := (#Π^{(n)}(t), t ≥ 0).

PROOF. At a fragmentation atom (t, π^f, j), the block Π^{(n)}_j(t−) is infinite. By using the same argument as in the proof of Proposition 2.11, for all ℓ ≤ #π^f, Π^{(n)}_j(t−) ∩ π^f_ℓ ≠ ∅ almost surely. Hence, the state after time t−, see (3.19), is N^{(n)}_{t−} + k with k = #π^f − 1. One has #Π^{(n)}_{|[m]}(0) ≤ #Π^{(n+1)}(0) < ∞ a.s. By applying (i) and (ii) until the first explosion time of (N^{(n+1)}_t, t ≥ 0), one gets #Π^{(n)}_{|[m]}(t) ≤ #Π^{(n+1)}(t) for t < ζ^{(n+1)}. Assume now that #Π^{(n+1)}(t) < #Π^{(n)}_{|[m]}(t) for a certain t > ζ^{(n+1)}. Denote by ζ the last instant s prior to t at which N^{(n+1)}_{s−} = ∞. The process #Π^{(n+1)} has piecewise constant paths when lying in N and, by applying (i) and (ii), we see that necessarily, for any ζ < s ≤ t, #Π^{(n+1)}(s) < #Π^{(n)}_{|[m]}(s) ≤ m. Hence #Π^{(n+1)}(ζ) ≤ #Π^{(n)}_{|[m]}(ζ) ≤ m. Since by definition #Π^{(n+1)}(ζ−) = ∞, the time ζ should be a coalescence time at which infinitely many blocks coalesce into fewer than m blocks. This leads to a contradiction, since by assumption Λ({1}) = 0 and those coalescences are not possible. Finally #Π^{(n)}_{|[m]}(t) ≤ #Π^{(n+1)}(t) for all t ≥ 0 a.s. and, since m is arbitrary, we have for all t ≥ 0 that N^{(n)}_t ≤ #Π(t), by replacing Π^{(n+1)} by Π in the arguments above. It remains to show that lim_{n→∞} N^{(n)}_t = #Π(t) a.s. Let m ∈ N. Choose n large enough such that [m] ⊂ ∪_{i=1}^{n} Π_i(0). Then Π^{(n)}_{|[m]}(0) = Π_{|[m]}(0), and we see from the Poisson construction of (Π_{|[m]}(t), t ≥ 0) that Π^{(n)}_{|[m]}(t) = Π_{|[m]}(t) for all t ≥ 0 a.s. Hence, for any m, #Π^{(n)}_{|[m]}(t) = #Π_{|[m]}(t). Letting m go to infinity provides the claimed limit.

3.2. Coming down from infinity. Recall θ defined in (1.4).
LEMMA 4.2. Assume μ(∞) = 0, c_k = 0 and Φ(n) ∼ dn^{1+β} as n → ∞ for some β ∈ (0, 1). If n^{1−β} μ̄(n) → 0 as n → ∞, then θ = 0 and the process comes down from infinity.

PROOF. Assume Φ(n) ∼ dn^{β+1} for a certain constant d > 0. Let ε > 0; by assumption there exists N such that μ̄(k) ≤ εd/(2k^{1−β}) for all k ≥ N. Hence, on the one hand, for large enough n, n^{−β} Σ_{k=1}^{N} μ̄(k) ≤ εd/2, and, on the other hand, for large enough n, we have nℓ(n)/Φ(n) ≤ ε. It remains to study lim sup_{n→∞} n Σ_{k=n}^{∞} μ̄(k)/Φ(k), where μ̄(k) ≤ c k^{β−1} for a certain constant c. By assumption, n^{1−β} μ̄(n) → 0 as n → ∞, and by Lemma 4.1-(4) we have that θ = θ′ = 0.

The following examples are easily investigated by applying Lemmas 4.1 and 4.2.
• If Φ(n) ∼ dn^{β+1} with β ∈ (0, 1) and μ̄(n) ∼ λ(log n)^α/n with α ∈ R, then n^{1−β} μ̄(n) → 0 and Lemma 4.2 ensures that the process comes down from infinity.
• If Φ(n) ∼ dn(log n)^β with β > 1 and μ̄(n) ∼ λ/n^α with α ∈ (0, 1), then one can check that, for some constant c > 0, nℓ(n)/Φ(n) ≥ c n^{1−α}/(log n)^β → ∞ as n → ∞, so that θ = θ′ = ∞ by Lemma 4.1-(1).

4.2. Proof of Corollary 1.2. Recall the statements of Corollary 1.2. Set λ := μ(∞) ≥ 0 and c_k := Λ({0}). We first establish (1), namely we show that if c_k > 0, then θ = θ′ = 2λ/c_k ≥ 0. Recall μ̄(k) := μ({k, k+1, ..., ∞}). Let μ_0 be the restriction of the measure μ to N: for all k ∈ N, μ_0(k) = μ(k) and μ_0(∞) = 0. By definition of the parameters θ and θ′ in (1.4), we study separately the two summands in (4.38). First observe that n μ(∞) Σ_{k=1}^{∞} 1/Φ(n+k) → 2λ/c_k as n → ∞. It remains to study the second summand in (4.38). We apply Lemma 4.1-(4) to the process with splitting measure μ_0. Set ℓ_0(n) := Σ_{k=1}^{n} μ̄_0(k) for all n ≥ 1. Since μ̄_0(k) → 0 as k → ∞, we get θ = 2λ/c_k; similar arguments provide θ′ = 2λ/c_k. Recall the statement (2) of Corollary 1.2. Note that if c_k = 0 and λ > 0 then Φ(n)/n

4.3. Proof of Corollary 1.4. Recall the statement of Corollary 1.4 and the definition of θ in (1.4). Recall that we work under the assumption (1.3). Assume Σ_{k=2}^{∞} kμ̄(k)/Φ(k) < ∞. Recall (1.4) and that the sequence (k/Φ(k), k ≥ 1) is non-increasing. For any n ≥ 1,

4.4. Proof of Proposition 1.6. Recall the assumptions of Proposition 1.6. Case (2) is a consequence of Lemma 4.2. Note that in cases (1) and (3) we necessarily have α ∈ (0, 1). Clearly, if α + β < 1, Lemma 4.1-(1) gives θ = ∞. We now treat the critical case α + β = 1. Assume μ̄(k) ∼ λ/k^α as k → ∞ and Φ(n) ∼ dn^{2−α} as n → ∞. One returns to the definition (1.4) of θ.

If β − α ≥ 1, then notice first that for some constant C > 0 and large enough n, the corresponding series is bounded. In the case of equality β − α = 1, Lemma 4.1-(2) provides θ ≤ λ/(d(1+α)). By definition of θ′, for any r > 0,
REMARK 1.7. Important examples of coagulation measures satisfying Φ(n) ∼ dn^{1+β} as n → ∞

See for instance [MH12, Lemma 3.3]. A first consequence of this latter equivalence is that the condition of coming down from infinity (1.3) is equivalent to the integrability condition ∫_2^∞ du/Ψ(u) < ∞. See [Beres09, Section 4.3] for some probabilistic interpretation of this equivalence in the stable case.
Acknowledgements. I am grateful to Bastien Mallein for many insightful discussions. I would also like to thank Martin Möhle and Xiaowen Zhou, to whom I spoke about this problem in 2014 and 2019 respectively. This research has been supported by LABEX MME-DII (ANR11-LBX-0023-01).

Thus for k ≥ k_0 and n ≥ 1, the general term of the series in (1.4) is comparable, up to the constants c_1 and c_2, to n/(k^α(n+k)^{2−α}). For any n ≥ 1, the finite sum Σ_{k=1}^{k_0} nμ̄(k)/Φ(n+k) goes to 0, and we only need to focus on the limit as n goes to ∞ of the series Σ_{k=k_0}^{∞} n/(k^α(n+k)^{2−α}). A comparison with an integral provides (4.39); by factorizing n and doing the change of variable u = x/n, we get ∫_0^∞ du/(u^α(1+u)^{2−α}) = 1/(1−α). We deduce from (4.39) that, since c_1 and c_2 can be chosen arbitrarily close to 1, θ = θ′ = λ/(d(1−α)).

4.5. Proof of Proposition 1.8. We now consider some simple EFC processes with "slow" coalescence. Recall the assumptions: as n goes to ∞, Φ(n) ∼ dn(log n)^β with β > 1 and μ̄(n) ∼ λ(log n)^α/n with α ∈ R. If α < 0, then β − α > 1 and, since Σ_k kμ̄(k)/Φ(k) < ∞, applying Corollary 1.4 entails that θ = 0. We now focus on the case α > 0 and, to simplify the calculations, we treat it with the assumption μ̄(n) = λ(log n)^α/n for any n ≥ 2. Plainly, since β > 1, (1.3) holds and one can apply Theorem 1.1. We now compute θ. A comparison with integrals provides, both integrands being equivalent to (log x)^α/x as x goes to ∞, that nℓ(n) ∼ (λ/(α+1)) n(log n)^{α+1} as n → ∞. One checks
(4.40) nℓ(n)/Φ(n) ∼ (λ/(d(α+1))) (log n)^{1−(β−α)} as n → ∞.
We now apply Lemma 4.1. 1. If β − α < 1, then nℓ(n)/Φ(n) → ∞ and, by Lemma 4.1-(1), θ = ∞.
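The asymptotics ℓ(n) ∼ (λ/(α+1))(log n)^{α+1} used in (4.40) can also be checked numerically; the sketch below takes μ̄(k) = λ(log k)^α/k with the arbitrary illustrative values λ = 1 and α = 2.

# Numerical check that ell(n) = sum_{k<=n} mubar(k) ~ (lam/(alpha+1)) * (log n)**(alpha+1)
# for mubar(k) = lam*(log k)**alpha / k, with the arbitrary values lam = 1, alpha = 2.
import math

lam, alpha = 1.0, 2.0
def mubar(k):
    return lam * math.log(k) ** alpha / k

for n in (10**3, 10**5, 10**6):
    ell = sum(mubar(k) for k in range(2, n + 1))
    approx = lam / (alpha + 1) * math.log(n) ** (alpha + 1)
    print(n, ell / approx)   # the ratio approaches 1 as n grows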
David J. Aldous, Deterministic and stochastic models for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists, Bernoulli 5 (1999), no. 1, 3-48.
Julien Berestycki, Exchangeable fragmentation-coalescence processes and their equilibrium distribution, Electron. J. Probab. 9 (2004), 770.
Julien Berestycki, Nathanaël Berestycki, and Vlada Limic, The Λ-coalescent speed of coming down from infinity, Ann. Probab. 38 (2010), no. 1, 207-233.
Nathanaël Berestycki, Recent progress in coalescent theory, Ensaios Matemáticos [Mathematical Surveys], vol. 16, Sociedade Brasileira de Matemática, Rio de Janeiro, 2009.
Jean Bertoin, Lévy processes, Cambridge Tracts in Mathematics, vol. 121, Cambridge University Press, Cambridge, 1996.
Jean Bertoin, Subordinators: examples and applications, Lectures on probability theory and statistics (Saint-Flour, 1997), Lecture Notes in Math., vol. 1717, Springer, Berlin, 1999, pp. 1-91.
Jean Bertoin, Random fragmentation and coagulation processes, Cambridge Studies in Advanced Mathematics, vol. 102, Cambridge University Press, Cambridge, 2006.
Jean Bertoin and Igor Kortchemski, Self-similar scaling limits of Markov chains on the positive integers, Ann. Appl. Probab. 26 (2016), no. 4, 2556-2595.
Clément Foucart, Distinguished exchangeable coalescents and generalized Fleming-Viot processes with immigration, Adv. Appl. Prob. 43 (2011), no. 2.
Clément Foucart, The impact of selection in the Λ-Wright-Fisher model, Electron. Commun. Probab. 18 (2013), paper no. 72, 10 pp.
Clément Foucart, Continuous-state branching processes with competition: duality and reflection at infinity, Electron. J. Probab. 24 (2019), paper no. 33, 38 pp.
Clément Foucart and Xiaowen Zhou, On the explosion of the number of fragments in simple exchangeable fragmentation-coalescence processes, arXiv eprint 2009.11173 (2020).
Clément Foucart and Xiaowen Zhou, On the boundary classification of Λ-Wright-Fisher processes with frequency-dependent selection, arXiv eprint 2012.08578 (2020).
Adrián González Casanova and Dario Spanò, Duality and fixation in Ξ-Wright-Fisher processes with frequency-dependent selection, Ann. Appl. Probab. 28 (2018), no. 1, 250-284.
Félix Foutel-Rodier, Amaury Lambert, and Emmanuel Schertzer, Kingman's coalescent with erosion, Electron. J. Probab. 25 (2020), paper no. 56, 33 pp.
Adrián González Casanova, Juan Carlos Pardo, and José Luis Pérez, Branching processes with interactions: subcritical cooperative regime, Adv. Appl. Prob. 53 (2021), no. 1, 251-278. doi:10.1017/apr.2020.59
John F. C. Kingman, The representation of partition structures, J. London Math. Soc. (2) 18 (1978), no. 2, 374-380.
Andreas E. Kyprianou, Steven W. Pagett, Tim Rogers, and Jason Schweinsberg, A phase transition in excursions from infinity of the fast fragmentation-coalescence process, Ann. Probab. 45 (2017), no. 6A, 3829-3849.
Amaury Lambert, The branching process with logistic growth, Ann. Appl. Probab. 15 (2005), 1506-1535.
Vlada Limic and Anna Talarczyk, Second-order asymptotics for the block counting process in a class of regularly varying Λ-coalescents, Ann. Probab. 43 (2015), no. 3, 1419-1455.
Martin Möhle and Philip Herriger, Conditions for exchangeable coalescents to come down from infinity, ALEA Lat. Am. J. Probab. Math. Stat. 9 (2012), 637-665.
Martin Möhle and Serik Sagitov, A classification of coalescent processes for haploid exchangeable population models, Ann. Probab. 29 (2001), no. 4, 1547-1562.
Jim Pitman, Coalescents with multiple collisions, Ann. Probab. 27 (1999), no. 4, 1870-1902.
Jim Pitman, Combinatorial stochastic processes, Lecture Notes in Mathematics, vol. 1875, Springer-Verlag, Berlin, 2006. Lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, July 7-24, 2002, with a foreword by Jean Picard.
Serik Sagitov, The general coalescent with asynchronous mergers of ancestral lines, J. Appl. Probab. 36 (1999), no. 4, 1116-1125.
Jason Schweinsberg, Coalescents with simultaneous multiple collisions, Electron. J. Probab. 5 (2000), paper no. 12, 50 pp. (electronic).
Jason Schweinsberg, A necessary and sufficient condition for the Λ-coalescent to come down from infinity, Electron. Comm. Probab. 5 (2000), 1-11 (electronic).
| []
|
[
"The wedding of modified dynamics and non-exotic dark matter in galaxy clusters",
"The wedding of modified dynamics and non-exotic dark matter in galaxy clusters"
]
| [
"B Famaey \nIAA\nUniversité Libre de Bruxelles\nBvd du Triomphe\n1050BruxellesBelgium\n",
"G W Angus \nSUPA\nUniv. of St. Andrews\nKY16 9SSFifeUK\n",
"G Gentile \nUniv. of New Mexico\n800 Yale Blvd NE87131AlbuquerqueNew MexicoUSA\n",
"H Y Shan \nNational Astronomical Observatories\n100012BeijingPRC\n",
"H S Zhao \nSUPA\nUniv. of St. Andrews\nKY16 9SSFifeUK\n\nNational Astronomical Observatories\n100012BeijingPRC\n"
]
| [
"IAA\nUniversité Libre de Bruxelles\nBvd du Triomphe\n1050BruxellesBelgium",
"SUPA\nUniv. of St. Andrews\nKY16 9SSFifeUK",
"Univ. of New Mexico\n800 Yale Blvd NE87131AlbuquerqueNew MexicoUSA",
"National Astronomical Observatories\n100012BeijingPRC",
"SUPA\nUniv. of St. Andrews\nKY16 9SSFifeUK",
"National Astronomical Observatories\n100012BeijingPRC"
]
| []
| We summarize the status of Modified Newtonian Dynamics (MOND) in galaxy clusters. The observed acceleration is typically larger than the acceleration threshold of MOND in the central regions, implying that some dark matter is necessary to explain the mass discrepancy there. A plausible resolution of this issue is that the unseen mass in MOND is in the form of ordinary neutrinos with masses just below the experimentally detectable limit. In particular, we show that the lensing mass reconstructions of the rich clusters 1E0657-56 (the bullet cluster) and Cl0024+17 (the ring) do not pose a new challenge to this scenario. However, the mass discrepancy for cool X-ray emitting groups in which neutrinos cannot cluster pose a more serious problem, meaning that dark baryons could present a more satisfactory solution to the problem of unseen mass in MOND clusters. | 10.1142/9789812814357_0039 | [
"https://arxiv.org/pdf/0706.1279v2.pdf"
]
| 16,464,599 | 0706.1279 | c0006b4552a06c223d71838bd73a610ebbe18a53 |
The wedding of modified dynamics and non-exotic dark matter in galaxy clusters
26 Nov 2007 February 1, 2008
B Famaey
IAA
Université Libre de Bruxelles
Bvd du Triomphe
1050BruxellesBelgium
G W Angus
SUPA
Univ. of St. Andrews
KY16 9SSFifeUK
G Gentile
Univ. of New Mexico
800 Yale Blvd NE87131AlbuquerqueNew MexicoUSA
H Y Shan
National Astronomical Observatories
100012BeijingPRC
H S Zhao
SUPA
Univ. of St. Andrews
KY16 9SSFifeUK
National Astronomical Observatories
100012BeijingPRC
The wedding of modified dynamics and non-exotic dark matter in galaxy clusters
gravitation; dark matter; galaxy clusters; gravitational lensing
We summarize the status of Modified Newtonian Dynamics (MOND) in galaxy clusters. The observed acceleration is typically larger than the acceleration threshold of MOND in the central regions, implying that some dark matter is necessary to explain the mass discrepancy there. A plausible resolution of this issue is that the unseen mass in MOND is in the form of ordinary neutrinos with masses just below the experimentally detectable limit. In particular, we show that the lensing mass reconstructions of the rich clusters 1E0657-56 (the bullet cluster) and Cl0024+17 (the ring) do not pose a new challenge to this scenario. However, the mass discrepancy for cool X-ray emitting groups in which neutrinos cannot cluster pose a more serious problem, meaning that dark baryons could present a more satisfactory solution to the problem of unseen mass in MOND clusters.
Introduction
Data on large scale structures point towards a Universe dominated by dark matter and dark energy. 1 Discovering the nature of these mysterious components of the Universe is, without a doubt, the major challenge of modern astrophysics, nay of physics as a whole. Nowadays, the dominant paradigm is that dark matter is actually made of non-baryonic weakly interacting massive particles, the so-called "cold dark matter" (CDM), and that the mysterious dark energy is well represented by a cosmological constant (Λ) in Einstein equations. The ΛCDM cosmological model has known a remarkable success in explaining and predicting diverse data sets corresponding to the Universe at its largest scales, including the CMB radiation, galaxy redshift surveys, distant supernovae data and absorption lines in the spectra of distant quasars. Nevertheless, a number of observations on galactic scales appear to be at variance with a number of CDM predictions. For instance, measurements of non-circular motions in the Milky Way have shown that there is actually very little room for dark matter inside the solar radius, 2 where CDM simulations predict a cuspy density profile. External galaxies have also been used to compare the predicted cuspy CDM density profiles with the observations, in particular rotation curves of dwarf and spiral galaxies show evidence for dark matter halos with a central constant density core 3 at odds with the CDM predictions. Another interesting problem faced by CDM on galactic scales is the overabundance of predicted satellite galaxies compared to the observed number in Milky Way-sized galaxies. 4 What is more, it is now well-documented that rotation curves suggest a correlation between the mass profiles of the baryonic matter (stars + gas) and dark matter. 5 Some rotation curves, like the one of NGC1560 6 even display obvious features (bumps or wiggles) that are also clearly visible in the stellar or gas distribution. A solution to all these problems, and especially the baryon-DM relation, could be a new specific interaction between baryons and some exotic dark matter made of, e.g., dipolar particles. 7,8 On the other hand, it could indicate that, on galaxy scales, the observed discrepancy rather reflects a breakdown of Newtonian dynamics in the ultra-weak field regime: this alternative explanation to solve the dark matter problem is known as the Modified Newtonian Dynamics (MOND 9 ) paradigm, which postulates that for accelerations below a 0 ≈ 10 −10 m s −2 the effective gravitational attraction approaches (g N a 0 ) 1/2 where g N is the usual Newtonian gravitational field. Without resorting to galactic dark matter, this simple prescription is known to reproduce galaxy scaling relations in spirals and ellipticals (Tully-Fisher, Faber-Jackson, fundamental plane) as well as the details of the rotation curves of individual spiral galaxies 10 over five decades in mass. In particular, the recent kinematic analysis of tidal dwarf galaxies belonging to the NGC 5291 system, 11 showing a mass discrepancy unexpected in the CDM context, strongly argues in favour of MOND. 12,13 Moreover, the paradigm successfully predicts the local galactic escape speed from the solar neighbourhood, 14,15 the statistical bar frequency in spirals, 16 as well as the velocity dispersions of satellite galaxies around their hosts. 
17,18 Recent developments in the theory of gravity have also added plausibility to the case for modification of gravity through the advent of Lorentz-covariant theories of gravity yielding a MOND behaviour in the appropriate limit. [19][20][21] Although rather fine-tuned and still being a far cry from a fundamental theory underpinning the MOND paradigm, these theories remarkably allow for new predictions regarding cosmology [22][23][24][25] and gravitational lensing. 26,27 Hereafter we notably investigate the weak-lensing properties of some galaxy clusters in MOND.
The modified dynamics in galaxy clusters
While having an amazing predictive power on galactic scales, the simple MOND prescription badly fails in galaxy clusters without an additional unseen component. Indeed, in rich clusters of galaxies, the observed acceleration is typically larger than a 0 in the central regions, meaning that the MOND prescription is not enough to explain the observed discrepancy between visible and dynamical mass there, 28-30 a conclusion that can be reached by computing the centripetal gravity as a function of radius in the cluster (and thus the corresponding enclosed MOND mass) from the density and temperature profiles of X-ray gas and from the assumed hydrostatic equilibrium of the cluster.
At very large radii, the discrepancy is about a factor of two, meaning that there should be as much dark matter (mainly in the central parts) as observed baryons in MOND clusters. The main characteristic of this MOND dark matter is thus that it should cluster at galaxy cluster scales but not at galaxy scales. An ideal candidate, whose free-streaming length is known to be high, is at the same time the only dark matter particle that we know for sure to exist, the neutrino. We know that ordinary neutrinos have mass 31 and that they have a number density comparable to photons, meaning that they indeed contribute to the mass budget of the Universe. However, in order to reach the densities needed to account for the MOND missing mass in galaxy clusters, they should have a mass at the limit of their experimental detection, i.e. 2 eV. This idea 29 has the great advantage of naturally reproducing most cluster scaling relations including the luminosity-temperature relation, 30 while accounting for the bulk of the missing mass in galaxy clusters. Moreover, in their modelling of the CMB anisotropies, Skordis et al. 22 showed that such a significant non-baryonic component (with Ω n ≃ 0.15) was actually helpful to prevent the MOND Universe from accelerating too much, keeping Ω = 1 as a constraint on the amount of dark energy (although MOND might have the ability to drive late-time acceleration without resorting to dark energy 32 ).
On the other hand, given that, in the global baryon inventory at low redshift, about 20% of the baryons are still missing, and that the observed baryons in clusters only account for 5 to 10% of those produced during Big Bang nucleosynthesis, [33][34][35] there is plenty of room for this dark matter to be baryonic in MOND, since there should be as much dark matter (mainly in the central parts) as observed baryons in MOND clusters. Knowing exactly how many baryons hide in the Warm-Hot Intergalactic Medium (WHIM) is thus imperative if one wants to exclude this hypothesis.
The bullet cluster 1E0657-56
Keeping in mind this known discrepancy between the observable and dynamical masses of galaxy clusters in MOND, it is then useful to ask which new challenge is posed to the MOND paradigm by the gravitational lensing map of the bullet cluster 36,37 (see M. Bradac's contribution to these proceedings). In this extremely interesting object, the collisionless component (galaxies and a hypothesised collisionless dark matter component) and the fluid-like X-ray emitting plasma have been spatially segregated due to the collision of the two progenitor galaxy clusters. However, the lensing convergence map is centered on the minor baryonic collisionless component (galaxies) rather than on the dominant baryonic X-ray emitting gas component: this was argued 36 to be the first direct empirical proof of the existence of dark matter, independently of the validity of General Relativity at galaxy cluster scales. However, while the linear relation between the matter density and the gravitational potential implies that the convergence parameter is a direct measurement of the projected surface density in General Relativity, this is not the case anymore in MOND due to the non-linearity of the modified Poisson equation. Actually, it has been shown that, in MOND, it is possible to have a non-zero convergence along a line of sight where there is zero projected matter. 38 However, in the specific case of the bullet cluster, solving the non-linear Poisson equation for the observed matter density in various line-of-sight configurations showed that the convergence map always tracks the dominant baryonic component: 39 this means that non-linear effects, being capable of counteracting this trend, turn out to be very small. The presence of large amounts of collisionless dark matter in this cluster is thus necessary in MOND.
However, by applying a simple potential-density approach, we 40 have been able to estimate the needed quantities of such collisionless dark matter in the bullet cluster, finding that the central densities around the galaxies were in accordance with the maximum density of 2 eV neutrinos, from the Tremaine-Gunn 41 limit for a 9 keV (∼ 10 8 K) cluster:
ρ_ν^max = 7 × T(keV)^{3/2} × 10^{−5} M_⊙ pc^{−3}   (1)
∼ 2 × 10^{−3} M_⊙/pc^{3}. However, a problem might exist from strong lensing data at the center of the collisionless component of the least massive cluster, a problem similar to the one discussed in section 5. We however conclude that the weak-lensing map of the bullet cluster in itself is not a new challenge to the "MOND+neutrinos" hypothesis, meaning that the amount of dark matter required is globally consistent with that suggested by the previous analyses 29 from hydrostatic equilibrium of X-ray emitting clusters. However, if it turns out that the MOND dark matter should rather be in baryonic form, then the bullet cluster provided the interesting constraint that it should be of collisionless nature (e.g. MACHO's or dense clumps of cold gas, but see also Mahdavi et al. 42 for a counter-example). We finally note that possible non-trivial contributions from the vector field of relativistic MOND theories in non-stationary configurations 23-25 were neglected, which could only decrease the need for dark matter in this system (but not in other clusters close to a steady-state equilibrium), and that the high-speed encounter of the clusters making up the bullet could actually be a standard manifestation of MOND long-range interaction. 43
The ring in Cl0024+17
Recently, a comprehensive weak lensing mass reconstruction of the rich galaxy cluster Cl0024+17 at z = 0.4 44 has been argued to have revealed the first dark matter structure that is offset from both the gas and galaxies in the cluster. This structure is ringlike, located between r ∼ 60 ′′ and r ∼ 85 ′′ . It was argued to be the result of a collision along the line-of-sight of two massive clusters 1-2 Gyr in the past. It has also been argued 44 that this offset was hard to explain in MOND.
Assuming that this ringlike structure is real and not caused by instrumental bias or spurious effects in the weak lensing analysis (due e.g. to the unification of strong and weak-lensing), and that cluster stars and galaxies do not make up a high fraction of the mass in the ring (which would be too faint to observe anyway), is this really hard to explain in MOND?
First of all, it has recently been shown 45 that, considering the boost of the gravitational field in MOND as the effect of some virtual dark matter (which makes it easier to compare with Newtonian and General Relativistic predictions), a peak in this virtual matter distribution generically appears close to the transition radius of MOND r t = (GM/a 0 ) 1/2 , especially when most of the mass of the system is well-contained inside this radius (which is the case for the cluster Cl0024+17). This means that the ring in Cl0024+17 could be the first manifestation of this pure MOND phenomenon. However, the sharpness of this virtual dark matter peak strongly depends on the choice of the µ-function, controlling the transition from the 1/r 2 Newtonian regime to the 1/r MOND regime. 8 A sharp transition of the µ-function is needed to reproduce the ringlike structure observed in Cl0024+17, meaning that if the simple µ-function 2,46 recently used to fit many galaxy rotation curves is chosen, the ring cannot be adequately reproduced by this pure MOND phenomenon.
In this case, a collisional scenario would be needed in MOND too, in order to explain the feature as a peak of cluster dark matter. Indeed, as explained above, we already know that there is a mass discrepancy in MOND clusters, and we know that this dark matter must be in collisionless from (e.g., neutrinos or dense clumps of cold gas). So the results of the simulation with purely collisionless dark particles 44 would surely be very similar in MOND gravity. In case the missing mass in clusters is in baryonic form, we do not really have a quantitative limit on the density of MOND dark matter that would be allowed in the ring. But since we know that the "MOND + neutrinos" hypothesis works fine in other similar rich clusters, we can follow the approach of Angus et al. 40 and test this hypothesis in Cl0024+17. If the missing mass is in the form of dark baryons, this is an effective way to compare the dark density to what should be expected in similar clusters in MOND.
Let us note that this cluster was already studied 47 in the framework of MOND, however this was prior to the detection of the ringlike structure. The cluster was found to be marginally consistent with 2 eV neutrinos, using a Hernquist profile with a total mass of 3.5 × 10 14 M ⊙ and a core radius of 0.3 Mpc. In a latter version, a cored model was tried, including also the strong lensing data, and a model consistent with a neutrino mass of 4 eV was found. However, they assumed a simple spherical model without any line-of-sight structure, contrary to the spirit of the collision scenario invoked to explain the ringlike feature. Given the uncertainty of the density models, it is unclear if existing data for this system actually rule out the 2 eV neutrinos. We hereafter rather focus on the newly discovered ringlike structure to see if it presents a new challenge to the "MOND+neutrinos" hypothesis.
The main limit on the neutrino ability to collapse in clusters comes from the Tremaine-Gunn limit, 41 stating that the phase space density must be preserved during collapse. Assuming the same temperature for the neutrino fluid as for the baryons, the maximum density of a mixture of all neutrino types all having a 2 eV mass for a cluster of a given temperature T (in keV) is then given by Eq. (1). This means that for Cl0024+17, whose mean emission weighted temperature is T = 4.25^{+0.40}_{−0.35} keV, 44 the Tremaine-Gunn limit for the density of neutrinos is ρ_ν^max = 6.1^{+0.9}_{−0.7} × 10^{−4} M_⊙ pc^{−3}. A detailed simulation of Cl0024+17 would involve numerically solving the non-linear Poisson equation of MOND. However, since observationally consistent relativistic MOND theories 19,20 always enhance the gravitational lensing, the surface density of the ring derived from General Relativity is always an upper limit to the actual density in MOND. Moreover, the gravity at the position of the ring is of the order of ∼ 2a_0, meaning that MOND effects just start to be important (except for the peculiar mechanism discussed earlier in the case of a sharp transition 45). This means that, as a first-order approximation, we can simply consider the density of the ring in General Relativity as an upper limit on the MOND density, and compare it to the Tremaine-Gunn limit. The convergence parameter is κ = 0.69 in the ring, 44 but the background is estimated 44 to contribute up to κ = 0.65, which would be the convergence if no ring was present, meaning that the convergence due to the ring itself is κ_r = 0.04. Adopting the effective distance D_eff = D_l D_ls/D_s = 0.9 Gpc (where D_s, D_l, and D_ls are the distance from the observer to the source, from the observer to the lens, and from the lens to the source, respectively), we find that the MONDian upper limit of the surface density of the ring is Σ = κ_r × Σ_c = 70 M_⊙ pc^{−2}. Given that the ring is 25″ wide, i.e. 0.15 Mpc wide for a distance of 1.2 Gpc, it is sensible to consider that its depth along the line-of-sight is of the same order of magnitude, leading to ρ = Σ/(0.15 Mpc) = 4.6 × 10^{−4} M_⊙ pc^{−3}, i.e. significantly less (at more than 2σ) than the Tremaine-Gunn limit. We thus conclude that the ringlike structure in Cl0024+17, if real and not caused by spurious effects in the weak lensing analysis, does not pose a new challenge to MOND in galaxy clusters.
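The order-of-magnitude estimates quoted above (the critical surface density for D_eff = 0.9 Gpc, the ring surface density Σ = κ_r Σ_c, the implied three-dimensional density for a 0.15 Mpc depth, and the Tremaine-Gunn limit of Eq. (1) at T = 4.25 keV) can be reproduced with the short Python sketch below; the constants are standard values and the layout of the calculation is only an illustration of the arithmetic in the text.

# Reproduce the order-of-magnitude estimates quoted in the text.
import math

G    = 6.674e-11          # m^3 kg^-1 s^-2
c    = 2.998e8            # m s^-1
pc   = 3.086e16           # m
Msun = 1.989e30           # kg

D_eff   = 0.9e9 * pc                                # effective lensing distance, 0.9 Gpc
Sigma_c = c**2 / (4 * math.pi * G * D_eff)          # critical surface density, kg m^-2
Sigma_c_sun = Sigma_c / (Msun / pc**2)              # in Msun / pc^2

kappa_ring = 0.69 - 0.65                            # convergence of the ring itself
Sigma_ring = kappa_ring * Sigma_c_sun               # ~ 70 Msun / pc^2
rho_ring   = Sigma_ring / 0.15e6                    # depth 0.15 Mpc -> ~ 5e-4 Msun / pc^3

rho_TG = 7e-5 * 4.25**1.5                           # Eq. (1) at T = 4.25 keV, ~ 6.1e-4 Msun / pc^3
print(Sigma_c_sun, Sigma_ring, rho_ring, rho_TG)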
Low temperature X-ray emitting groups
While we have shown that the widely advertised lensing analyses of the clusters 1E0657-56 and Cl0024+17 do not pose any new challenge to the "MOND + neutrinos" hypothesis, we show hereafter that low-mass X-ray emitting groups do provide a much more serious problem. Indeed, Eq. (1) implies that 2 eV neutrinos would stop contributing significantly to the mass density in cooler clusters or groups, since their maximum density is proportional to T^{3/2}. The pure "MOND + neutrinos" hypothesis thus predicts that the MOND mass discrepancy should decrease with decreasing temperature. However, when analyzing the hydrostatic equilibrium of X-ray emitting groups with 0.6 keV < T < 2 keV, in which neutrinos cannot cluster, one finds 48 a mass discrepancy that cannot be explained by neutrinos. This of course does not mean that 2 eV neutrinos cannot be present to alleviate the mass discrepancy in rich clusters, but it means that there is more MOND hidden mass than just neutrinos, especially in cool groups.
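To see why cool groups are decisive, the snippet below evaluates the T^{3/2} scaling of the maximum neutrino density, normalised to the Cl0024+17 value quoted earlier; both the normalisation and the 2 eV mass are carried over from the text rather than derived here.

```python
def rho_nu_max(T_keV, rho_ref=6.1e-4, T_ref=4.25):
    """Maximum 2 eV neutrino density (M_sun/pc^3) from the Tremaine-Gunn
    phase-space argument, scaled as T^{3/2} and normalised to the
    Cl0024+17 value quoted in the text."""
    return rho_ref * (T_keV / T_ref) ** 1.5

for T in (4.25, 2.0, 1.0, 0.6):
    print(f"T = {T:4.2f} keV  ->  rho_nu_max ~ {rho_nu_max(T):.1e} M_sun/pc^3")
```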
Conclusion
We thus conclude that, while having an amazing predictive power on galactic scales, the simple MOND prescription fails at present in galaxy clusters, where some dark matter is needed. If this dark matter is assumed to be in the form of 2 eV neutrinos (at the limit of experimental detection), then the bulk of the problem can be solved in rich clusters, including the bullet cluster and the ringlike feature observed in Cl0024+17. However, neutrinos cannot cluster in cool groups with 0.6 keV < T < 2 keV, where a discrepancy is still observed. One solution could then be that dark matter in MOND is in the form of a 4th, sterile neutrino with a mass around 6-10 eV. Another possibility is that the new fields that are invoked in relativistic versions of MOND might behave as a dark matter fluid in galaxy clusters. 49 However, these explanations look somewhat like acts of last resort, whilst another, more elegant, possibility would be that the MOND cluster dark matter is simply in the form of cold gas clouds or MACHOs, since there are enough missing baryons at low redshift to account for all the MOND hidden mass in galaxy groups and clusters (unless more of these baryons are detected in the warm-hot intergalactic medium in the meantime). An interesting possibility is then that this baryonic dark matter is in the form of dense clumps of cold gas of only a Jupiter mass and a temperature of a few Kelvin, 50 which would behave in a collisionless way. In any case, one should understand why this MOND dark matter component vanishes for systems with T < 0.6 keV. As a final remark, it should be highlighted that this additional unseen component in MOND only appears in systems with an abundance of ionised gas and X-ray emission, whatever consequence this might have on the nature of this dark matter.
. D Spergel, ApJS. 170377D. Spergel et al., 2007, ApJS, 170, 377
. B Famaey, J Binney, MNRAS. 363603B. Famaey, J. Binney, 2005, MNRAS, 363, 603
. G Gentile, MNRAS. 351903G. Gentile et al. 2004, MNRAS, 351, 903
. B Moore, ApJ. 52419B. Moore et al., 1999, ApJ, 524, L19
. F Donato, MNRAS. 35317F. Donato et al., 2004, MNRAS, 353, L17
. A H Broeils, A&A. 25619A. H. Broeils, 1992, A&A, 256, 19
. L Blanchet, CQGra. 243529L. Blanchet, 2007, CQGra, 24, 3529
. B Famaey, Phys.Rev. 7563002B. Famaey et al., 2007, Phys.Rev.D75, 063002
. M Milgrom, ApJ. 270365M. Milgrom, 1983, ApJ, 270, 365
. R H Sanders, S S Mcgaugh, ARA&A. 40263R.H. Sanders, S.S. McGaugh, 2002, ARA&A 40 263
. F Bournaud, Science. 3161166F. Bournaud et al., 2007, Science, 316, 1166
. G Gentile, A&A. 47225G. Gentile et al., 2007, A&A, 472, L25
. M Milgrom, ApJ. 66745M. Milgrom, 2007, ApJ, 667, L45
. B Famaey, MNRAS. 37779B. Famaey et al. 2007, MNRAS, 377, L79
. X Wu, ApJ. 665101X. Wu et al., 2007, ApJ, 665, L101
. O Tiret, F Combes, A&A. 464517O. Tiret, F. Combes, 2007, A&A, 464, 517
. G W Angus, arXiv:0709.1966MNRAS. in pressG.W. Angus et al., 2007, MNRAS, in press, arXiv:0709.1966
. O Tiret, arXiv:0710.4070A&A. in pressO. Tiret et al., 2007, A&A, in press, arXiv:0710.4070
. J Bekenstein, Phys.Rev. 7083509J. Bekenstein, 2004, Phys.Rev.D70, 083509
. T G Zlosnik, Phys.Rev. 7544017T.G. Zlosnik et al., 2007, Phys.Rev.D75, 044017
. J.-P Bruneton, G Esposito-Farèse, arXiv:0705.4043J.-P. Bruneton, G. Esposito-Farèse, 2007, arXiv:0705.4043
. C Skordis, Phys.Rev.Lett. 9611301C. Skordis et al., 2006, Phys.Rev.Lett.96, 011301
. S Dodelson, M Liguori, Phys.Rev.Lett. 97231301S. Dodelson, M. Liguori, 2006, Phys.Rev.Lett.97, 231301
. T G Zlosnik, arXiv:0711.0520T.G. Zlosnik et al., 2007, arXiv:0711.0520
. A Halle, H S Zhao, arXiv:0711.0958A. Halle, H.S. Zhao, 2007, arXiv:0711.0958
. H S Zhao, MNRAS. 368171H.S. Zhao et al., 2006, MNRAS, 368, 171
. D Xu, arXiv:0710.4935D. Xu et al., 2007, arXiv:0710.4935
. A Aguirre, ApJ. 561550A. Aguirre et al., 2001, ApJ, 561, 550
. R H Sanders, MNRAS. 342901R.H. Sanders, 2003, MNRAS, 342, 901
. R H Sanders, MNRAS. 380331R.H. Sanders, 2007, MNRAS, 380, 331
. Y Fukuda, Phys.Rev.Lett. 811562Y. Fukuda et al., 1998, Phys.Rev.Lett.81, 1562
. L M Diaz-Rivera, Phys.Rev. 7383503L.M. Diaz-Rivera et al., 2006, Phys.Rev.D73, 083503
. M Fukugita, ApJ. 503518M. Fukugita et al., 1998, ApJ, 503, 518
. J Silk, arXiv:astro-ph/0603209J. Silk, 2006, arXiv:astro-ph/0603209
. S S Mcgaugh, arXiv:0707.3795S.S. McGaugh, 2007, arXiv:0707.3795
. D Clowe, ApJ. 648109D. Clowe et al., 2006, ApJ, 648, L109
. M Bradac, ApJ. 652937M. Bradac et al., 2006, ApJ, 652, 937
. G W Angus, MNRAS. 371138G.W. Angus et al., 2006, MNRAS, 371, 138
. M Feix, arXiv:0707.0790A&A. in pressM. Feix et al., 2007, A&A, in press, arXiv:0707.0790
. G W Angus, ApJ. 65413G.W. Angus et al., 2007, ApJ, 654, L13
. S Tremaine, J E Gunn, Phys.Rev.Lett. 42407S. Tremaine, J.E. Gunn, 1979, Phys.Rev.Lett.42, 407
. A Mahdavi, ApJ. 668806A. Mahdavi et al., 2007, ApJ, 668, 806
. G W Angus, S S Mcgaugh, arXiv:0704.0381MNRAS. in pressG.W. Angus, S.S. McGaugh, 2007, MNRAS, in press, arXiv:0704.0381
. M J Jee, ApJ. 661728M.J. Jee et al., 2007, ApJ, 661, 728
. M Milgrom, R H Sanders, arXiv:0709.2561M. Milgrom, R.H. Sanders, 2007, arXiv:0709.2561
. H S Zhao, B Famaey, ApJ. 6389H.S. Zhao, B. Famaey, 2006, ApJ, 638, L9
. R Takahashi R, T Chiba, arXiv:astro-ph/0701365ApJ. in pressR. Takahashi R., T. Chiba, 2007, ApJ, in press, arXiv:astro-ph/0701365
. G W Angus, arXiv:0709.0108G.W. Angus et al., 2007, arXiv:0709.0108
. H S Zhao, arXiv:0710.3616ApJ. in pressH.S. Zhao, 2007, ApJ, in press, arXiv:0710.3616
. D Pfenniger, F Combes, A&A. 28594D. Pfenniger, F. Combes, 1994, A&A, 285, 94
| []
|
[
"GRB 110530A: Peculiar Broad Bump and Delayed Plateau in Early Optical Afterglows",
"GRB 110530A: Peculiar Broad Bump and Delayed Plateau in Early Optical Afterglows"
]
| [
"Shu-Qing Zhong \nGXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina\n\nGuangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina\n",
"Li-Ping Xin \nKey Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina\n",
"En-Wei Liang \nGXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina\n\nGuangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina\n\nKey Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina\n\nPurple Mountain Observatory\nChinese Academy of Sciences\n210008NanjingChina\n",
"Jian-Yan Wei ",
"Yuji Urata \nKey Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina\n\nInstitute of Astronomy\nNational Central University\nChung-Li 32054Taiwan\n\nAcademia Sinica Institute of Astronomy and Astrophysics\nTaipei 106Taiwan\n",
"Kui-Yun Huang \nDepartment of Mathematics and Science\nNational Taiwan Normal University\nLin-kou District24449New Taipei CityTaiwan\n",
"Yu-Lei Qiu \nKey Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina\n",
"Can-Min Deng \nGXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina\n\nGuangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina\n",
"Yuan-Zhu Wang \nPurple Mountain Observatory\nChinese Academy of Sciences\n210008NanjingChina\n",
"Jin-Song Deng \nKey Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina\n"
]
| [
"GXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina",
"Guangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina",
"Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina",
"GXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina",
"Guangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina",
"Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina",
"Purple Mountain Observatory\nChinese Academy of Sciences\n210008NanjingChina",
"Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina",
"Institute of Astronomy\nNational Central University\nChung-Li 32054Taiwan",
"Academia Sinica Institute of Astronomy and Astrophysics\nTaipei 106Taiwan",
"Department of Mathematics and Science\nNational Taiwan Normal University\nLin-kou District24449New Taipei CityTaiwan",
"Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina",
"GXU-NAOC Center for Astrophysics and Space Sciences\nDepartment of Physics\nGuangxi University\n530004NanningChina",
"Guangxi Key Laboratory for the Relativistic Astrophysics\n530004NanningChina",
"Purple Mountain Observatory\nChinese Academy of Sciences\n210008NanjingChina",
"Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences\n100012BeijingChina"
]
| []
| We report our very early optical observations of GRB 110530A and investigate its jet properties together with its X-ray afterglow data. A peculiar broad onset bump followed by a plateau is observed in its early R band afterglow lightcurve. The optical data in the other bands and the X-ray data are well consistent with the temporal feature of the R band lightcurve. Our joint spectral fits of the optical and X-ray data show that they are in the same regime, with a photon index of ∼ 1.70. The optical and X-ray afterglow lightcurves are well fitted with the standard external shock model by considering a delayed energy injection component. Based on our modeling results, we find that the radiative efficiency of the GRB jet is ∼ 1% and the magnetization parameter of the afterglow jet is < 0.04 with the derived extremely low ǫ B (the fraction of shock energy to magnetic field) of (1.64 ± 0.25) × 10 −6 . These results indicate that the jet may be matter dominated. Discussion on delayed energy injection from accretion of late fall-back material of its pre-supernova star is also presented. | 10.3847/0004-637x/831/1/5 | [
"https://arxiv.org/pdf/1607.08454v1.pdf"
]
| 119,183,516 | 1607.08454 | 7b11d53d7c553f081e876e0fcf1e944f21dd608e |
GRB 110530A: Peculiar Broad Bump and Delayed Plateau in Early Optical Afterglows
28 Jul 2016
Shu-Qing Zhong
GXU-NAOC Center for Astrophysics and Space Sciences
Department of Physics
Guangxi University
530004NanningChina
Guangxi Key Laboratory for the Relativistic Astrophysics
530004NanningChina
Li-Ping Xin
Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences
100012BeijingChina
En-Wei Liang
GXU-NAOC Center for Astrophysics and Space Sciences
Department of Physics
Guangxi University
530004NanningChina
Guangxi Key Laboratory for the Relativistic Astrophysics
530004NanningChina
Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences
100012BeijingChina
Purple Mountain Observatory
Chinese Academy of Sciences
210008NanjingChina
Jian-Yan Wei
Yuji Urata
Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences
100012BeijingChina
Institute of Astronomy
National Central University
Chung-Li 32054Taiwan
Academia Sinica Institute of Astronomy and Astrophysics
Taipei 106Taiwan
Kui-Yun Huang
Department of Mathematics and Science
National Taiwan Normal University
Lin-kou District24449New Taipei CityTaiwan
Yu-Lei Qiu
Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences
100012BeijingChina
Can-Min Deng
GXU-NAOC Center for Astrophysics and Space Sciences
Department of Physics
Guangxi University
530004NanningChina
Guangxi Key Laboratory for the Relativistic Astrophysics
530004NanningChina
Yuan-Zhu Wang
Purple Mountain Observatory
Chinese Academy of Sciences
210008NanjingChina
Jin-Song Deng
Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences
100012BeijingChina
GRB 110530A: Peculiar Broad Bump and Delayed Plateau in Early Optical Afterglows
28 Jul 2016arXiv:1607.08454v1 [astro-ph.HE]Subject headings: Gamma-ray burst: general
We report our very early optical observations of GRB 110530A and investigate its jet properties together with its X-ray afterglow data. A peculiar broad onset bump followed by a plateau is observed in its early R band afterglow lightcurve. The optical data in the other bands and the X-ray data are well consistent with the temporal feature of the R band lightcurve. Our joint spectral fits of the optical and X-ray data show that they are in the same regime, with a photon index of ∼ 1.70. The optical and X-ray afterglow lightcurves are well fitted with the standard external shock model by considering a delayed energy injection component. Based on our modeling results, we find that the radiative efficiency of the GRB jet is ∼ 1% and the magnetization parameter of the afterglow jet is < 0.04 with the derived extremely low ǫ B (the fraction of shock energy to magnetic field) of (1.64 ± 0.25) × 10 −6 . These results indicate that the jet may be matter dominated. Discussion on delayed energy injection from accretion of late fall-back material of its pre-supernova star is also presented.
Introduction
It is generally believed that cosmic gamma-ray bursts (GRBs) are from ultra-relativistic jets powered by newly-born black holes or pulsars during collapses of massive stars or mergers of compact stars (e.g., Colgate 1974; Paczynski 1986; Eichler et al. 1989; Narayan et al. 1992; Woosley 1993; MacFadyen & Woosley 1999; Zhang et al. 2003; see reviews by Mészáros 2002, 2006; Zhang & Mészáros 2004; Piran 2004; Woosley & Bloom 2006; Kumar & Zhang 2015). Their prompt gamma-ray emission may be from internal shocks in an erratic, unsteady, relativistic fireball (e.g., Rees & Mészáros 1992; Mészáros & Rees 1993; Rees & Mészáros 1994), a dissipative photosphere (e.g., Beloborodov 2010; Vurm et al. 2011; Giannios 2008; Ioka 2010), or a Poynting-flux-dominated outflow (Zhang & Yan 2011 and references therein). The broad-band observations with the Fermi mission have sharpened the debate on the radiation mechanisms and the composition of GRB jets (e.g., Abdo et al. 2009; Zhang et al. 2009; Zhang et al. 2013; Lyu et al. 2014).
Long-lived afterglows in the X-ray, optical and radio bands following the prompt gamma-rays were discovered in the BeppoSAX mission era (van Paradijs et al. 2000 and references therein). They are well explained with the synchrotron emission from external shocks when GRB fireballs propagate into the circumburst medium (e.g., Mészáros & Rees 1997; Sari et al. 1998). Afterglow observations were revolutionized by the Swift mission thanks to the prompt slewing and precise localization capabilities of its X-ray Telescope (XRT) (Gehrels et al. 2004; Burrows et al. 2005b). The number of GRBs with optical and X-ray afterglow detections has rapidly increased, and the sample of well-sampled lightcurves is also growing quickly (Gehrels et al. 2009; Kann et al. 2010). Excluding the tail emission of the prompt gamma-rays and erratic flares from the canonical XRT lightcurves (Nousek et al. 2006; Zhang et al. 2006), the X-ray afterglow lightcurves are generally consistent with the predictions of the external shock model when an extra energy injection is added (Liang et al. 2007). Statistical analysis of the optical afterglow lightcurves observed from February 1997 to November 2011 shows that about 1/3 of the optical afterglow lightcurves agree well with the prediction of the external shock model in the thin shell case, and another 1/3 may require an extra energy injection into the externally shocked medium (Li et al. 2012; Liang et al. 2013). An extensive analysis of the X-ray and optical afterglow data by Wang et al. (2015) shows that the standard external shock models can explain the data well when various effects are carefully considered, such as a long-lasting reverse shock, structured jets, and the circumburst medium density profile.
Well-sampled multi-wavelength lightcurves with broad temporal coverage, from very early to late epochs, are valuable for modeling the lightcurves and revealing the properties of the GRB jets, and even of the GRB central engines and progenitors (e.g., Xin et al. 2016). This paper reports our very early optical observations of GRB 110530A and detailed modeling of the optical and X-ray afterglow lightcurves. Observations and data reduction are reported in §2. We present a joint temporal and spectral analysis of the optical and X-ray afterglow data in §3, and present our modeling results in §4. Discussion of the possible implications for the jet composition and progenitor star is given in §5. Conclusions are presented in §6. Throughout, the notation Q_n = Q/10^n in cgs units is adopted.
Observations and Data Reduction
XRT and the UV-Optical Telescope (UVOT) on board Swift began observing the X-ray and optical afterglows of GRB 110530A at 446 seconds and 438 seconds after the Swift Burst Alert Telescope (BAT) trigger, respectively (D'Avanzo et al. 2011a, b). Our optical follow-up observations began much earlier than the first detections by XRT and UVOT (Marshall et al. 2011). The TNT (0.8-m Tsinghua University - National Astronomical Observatory of China Telescope) at Xinglong Observatory 1 promptly slewed to the burst position 133 seconds after the Swift/BAT trigger, and the optical counterpart was clearly detected in all images in the white (W) and R bands. The early optical afterglow of GRB 110530A was also observed with the AZT-33IK telescope of Sayan observatory (Mondy), and a well-sampled lightcurve was obtained (Volnova et al. 2011). Our observations with the Lulin One-meter Telescope (LOT) in Taiwan started at about 30 min after the burst, and the optical counterpart was also clearly detected in the g, r, and i bands. The optical counterpart was also detected with the 2.5-m NOT telescope at Roque de los Muchachos Observatory (La Palma, Spain) at 6.8 hours after the burst, by which time it had faded to R ∼ 21.3 mag (De Cia et al. 2011). Spectroscopic observations with NOT did not show any evident absorption lines, and a limit of z < 2.7 is placed on the redshift by the non-detection of Lyman alpha absorption in the spectra (De Cia et al. 2011). We assume z = 1 for our analysis.
We processed our optical data following the standard routine in the IRAF package 2 . Point spread function (PSF) photometry was applied with the DAOPHOT tool in the IRAF package to obtain the instrumental magnitudes. The white band data are simply treated as R band data (Xin et al. 2010). All TNT optical data were calibrated against USNO B1.0 R2 magnitudes using 11 nearby reference stars. The data observed with the LOT telescope were calibrated with the transformation of Jordi et al. (2006) 3 using USNO B1.0 magnitudes. Our optical observations are reported in Table 1 and the optical afterglow lightcurves are shown in Figure 1. The reference stars used for calibration are listed in Table 2.
The Swift/XRT lightcurve and spectrum are extracted from the UK Swift Science Data Centre at the University of Leicester (Evans et al. 2009) 4 . The XRT lightcurve with 30 counts per bin is also shown in Figure 1.
The duration of the prompt emission in the BAT band is T_90 = 19.6 s. We extracted the prompt gamma-ray spectrum following the standard BAT data processing routine. It is well known that the GRB spectrum in the keV-MeV band is empirically fit by the Band function with typical photon indices Γ_1 = −1 and Γ_2 = −2.3 breaking at E_b (Band et al. 1993; Preece et al. 2000). The peak energy of the νf_ν spectrum is then given by E_p = (2 + Γ_1)E_b if Γ_2 < −2. The E_p value may vary from tens to thousands of keV among GRBs. Since the BAT energy band is only 15-150 keV, the GRB spectrum observed with BAT is usually adequately fitted with a single power-law, and an empirical relation between E_p and Γ_γ has been proposed, i.e., log E_p = (2.76 ± 0.07) − (3.61 ± 0.26) log Γ_γ (Zhang et al. 2007b). Fitting the BAT spectrum of GRB 110530A with a single power-law, we get Γ_γ = 2.04 ± 0.21, and its fluence in the BAT energy band is 3.3 × 10^−7 erg cm^−2 in this spectral model. With the empirical relation between E_p and Γ_γ, we have E_p ∼ 45 keV. Correcting the E_γ,iso in the BAT band to the 1 − 10^4 keV band with the spectral parameters Γ_1 = −1, Γ_2 = −2.3, and E_p = 45 keV, we obtain E^c_γ,iso = 1.92 × 10^51 erg assuming z = 1. With the same spectral parameters, we also obtain the peak luminosity in the 1 − 10^4 keV band, L^c_γ,iso = (2.81 ± 0.71) × 10^50 erg s^−1.
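As an illustration of the numbers above, the sketch below evaluates the empirical E_p-Γ_γ relation and the k-correction of the BAT fluence to the rest-frame 1-10^4 keV band using the quoted Band parameters. The luminosity distance for z = 1 (taken here as ≈ 6.6 Gpc for a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3) is an assumption of this sketch, since the cosmology is not spelled out in the text.

```python
import numpy as np
from scipy.integrate import quad

# Quoted spectral parameters
Gamma_gamma = 2.04                 # BAT single power-law photon index
G1, G2, Ep = -1.0, -2.3, 45.0      # Band indices and peak energy (keV)
fluence = 3.3e-7                   # erg cm^-2, 15-150 keV
z = 1.0
d_L = 6.6e3 * 3.086e24             # ~6.6 Gpc in cm (assumed flat LCDM, H0=70, Om=0.3)

# Empirical Ep-Gamma relation (Zhang et al. 2007b)
Ep_est = 10 ** (2.76 - 3.61 * np.log10(Gamma_gamma))
print(f"Ep from Gamma_gamma: ~{Ep_est:.0f} keV")          # ~45 keV

# Band spectral shape: E * N(E) (arbitrary normalisation)
E0 = Ep / (2.0 + G1)               # e-folding energy
Eb = (G1 - G2) * E0                # break between the two power-law segments
def EN(E):
    if E < Eb:
        return E ** (G1 + 1.0) * np.exp(-E / E0)
    return Eb ** (G1 - G2) * np.exp(G2 - G1) * E ** (G2 + 1.0)

# k-correction: rest-frame 1-10^4 keV versus observed 15-150 keV
num, _ = quad(EN, 1.0 / (1 + z), 1.0e4 / (1 + z), points=[Eb], limit=200)
den, _ = quad(EN, 15.0, 150.0, points=[Eb], limit=200)
k = num / den
E_iso = 4.0 * np.pi * d_L**2 * fluence * k / (1.0 + z)
print(f"k-correction ~{k:.2f},  E_gamma,iso ~{E_iso:.2e} erg")  # ~2e51 erg
```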
Data Analysis
As shown in Figure 1, a well-sampled lightcurve in the R band was obtained with the TNT. We empirically fit the lightcurve with a multiple broken power-law model. Each smoothly broken power-law segment reads
F = F_0 [ (t/t_b)^{ωα_1} + (t/t_b)^{ωα_2} ]^{1/ω},    (1)
where t_b is the break time, α_1 and α_2 are the decay indices before and after the break, respectively, and ω describes the sharpness of the break. Our fit yields five phases, as shown in the right panel of Figure 1. The R band lightcurve rises smoothly with a slope of 2.6 ± 0.4 (Phase I) and peaks at 275 ± 22 seconds. The flux stays almost constant from 275 seconds to 1300 seconds (the first plateau, Phase II), then decays with a power-law index of −1.2 (Phase III). Subsequently, the flux stays almost constant again (the second plateau, Phase IV) and transits to a normal decay with a power-law index of −1.2 (Phase V). Flickering shows up at around T_0 + 460 and T_0 + 1200 seconds during the first optical plateau. After re-scaling the multi-band optical data and the XRT data, it is clearly seen that both the other-band optical data and the X-ray data are well consistent with the temporal feature of the R band lightcurve; even the optical flickering feature shows up clearly in the X-ray band. These results strongly indicate that the optical and X-ray afterglows are from the same emission component. Such a lightcurve shape has been seen before in other GRB afterglows, though with a less pronounced early plateau, such as GRB 071025 (Perley et al. 2010), GRB 091024 (Virgili et al. 2013), and GRB 110213A (Cucchiara et al. 2011). While lacking a second hump, an early rise-plateau-decay was also recently reported for GRB 141221A (Bardho et al. 2016).
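For reference, a minimal implementation of the smoothly broken power law of Eq. (1) is sketched below. The parameter values are only illustrative (chosen to resemble the first rise-to-plateau transition), and the sign convention adopted here (negative ω for a rise-then-decay peak) is one possible reading of the smoothness parameter, not necessarily the exact convention of the original fit.

```python
import numpy as np

def smooth_bpl(t, F0, tb, a1, a2, w):
    """Smoothly broken power law of Eq. (1): F ~ t^a1 well before tb and
    F ~ t^a2 well after it; |w| controls the sharpness of the transition.
    With this form, w < 0 yields a peak (rise then decay)."""
    x = t / tb
    return F0 * (x ** (w * a1) + x ** (w * a2)) ** (1.0 / w)

# Illustrative parameters resembling the first segment of the R-band fit
t = np.logspace(2, 3.5, 8)          # 100 s to ~3000 s
f = smooth_bpl(t, F0=1.0, tb=275.0, a1=2.6, a2=-1.2, w=-3.0)
for ti, fi in zip(t, f):
    print(f"t = {ti:7.1f} s   F (relative) = {fi:.3f}")
```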
To investigate the spectral properties of the afterglow data, we extract the joint optical and X-ray spectra of the afterglows in five time intervals, i.e., 0.6-0.9 ks, 0.9-1.37 ks, 1.37-2.5 ks, 6-9 ks and 9-14 ks. The X-ray data in each time interval are grouped with a criterion of 10 counts per bin. The selected time intervals correspond to Phases II-V and the late epoch of Phase V. Spectral analysis for Phase I could not be made since no X-ray data are available there. The optical data are corrected for the extinction of our Galaxy, which is A_g = 0.182, A_r = 0.126, A_R = 0.119 and A_i = 0.093 in the burst direction (Schlegel et al. 1998). The equivalent hydrogen column density of our Galaxy is N_H = 6.78 × 10^20 cm^−2. We use the Xspec package to analyze the spectral data. The extinction law of the host galaxy is taken as that of the Large Magellanic Cloud (LMC; R_V = 3.16) or the Small Magellanic Cloud (SMC; R_V = 2.93). The N_H of the host galaxy is derived from the time-integrated X-ray afterglow spectrum; it is N^host_H ∼ 1.0 × 10^21 cm^−2, which is fixed at this value in our time-resolved spectral fits. Considering the hydrogen absorption and extinction of both our Galaxy and the host galaxy, we fit the spectra with a single power-law function. Our results are reported in Table 3 and shown in Figure 2. The derived photon indices range from 1.67 to 1.72. The extinction by the host galaxy is negligible for both the LMC and SMC extinction laws 5 .
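The optical side of the joint SED is built by de-reddening the measured magnitudes and converting them to flux densities. The sketch below shows that bookkeeping for the Galactic extinction values quoted above; the AB zero point used for the magnitude-to-flux conversion is an assumption for illustration only, since the actual calibration (USNO B1.0 / Jordi et al. transformations) involves band-specific zero points not reproduced in this excerpt.

```python
# Galactic extinction toward the burst (mag), as quoted in the text
A_GAL = {"g": 0.182, "r": 0.126, "R": 0.119, "i": 0.093}
F0_JY = 3631.0   # AB zero-point flux density in Jy (assumed for illustration)

def dereddened_flux(mag, band):
    """Correct an observed magnitude for Galactic extinction and convert it
    to a flux density in mJy (AB zero point assumed here)."""
    m_corr = mag - A_GAL[band]
    return 1e3 * F0_JY * 10 ** (-0.4 * m_corr)

# Example: one of the R-band points from Table 1
print(f"{dereddened_flux(19.33, 'R'):.3f} mJy")
```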
Modeling the optical and X-ray afterglow lightcurves
Our temporal and spectral analysis shows that the optical and X-ray afterglows are from the same emission component. The clear detection of the smooth onset feature in the early optical data is well consistent with the expectation of the standard external shock model in the thin shell case (Sari & Piran 1999; Liang et al. 2010, 2013). The observed first plateau seems to be shaped by the broadening of the onset bump together with the superimposed flares (or flickering), which may be due to fluctuations of the external shock region or to flares from late internal shocks (e.g., Burrows et al. 2005a; Fan et al. 2005; Zhang et al. 2006; Dai et al. 2006; Liang et al. 2006). We do not consider these erratic flares in our modeling. The second plateau from 2400 s to 3000 s could be attributed to delayed energy injection into the afterglow jet. Therefore, we model the lightcurves with the standard afterglow model by considering the late energy injection effect. We adopt the standard afterglow model of Sari et al. (1998) and Huang et al. (1999). We describe our model fitting strategy as follows.
• Constraining the medium property and the power-law index of the radiating electrons with the closure relation of the forward shock model. With the decay slope and spectral index of the normal decay phase (Phase V), we find that the afterglows are radiated in the spectral regime of ν m < ν < ν c . In this regime we have p = 2β + 1, where β = Γ − 1. We therefore obtain p = 2.44 ± 0.06. We fix p = 2.4 in our analysis without considering the uncertainty of p. Note that the slope of the afterglow onset (Phase I) is α 1 = 2.6 ± 0.4, which well agrees with the expectation for a constant density interstellar medium (ISM). The medium density in our fit then is set as a constant n.
• Describing the energy injection as L in = L 0 t q during a period from the starting (t s ) to the ending (t e ) time in order to explain the Phase IV.
• Adopting the Markov Chain Monte Carlo (MCMC) technique to search for the parameter set that can best represent the data (a schematic illustration of this sampling strategy is sketched below). The parameters of our model include the initial Lorentz factor (Γ_0), the fraction of shock energy given to electrons (ǫ_e), the fraction of shock energy given to the magnetic field (ǫ_B), the medium density (n), the isotropic kinetic energy (E_K,iso), the jet opening angle (θ_j), and the parameters of the energy injection (L_0, q, t_s, and t_e). We calculate the χ² and measure the goodness of the fit for each parameter set with a normalized probability p_f ∝ e^{−χ²/2}. Note that the lightcurves contain some flares. With the MCMC technique we search for the parameter set that has the minimum χ² (hence the largest p_f value). The uncertainty of a parameter in the best parameter set is evaluated by fixing the other parameters.
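A schematic version of this χ²-weighted random-walk search is sketched below. The afterglow model itself (the full forward-shock dynamics of Sari et al. 1998 and Huang et al. 1999 with delayed injection) is replaced here by a trivial stand-in function, so the snippet only illustrates the Metropolis-style acceptance rule implied by p_f ∝ e^{−χ²/2}; none of the numbers are the actual fit.

```python
import numpy as np

rng = np.random.default_rng(42)

def model_flux(t, theta):
    # Stand-in for the full forward-shock + delayed-injection model:
    # a single power law with amplitude and slope as free parameters.
    amp, slope = theta
    return amp * (t / 1e3) ** slope

# Fake "observed" lightcurve, for demonstration only
t_obs = np.logspace(2.5, 4.5, 30)
f_obs = model_flux(t_obs, (1.0, -1.2)) * rng.normal(1.0, 0.05, t_obs.size)
f_err = 0.05 * f_obs

def chi2(theta):
    return np.sum(((model_flux(t_obs, theta) - f_obs) / f_err) ** 2)

# Metropolis random walk: accept with probability exp(-(chi2_new - chi2_old)/2)
theta = np.array([0.5, -1.0])
c_old = chi2(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.02])
    c_new = chi2(prop)
    if np.log(rng.random()) < -(c_new - c_old) / 2.0:
        theta, c_old = prop, c_new
    chain.append(theta.copy())

chain = np.array(chain)[5000:]          # drop burn-in
print("best-fit (mean +/- std):", chain.mean(axis=0), chain.std(axis=0))
```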
With this strategy, the best parameters and their uncertainties (1σ confidence level) are Γ_0 = 91 ± 8, ǫ_e = 0.086 ± 0.008, ǫ_B = (1.64 ± 0.25) × 10^−6, n = 13.3 ± 2.6 cm^−3, E_K,iso = (2.28 ± 0.27) × 10^53 erg, t_s ∼ 2400 s, t_e = 2997 ± 546 s, L_0 = (4.0 ± 2.5) × 10^50 erg/s, and q = −0.18^{+0.05}_{−0.07}. The jet opening angle is poorly constrained and we have θ_j > 0.15 rad. Figure 3 shows our best fit to the data with our model. The χ² of the fit is 1.605. The large χ² is due to flares/fluctuations in the optical and X-ray bands. The derived ǫ_e is generally consistent with previous results (Wijers & Galama 1999; Panaitescu & Kumar 2001; Yost et al. 2003; Liang et al. 2004), but ǫ_B is much lower than the typical value, i.e., 10^−2 (e.g., Panaitescu & Kumar 2001). Further discussion on ǫ_B is presented in §5.1. The Γ_0 of GRB 110530A is at the lower end of the Γ_0 distribution for a sample of GRBs whose Γ_0 values are calculated from the deceleration time in their optical afterglow lightcurves (see Figure 12 of Liang et al. 2013).
Note that since the redshift of GRB 110530A is unknown, we set z = 1 in our lightcurve modeling 7 . We also check the dependence of the model parameters on the burst distance by setting z = 0.5 and z = 2.0. We find that ǫ_e, ǫ_B, n and q do not change with redshift. Owing to the large uncertainties of t_e, t_s and θ_j, we also do not find a clear dependence of these parameters on redshift. However, Γ_0, E_K,iso, and L_0 become larger as z increases.

7 Liang et al. (2015) found a tight correlation among L_γ,iso, Γ_0, and E_p in the burst frame, i.e., log L_γ,iso/10^52 erg s^−1 = (−6.38 ± 0.35) + (1.34 ± 0.14) × log(E_p(1 + z)) + (1.32 ± 0.19) × log Γ_0. Setting z = 1 and using E_p = 45 keV and Γ_0 = 91, we get log L_γ,iso/erg s^−1 = 50.82 ± 0.35 in the energy band of 1 − 10^4 keV, where the error accounts only for the systematic error of the relation, without considering the observed errors of E_p and Γ_0 since no E_p error is available. This is roughly consistent with the observed luminosity corrected to the same energy band, i.e., log L^c_iso,obs/erg s^−1 = 50.45 ± 0.11.

Discussion

Baryonic or Magnetized Jet?
The issue of whether GRB jets are baryonic or magnetized is still under debate (e.g., Zhang 2011). The GRB radiative efficiency, which is defined as η_γ = E_γ,iso/(E_γ,iso + E_K,iso), is an essential quantity for understanding the nature of the bursts (e.g., Zhang et al. 2006). The standard internal shock models predict a GRB efficiency of ∼ 1% (Kumar 1999; Panaitescu et al. 1999). E_K should be the kinetic energy of the fireball that produces the observed gamma-ray energy, and it should be estimated at the fireball deceleration time. Assuming that the early optical bump is due to the deceleration of the fireball by the ambient medium, one can then derive the E_K of the fireball at the deceleration time (t_dec) by eliminating the possible late energy injection. In this analysis, we get E_K,iso = (2.28 ± 0.27) × 10^53 erg. The corrected gamma-ray energy in the 1 − 10^4 keV band is E^c_γ,iso = 1.92 × 10^51 erg. Therefore, the internal shock radiation efficiency of GRB 110530A is 0.83%. The total energy injection from 2390/(1+z) to 2997/(1+z) seconds derived from our model fit is ∼ 3.39 × 10^52 erg. Including the delayed energy injection, the efficiency is η = 0.73%. This is also consistent with the prediction of the internal shock models. Zhang et al. (2007a) found that some bursts have a low efficiency throughout, and these GRBs usually have an X-ray afterglow lightcurve that smoothly joins the prompt emission lightcurve without a distinct steep decay component or an extended shallow decay component. Fan & Piran (2006) suggested that the gamma-ray efficiency is moderate and does not challenge the standard internal shock model. GRB 110530A is consistent with the behavior reported by Zhang et al. (2007a). The low efficiency agrees well with the prediction of the standard internal shock models, likely implying that the outflow producing the prompt emission could be baryonic.
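The efficiency quoted above follows directly from the fitted energies; the few lines below reproduce it, using only numbers already given in the text.

```python
E_gamma_iso = 1.92e51        # erg, corrected 1-10^4 keV band
E_K_iso     = 2.28e53        # erg, from the afterglow fit
E_inject    = 3.39e52        # erg, total delayed injection

eta_prompt = E_gamma_iso / (E_gamma_iso + E_K_iso)
eta_total  = E_gamma_iso / (E_gamma_iso + E_K_iso + E_inject)
print(f"radiative efficiency (no injection):   {100*eta_prompt:.2f}%")  # ~0.83%
print(f"radiative efficiency (with injection): {100*eta_total:.2f}%")   # ~0.73%
```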
The jet composition in the afterglow phase is also of interest. The ǫ_B value derived from our model fit is much smaller than the typical value reported in the literature. For a constant density medium, the cooling frequency of the synchrotron emission is given by ν_c = 6.3 × 10^15 Hz (1 + z)^{−1/2}(1 + Y)^{−2} ǫ_{B,−2}^{−3/2} E_{K,iso,52}^{−1/2} n^{−1} t_d^{−1/2} (Sari et al. 1998; Yost et al. 2003), where Y is the inverse Compton scattering parameter and t_d is the observer's time in units of days. One can see that ν_c is sensitive to ǫ_B. As time increases, ν_c becomes smaller. The extremely low ǫ_B ensures that both the optical and X-ray emission are still in the regime ν < ν_c even at times of several days. The magnetic field strength of the afterglow jet in the co-moving frame is given by B = (32πm_p ǫ_B n)^{1/2} Γ_0 c, and the power carried by the magnetic field can be derived from P_B = πR_dec^2 Γ_0^2 cB^2/8π, where R_dec = 2.25 × 10^16 (Γ_0/100)^2 (t_p,z/100 s) cm is the deceleration radius of the fireball, m_p is the proton mass, and c is the speed of light. We obtain B = 0.165 G and P_B ∼ 5.55 × 10^44 erg/s for GRB 110530A. Assuming that the electron energy is fully radiated and that the X-ray luminosity is a good representative of the bolometric afterglow luminosity, we estimate the kinetic power of the afterglow jet at the deceleration time with L_K = L_X(1 − cos θ_j)/ǫ_e, which gives L_K > 1.33 × 10^46 erg/s. Therefore, the magnetization of the afterglow jet is σ = P_B/L_K < 0.04, suggesting that the afterglow jet is baryonic. It is also interesting that the derived B and σ are comparable to the typical values of the jets in BL Lacs, which are suggested to be matter dominated (Zhang et al. 2013).
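For completeness, the magnetization estimate can be reproduced from the fitted parameters as below; the burst-frame peak time t_p,z = 275 s/(1 + z) used for the deceleration radius is an assumption of this sketch (it follows from the observed peak time and the z = 1 adopted in the text).

```python
import math

m_p, c = 1.6726e-24, 2.998e10       # proton mass (g), speed of light (cm/s)
eps_B, n, Gamma0 = 1.64e-6, 13.3, 91.0
t_pz = 275.0 / 2.0                  # burst-frame peak time for z = 1 (assumed)
L_K_min = 1.33e46                   # erg/s, lower limit quoted in the text

# Co-moving magnetic field and deceleration radius
B = math.sqrt(32.0 * math.pi * m_p * eps_B * n) * Gamma0 * c
R_dec = 2.25e16 * (Gamma0 / 100.0) ** 2 * (t_pz / 100.0)
# Power carried by the magnetic field (see text)
P_B = math.pi * R_dec**2 * Gamma0**2 * c * B**2 / (8.0 * math.pi)

print(f"B     ~ {B:.3f} G")                 # ~0.165 G
print(f"P_B   ~ {P_B:.2e} erg/s")           # ~5.5e44 erg/s
print(f"sigma < {P_B / L_K_min:.3f}")       # ~0.04
```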
Possible Sources of the Delayed Energy Injection
A plateau phase is usually observed in the XRT lightcurves (O'Brien et al. 2006; Liang et al. 2007) and in about one-third of the optical lightcurves of long-duration GRBs (Li et al. 2012; Liang et al. 2013). Such a feature can be well explained with long-lasting energy injection from a constant magnetic-dipole-radiation luminosity within the spin-down timescale of a magnetar (Dai & Lu 1998; Zhang & Mészáros 2001; Lü & Zhang 2014). The injection in this scenario is continuous and starts at a very early epoch. With the clear detection of the afterglow onset bump, we propose that the energy injection happened after the deceleration time of the fireball. In addition, as shown above, the jets in the prompt gamma-ray phase and in the afterglow phase both seem to be matter dominated. These results possibly disfavor the pulsar wind injection scenario 8 . We suggest that the injection may be caused by a slower shell that is ejected at the same epoch as the shells producing the prompt gamma-rays (Zhang & Mészáros 2002), or by delayed ejecta from late accretion activity (Geng et al. 2013). The time delay for the rear shells/ejecta to catch up with the decelerated fireball may result in the delayed energy injection. On the other hand, the energy transfer time from the fireball ejecta to the ambient medium typically extends to thousands of seconds, which may also broaden the onset peak in the thin shell case (Kobayashi & Zhang 2007).
In the scenario of a black hole accretion system, the energy flow from the fall-back accretion may be delayed by a fall-back time t_fb and produce giant bumps in the optical bands (Geng et al. 2013). In this scenario, one may place some constraints on the progenitor star. The radius of the fall-back material can be estimated with R_fb ∼ 6.85 × 10^10 cm (M_BH/3M_⊙)^{1/3} (t_fb/10^3 s)^{2/3}. We estimate the minimum and maximum radii of the fall-back material with t_s and t_e in the burst frame and obtain R_fb,min ∼ 7.71 × 10^10 cm (M_BH/3M_⊙)^{1/3} and R_fb,max ∼ 8.98 × 10^10 cm (M_BH/3M_⊙)^{1/3}. Woosley & Weaver (1995) derived the mass density profile as a function of radius R from simulations of a pre-supernova star with a mass of 25M_⊙ (see also Janiuk & Proga 2008), as shown in Figure 5. The mass density of the shell R ∈ [R_fb,min, R_fb,max] is about 1.7 × 10^−2 g cm^−3, and the mass in this shell is 9.62 × 10^−3 M_⊙ (corresponding to a rest-mass energy of 1.71 × 10^52 erg), assuming that M_BH = 3M_⊙. The total energy injection from 2390 seconds to 2997 seconds derived from our model fit is ∼ 3.39 × 10^52 erg, corresponding to a geometrically-corrected injection energy of 3.81 × 10^50 erg for θ_j = 0.15 rad. The injected energy is thus only a small fraction (2.23%) of the rest-mass energy of the fall-back material. Simplifying the mass density profile as a power-law function, log(ρ/g cm^−3) = 30.47 − 3.24 log(R/cm) within R < R_fb,max, as shown in Figure 5, the mass within R < R_fb,max is ∼ 7.5M_⊙. If all the mass within R < R_fb,max collapses to form a newly-born black hole and its accretion disk, the total collapsed/fall-back mass is about 30% of the progenitor star, and the remaining mass in the outer layers would be expelled as a supernova.
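The fall-back radii and the shell mass quoted above can be checked with a few lines of arithmetic; the burst-frame times t_s/(1 + z) and t_e/(1 + z) with z = 1 are used, consistent with the assumption made in the text, and the shell mass assumes the quoted mean density throughout the shell.

```python
import math

M_SUN = 1.989e33            # g
t_s, t_e, z = 2390.0, 2997.0, 1.0
rho_shell = 1.7e-2          # g/cm^3, mean density of the shell (quoted)

def R_fb(t_fb, M_BH_over_3Msun=1.0):
    """Fall-back radius (cm) for a given fall-back time, as in the text."""
    return 6.85e10 * M_BH_over_3Msun ** (1.0 / 3.0) * (t_fb / 1e3) ** (2.0 / 3.0)

R_min = R_fb(t_s / (1 + z))
R_max = R_fb(t_e / (1 + z))
M_shell = 4.0 / 3.0 * math.pi * rho_shell * (R_max**3 - R_min**3)

print(f"R_fb,min ~ {R_min:.2e} cm,  R_fb,max ~ {R_max:.2e} cm")   # ~7.7e10, ~9.0e10
print(f"shell mass ~ {M_shell / M_SUN:.2e} M_sun")                # ~1e-2 M_sun
print(f"rest-mass energy ~ {M_shell * (2.998e10)**2:.2e} erg")    # ~1.7e52 erg
```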
Conclusions
We have reported our very early optical observations of GRB 110530A and investigated its jet properties together with its X-ray afterglow data. A broad bump with significant flares is observed in the optical lightcurve at t < 2000 seconds, which is followed by a plateau that transitions to a normal decaying segment at t ∼ 3000 seconds. The X-ray afterglow lightcurve shows almost the same feature. Our joint spectral fits of the optical and X-ray data show that they are in the same regime, with a photon index of ∼ 1.70. The extinction of the host galaxy is negligible, but the equivalent hydrogen column density of the host galaxy is approximately 1.0 × 10^21 cm^−2. We model the optical and X-ray lightcurves with the standard external shock model by considering delayed energy injection and assuming a redshift of 1. Our best parameters derived from an MCMC approach are Γ_0 = 91 ± 8, ǫ_e = 0.086 ± 0.008, ǫ_B = (1.64 ± 0.25) × 10^−6, n = 13.3 ± 2.6 cm^−3, E_K,iso = (2.28 ± 0.27) × 10^53 erg, and θ_j ∼ 0.15 rad. The energy injection can be described as L_in/10^50 erg s^−1 = (4.0 ± 2.5) × t^−0.18, which starts at ∼ 2390 seconds and lasts only about 700 seconds. Based on our modeling results, the radiative efficiency of the GRB fireball is ∼ 1%, and the magnetic field strength and the magnetization parameter of the afterglow jet are B = 0.165 G and σ < 0.04, respectively. We propose that the jet is likely matter dominated; possible sources of the delayed energy injection are also discussed.
The most striking feature of GRB 110530A is the early broad bump followed by a plateau in its R band afterglow lightcurve. We have shown that the standard forward shock model with delayed injection can roughly fit the global features of the lightcurves. We interpret the flickering in the optical and X-ray lightcurves as superimposed flares that may have an internal origin. We should note that this flickering, especially the significant flickering at around 3000 seconds in the X-ray band, may also be due to the delayed energy injection. Zhang & Mészáros (2001) analyzed energy injection and the corresponding signatures that could show up in afterglow lightcurves. They showed that injection by a Poynting-flux-dominated shell that has an energy comparable to that of the initial fireball would lead to a gradual achromatic bump. In the case where the injection is kinetic-energy-dominated, the results depend on the details of the collision between the injected (rear) shells and the initial (leading) shells. If the collision is mild, the signature shown in the lightcurves may be analogous to the Poynting-flux-dominated injection case. In the case of a violent collision, a significant flare-like bump may be observed (see Figure 5 of Zhang & Mészáros 2001). In the case that the delayed energy injection is fed by the fall-back material, the delayed energy would also cause a notable rise in the Lorentz factor of the external shock, which would generate a bump in the multi-band afterglows, as seen in GRB 081029 and GRB 100621A (Nardini et al. 2011; Greiner et al. 2013; Geng et al. 2013).

Fig. 3.— Fits to the optical and X-ray afterglow lightcurves using the standard external shock model with a delayed energy injection behaving as L_in = L_0 t^q. The model parameters derived from the MCMC technique are Γ_0 = 91 ± 8, ǫ_e = 0.086 ± 0.008, ǫ_B = (1.64 ± 0.25) × 10^−6, n = 13.3 ± 2.6 cm^−3, E_K,iso = (2.28 ± 0.27) × 10^53 erg, t_s ∼ 2400 s, t_e = 2997 ± 546 s, L_0 = (4.0 ± 2.5) × 10^50 erg/s, q = −0.18^{+0.05}_{−0.07}, and θ_j > 0.15 rad. The flare-like X-ray data at around 10^3 s are not included in our fits.
Acknowledgement
This work is supported by the National Basic Research Program of China (973 Program, grant No. 2014CB845800), the National Natural Science Foundation of China (Grant No. 11533003, 11103036, U1331202, U1231115 and U1331101), the Strategic Priority Research Program "The Emergence of Cosmological Structures" of the Chinese Academy of Sciences (grant XDB09000000), and the Guangxi Science Foundation (Grant No. 2013GXNSFFA019001). We also acknowledge the use of the public data from the Swift data archive.
Fig. 1.— Observed optical and X-ray afterglow lightcurves of GRB 110530A (left panel) and our empirical fit with multiple smooth broken power-laws to the R band lightcurve (right panel). The optical data in the other bands and the XRT data in the right panel are re-scaled in order to show the consistency of their temporal features with the R band lightcurve. The phases identified from our empirical fit are also marked. The early optical afterglow data observed with the AZT-33IK telescope of Sayan observatory (Mondy), read from Volnova et al. (2011), are also shown for comparison.
Fig. 2.— Joint spectral fits of the optical and X-ray afterglows with a single power-law function in the five selected time intervals. The olive dashed lines show the intrinsic power-law spectrum derived from the joint fits. The photon indices are also marked.
Fig. 4.— Probability distributions of the parameters of the forward shock model with delayed energy injection, along with our Gaussian fits (solid red lines). The dashed black vertical lines mark the 1σ standard deviations. Our fit gives only a lower limit on θ_j.
Fig. 5.— Mass density profile as a function of radius R derived from simulations of a pre-supernova star with a mass of 25M_⊙ (Woosley & Weaver 1995). The vertical and horizontal dashed lines mark the radii and the corresponding density of the fall-back material feeding the late accretion in this analysis. The solid red line is a power-law fit to the density profile for R < 9 × 10^10 cm.
They are set in the following ranges, Γ 0 ∈ [50, 150],ǫ e ∈ [0.01, 0.5], ǫ B ∈ [10 −7 , 10 −4 ] 6 , n ∈ [0.1, 25], E K,iso ∈ [10 51 , 10 54 ] erg, θ j ∈ [0.01, 0.5] rad, t s ∈ [1000, 3000] seconds, t e ∈ [3000, 5000] seconds, L 0 ∈ [10 49 , 10 52 ] erg/s, and q ∈ [−0.3, −0.1].
Table 1. Optical Afterglow Photometry Log of GRB 110530A

T-T0 (mid, s)   Exposure (s)   Mag      σ      Filter   Telescope
144             20             19.24    0.34   W        TNT
167             20             19.39    0.24   W        TNT
190             20             18.99    0.22   W        TNT
213             20             19.01    0.12   W        TNT
235             20             18.86    0.18   W        TNT
258             20             18.59    0.14   W        TNT
281             20             18.63    0.14   W        TNT
303             20             18.54    0.14   W        TNT
326             20             18.56    0.14   W        TNT
349             20             18.54    0.10   W        TNT
372             20             18.60    0.12   W        TNT
394             20             18.54    0.15   W        TNT
417             20             18.66    0.16   W        TNT
440             20             18.53    0.13   W        TNT
463             20             18.27    0.12   W        TNT
485             20             18.41    0.11   W        TNT
508             20             18.43    0.14   W        TNT
531             20             18.36    0.13   W        TNT
553             20             18.48    0.12   W        TNT
605             60             18.62    0.09   R        TNT
684             60             18.47    0.08   R        TNT
763             60             18.45    0.07   R        TNT
841             60             18.49    0.10   R        TNT
919             60             18.37    0.08   R        TNT
998             60             18.60    0.10   R        TNT
1076            60             18.64    0.12   R        TNT
1155            60             18.49    0.08   R        TNT
1233            60             18.62    0.11   R        TNT
1312            60             18.67    0.12   R        TNT
1390            60             18.76    0.13   R        TNT
1469            60             18.83    0.11   R        TNT
1547            60             19.17    0.18   R        TNT
1625            60             18.92    0.14   R        TNT
1704            60             19.08    0.14   R        TNT
1782            60             19.01    0.16   R        TNT
1861            60             19.03    0.14   R        TNT
1939            60             19.27    0.23   R        TNT
2018            60             19.17    0.17   R        TNT
2096            60             19.53    0.24   R        TNT
2297            300            19.33    0.08   R        TNT
2614            300            19.46    0.08   R        TNT
2932            300            19.40    0.07   R        TNT
3250            300            19.44    0.08   R        TNT
3567            300            19.36    0.07   R        TNT
3885            300            19.44    0.09   R        TNT
4203            300            19.53    0.09   R        TNT
4520            300            19.65    0.09   R        TNT
4838            300            19.50    0.08   R        TNT
5156            300            19.84    0.10   R        TNT
5473            300            19.70    0.10   R        TNT
6109            300            19.68    0.09   R        TNT
6427            300            19.82    0.10   R        TNT
6744            300            19.73    0.10   R        TNT
7062            300            20.04    0.11   R        TNT
7380            300            20.13    0.12   R        TNT
7697            300            20.27    0.14   R        TNT
8015            300            20.26    0.14   R        TNT
8333            300            20.19    0.13   R        TNT
8650            300            20.23    0.15   R        TNT
8968            300            20.34    0.15   R        TNT
9286            300            20.09    0.12   R        TNT
9604            300            20.11    0.12   R        TNT
10371           600            20.49    0.10   R        TNT
10557           300            20.30    0.14   R        TNT
11624           900            20.51    0.10   R        TNT
13177           1500           20.66    0.09   R        TNT
14166           900            21.00    0.20   R        TNT
15719           1500           21.17    0.46   R        TNT
79442           3000           >21.91   -      R        TNT
Table 2. Reference stars for magnitude calibration

RA            DEC            Epoch   B2      R2      I
18:48:17.785  +61:55:56.69   J2000   18.63   17.07   16.08
18:48:15.583  +61:56:13.39   J2000   17.48   16.08   14.86
18:48:15.951  +61:56:25.06   J2000   18.41   17.23   17.02
18:48:10.257  +61:55:43.25   J2000   17.49   16.05   15.08
18:48:08.206  +61:55:40.41   J2000   17.09   16.88   16.30
18:48:05.743  +61:54:51.29   J2000   17.41   16.93   16.37
18:48:19.011  +61:54:43.98   J2000   15.15   14.03   13.20
18:48:23.664  +61:55:10.04   J2000   16.11   15.34   14.80
18:48:27.258  +61:55:12.79   J2000   16.17   15.36   14.30
18:48:26.344  +61:56:20.09   J2000   16.24   15.77   15.58
18:48:22.675  +61:56:37.35   J2000   16.49   16.10   15.47

Note. — Reference stars for the calibration in this work. B2, R2 and I-band magnitudes are extracted from the USNO B1.0 catalog.
Table 3. Spectral analysis of the optical and X-ray afterglows in selected time intervals

Interval (s)   Model (χ²/dof)              Photon index (Γ)
0.6k-0.9k      LMC*PL (7.54/7 = 1.08)      1.70 ± 0.02
               SMC*PL (7.54/7 = 1.08)      1.70 ± 0.02
0.9k-1.37k     LMC*PL (15.79/13 = 1.214)   1.68 ± 0.13
               SMC*PL (15.79/13 = 1.215)   1.68 ± 0.13
1.37k-2.5k     LMC*PL (33.83/22 = 1.538)   1.72 ± 0.04
               SMC*PL (33.83/22 = 1.538)   1.72 ± 0.04
6k-9k          LMC*PL (31.58/11 = 2.871)   1.67 ± 0.02
               SMC*PL (31.59/11 = 2.872)   1.67 ± 0.02
9k-14k         LMC*PL (3.03/4 = 0.758)     1.72 ± 0.03
               SMC*PL (3.03/4 = 0.758)     1.72 ± 0.03
TNT is a 0.8-m telescope and runs by a custom-designed automation system for GRB follow-up observations at Xinglong Observatory. A PI 1300 × 1340 CCD and filters in the standard Johnson Bessel system are equipped for TNT(Zheng et al. 2008).2 IRAF is distributed by NOAO, which is operated by AURA, Inc. under cooperative agreement with NSF.
http://classic.sdss.org/dr6/algorithms/sdssUBVRITransform.html#Jordi2006 4 http://www.swift.ac.uk/results.shtml
Note that the redshift of GRB 110530A is unknown and we have only an upper limit of z < 2.7(De Cia et al. 2011). Our dust modelings may be insecure since the LMC and SMC extinction curves, especially the LMC dust curve, have features which become relevant in this redshift range.
Some recent statistical analysis suggests a low ǫ B , i.e., [10 −6 , 10 −3 ] (e.g.,Wang et al. 2015; Gao et al. 2015; Japelj et al. 2014). Therefore, we set ǫ B ∈ [10 −7 , 10 −2 ]. We find that a reasonable parameter set that can roughly represent the optical and XRT lightcurves requires ǫ B < 10 −4 . We then finalize our fit by setting ǫ B ∈ [10 −7 , 10 −4 ].
It was also proposed that the magnetic-dipole-radiation luminosity of a magnetar can dramatically increase with time, which may lead to a significant bump in the afterglow lightcurves, if the magnetar is spun up by the accretion matter(Dai & Liu 2012). In this scenario, the energy injection in early epoch is not significant and may feature as delayed energy injection in late epoch.
. A A Abdo, M Ackermann, M Arimoto, Science. 3231688Abdo, A. A., Ackermann, M., Arimoto, M., et al. 2009, Science, 323, 1688
. D Band, J Matteson, L Ford, ApJ. 413281Band, D., Matteson, J., Ford, L., et al. 1993, ApJ, 413, 281
. O Bardho, B Gendre, A Rossi, arXiv:1602.09014Bardho, O., Gendre, B., Rossi, A., et al. 2016, arXiv:1602.09014
. D N Burrows, P Romano, A Falcone, Science. 3091833Burrows, D. N., Romano, P., Falcone, A., et al. 2005a, Science, 309, 1833
. D N Burrows, J E Hill, J A Nousek, Space Sci. Rev. 120165Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005b, Space Sci. Rev., 120, 165
. S A Colgate, ApJ. 187333Colgate, S. A. 1974, ApJ, 187, 333
. A Cucchiara, S B Cenko, J S Bloom, ApJ. 743154Cucchiara, A., Cenko, S. B., Bloom, J. S., et al. 2011, ApJ, 743, 154
P D'avanzo, GRB Coordinates Network. 120581D'Avanzo, P. 2011, GRB Coordinates Network, 12058, 1
P D'avanzo, S D Barthelmy, A P Beardmore, GRB Coordinates Network. 120461D'Avanzo, P., Barthelmy, S. D., Beardmore, A. P., et al. 2011, GRB Coordinates Network, 12046, 1
. Z G Dai, R.-Y Liu, ApJ. 75958Dai, Z. G., & Liu, R.-Y. 2012, ApJ, 759, 58
. Z G Dai, T Lu, A&A. 33387Dai, Z. G., & Lu, T. 1998, A&A, 333, L87
. Z G Dai, X Y Wang, X F Wu, B Zhang, Science. 3111127Dai, Z. G., Wang, X. Y., Wu, X. F., & Zhang, B. 2006, Science, 311, 1127
A De Cia, P Vreeswijk, D Xu, J Telting, P Jakobsson, GRB Coordinates Network. 120541De Cia, A., Vreeswijk, P., Xu, D., Telting, J., & Jakobsson, P. 2011, GRB Coordinates Network, 12054, 1
. D Eichler, M Livio, T Piran, D N Schramm, Nature. 340126Eichler, D., Livio, M., Piran, T., & Schramm, D. N. 1989, Nature, 340, 126
. P A Evans, A P Beardmore, K L Page, MNRAS. 3971177Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2009, MNRAS, 397, 1177
. Y Z Fan, D M Wei, MNRAS. 36442Fan, Y. Z., & Wei, D. M. 2005, MNRAS, 364, L42
. Y Fan, T Piran, MNRAS. 369197Fan, Y., & Piran, T. 2006, MNRAS, 369, 197
. N Gehrels, G Chincarini, P Giommi, ApJ. 6111005Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
. N Gehrels, E Ramirez-Ruiz, D B Fox, ARA&A. 47567Gehrels, N., Ramirez-Ruiz, E., & Fox, D. B. 2009, ARA&A, 47, 567
. J J Geng, X F Wu, Y F Huang, Y B Yu, ApJ. 77928Geng, J. J., Wu, X. F., Huang, Y. F., & Yu, Y. B. 2013, ApJ, 779, 28
. J Greiner, T Krühler, M Nardini, A&A. 56070Greiner, J., Krühler, T., Nardini, M., et al. 2013, A&A, 560, A70
. Y F Huang, Z G Dai, T Lu, MNRAS. 309513Huang, Y. F., Dai, Z. G., & Lu, T. 1999, MNRAS, 309, 513
. A Janiuk, D Proga, ApJ. 675519Janiuk, A., & Proga, D. 2008, ApJ, 675, 519
. K Jordi, E K Grebel, K Ammon, A&A. 460339Jordi, K., Grebel, E. K., & Ammon, K. 2006, A&A, 460, 339
. D A Kann, S Klose, B Zhang, ApJ. 7201513Kann, D. A., Klose, S., Zhang, B., et al. 2010, ApJ, 720, 1513
. S Kobayashi, B Zhang, ApJ. 655973Kobayashi, S., & Zhang, B. 2007, ApJ, 655, 973
. P Kumar, ApJ. 523113Kumar, P. 1999, ApJ, 523, L113
. P Kumar, B Zhang, Phys. Rep. 5611Kumar, P., & Zhang, B. 2015, Phys. Rep., 561, 1
. H.-J Lü, B Zhang, ApJ. 78574Lü, H.-J., & Zhang, B. 2014, ApJ, 785, 74
. L Li, E.-W Liang, Q.-W Tang, ApJ. 75827Li, L., Liang, E.-W., Tang, Q.-W., et al. 2012, ApJ, 758, 27
. E W Liang, Z G Dai, X F Wu, ApJ. 60629Liang, E. W., Dai, Z. G., & Wu, X. F. 2004, ApJ, 606, L29
. E W Liang, B Zhang, P T O'brien, ApJ. 646351Liang, E. W., Zhang, B., O'Brien, P. T., et al. 2006, ApJ, 646, 351
. E.-W Liang, L Li, H Gao, ApJ. 77413Liang, E.-W., Li, L., Gao, H., et al. 2013, ApJ, 774, 13
. E.-W Liang, T.-T Lin, J Lü, ApJ. 813116Liang, E.-W., Lin, T.-T., Lü, J., et al. 2015, ApJ, 813, 116
. E.-W Liang, S.-X Yi, J Zhang, ApJ. 7252209Liang, E.-W., Yi, S.-X., Zhang, J., et al. 2010, ApJ, 725, 2209
. E.-W Liang, B.-B Zhang, B Zhang, ApJ. 670565Liang, E.-W., Zhang, B.-B., & Zhang, B. 2007, ApJ, 670, 565
. F Lyu, E.-W Liang, Y.-F Liang, ApJ. 79336Lyu, F., Liang, E.-W., Liang, Y.-F., et al. 2014, ApJ, 793, 36
. P Mészáros, Reports on Progress in Physics. 692259Mészáros, P. 2006, Reports on Progress in Physics, 69, 2259
. P Mészáros, ARA&A. 40137Mészáros, P. 2002, ARA&A, 40, 137
. P Mészáros, M J Rees, ApJ. 476232Mészáros, P., & Rees, M. J. 1997, ApJ, 476, 232
. A I Macfadyen, S E Woosley, ApJ. 524262MacFadyen, A. I., & Woosley, S. E. 1999, ApJ, 524, 262
. F E Marshall, P Avanzo, GRB Coordinates Network120571Marshall, F. E., & D'Avanzo, P. 2011, GRB Coordinates Network, 12057, 1
. P Meszaros, M J Rees, ApJ. 405278Meszaros, P., & Rees, M. J. 1993, ApJ, 405, 278
. M Nardini, J Greiner, T Krühler, A&A. 53139Nardini, M., Greiner, J., Krühler, T., et al. 2011, A&A, 531, A39
. R Narayan, B Paczynski, T Piran, ApJ. 39583Narayan, R., Paczynski, B., & Piran, T. 1992, ApJ, 395, L83
. J A Nousek, C Kouveliotou, D Grupe, ApJ. 642389Nousek, J. A., Kouveliotou, C., Grupe, D., et al. 2006, ApJ, 642, 389
. P T O'brien, R Willingale, J Osborne, ApJ. 6471213O'Brien, P. T., Willingale, R., Osborne, J., et al. 2006, ApJ, 647, 1213
. B Paczynski, ApJ. 30843Paczynski, B. 1986, ApJ, 308, L43
. A Panaitescu, P Kumar, ApJ. 56049Panaitescu, A., & Kumar, P. 2001, ApJ, 560, L49
. A Panaitescu, M Spada, P Mészáros, ApJ. 522105Panaitescu, A., Spada, M., & Mészáros, P. 1999, ApJ, 522, L105
. D A Perley, J S Bloom, C R Klein, MNRAS. 4062473Perley, D. A., Bloom, J. S., Klein, C. R., et al. 2010, MNRAS, 406, 2473
. T Piran, Reviews of Modern Physics. 761143Piran, T. 2004, Reviews of Modern Physics, 76, 1143
. R D Preece, M S Briggs, R S Mallozzi, ApJS. 12619Preece, R. D., Briggs, M. S., Mallozzi, R. S., et al. 2000, ApJS, 126, 19
. M J Rees, P Meszaros, ApJ. 43093Rees, M. J., & Meszaros, P. 1994, ApJ, 430, L93
. M J Rees, P Meszaros, MNRAS. 25841Rees, M. J., & Meszaros, P. 1992, MNRAS, 258, 41P
. R Sari, T Piran, ApJ. 520641Sari, R., & Piran, T. 1999, ApJ, 520, 641
. R Sari, T Piran, R Narayan, ApJ. 49717Sari, R., Piran, T., & Narayan, R. 1998, ApJ, 497, L17
. D J Schlegel, D P Finkbeiner, M Davis, ApJ. 500525Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
. J Van Paradijs, P J Groot, T Galama, Nature. 386686van Paradijs, J., Groot, P. J., Galama, T., et al. 1997, Nature, 386, 686
. F J Virgili, C G Mundell, V Pal'shin, ApJ. 77854Virgili, F. J., Mundell, C. G., Pal'shin, V., et al. 2013, ApJ, 778, 54
. A Volnova, E Klunko, A Pozanenko, GRB Coordinates Network120621Volnova, A., Klunko, E., & Pozanenko, A. 2011, GRB Coordinates Network, 12062, 1
. X.-G Wang, B Zhang, E.-W Liang, ApJS. 2199Wang, X.-G., Zhang, B., Liang, E.-W., et al. 2015, ApJS, 219, 9
. R A M J Wijers, T J Galama, ApJ. 523177Wijers, R. A. M. J., & Galama, T. J. 1999, ApJ, 523, 177
. S E Woosley, ApJ. 405273Woosley, S. E. 1993, ApJ, 405, 273
. S E Woosley, J S Bloom, ARA&A. 44507Woosley, S. E., & Bloom, J. S. 2006, ARA&A, 44, 507
. S E Woosley, T A Weaver, ApJS. 101181Woosley, S. E., & Weaver, T. A. 1995, ApJS, 101, 181
. L P Xin, W K Zheng, J Wang, MNRAS. 401Xin, L. P., Zheng, W. K., Wang, J., et al. 2010, MNRAS, 401, 2005
. L.-P Xin, Y.-Z Wang, T.-T Lin, ApJ. 817152Xin, L.-P., Wang, Y.-Z., Lin, T.-T., et al. 2016, ApJ, 817, 152
. S A Yost, F A Harrison, R Sari, D A Frail, ApJ. 597459Yost, S. A., Harrison, F. A., Sari, R., & Frail, D. A. 2003, ApJ, 597, 459
. B.-B Zhang, E.-W Liang, B Zhang, ApJ. 6661002Zhang, B.-B., Liang, E.-W., & Zhang, B. 2007a, ApJ, 666, 1002
. B.-B Zhang, B Zhang, E.-W Liang, ApJ. 730141Zhang, B.-B., Zhang, B., Liang, E.-W., et al. 2011, ApJ, 730, 141
. B Zhang, Comptes Rendus Physique. 12206Zhang, B. 2011, Comptes Rendus Physique, 12, 206
. B Zhang, Y Z Fan, J Dyks, ApJ. 642354Zhang, B., Fan, Y. Z., Dyks, J., et al. 2006, ApJ, 642, 354
. B Zhang, P Mészáros, International Journal of Modern Physics A. 192385Zhang, B., & Mészáros, P. 2004, International Journal of Modern Physics A, 19, 2385
. B Zhang, P Mészáros, ApJ. 566712Zhang, B., & Mészáros, P. 2002, ApJ, 566, 712
. B Zhang, P Mészáros, ApJ. 552Zhang, B., & Mészáros, P. 2001, ApJ, 552, L35 -15 -
. B Zhang, A Pe'er, ApJ. 70065Zhang, B., & Pe'er, A. 2009, ApJ, 700, L65
. W Zhang, S E Woosley, A I Macfadyen, ApJ. 586356Zhang, W., Woosley, S. E., & MacFadyen, A. I. 2003, ApJ, 586, 356
. J Zhang, E.-W Liang, X.-N Sun, ApJ. 7745Zhang, J., Liang, E.-W., Sun, X.-N., et al. 2013, ApJ, 774, L5
. B Zhang, B.-B Zhang, E.-W Liang, ApJ. 65525Zhang, B., Zhang, B.-B., Liang, E.-W., et al. 2007b, ApJ, 655, L25
. B Zhang, H Yan, ApJ. 72690Zhang, B., & Yan, H. 2011, ApJ, 726, 90
. W.-K Zheng, J.-S Deng, M Zhai, Chinese J. Astron. Astrophys. 8693Zheng, W.-K., Deng, J.-S., Zhai, M., et al. 2008, Chinese J. Astron. Astrophys., 8, 693
Exposure" is the exposure time for each data in second. T-T0" is the middle time in second for each data. σ" means the uncertainty of the magnitudeThis preprint was prepared with the AAS L A T E X macros v5.2. "T-T0" is the middle time in second for each data. "Exposure" is the exposure time for each data in second. "σ" means the uncertainty of the magnitude.
| []
|
[
"arXiv:astro-ph/0312009v2 12 May 2005 Bulk Viscosity in Hybrid Stars",
"arXiv:astro-ph/0312009v2 12 May 2005 Bulk Viscosity in Hybrid Stars"
]
| [
"A Drago \nDipartimento di Fisica\nUniversità di Ferrara\nINFN\nSezione di Ferrara\n44100FerraraItaly\n",
"A Lavagno \nDipartimento di Fisica\nPolitecnico di Torino and INFN\nSezione di Torino\n10129TorinoItaly\n",
"G Pagliara \nDipartimento di Fisica\nUniversità di Ferrara\nINFN\nSezione di Ferrara\n44100FerraraItaly\n"
]
| [
"Dipartimento di Fisica\nUniversità di Ferrara\nINFN\nSezione di Ferrara\n44100FerraraItaly",
"Dipartimento di Fisica\nPolitecnico di Torino and INFN\nSezione di Torino\n10129TorinoItaly",
"Dipartimento di Fisica\nUniversità di Ferrara\nINFN\nSezione di Ferrara\n44100FerraraItaly"
]
| []
| We compute the bulk viscosity of a mixed quark-hadron phase. In the first scenario to be discussed, the mixed phase occurs at large densities and we assume that it is composed of a mixing of hyperonic matter and quarks in the Color Flavor Locked phase. In a second scenario, the mixed phase occurs at lower densities and it is composed of a mixing of nucleons and unpaired quark matter. We have also investigated the effect of a non-vanishing surface tension at the interface between hadronic and quark matter. In both scenarios, the bulk viscosity is large when the surface tension is absent, while the value of the viscosity reduces in the second scenario when a finite value for the surface tension is taken into account. In all cases, the r-mode instabilities of the corresponding hybrid star are suppressed. | 10.1103/physrevd.71.103004 | [
"https://export.arxiv.org/pdf/astro-ph/0312009v2.pdf"
]
| 55,291,862 | astro-ph/0312009 | 9a954690cbca5388c00686ea800eab2489cee423 |
Bulk Viscosity in Hybrid Stars
A Drago
Dipartimento di Fisica
Università di Ferrara
INFN
Sezione di Ferrara
44100FerraraItaly
A Lavagno
Dipartimento di Fisica
Politecnico di Torino and INFN
Sezione di Torino
10129TorinoItaly
G Pagliara
Dipartimento di Fisica
Università di Ferrara
INFN
Sezione di Ferrara
44100FerraraItaly
Bulk Viscosity in Hybrid Stars
PACS numbers: 97.60.Jd, 26.60.+c, 25.75.Nq, 04.30.Dg
We compute the bulk viscosity of a mixed quark-hadron phase. In the first scenario to be discussed, the mixed phase occurs at large densities and we assume that it is composed of a mixing of hyperonic matter and quarks in the Color Flavor Locked phase. In a second scenario, the mixed phase occurs at lower densities and it is composed of a mixing of nucleons and unpaired quark matter. We have also investigated the effect of a non-vanishing surface tension at the interface between hadronic and quark matter. In both scenarios, the bulk viscosity is large when the surface tension is absent, while the value of the viscosity reduces in the second scenario when a finite value for the surface tension is taken into account. In all cases, the r-mode instabilities of the corresponding hybrid star are suppressed.
I. INTRODUCTION
The discovery by Andersson, Friedman and Morsink of r-mode instabilities in neutron stars put rather severe limits on the highest rotation frequency of pulsars [1,2]. These constraints can be incompatible with the existence of millisecond pulsars, if the instability is not suppressed by a sufficiently large viscosity. Actually, for a star composed only of neutrons and protons and for temperatures larger than roughly 10^10 K, the bulk viscosity due to the modified Urca process is large enough to damp the instability [3]. On the other hand, when the star cools down to lower temperatures, the instability is not suppressed and the star is forced to lose angular momentum via emission of gravitational waves [4]. More recently, it has been noticed that the existence of viscous boundary layers at the interface between the fluid core and the crust can stabilize the star [29] if the rotation period is longer than ∼ 1.5 ms [5,6]. At very low temperatures, below 10^8 K, shear viscosity becomes large and it allows older stars to increase their angular velocity by mass accretion.
Actually, compact stars can be constituted by a larger variety of particles than just neutrons and protons. One possibility is that hyperons form at the center of the star. It has been shown that, due to non-leptonic weak reactions, bulk viscosity can be rather large for an hyperonic star [7,8], which therefore can emit gravitational waves only if its temperature is ∼ 10 10 K and its frequency is larger than 10-30% of its Keplerian frequency.
The formation of quark matter inside a compact star has been discussed extensively in the literature. Bulk viscosity of non-interacting strange quark matter is very large [9,10,11]. On the other hand, recent studies taking into account quark-quark interaction revealed the possible existence of color superconducting phases in which quarks form Cooper pairs with gaps as large as 100 MeV [12]. In that case, bulk viscosity is strongly suppressed by the large energy gaps, r-mode instabilities are not damped and pure color-superconducting quark stars seem therefore to be ruled out by the pulsar data [13].
In the present paper we are interested in studying the viscosity and stability, respect to r-modes, of an Hybrid Star (HyS), for which no quantitative analysis has been performed so far. If the star is made of hyperonic matter and of non-interacting quarks, it is rather obvious that the viscosity of the HyS should be large, due to the large viscosity of its constituents. The real question concerns the case in which color-superconducting quark matter is considered, since its viscosity is negligible. There are in principle (at least) two sources of viscosity for HySs. One originates at the interface between the crust and the fluid interior [13] and it is similar to the one discussed above in the case of purely hadronic stars [5,6]. In our work we will not discuss that possibility and we will instead concentrate on a quantitative evaluation of the bulk viscosity in a mixed phase (MP) composed either of hyperons and color-superconducting quark matter or of nucleons and non-interacting quarks. We have also investigated the effect of the existence of finite-size structures in the MP. These geometrical structures can exist if the surface tension at the interface between hadronic and quark matter is not vanishing. We will show that the bulk viscosity of the MP is in general rather large, particularly so when the surface tension is negligible. Therefore HySs are possible candidates for young millisecond pulsars.
II. EQUATION OF STATE
We construct the Equation of State (EOS) of matter at high density modeling the Hadronic phase by a relativistic non-linear Walecka type model [14] with the inclusion of Hyperons [15]. Concerning quark matter, we use an MIT-bag like EOS in which quarks can pair to form a Color Flavor Locked (CFL) phase [16,17,18]. CFL is considered to be the energetically favored type of pairing pattern, at large densities, in the case of β-stable, electric-charge neutral quark matter. At lower densities, a two-flavor Color Superconducting (2SC) phase can form, with a smaller energy gap [19].
Two scenarios are possible, depending on the value of the critical density separating hadronic matter from MP.
In the first case the critical density is large, therefore hyperons start being produced at a density smaller than the critical one. We will call this phase "hyperon-quark" MP [20]. This is the first case discussed below and it corresponds to the upper panel of Fig. 1. In the second scenario, the MP starts at a density lower than the hyperonic threshold and the hyperon density is much smaller than in the previous case. We will explore in particular the situation in which hyperons are completely eaten-up by quarks and they do not appear in the "nucleon-quark" MP (see lower panel of Fig. 1). In this second scenario we will assume that in the MP the density is so low that the CFL pairing cannot form and, for simplicity, we will consider unpaired quarks in the MP, since anyway the bulk viscosity of the 2SC quark phase is similar to the viscosity of unpaired quark matter [13]. In both scenarios, we describe the composition of the MP assuming a first order transition and imposing Gibbs conditions [30]. Beta stability and charge neutrality are satisfied in every density region.
III. BULK VISCOSITY
Bulk viscosity is the dissipative process in which a perturbation of the pressure of a fluid element is converted to heat. A small variation of the pressure of the system can be treated as a perturbation on the densities of the different species of particles bringing the system out of β-equilibrium. The reactions between the different particles drive the system back to an equilibrium configuration, with a delay which depends on the characteristic time scale of the interactions. Following the formalism of Ref. [8], we classify all the reactions either as fast or slow. Slow processes produce bulk viscosity, because their time scale is comparable with the period of the perturbation. Fast processes, instead, put constraints on the variation of the densities of the different particles. In the following we generalize the formalism of Ref. [8] in order to compute the bulk viscosity of MP. The equations needed in the computation are rather different in the two scenarios discussed above and we will have to deal with all the different cases separately. Moreover, if the effect of a non-vanishing surface tension is taken into account, the formalism needed to compute the bulk viscosity has to be further modified. It is well known that a finite value for the surface tension allows the formation of finite size structures whose geometrical shape is determined by the interplay of the various contribution to the energy, including the Coulomb term [21,22]. A precise estimate of surface tension σ is unfortunately still lacking, and values ranging from a few MeV/fm 2 to a few tens MeV/fm 2 have been discussed in the literature. We can roughly divide this range of values in three windows. Values larger than ∼ 30 MeV/fm 2 would not allow the formation of the mixed phase since it would not be energetically favored [17,22]. Our analysis is therefore restricted to smaller values. In the following we will discuss both the case in which the surface tension is so small that it can be neglected and the case in which it is non-vanishing.
A. Negligible surface tension
Let us first discuss the case in which the surface tension is so small that the perturbations of the star, associated e.g. with r-modes, can break the finite-size structures present in the MP. We have estimated that this case corresponds to values of the surface tension σ ≲ 1 MeV/fm^2, since an energy per baryon of a few MeV is associated with r-mode excitations.
First scenario
We start our analysis from the first scenario discussed in Sec. II, namely from the case in which hyperons are present in the MP. We will discuss in particular a parameter set in which only Λ and Σ − particles are produced in the MP. The only slow processes generating viscosity are non-leptonic reactions between hadrons [31], since the large gap prevents weak reactions between quarks. As in Ref. [8], we consider the reactions
$$n + n \;\overset{\rm HW}{\longleftrightarrow}\; p + \Sigma^- \,, \qquad (1)$$
$$n + p \;\overset{\rm HW}{\longleftrightarrow}\; p + \Lambda \,. \qquad (2)$$
Following [8], we have not taken into account the reaction n + n HW ←→ n + Λ, since the corresponding reaction rate cannot be easily estimated. In Ref. [7] it has been argued that this rate can be one order of magnitude larger that the one associated with reaction (2). In the first scenario, our results have therefore to be considered as upper limits for the viscosity, as it will be clarified in the following. Notice anyway that, concerning the damping of r-modes instabilities, to be discussed in the following, the most important region of the star corresponds to low-moderate densities, while the Λ is produced at larger densities as shown in Figs. (1) and (5).
Concerning fast processes, they come from the following reactions mediated by the strong interaction
$$n + \Lambda \;\overset{\rm HS}{\longleftrightarrow}\; p + \Sigma^- \,, \qquad (3)$$
$$\Lambda + \Lambda \;\overset{\rm HS}{\longleftrightarrow}\; 2(uds)_{CFL} \,. \qquad (4)$$
The last process describes the only possible "melting" of hadrons into CFL phase, since the pairing forces the number of up, down and strange quarks to be equal. This process is fast because of the vanishing value of σ. As we will see later the "melting" process is instead forbidden if the value of σ is not negligible. Concerning mechanical equilibrium, elastic scattering due to strong interactions, as well as melting processes like the one described in Eq.(4), are responsible for (rapid) momentum transfer between the two phases. The mechanical equilibrium between the two phases is reached in a time scale t s determined by strong interaction, t s ∼ 10 −23 s. The full system is, instead, out of mechanical equilibrium on a time scale of the order of the period of the perturbation t p ∼ 10 −3 s. Therefore, during a fluctuation the two components of the fluid remain in mutual mechanical equilibrium. The variations of the pressure in the two phases have therefore to satisfy the constraint:
$$\delta P_h = \delta P_{CFL} \,. \qquad (5)$$
The variations δρ_i of the densities of the various particles and the variation δχ of the quark fraction are constrained by the following linearized equations:
$$0 = (1-\chi)(\delta\rho_n + \delta\rho_p + \delta\rho_\Lambda + \delta\rho_\Sigma) + \chi\,\delta\rho_q + \delta\chi\,(\rho_q - \rho_n - \rho_p - \rho_\Lambda - \rho_\Sigma)\,, \qquad (6)$$
$$0 = (1-\chi)(\delta\rho_p - \delta\rho_\Sigma) - \delta\chi\,(\rho_p - \rho_\Sigma)\,, \qquad (7)$$
$$0 = \sum_{\{H\}} p_H\,\delta\rho_H - p_q\,\delta\rho_q\,, \qquad (8)$$
$$0 = \beta_n\,\delta\rho_n + \beta_p\,\delta\rho_p + \beta_\Lambda\,\delta\rho_\Lambda + \beta_\Sigma\,\delta\rho_\Sigma\,, \qquad (9)$$
$$0 = \alpha_{\Lambda n}\,\delta\rho_n + \alpha_{\Lambda p}\,\delta\rho_p + \alpha_{\Lambda\Lambda}\,\delta\rho_\Lambda + \alpha_{\Lambda\Sigma}\,\delta\rho_\Sigma - 3\,\alpha_{qq}\,\delta\rho_q\,, \qquad (10)$$
where
$$\alpha_{ij} = \left(\partial\mu_i/\partial\rho_j\right)_{\rho_k,\,k\neq j}\,, \qquad (11)$$
$$\beta_i = \alpha_{ni} + \alpha_{\Lambda i} - \alpha_{pi} - \alpha_{\Sigma i}\,, \qquad (12)$$
$$p_i = \partial P/\partial\rho_i\,. \qquad (13)$$
Eqs. (6),(7) impose baryon number conservation and electric charge neutrality. Eq. (8) (where the sum runs over all hadrons) imposes the mechanical equilibrium defined by Eq. (5). Finally, Eqs. (9),(10) describe the equilibrium with respect to the two strong processes of Eqs. (3),(4). Notice that δχ does not appear in Eqs. (8)-(10), because neither the pressure nor the chemical potentials explicitly depend on the quark volume fraction χ. Solving the system allows one to express all the δρ_i and δχ as functions of δρ_n [32]. The relaxation time τ associated with the weak processes reads
$$\frac{1}{\tau} = \left(\frac{\Gamma_\Lambda}{\delta\mu} + 2\,\frac{\Gamma_\Sigma}{\delta\mu}\right)\frac{\delta\mu}{\delta\rho_n}\,. \qquad (14)$$
Here Γ Λ and Γ Σ are the rates of the weak interactions [33]. They have been calculated in Eqs. (4.21), (4.28), (4.29) of Ref. [8], where it has been shown that the main dependence of the rate on the temperature is a quadratic one. Moreover, if hyperon superfluid gaps as the ones displayed in Fig. 2 do develop, they exponentially suppress the rates as it will be discussed later. The chemical potential unbalance δµ is given by
$$\delta\mu \equiv \delta\mu_n - \delta\mu_\Lambda = 2\,\delta\mu_n - \delta\mu_p - \delta\mu_\Sigma\,, \qquad (15)$$
where the variations of the chemical potentials can be expressed in terms of δρ_i as δµ_i = Σ_j α_ij δρ_j. A unique δµ appears in Eq. (15), since the variations δµ_Λ and δµ_Σ are constrained by the fast process given in Eq. (3). The real part of the bulk viscosity, which is the relevant quantity for damping r-mode instabilities, finally reads
$$\zeta = \frac{P\,(\gamma_\infty - \gamma_0)\,\tau}{1 + (\omega\tau)^2}\,, \qquad (16)$$
where γ ∞ and γ 0 are the "infinite" frequency adiabatic index and the "zero" frequency adiabatic index and ω is the angular velocity of the perturbation. It is interesting to remark that there are two asymptotic behaviors of the viscosity as a function of the frequency ω. In the high frequency case (ωτ ≫ 1), the viscosity scales as 1/τ , while in the low frequency limit the viscosity is proportional to τ . In the parameter and temperature ranges explored in our paper it results that we always remain in the low frequency limit, as it can be seen from Fig. 3 noticing that the viscosity decreases with the temperature. In this regime the addition of another weak decay (e.g. n + n HW ←→ n + Λ) decreases the relaxation time and as consequence the viscosity too. In this sense our results for the viscosity in the first scenario must be considered as upper limits.
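A minimal numerical sketch of Eq. (16) is given below. The pressure, adiabatic-index difference and relaxation times are illustrative placeholders rather than values from the equation of state used here; the example only shows the low-frequency (ζ ∝ τ) and high-frequency (ζ ∝ 1/τ) limits discussed above.

```python
# Illustrative evaluation of Eq. (16): zeta = P*(gamma_inf - gamma_0)*tau / (1 + (omega*tau)^2).
# All numbers below are placeholders, not the paper's parameter set.
import numpy as np

def bulk_viscosity(P, dgamma, tau, omega):
    """Real part of the bulk viscosity for a single slow relaxation channel."""
    return P * dgamma * tau / (1.0 + (omega * tau) ** 2)

P = 1.0e34                     # pressure (arbitrary illustrative units)
dgamma = 0.1                   # gamma_infinity - gamma_0 (illustrative)
omega = 2.0 * np.pi * 1.0e3    # perturbation frequency ~ kHz, i.e. t_p ~ 1e-3 s

for tau in (1.0e-6, 1.0e-3, 1.0):   # relaxation times in seconds
    zeta = bulk_viscosity(P, dgamma, tau, omega)
    print(f"tau = {tau:8.1e} s  ->  zeta = {zeta:10.3e}  (omega*tau = {omega*tau:8.1e})")
```

For omega*tau << 1 the printed values grow with tau, while for omega*tau >> 1 they fall off as 1/tau, reproducing the two limits quoted above.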
We can observe from Fig. 3 that, in the "hyperonquark" MP scenario discussed so far, MP bulk viscosity is comparable to bulk viscosity of purely hyperonic matter even though the density of hyperons is very small in the MP (see upper panel of Fig. 1) [34]. Notice that in the small window of baryonic densities in which both Σ and Λ hyperons are present, the value of bulk viscosity of the MP and of the pure hadronic phase are essentially identical. On the other hand, in the baryonic density windows in which only one species of hyperons are present in the MP, the bulk viscosity of MP is larger than the bulk viscosity of the pure hadronic phase because only one decay channel is open (Σ decay of Eq. (1) or Λ decay of Eq. (2)) and therefore the relaxation time is larger. It is interesting to compute the bulk viscosity in this scenario also including the effect of hyperon superfluidity, as done in Ref. [8]. If the energy gaps ∆ H associated with the hyperons are non vanishing, the decay rates are suppressed by the factor e −∆H /T where the gaps have a typical shape shown in Fig. 2. For low temperatures the viscosity displayes the characteristic features shown in the lower panel of Fig. 3, while these features disappear at larger temperatures or for vanishing ∆ H . Concerning the appreciable difference between the viscosity of MP and of pure hadronic matter for T = 10 9.5 K, this is due to the Fermi momentum dependence of ∆ H , which implies that the gaps are suppressed for large hyperon densities. In the MP, the density of hyperons is lower than in pure hadronic matter and therefore the effect of the gaps shows out more dramatically.
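For orientation, the following lines evaluate the Boltzmann suppression factor e^{-Δ_H/(k_B T)} mentioned above for gap and temperature values of the scale shown in Fig. 2; the gaps used here are representative numbers only, not the Fermi-momentum-dependent gaps entering the actual calculation.

```python
# Size of the exponential suppression of the weak rates by a hyperon superfluid gap.
# Gap values are representative of the ~MeV scale of Fig. 2, not the computed gaps.
import numpy as np

k_B = 8.617333262e-11  # MeV per kelvin

for Delta_MeV in (0.2, 0.8):               # representative hyperon gaps (MeV)
    for T in (1.0e9, 10**9.5, 1.0e10):     # temperatures (K)
        factor = np.exp(-Delta_MeV / (k_B * T))
        print(f"Delta = {Delta_MeV:.1f} MeV, T = {T:.2e} K -> exp(-Delta/kT) = {factor:.2e}")
```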
Let us briefly discuss another possible source of bulk viscosity related to the formation of a boson condensate. Since all quarks in the CFL phase are gapped, the low energy excitations are the Nambu-Goldstone bosons associated with the spontaneous symmetry breaking of the global symmetries [16]. In particular, a condensate of π − or K − can appear, in the MP, as proposed in Ref. [18]. The role played by these boson condensates in the cooling of a CS has been studied in Ref. [23] and the condensates can play a role also in the calculation of the bulk viscosity of the MP. The weak processes involving these bosons are:
$$\pi^- \;\overset{\rm HW}{\longleftrightarrow}\; e^- + \bar\nu_e\,, \qquad (17)$$
$$K^- \;\overset{\rm HW}{\longleftrightarrow}\; e^- + \bar\nu_e\,. \qquad (18)$$
In the absence of these bosons, the variation of the density of electrons δρ e can be neglected at low temperature (T < 10 10 K) because, as already remarked, all leptonic reaction rates are much smaller than those associated with non-leptonic reactions. If, on the other hand, the decay channels of Eqs. (17), (18) are open, then the corresponding decay rates are of the same order of the rates of the non-leptonic processes of Eqs. (1),(2) and, therefore, a new contribution to the viscosity appears. While a detailed calculation is clearly needed, the main result of our work, namely the existence of a large viscosity for T 10 10 K, would even be strengthened.
Second scenario
We now discuss the second scenario in which a "nucleon-quark" MP forms, hyperons are absent and CFL quark pairing cannot take place in the MP [35]. We will assume that in the pure quark matter phase CFL gaps can form so that this phase will not contribute to the viscosity. The only source of viscosity, neglecting as before semi-leptonic reactions, is therefore
$$d + u \;\overset{\rm HW}{\longleftrightarrow}\; u + s\,. \qquad (19)$$
Concerning fast processes, they are given by the "melting" of nucleons into unpaired quarks. The chemical equilibrium respect to these reactions must therefore be satisfied during the perturbation. The linearized equations governing the density fluctuations, analogous to Eqs. (6)-(10) of the first scenario, read Eqs. (20)- (22) impose baryon number conservation, electric charge neutrality and mechanical equilibrium as in the first case. Eqs. (23), (24) describe the chemical equilibrium respect to the two fast processes of "melting". As before we can calculate the chemical imbalance associated to the weak reaction (19) and the corresponding relaxation time. The rate of the process (19) is taken from Eqs. (5)-(7) of Ref. [10](see also footnote [36]). The resulting bulk viscosity is shown in Fig. 3 (lower panel). The bulk viscosity of "nucleon-quark" MP is of the same order of magnitude of bulk viscosity of strange matter [11].
$$0 = (1-\chi)(\delta\rho_n + \delta\rho_p) + \chi\,(\delta\rho_u + \delta\rho_d + \delta\rho_s)/3 + \delta\chi\left[(\rho_u + \rho_d + \rho_s)/3 - \rho_n - \rho_p\right], \qquad (20)$$
$$0 = (1-\chi)\,\delta\rho_p + \chi\,(2\delta\rho_u - \delta\rho_d - \delta\rho_s)/3 + \delta\chi\left[(2\rho_u - \rho_d - \rho_s)/3 - \rho_p\right], \qquad (21)$$
$$0 = \sum_{\{H\}} p_H\,\delta\rho_H - \sum_{\{Q\}} p_Q\,\delta\rho_Q\,, \qquad (22)$$
$$0 = \alpha_{pn}\,\delta\rho_n + \alpha_{pp}\,\delta\rho_p - 2\,\alpha_{uu}\,\delta\rho_u - \alpha_{dd}\,\delta\rho_d\,, \qquad (23)$$
$$0 = \alpha_{nn}\,\delta\rho_n + \alpha_{np}\,\delta\rho_p - \alpha_{uu}\,\delta\rho_u - 2\,\alpha_{dd}\,\delta\rho_d\,. \qquad (24)$$
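As an illustration of how constraint systems like Eqs. (20)-(24) are used in practice, the sketch below solves a small linear system for the density fluctuations in terms of δρ_n. The coefficient-matrix entries are dummy random numbers standing in for the EOS-derived derivatives α_ij and p_i, so only the linear-algebra step is shown.

```python
# Minimal sketch: the five linearized constraints are written as M * x = b * drho_n
# for the unknowns x = (drho_p, drho_u, drho_d, drho_s, dchi), so every fluctuation
# (and hence the chemical unbalance delta_mu) becomes proportional to drho_n.
# M and b are dummy stand-ins here, not the paper's EOS-derived coefficients.
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))    # coefficients multiplying the unknown fluctuations
b = rng.normal(size=5)         # terms multiplying drho_n, moved to the right-hand side

drho_n = 1.0e-4                # an imposed perturbation of the neutron density
x = np.linalg.solve(M, b * drho_n)
drho_p, drho_u, drho_d, drho_s, dchi = x
print("fluctuations per unit drho_n:", x / drho_n)
```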
B. Effects of the surface tension
Let us shortly discuss the effect of a non-vanishing surface tension σ at the interface between hadronic and quark matter. We will assume that σ 30 MeV/fm 2 , so that MP can form because it is energetically favored. Finite-size structures (drops, rods, slabs) form at different densities, to minimize the energy of the MP. In this case the perturbation of the pressure due to r-modes is too weak to induce the "melting" process which is now a very slow process (in comparison to the period of the perturbation) and it plays no role in the calculation of the bulk viscosity. The response of the finite size structures inside MP to a perturbation of the density corresponds to three possible processes: a) the formation of a drop of "new" phase; b) the merging of two structures into a single larger structure; c) the absorption of "old" phase into a structure of "new" phase. Obviously, also the reverse processes are possible. It is easy to see that large values of σ suppress all these processes. In particular, in case a) the radius of a critical drop of new phase increases with σ, making it more difficult to produce a new drop; case b) shares some similarity with the fission problem in nuclear physics, where the process of the separation of a heavy nucleus into two lighter nuclei is suppressed for larger values of σ since, during the fission (or merging) process, configurations having a large surface are produced. Finally, c) can be viewed as a special case of b), in which the absorbed hadron can be assimilated to a small drop of quark matter. Concerning the Coulomb interaction, it mainly plays a role in determining the size of the structures while it is not so important for the response of the structures to the perturbation, at least for relatively large values of the surface tension σ 10 MeV/fm 2 . In that case, in fact, the screening due to electrons almost completely cancels the effect of the Coulomb interaction in the nucleation rate [24]. In the following for simplicity we will assume that 10 MeV/fm 2 σ 30 MeV/fm 2 . In conclusion we reasonably assume that large values of σ suppress all these processes, therefore the equations describing the melting process as a fast reaction, i.e. Eq. (10) (first scenario) and Eqs. (23), (24) (second scenario) are not imposed. We can compute the viscosity by requiring that the baryon number and the electric charge of the two phases are separately conserved. This implies that Eq. (6) in the first scenario and Eqs. (20), (21) in the second scenario, each of them separate into two distinct equations. In the first scenario, Eq. (7) is not modified due to the charge neutrality of the CFL quark phase. Finally, the equation of mechanical equilibrium (Eq. (8) in the first scenario and Eq. (22) in the second scenario) is still valid and it represents the only fast process connecting the two phases. Notice that since melting processes are suppressed, the only reaction which allows the system to rapidly re-equilibrate is elastic scattering between the two phases. In the following we have assumed that a residual interaction between the two phases always exists, similarly to the existence of "entrainment" in superfluid neutron matter as discussed e.g. in Ref. [25]. The systems of equations read therefore:
$$0 = (1-\chi)(\delta\rho_n + \delta\rho_p + \delta\rho_\Lambda + \delta\rho_\Sigma) - \delta\chi\,(\rho_n + \rho_p + \rho_\Lambda + \rho_\Sigma)\,, \qquad (25)$$
$$0 = \chi\,\delta\rho_q + \delta\chi\,\rho_q\,, \qquad (26)$$
$$0 = (1-\chi)(\delta\rho_p - \delta\rho_\Sigma) - \delta\chi\,(\rho_p - \rho_\Sigma)\,, \qquad (27)$$
$$0 = \sum_{\{H\}} p_H\,\delta\rho_H - p_q\,\delta\rho_q\,, \qquad (28)$$
$$0 = \beta_n\,\delta\rho_n + \beta_p\,\delta\rho_p + \beta_\Lambda\,\delta\rho_\Lambda + \beta_\Sigma\,\delta\rho_\Sigma \qquad (29)$$
for the first scenario and
$$0 = (1-\chi)(\delta\rho_n + \delta\rho_p) - \delta\chi\,(\rho_n + \rho_p)\,, \qquad (30)$$
$$0 = \chi\,(\delta\rho_u + \delta\rho_d + \delta\rho_s) + \delta\chi\,(\rho_u + \rho_d + \rho_s)\,, \qquad (31)$$
$$0 = (1-\chi)\,\delta\rho_p - \delta\chi\,\rho_p\,, \qquad (32)$$
$$0 = \chi\,(2\delta\rho_u - \delta\rho_d - \delta\rho_s) + \delta\chi\,(2\rho_u - \rho_d - \rho_s)\,, \qquad (33)$$
$$0 = \sum_{\{H\}} p_H\,\delta\rho_H - \sum_{\{Q\}} p_Q\,\delta\rho_Q \qquad (34)$$
for the second scenario. In Fig. 3 and 4 we show the effect of a non-vanishing surface tension on the viscosity (dotted lines). On rather general grounds, one can expect that the effect of a nonvanishing surface tension is to reduce the viscosity. Indeed, the surface tension suppresses the fast processes of "melting" which are responsible for the reduction of the chemical unbalance. In particular, the only equation connecting the two phases is now the equation corresponding to the mechanical equilibrium. Moreover, in the first scenario the number of constraints on δρ H reduces from four to three and, in the second scenario, the constraints on δρ Q from four to two. It is also interesting to notice that in both scenarios, near the beginning of the MP (χ → 0), baryon number conservation requires δχ → 0 if the surface tension is non-vanishing (see Eqs. (26) and (31)). On the other hand, in the first scenario ρ Λ = 0 at the beginning of the MP below the Λ production threshold, and therefore, the constraint δχ = 0 is also satisfied in the absence of surface tension, as explained in footnote [32]. Therefore the dotted lines coincide with the solid lines in both panels of Fig. 3 for ρ B 0.6 fm −3 .
In the second scenario, the effect of the surface tension is more dramatic. In Fig. 6 we display the chem-ical unbalances corresponding to a vanishing and to a finite value of the surface tension, respectively. As already remarked δµ/δρ n is larger in presence of a surface tension. Concerning the singular behavior of the chemical unbalance near the first critical density, it stems from neglecting δρ e in the equation of the electric charge conservation. While in general this is a safe approximation due to the slowness of the modified Urca process (see Ref. [8]), in this particular case this approximation implies the vanishing of the viscosity at threshold. Actually, the existence of a finite value for δρ e implies that the viscosity is small but finite at threshold. It is not possible to apply directly the formalism of Ref. [8] if two independent perturbations (δρ n and δρ e ) exist in the system. We are therefore forced to discuss separately the viscosities stemming from the two independent unbalances. Concerning the viscosity associated with the modified Urca process, we remind that it scales as T 6 and therefore it is essentially negligible below T ∼ 10 10 K while it will be included in the calculation of the stability of the star presented in the next section. In conclusion, the result corresponding to the dotted line of Fig. 4 would not be significantly modified by taking into account a finite value of δρ e .
IV. STABILITY OF HYBRID STARS
We can now address the problem of the stability of a rotating compact star. To compute the critical angular velocity we use the standard formalism of Refs. [4,8,26]. We need first to integrate the viscosity on the structure of the star, which is obtained by solving Tolman-Oppenheimer-Volkov equation. In Fig. 5 we show the structure of a M = 1.46M ⊙ star for the two scenarios discussed above. For simplicity we computed the structure of the star assuming a negligible value for σ. The main effect of the presence of finite size structures is to reduce the volume occupied by the MP. For σ 30 MeV/fm 2 , which is the limit of validity of our approach, the shrinking of the MP is rather modest [21]. The critical angular velocity Ω crit is the one for which the imaginary part of the r-mode frequency vanishes and it is obtained by solving the equation
$$-\,\frac{1}{\tau_{GR}} + \frac{1}{\tau_B} + \frac{1}{\tau_{B({\rm Urca})}} = 0\,. \qquad (35)$$
Here τ GR is the time scale for gravitational waves emission while τ B and τ B(Urca) are the time scales of the bulk viscosity produced by hadronic processes and by the modified Urca process of the nucleons [3], respectively. Results for the critical angular velocity are shown in Fig. 7. In the upper panel we compare the stability of a purely hyperonic star with the stability of a HyS containing hyperon-quark MP. As it can be seen, due to the large viscosity of the MP the HyS is as stable as the hyperonic star. Let us stress again that our result indicate that the viscosity in the MP is almost independent on the hyperonic content and therefore hybrid hyperonquark stars can be stable as long as a tiny fraction of hyperon is present. In the lower panel we compare the stability of a star made entirely of non-interacting quarks with a HyS containing a nucleon-quark MP. We have assumed that the viscosity of the pure quark matter phase in the HyS vanishes, to simulate a CFL core. As it can be seen, the small MP region, located near the edge of the star, is sufficient to damp the r-modes. For simplicity, we have assumed quarks to be unpaired in the MP, but similar results should be obtainable if a 2SC phase is present. A feature of r-modes is that they are active mostly in the outer regions of the star, and therefore the value of the bulk viscosity at a not too large densities is crucial for the stability of the star. In particular, in the first scenario the stability of the star is due to the presence of the Σ hyperons in the pure hadronic phase and in the MP. Λ hyperons, which are produced at larger densities, play a lesser role. In the second scenario, a small window of MP, present in the outer region, is sufficient to stabilize the star. Let us also remark that the star is stable at large temperatures, due to the modified Urca processes active in the crust. These processes does not exist in the case of pure quark stars, which are therefore unstable at large temperatures. Finally, when a finite value for the hadron-quark surface tension is taken into account the instability window is larger, but the main conclusion concerning the stability of a young hybrid star remains valid. It is a pleasure to thank J.C. Miller and L. Rezzolla for very useful discussions.
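The critical angular velocity defined by Eq. (35) can be found numerically by root finding. The sketch below is purely illustrative: the prefactors and the temperature scalings of the damping terms are placeholders, not the time scales computed in this work (only the Ω^6 dependence of the l = m = 2 r-mode gravitational-radiation rate is a commonly used scaling), so the printed number is not a physical prediction.

```python
# Toy root-finding sketch for Eq. (35): find Omega_crit where the r-mode growth
# by gravitational-wave emission balances the viscous damping terms.
import numpy as np
from scipy.optimize import brentq

def inv_tau_GR(Omega, a=1.0e-22):
    return a * Omega**6            # gravitational-radiation growth rate (placeholder a)

def inv_tau_B(T, b=1.0e-2):
    return b * (T / 1.0e10)**2     # hadronic bulk-viscosity damping (placeholder scaling)

def inv_tau_Urca(T, c=1.0e-6):
    return c * (T / 1.0e10)**6     # modified-Urca damping (placeholder scaling)

def f(Omega, T):
    # The imaginary part of the r-mode frequency vanishes at the critical Omega.
    return -inv_tau_GR(Omega) + inv_tau_B(T) + inv_tau_Urca(T)

T = 1.0e9  # K
Omega_crit = brentq(f, 1.0, 1.0e5, args=(T,))
print(f"critical angular velocity (toy model): {Omega_crit:.3e} rad/s")
```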
FIG. 1: Particle abundances as functions of the total baryon density. The upper panel corresponds to the case of hyperon-quark MP and the lower panel to nucleon-quark MP.
FIG. 3: Bulk viscosity in the first scenario as a function of baryon density for various temperatures. Solid lines refer to MP viscosity, while dashed lines correspond to pure hyperonic matter (vanishing hyperon gap in the upper panel and finite hyperon gap in the lower panel). The dotted lines correspond to the case in which a large hadron-quark surface tension (10 MeV/fm^2 < σ < 30 MeV/fm^2) is taken into account and they are computed for T = 10^9.5 K (see text).
FIG. 4: Bulk viscosity as a function of baryon density, for various temperatures in the second scenario. Solid lines refer to MP viscosity, while dashed lines correspond to pure quark matter. The dotted line corresponds to the case in which a large nucleon-quark surface tension (10 MeV/fm^2 < σ < 30 MeV/fm^2) is taken into account and it is computed for T = 10^9.5 K (see text).
FIG. 5: Particle density profiles inside hybrid stars of mass M = 1.46 M⊙ in the two discussed scenarios. Parameters as in Fig. 1.
FIG. 6: The chemical unbalance for the second scenario is shown as a function of the baryonic density. The solid line corresponds to the case of a vanishing surface tension and the dotted line corresponds to a finite surface tension.
FIG. 7: Critical angular velocities. The solid lines refer to hybrid stars, dashed lines correspond to hyperonic stars (upper panel) and to strange stars (lower panel). The dotted line corresponds to the case of a large hadron-quark surface tension (10 MeV/fm^2 < σ < 30 MeV/fm^2, see text for details).
FIG. 2: Hyperon superfluid gap ∆_H (in MeV) at zero temperature as a function of the Fermi momentum k_f (in fm^-1), for two values of the baryon density, ρ_B = 0.4 fm^-3 and ρ_B = 0.8 fm^-3 (see Ref. [8]).
[1] N. Andersson, Astrophys. J. 502, 708 (1998).
[2] J. L. Friedman and S. M. Morsink, Astrophys. J. 502, 714 (1998).
[3] R. F. Sawyer, Phys. Rev. D39, 3804 (1989).
[4] L. Lindblom, G. Mendell, and B. J. Owen, Phys. Rev. D60, 064006 (1999).
[5] L. Bildsten and G. Ushomirsky (1999).
[6] N. Andersson, D. I. Jones, K. D. Kokkotas, and N. Stergioulas, Astrophys. J. 534, L75 (2000).
[7] P. B. Jones, Phys. Rev. D64, 084003 (2001).
[8] L. Lindblom and B. J. Owen, Phys. Rev. D65, 063006 (2002).
[9] Q. D. Wang and T. Lu, Phys. Lett. B148, 211 (1984).
[10] R. F. Sawyer, Phys. Lett. B233, 412 (1989).
[11] J. Madsen, Phys. Rev. D46, 3290 (1992).
[12] K. Rajagopal and F. Wilczek (2000), hep-ph/0011333.
[13] J. Madsen, Phys. Rev. Lett. 85, 10 (2000).
[14] B. Liu, V. Greco, V. Baran, M. Colonna, and M. Di Toro, Phys. Rev. C65, 045201 (2002).
[15] N. Glendenning, Compact Stars (Springer-Verlag, 1997).
[16] M. G. Alford, K. Rajagopal, and F. Wilczek, Nucl. Phys. B537, 443 (1999).
[17] M. G. Alford, K. Rajagopal, S. Reddy, and F. Wilczek, Phys. Rev. D64, 074017 (2001).
[18] M. Alford and S. Reddy, Phys. Rev. D67, 074024 (2003).
[19] F. Neumann, M. Buballa, and M. Oertel, Nucl. Phys. A714, 481 (2003).
[20] S. Banik and D. Bandyopadhyay, Phys. Rev. D67, 123003 (2003).
[21] H. Heiselberg, C. J. Pethick, and E. F. Staubo, Phys. Rev. Lett. 70, 1355 (1993).
[22] D. N. Voskresensky, M. Yasuhira, and T. Tatsumi, Nucl. Phys. A723, 291 (2003).
[23] P. Jaikumar, M. Prakash, and T. Schafer, Phys. Rev. D66, 063003 (2002).
[24] K. Iida and K. Sato, Phys. Rev. C58, 2538 (1998).
[25] N. Andersson, G. L. Comer, and D. Langlois, Phys. Rev. D66, 104002 (2002).
[26] L. Lindblom, B. J. Owen, and S. M. Morsink, Phys. Rev. Lett. 80, 4843 (1998).
[27] L. Rezzolla, F. K. Lamb, and S. L. Shapiro, Astrophys. J. 531, L141 (2000).
[28] P. Haensel, K. P. Levenfish, and D. G. Yakovlev, A&A 372 (2001).
[29] Also the presence of a magnetic field inside the compact star can damp the r-modes on a time-scale of order hours or days [27].
[30] The structure of the MP obtained imposing Gibbs condition is clearly different from a two-fluid model like the one adopted e.g. in Ref. [25], in which the two phases are essentially independent.
[31] We assume, as in Ref. [8], that all leptonic reaction rates are much smaller than those associated with non-leptonic reactions, so we do not include them in the calculation of bulk viscosity of the MP. The viscosity due to semi-leptonic reactions (modified Urca processes) becomes relevant at very high temperatures and we will include its effect when computing the stability of the star.
[32] If the Σ density vanishes, the corresponding density fluctuation is identically zero and the constraint given by Eq. (3) has not to be imposed. If the Λ density vanishes, then δρΛ = 0, Eq. (3) has not to be imposed, the melting process described by Eq. (4) does not exist and δχ = 0.
[33] Notice that the Coulomb interaction does not play any direct role when computing the viscosity in the present scheme, since it cannot modify the value of the chemical unbalance δµ.
[34] Our results in both scenarios are consistent with the general outcome of Ref. [28] stating that in the high-frequency limit the bulk viscosity is just the sum of partial bulk viscosities in different slow reaction channels.
[35] Notice that if CFL gaps can form at low density then the bulk viscosity of MP vanishes.
[36] The reaction rate has been multiplied by a factor three, in agreement with Ref. [11].
| []
|
[
"One-electron energy spectra of heavy highly charged quasimolecules",
"One-electron energy spectra of heavy highly charged quasimolecules"
]
| [
"Artem A Kotov \nDepartment of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia\n",
"Dmitry A Glazov \nDepartment of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia\n",
"Vladimir M Shabaev \nDepartment of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia\n",
"Günter Plunien \nInstitut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany\n"
]
| [
"Department of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia",
"Department of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia",
"Department of Physics\nSt. Petersburg State University\n199034St. PetersburgRussia",
"Institut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany"
]
| []
| The generalized dual-kinetic-balance approach for axially symmetric systems is employed to solve the two-center Dirac problem. The spectra of one-electron homonuclear quasimolecules are calculated and compared with the previous calculations. The analysis of the monopole approximation with two different choices of the origin is performed. Special attention is paid to the lead and xenon dimers, Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − , where the energies of the ground and several excited σ-states are presented in the wide range of internuclear distances. The developed method provides the quasicomplete finite basis set and allows for construction of the perturbation theory, including within the bound-state QED. 1 arXiv:2105.05966v1 [physics.atom-ph] 12 May 2021 I. INTRODUCTION Due to the critical phenomena of the bound-state quantum electrodynamics, such as spontaneous electron-positron pair production, quasimolecular systems emerging in ion-ion or ion-atom collisions attract much interest [1-8]. While collisions of highly charged ions with neutral atoms are presently available for experimental investigations, in particular, at the GSI Helmholtz Center for Heavy Ion Research [9-11], the upcoming experiments at the GSI/FAIR [12], NICA [13], and HIAF [14] facilities will allow observation of the heavy ionion (up to U 92+ -U 92+ ) collisions. The relativistic dynamics of the heavy-ion collisions has been investigated for decades by various methods, see, e.g., Refs. [6-8, 15-21] and references therein. Theoretical predictions of the quasimolecular spectra are also in demand for analysis of the experimental data in these collisions.Within the Bohr-Oppenheimer approximation, the one-electron problem is reduced to the Dirac equation with Coulomb potential of two nuclei at a fixed internuclear distance D. This problem was investigated previously by a number of authors, see, e.g., Refs. [15,20,[22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37].Majority of these works relied on the partial-wave expansion of the two-center potential in the center-of-mass coordinate system. Alternative approaches include, e.g., usage of the Cassini coordinates [33] and the atomic Dirac-Sturm basis-set expansion[35,36]. We consider the method based on the dual-kinetic-balanced finite-basis-set expansion [38] of the electron wave function for the axially symmetric systems[39]. The results for the ground state of uranium dimers with one and two electrons were already presented in Ref.[40]. In this work, we extend the one-electron calculations to the lowest excited σ-states and present the results for the one-electron dimers, Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − . For the ground state we demonstrate the accuracy of this method for the nuclear charge numbers Z from 1 to 100 at the so-called "chemical" distances, D = 2/Z a.u. We also investigate the difference between the two-center values and those obtained within the monopole approximation.The relativistic units ( = 1, c = 1, m e = 1) and the Heaviside charge unit (α = e 2 /(4π), e < 0) are used throughout the paper. | 10.3390/atoms9030044 | [
"https://arxiv.org/pdf/2105.05966v1.pdf"
]
| 234,482,620 | 2105.05966 | 8c233d0516f8d0705fcc201322bb669571e7dbdc |
One-electron energy spectra of heavy highly charged quasimolecules
Artem A Kotov
Department of Physics
St. Petersburg State University
199034St. PetersburgRussia
Dmitry A Glazov
Department of Physics
St. Petersburg State University
199034St. PetersburgRussia
Vladimir M Shabaev
Department of Physics
St. Petersburg State University
199034St. PetersburgRussia
Günter Plunien
Institut für Theoretische Physik
Technische Universität Dresden
D-01062DresdenGermany
One-electron energy spectra of heavy highly charged quasimolecules
The generalized dual-kinetic-balance approach for axially symmetric systems is employed to solve the two-center Dirac problem. The spectra of one-electron homonuclear quasimolecules are calculated and compared with the previous calculations. The analysis of the monopole approximation with two different choices of the origin is performed. Special attention is paid to the lead and xenon dimers, Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − , where the energies of the ground and several excited σ-states are presented in the wide range of internuclear distances. The developed method provides the quasicomplete finite basis set and allows for construction of the perturbation theory, including within the bound-state QED. 1 arXiv:2105.05966v1 [physics.atom-ph] 12 May 2021 I. INTRODUCTION Due to the critical phenomena of the bound-state quantum electrodynamics, such as spontaneous electron-positron pair production, quasimolecular systems emerging in ion-ion or ion-atom collisions attract much interest [1-8]. While collisions of highly charged ions with neutral atoms are presently available for experimental investigations, in particular, at the GSI Helmholtz Center for Heavy Ion Research [9-11], the upcoming experiments at the GSI/FAIR [12], NICA [13], and HIAF [14] facilities will allow observation of the heavy ionion (up to U 92+ -U 92+ ) collisions. The relativistic dynamics of the heavy-ion collisions has been investigated for decades by various methods, see, e.g., Refs. [6-8, 15-21] and references therein. Theoretical predictions of the quasimolecular spectra are also in demand for analysis of the experimental data in these collisions.Within the Bohr-Oppenheimer approximation, the one-electron problem is reduced to the Dirac equation with Coulomb potential of two nuclei at a fixed internuclear distance D. This problem was investigated previously by a number of authors, see, e.g., Refs. [15,20,[22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37].Majority of these works relied on the partial-wave expansion of the two-center potential in the center-of-mass coordinate system. Alternative approaches include, e.g., usage of the Cassini coordinates [33] and the atomic Dirac-Sturm basis-set expansion[35,36]. We consider the method based on the dual-kinetic-balanced finite-basis-set expansion [38] of the electron wave function for the axially symmetric systems[39]. The results for the ground state of uranium dimers with one and two electrons were already presented in Ref.[40]. In this work, we extend the one-electron calculations to the lowest excited σ-states and present the results for the one-electron dimers, Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − . For the ground state we demonstrate the accuracy of this method for the nuclear charge numbers Z from 1 to 100 at the so-called "chemical" distances, D = 2/Z a.u. We also investigate the difference between the two-center values and those obtained within the monopole approximation.The relativistic units ( = 1, c = 1, m e = 1) and the Heaviside charge unit (α = e 2 /(4π), e < 0) are used throughout the paper.
II. METHOD
In heavy atomic systems the parameter αZ (α is the fine-structure constant and Z is the nuclear charge), which measures the coupling of electrons with nuclei, is not small. Therefore, the calculations for these systems should be done within the fully relativistic approach, i.e., to all orders in αZ. With this in mind, we start with the Dirac equation for the two-center potential,
$$\left[\boldsymbol{\alpha}\cdot\mathbf{p} + \beta + V(\mathbf{r})\right]\Psi_n(\mathbf{r}) = E_n\,\Psi_n(\mathbf{r})\,, \qquad (1)$$
$$V(\mathbf{r}) = V^A_{\rm nucl}(|\mathbf{r} - \mathbf{R}_1|) + V^B_{\rm nucl}(|\mathbf{r} - \mathbf{R}_2|)\,. \qquad (2)$$
Here r and R 1,2 are the coordinates of the electron and nuclei, respectively, V A,B nucl (r) is the nuclear potential at the distance r generated by nucleus with the charge Z A,B , α and β are the standard 4 × 4 Dirac matrices:
$$\beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \qquad \boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{pmatrix}, \qquad (3)$$
where σ is a vector of the Pauli matrices.
In the following we consider the identical nuclei, i.e. Z A = Z B , with the Fermi model of the nuclear charge distribution:
$$V_{\rm nucl}(r) = -4\pi\alpha Z \int_0^\infty \frac{\rho(r')}{\max(r, r')}\, r'^2\, dr'\,, \qquad \rho(r) = \frac{\rho_0}{1 + \exp\left[(r - c)/a\right]}\,, \qquad (4)$$
where ρ 0 is the normalization constant, a is skin thickness constant and c is the half-density radius, for more details see, e.g., Ref. [41].
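A small numerical sketch of Eq. (4) is given below (it is not the code used in this work): the Fermi density is normalized numerically and the potential is obtained by simple quadrature. The half-density radius and skin thickness are illustrative values, and the last lines show how a two-center potential of the form of Eq. (2) can be assembled from two such single-nucleus potentials.

```python
# Numerical sketch of the Fermi-model nuclear potential of Eq. (4) and of the
# two-center sum of Eq. (2). The parameters c and a below are illustrative only.
import numpy as np

alpha = 1.0 / 137.035999   # fine-structure constant
hbar_c = 197.3269804       # MeV*fm, used to express the potential in MeV

def fermi_potential(r_points, Z, c=6.68, a=0.57, n_quad=4000, r_cut=30.0):
    """V_nucl(r) of Eq. (4) in MeV for a Fermi charge distribution (illustrative c, a in fm)."""
    rp = np.linspace(1e-6, r_cut, n_quad)                  # integration grid r' (fm)
    dr = rp[1] - rp[0]
    rho = 1.0 / (1.0 + np.exp((rp - c) / a))               # unnormalized Fermi density
    rho /= 4.0 * np.pi * np.sum(rho * rp**2) * dr          # normalize: 4*pi*int rho r'^2 dr' = 1
    V = np.empty(len(np.atleast_1d(r_points)))
    for i, r in enumerate(np.atleast_1d(r_points)):
        V[i] = -4.0 * np.pi * alpha * Z * hbar_c * np.sum(rho * rp**2 / np.maximum(r, rp)) * dr
    return V

# Single-nucleus potential at a few electron-nucleus distances (fm):
print(fermi_potential(np.array([1.0, 5.0, 10.0, 20.0]), Z=82))

# Two-center potential, assembled from two identical Fermi nuclei on the z-axis:
R1, R2 = np.array([0.0, 0.0, -10.0]), np.array([0.0, 0.0, 10.0])   # nuclear positions (fm)
r_e = np.array([0.0, 0.0, 12.0])                                    # electron position (fm)
d = np.array([np.linalg.norm(r_e - R1), np.linalg.norm(r_e - R2)])
print("V(r) for the two-center potential:", fermi_potential(d, Z=82).sum(), "MeV")
```

At distances well outside the nucleus the potential tends to the point-Coulomb value -αZ ħc / r, which provides a quick sanity check of the quadrature.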
The solution of Eq. (1) is obtained within the dual-kinetic-balance (DKB) approach, which allows one to solve the problem of "spurious" states. Originally, this approach was implemented for spherically symmetric systems, like atoms [38], using the finite basis set constructed from B-splines [42,43]. Later, the authors of Ref. [39] generalized it to the case of axially symmetric systems (A-DKB), where an atom in an external homogeneous field was considered. This situation was also considered within this method in Refs. [44-46] to evaluate the higher-order contributions to the Zeeman splitting in highly charged ions. In Ref. [40] we have adapted the A-DKB method to diatomic systems, which also possess axial symmetry.
Below we provide a brief description of the calculation scheme.
The system under consideration is rotationally invariant with respect to the z-axis directed along the internuclear vector D = R 2 − R 1 . Therefore, the z-projection of the total angular momentum with the quantum number m J is conserved and the electronic wave function can be written as,
$$\Psi(r, \theta, \varphi) = \frac{1}{r}\begin{pmatrix} G_1(r,\theta)\, e^{i(m_J - \frac{1}{2})\varphi} \\ G_2(r,\theta)\, e^{i(m_J + \frac{1}{2})\varphi} \\ iF_1(r,\theta)\, e^{i(m_J - \frac{1}{2})\varphi} \\ iF_2(r,\theta)\, e^{i(m_J + \frac{1}{2})\varphi} \end{pmatrix}. \qquad (5)$$
The (r, θ)-components of the wave function are represented using the finite-basis-set expansion:
$$\Phi(r,\theta) = \frac{1}{r}\begin{pmatrix} G_1(r,\theta) \\ G_2(r,\theta) \\ F_1(r,\theta) \\ F_2(r,\theta) \end{pmatrix} \cong \sum_{u=1}^{4}\sum_{i_r=1}^{N_r}\sum_{i_\theta=1}^{N_\theta} C^u_{i_r, i_\theta}\, \Lambda\, B_{i_r}(r)\, Q_{i_\theta}(\theta)\, e_u\,, \qquad (6)$$
where $\{B_{i_r}(r)\}_{i_r=1}^{N_r}$ are B-splines, $\{Q_{i_\theta}(\theta)\}_{i_\theta=1}^{N_\theta}$ are Legendre polynomials of the argument $2\theta/\pi - 1$, and $\{e_u\}_{u=1}^{4}$ are the standard four-component basis vectors:
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix},\quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix},\quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix},\quad e_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}. \qquad (7)$$
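To make the structure of expansion (6) concrete, the sketch below evaluates one spinor component as a double sum of B-splines in r and Legendre polynomials in 2θ/π − 1. The knot sequence, basis sizes and coefficients are arbitrary stand-ins, and the dual-kinetic-balance operator Λ is omitted; the example only illustrates the product-basis structure.

```python
# Illustrative evaluation of a single component of the expansion in Eq. (6):
# sum over radial B-splines times Legendre polynomials with dummy coefficients.
import numpy as np
from scipy.interpolate import BSpline
from numpy.polynomial.legendre import Legendre

def make_bspline_basis(r_max=10.0, n_intervals=8, degree=3):
    """Return a list of B-spline basis functions of the given degree on [0, r_max]."""
    inner = np.linspace(0.0, r_max, n_intervals + 1)
    knots = np.concatenate(([0.0] * degree, inner, [r_max] * degree))
    n_basis = len(knots) - degree - 1
    return [BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)
            for i in range(n_basis)]

def phi_component(r, theta, C, splines, n_leg):
    """Evaluate sum_{ir,it} C[ir, it] * B_ir(r) * Q_it(2*theta/pi - 1)."""
    x = 2.0 * theta / np.pi - 1.0
    val = 0.0
    for ir, B in enumerate(splines):
        b = float(np.nan_to_num(B(r)))   # a basis element vanishes outside its local support
        if b == 0.0:
            continue
        for it in range(n_leg):
            val += C[ir, it] * b * Legendre.basis(it)(x)
    return val

splines = make_bspline_basis()
rng = np.random.default_rng(0)
C = rng.normal(size=(len(splines), 5))   # dummy expansion coefficients C^u_{ir,itheta}
print(phi_component(2.5, np.pi / 3.0, C, splines, n_leg=5))
```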
The Λ-matrix
$$\Lambda = \begin{pmatrix} I & -\tfrac{1}{2} D_{m_J} \\ -\tfrac{1}{2} D_{m_J} & I \end{pmatrix}, \qquad (8)$$
$$D_{m_J} = (\sigma_z\cos\theta + \sigma_x\sin\theta)\left(\frac{\partial}{\partial r} - \frac{1}{r}\right) + \frac{1}{r}\,(\sigma_x\cos\theta - \sigma_z\sin\theta)\,\frac{\partial}{\partial\theta} + \frac{1}{r\sin\theta}\left(i m_J\,\sigma_y + \frac{1}{2}\,\sigma_x\right), \qquad (9)$$
imposes the dual-kinetic-balance conditions on the basis set. With the given form of Φ and the finite basis set one can find the corresponding Hamiltonian matrix H ij . The eigenvalues and eigenfunctions are found by diagonalization of H ij . As a result, we obtain quasicomplete finite set of wave functions and electronic energies for the two-center Dirac equation. Ground and several lowest excited states are reproduced with high accuracy while the higher-excited states effectively represent the infinite remainder of the spectrum. The negative-energy continuum is also represented by the finite number of the negative energy eigenvalues. This quasicomplete spectrum can be used to construct the Green function, which is needed for the perturbation theory calculations.
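The last step described above, diagonalizing the Hamiltonian matrix in the finite basis, amounts to a generalized eigenvalue problem. The following schematic uses random stand-in matrices only to show that step; it is not the actual Hamiltonian or overlap matrix of the A-DKB basis.

```python
# Schematic of the diagonalization step: with the basis fixed, the Dirac Hamiltonian
# becomes a finite matrix H (with overlap matrix S), and solving H c = E S c yields
# a quasicomplete discrete spectrum. H and S below are random stand-ins.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0                 # symmetric stand-in for the Hamiltonian matrix
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)         # positive-definite stand-in overlap matrix

energies, coeffs = eigh(H, S)       # generalized eigenproblem H c = E S c
print("lowest pseudo-energies:", energies[:5])
```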
III. RESULTS
Relativistic calculations of the binding energies of heavy one-electron quasimolecules were presented, in particular, in Refs. [20,28,31,33,35,36], see also references therein. Ref. [36] provides nearly the most accurate up-to-date values for the very broad range of Z and taking into account the finite nuclear size. So, we use just these data for comparison, see Table I, where the ground-state energies are presented for Z = 1 . . . 100 at the so-called "chemical" distances, D = 2/Z a.u. We observe that the results are in good agreement, the relative deviation varies from 2 × 10 −6 for hydrogen to 5 × 10 −5 for Z = 100. This deviation is consistent with our own estimation of the numerical uncertainty, which is evaluated by inspecting the convergence of the results with respect to the size of the basis set. In this calculation up to N r = 320 B-splines and N θ = 54 Legendre polynomials are used, for heavy nuclei this number of basis functions ensures the uncertainty, which is comparable to or smaller than the uncertainty of the finite nuclear size effect at all internuclear distances from 0 to 2/Z a.u.
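The quoted relative deviations can be checked directly from the Table I entries, for example:

```python
# Relative deviation between the present results and the Dirac-Sturm values of
# Ref. [36], using the Z = 1 and Z = 100 entries of Table I.
e_this = {1: -1.1026433, 100: -11935.89}
e_ref  = {1: -1.102641581032, 100: -11936.41770218}
for Z in (1, 100):
    rel = abs(e_this[Z] - e_ref[Z]) / abs(e_ref[Z])
    print(f"Z = {Z:3d}: relative deviation = {rel:.1e}")
```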
Next, we present the obtained one-electron spectra of the Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − quasimolecules in the wide range of the internuclear distances from few tens of fermi up to the "chemical" distances. In the present figures only σ-states (m J = ± 1 2 ) are shown. The precise quantum numbers are m J and parity, g (gerade) or u (ungerade). In addition, we determine the quantum numbers of the "merged atom", i.e. the state of the system with internuclear distance D → 0, and put it to the left of molecular term symbol, e.g., the ground state is 1s 1/2 σ g .
In Figure 1, the energies of the ground (n = 1) and first 9 (n = 2 . . . 10) excited states of the Pb 82+ -Pb 82+ -e − system are shown as functions of the internuclear distance. Here, n has no connection with the atomic principal quantum number; it simply enumerates the σ-states.
To visually compare the data obtained with the ones by Soff et al., we zoom the second plot in Fig. 1 to match the scale of the corresponding figure from Ref. [15]. Although we cannot compare the numerical results, the plots for all the states under consideration appear to be in very good agreement: all the states are identified correctly, and all the crossings and avoided crossings appear at the same internuclear distances.
The similar results for xenon, i.e., the energies of the ground (n = 1) and first 9 (n = 2 . . . 10) excited states of the Xe 54+ -Xe 54+ -e − system, are shown in Figure 3.
Also, in Tables II and III, we compare the ground-state binding energies obtained within our approach for the two-center (TC) potential with those for the widely used monopole approximation (MA), where only the spherically symmetric part of the two-center potential is considered. Within MA the potential and all the results depend on where to place the origin of the coordinate system (c.s.). At the same time, for the TC potential the results should be identical within the numerical error bars. We compare two different placements of the c.s. origin: (1) at the center of mass of the nuclei, (2) at the center of one of the nuclei, see Figure 4. The agreement between TC(1) and TC(2) within the anticipated numerical uncertainty serves as a non-trivial self-test of the method, since the basis-set expansion (6) is essentially different for the two cases. In fact, due to the lower symmetry of the second c.s., the uncertainty of the TC(2) values is much larger and completely determines the difference between TC(1) and TC(2). The differences between the TC(1) and MA(1) results are presented in the second-to-last column; they can be interpreted as the inaccuracy of the MA. In the last column, the differences between the MA(1) and MA(2) results are given, a kind of "inherent inconsistency" of the MA. As one can see from these data, except for the regions where MA(1) − MA(2) is anomalously small due to the sign change, it is comparable to TC(1) − MA(1). This observation can be used to quantify the inaccuracy of the MA for the contributions which are not yet available for the TC calculations, e.g., the two-photon-exchange and QED corrections [40].
IV. DISCUSSION AND CONCLUSION
In this work, the two-center Dirac equation is solved within the dual-kinetic-balance method [38,39]. The energies of the ground and several excited σ-states in such heavy diatomic systems as Pb 82+ -Pb 82+ -e − and Xe 54+ -Xe 54+ -e − are plotted as a function of the internuclear distance D. The ground-state energies at the "chemical" distances (D = 2/Z a.u.) are presented for one-electron dimers with Z = 1 . . . 100. Obtained data are compared with the available previous calculations and a good agreement is observed. The comparison of the results for different origin placement of the coordinate system is used as a self-test of the method. The values obtained within the monopole approximation are also presented. It is shown that their dependence on the origin placement can serve to estimate the deviation from the two-center results. The developed method, in addition to the energies and wave functions of the ground and lowest excited states, provides the quasicomplete finite spectrum. The Green function computed on the basis of this spectrum gives an access, in particular, to evaluation of the Feynman diagrams within the bound-state QED.
ACKNOWLEDGMENTS
Valuable discussions with Ilia Maltsev, Alexey Malyshev, Leonid Skripnikov, and Ilya Tupitsyn are gratefully acknowledged. The work was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS", by the Russian Foundation for Basic Research (grant number 19-02-00974), by TU Dresden (DAAD Programm Ostpartnerschaften), and by G-RISC.
Figure 1: Electronic terms of the one-electron Pb 82+ -Pb 82+ quasimolecule. (b) Energies of states with n = 6 . . . 10.
Figure 2: Electronic terms of the one-electron Pb 82+ -Pb 82+ quasimolecule. Energies of states with n = 6 . . . 9 (scaled).
Figure 3: Electronic terms of the one-electron Xe 54+ -Xe 54+ quasimolecule. (b) Energies of states with n = 6 . . . 10.
Figure 4: Two different coordinate systems considered. Left: (1) origin is at the center-of-mass of the system; right: (2) origin is at the center of one of the nuclei.
Table II: Ground-state binding energy E 1σg (in eV) of the Pb 82+ -Pb 82+ -e − quasimolecule for the two-center potential (TC) and for the monopole-approximation potential (MA), with coordinate system origin at the center of mass of the nuclei (1) and at the one of the nuclear centers (2).
Table I: Comparison of the ground-state energies of one-electron quasimolecular systems with Z = 1 . . . 100 at the internuclear distance D = 2/Z a.u.

Z      This work      Dirac-Sturm [36]
1      −1.1026433     −1.102641581032
2      −4.4106607     −4.410654714140
10     −110.33722     −110.3371741499
20     −442.23969     −442.2392996469
30     −998.4194      −998.4214646525
40     −1783.5479     −1783.563450815
50     −2804.5304     −2804.571434254
60     −4070.971      −4071.036267926
70     −5595.889      −5595.926978290
80     −7397.003      −7397.028800116
90     −9498.452      −9498.588788490
92     −9957.567      −9957.775519122
100    −11935.89      −11936.41770218
[Figure: electronic terms E (keV) versus internuclear distance D (fm) for the states 1s 1/2 g, 2s 1/2 g, 2p 1/2 u, 2p 3/2 u, 3p 1/2 u, and 3d 3/2 g.]
(a) Energies of states with n = 1 . . . 6.
Table III: Ground-state binding energy E 1σg (in eV) of the Xe 54+ -Xe 54+ -e − quasimolecule. The notations are the same as in Table II.
S. S. Gerstein and Y. B. Zeldovich, Sov. Phys. JETP 30, 358 (1969).
W. Pieper and W. Greiner, Z. Phys. 218, 327 (1969).
Y. B. Zeldovich and V. S. Popov, Sov. Phys. Usp. 14, 673 (1972).
J. Rafelski, L. P. Fulcher, and A. Klein, Phys. Rep. 38, 227 (1978).
W. Greiner, B. Müller, and J. Rafelski, Quantum Electrodynamics of Strong Fields (Springer-Verlag, Berlin, 1985).
I. A. Maltsev, V. M. Shabaev, R. V. Popov, Y. S. Kozhedub, G. Plunien, X. Ma, T. Stöhlker, and D. A. Tumakov, Phys. Rev. Lett. 123, 113401 (2019).
R. V. Popov, V. M. Shabaev, D. A. Telnov, I. I. Tupitsyn, I. A. Maltsev, Y. S. Kozhedub, A. I. Bondarev, N. V. Kozin, X. Ma, G. Plunien, T. Stöhlker, D. A. Tumakov, and V. A. Zaytsev, Phys. Rev. D 102, 076005 (2020).
D. N. Voskresensky, arXiv:2102.07182.
P. Verma, P. Mokler, A. Bräuning-Demian, H. Bräuning, C. Kozhuharov, F. Bosch, D. Liesen, S. Hagmann, T. Stöhlker, Z. Stachura, D. Banas, A. Orsic-Muthig, M. Schöffler, D. Sierpowski, U. Spillmann, S. Tashenov, S. Toleikis, and M. Wahab, Nucl. Instrum. Meth. Phys. Res. B 245, 56 (2006).
P. Verma, P. Mokler, A. Bräuning-Demian, C. Kozhuharov, H. Bräuning, F. Bosch, D. Liesen, T. Stöhlker, S. Hagmann, S. Chatterjee, A. Gumberidze, R. Reuschl, M. Schöffler, U. Spillmann, A. Orsic Muthig, S. Tachenov, Z. Stachura, and M. Wahab, Radiation Physics and Chemistry 75, 2014 (2006).
S. Hagmann, T. Stöhlker, C. Kozhuharov, V. Shabaev, I. Tupitsyn, Y. Kozhedub, H. Rothard, U. Spillmann, R. Reuschl, S. Trotsenko, F. Bosch, D. Liesen, D. Winters, J. Ullrich, R. Dörner, R. Moshammer, P. Hillenbrand, D. Jakubassa-Amundsen, A. Voitkiv, A. Surzhykov, D. Fischer, E. de Filippo, X. Wang, and B. Wei, AIP Conference Proceedings 1336, 115 (2011).
A. Gumberidze, T. Stöhlker, H. Beyer, F. Bosch, A. Bräuning-Demian, S. Hagmann,
G. Soff, W. Greiner, W. Betz, and B. Müller, Phys. Rev. A 20, 169 (1979).
U. Becker, N. Grün, W. Scheid, and G. Soff, Phys. Rev. Lett. 56, 2016 (1986).
J. Eichler, Physics Reports 193, 165 (1990).
K. Rumrich, G. Soff, and W. Greiner, Phys. Rev. A 47, 215 (1993).
D. C. Ionescu and A. Belkacem, Physica Scripta T80, 128 (1999).
I. I. Tupitsyn, Y. S. Kozhedub, V. M. Shabaev, G. B. Deyneka, S. Hagmann, C. Kozhuharov, G. Plunien, and T. Stöhlker, Phys. Rev. A 82, 042701 (2010).
I. I. Tupitsyn, Y. S. Kozhedub, V. M. Shabaev, A. I. Bondarev, G. B. Deyneka, I. A. Maltsev, S. Hagmann, G. Plunien, and T. Stöhlker, Phys. Rev. A 85, 032712 (2012).
B. Müller, J. Rafelski, and W. Greiner, Phys. Lett. B 47, 5 (1973).
J. Rafelski and B. Müller, Phys. Lett. B 65, 205 (1976).
J. Rafelski and B. Müller, Phys. Rev. Lett. 36, 517 (1976).
V. I. Lisin, M. S. Marinov, and V. S. Popov, Phys. Lett. B 69, 141 (1977).
V. I. Lisin, M. S. Marinov, and V. S. Popov, Phys. Lett. B 91, 20 (1980).
L. Yang, D. Heinemann, and D. Kolb, Chem. Phys. Lett. 178, 213 (1991).
F. A. Parpia and A. K. Mohanty, Chem. Phys. Lett. 238, 209 (1995).
G. B. Deineka, Opt. Spectrosc. 84, 159 (1998).
V. I. Matveev, D. U. Matrasulov, and H. Y. Rakhimov, Phys. Atom. Nuclei 63, 318 (2000).
O. Kullie and D. Kolb, Eur. Phys. J. D 17, 167 (2001).
A. Ishikawa, H. Nakashima, and H. Nakatsuji, J. Chem. Phys. 128, 124103 (2008).
A. N. Artemyev, A. Surzhykov, P. Indelicato, G. Plunien, and T. Stoehlker, J. Phys. B 43, 235207 (2010).
A. Ishikawa, H. Nakashima, and H. Nakatsuji, Chem. Phys. 401, 62 (2012).
I. I. Tupitsyn and D. V. Mironova, Opt. Spectrosc. 117, 351 (2014).
D. V. Mironova, I. I. Tupitsyn, V. M. Shabaev, and G. Plunien, Chem. Phys. 449, 10 (2015).
A. N. Artemyev and A. Surzhykov, Phys. Rev. Lett. 114, 243004 (2015).
V. M. Shabaev, I. I. Tupitsyn, V. A. Yerokhin, G. Plunien, and G. Soff, Phys. Rev. Lett. 93, 130405 (2004).
E. B. Rozenbaum, D. A. Glazov, V. M. Shabaev, K. E. Sosnova, and D. A. Telnov, Phys. Rev. A 89, 012514 (2014).
A. A. Kotov, D. A. Glazov, A. V. Malyshev, A. V. Vladimirova, V. M. Shabaev, and G. Plunien, X-Ray Spectrometry 49, 110 (2020).
V. M. Shabaev, J. Phys. B 26, 1103 (1993).
W. R. Johnson, S. A. Blundell, and J. Sapirstein, Phys. Rev. A 37, 307 (1988).
J. Sapirstein and W. R. Johnson, J. Phys. B 29, 5213 (1996).
A. S. Varentsova, V. A. Agababaev, A. M. Volchkova, D. A. Glazov, A. V. Volotka, V. M. Shabaev, and G. Plunien, Nucl. Instrum. Meth. Phys. Res. B 408, 80 (2017).
A. M. Volchkova, A. S. Varentsova, N. A. Zubova, V. A. Agababaev, D. A. Glazov, A. V. Volotka, V. M. Shabaev, and G. Plunien, Nucl. Instrum. Meth. Phys. Res. B 408, 89 (2017).
A. M. Volchkova, V. A. Agababaev, D. A. Glazov, A. V. Volotka, S. Fritzsche, V. M. Shabaev, and G. Plunien, arXiv:2009.00109.
| []
|
[
"Towards understanding the structure of voids in the cosmic web",
"Towards understanding the structure of voids in the cosmic web"
]
| [
"J Einasto \nTartu Observatory\nEE-61602TõravereEstonia\n\nEstonian Academy of Sciences\nEE-10130TallinnEstonia\n\nICRANet\nPiazza della Repubblica 1065122PescaraItaly\n",
"I Suhhonenko \nTartu Observatory\nEE-61602TõravereEstonia\n",
"G Hütsi \nTartu Observatory\nEE-61602TõravereEstonia\n",
"E Saar \nTartu Observatory\nEE-61602TõravereEstonia\n\nEstonian Academy of Sciences\nEE-10130TallinnEstonia\n",
"M Einasto ",
"L J Liivamägi \nTartu Observatory\nEE-61602TõravereEstonia\n",
"V Müller \nTartu Observatory\nEE-61602TõravereEstonia\n\nLeibniz-Institut für Astrophysik Potsdam\nAn der Sternwarte 16D-14482PotsdamGermany\n",
"A A Starobinsky \nLandau Institute for Theoretical Physics\nRAS\n119334MoscowRussia\n\nGraduate School of Science\nResearch Center for the Early Universe (RESCEU)\nThe University of Tokyo\n113-0033TokyoJapan\n",
"E Tago \nTartu Observatory\nEE-61602TõravereEstonia\n",
"E Tempel \nTartu Observatory\nEE-61602TõravereEstonia\n"
]
| [
"Tartu Observatory\nEE-61602TõravereEstonia",
"Estonian Academy of Sciences\nEE-10130TallinnEstonia",
"ICRANet\nPiazza della Repubblica 1065122PescaraItaly",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Estonian Academy of Sciences\nEE-10130TallinnEstonia",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Leibniz-Institut für Astrophysik Potsdam\nAn der Sternwarte 16D-14482PotsdamGermany",
"Landau Institute for Theoretical Physics\nRAS\n119334MoscowRussia",
"Graduate School of Science\nResearch Center for the Early Universe (RESCEU)\nThe University of Tokyo\n113-0033TokyoJapan",
"Tartu Observatory\nEE-61602TõravereEstonia",
"Tartu Observatory\nEE-61602TõravereEstonia"
]
| [
"Astronomy & Astrophysics manuscript"
]
| Context. According to the modern cosmological paradigm, cosmic voids form in low density regions between filaments of galaxies and superclusters. Aims. Our goal is to see how density waves of different scale combine to form voids between galaxy systems of various scales. Methods. We perform numerical simulations of structure formation in cubes of size 100, and 256 h −1 Mpc, with resolutions 256 3 and 512 3 particles and cells. To understand the role of density perturbations of various scale, we cut power spectra on scales from 8 to 128 h −1 Mpc, using otherwise in all cases identical initial random realisations. Results. We find that small haloes and short filaments form all over the simulation box, if perturbations only on scales as large as 8 h −1 Mpc are present. We define density waves of scale ≥ 64 h −1 Mpc as large, waves of scale ≃ 32 h −1 Mpc as medium scale, and waves of scale ≃ 8 h −1 Mpc as small scale, within a factor of two. Voids form in regions where medium-and large-scale density perturbations combine in negative parts of the waves because of the synchronisation of phases of medium-and large-scale density perturbations. In voids, the growth of potential haloes (formed in the absence of large-scale perturbations) is suppressed by the combined negative sections of medium-and large-scale density perturbations, so that their densities are less than the mean density, and thus during the evolution their densities do not increase.Conclusions. The phenomenon of large multi-scale voids in the cosmic web requires the presence of an extended spectrum of primordial density perturbations. The void phenomenon is due to the action of two processes: the synchronisation of density perturbations of medium and large scales, and the suppression of galaxy formation in low-density regions by the combined action of negative sections of medium-and large-scale density perturbations. | 10.1051/0004-6361/201117248 | [
"https://arxiv.org/pdf/1105.2464v2.pdf"
]
| 55,539,825 | 1105.2464 | 737a42eb467f91f923557dd2cc212666933d5dc0 |
Towards understanding the structure of voids in the cosmic web
11 Dec 2011 January 20, 2013
J Einasto
Tartu Observatory
EE-61602TõravereEstonia
Estonian Academy of Sciences
EE-10130TallinnEstonia
ICRANet
Piazza della Repubblica 1065122PescaraItaly
I Suhhonenko
Tartu Observatory
EE-61602TõravereEstonia
G Hütsi
Tartu Observatory
EE-61602TõravereEstonia
E Saar
Tartu Observatory
EE-61602TõravereEstonia
Estonian Academy of Sciences
EE-10130TallinnEstonia
M Einasto
L J Liivamägi
Tartu Observatory
EE-61602TõravereEstonia
V Müller
Tartu Observatory
EE-61602TõravereEstonia
Leibniz-Institut für Astrophysik Potsdam
An der Sternwarte 16D-14482PotsdamGermany
A A Starobinsky
Landau Institute for Theoretical Physics
RAS
119334MoscowRussia
Graduate School of Science
Research Center for the Early Universe (RESCEU)
The University of Tokyo
113-0033TokyoJapan
E Tago
Tartu Observatory
EE-61602TõravereEstonia
E Tempel
Tartu Observatory
EE-61602TõravereEstonia
Towards understanding the structure of voids in the cosmic web
Astronomy & Astrophysics manuscript
11 Dec 2011; January 20, 2013. Received 12 May 2011 / Accepted 20 August 2011. Key words: large-scale structure of the Universe; early Universe; cosmology: theory; methods: numerical
Context. According to the modern cosmological paradigm, cosmic voids form in low density regions between filaments of galaxies and superclusters. Aims. Our goal is to see how density waves of different scale combine to form voids between galaxy systems of various scales. Methods. We perform numerical simulations of structure formation in cubes of size 100, and 256 h −1 Mpc, with resolutions 256 3 and 512 3 particles and cells. To understand the role of density perturbations of various scale, we cut power spectra on scales from 8 to 128 h −1 Mpc, using otherwise in all cases identical initial random realisations. Results. We find that small haloes and short filaments form all over the simulation box, if perturbations only on scales as large as 8 h −1 Mpc are present. We define density waves of scale ≥ 64 h −1 Mpc as large, waves of scale ≃ 32 h −1 Mpc as medium scale, and waves of scale ≃ 8 h −1 Mpc as small scale, within a factor of two. Voids form in regions where medium-and large-scale density perturbations combine in negative parts of the waves because of the synchronisation of phases of medium-and large-scale density perturbations. In voids, the growth of potential haloes (formed in the absence of large-scale perturbations) is suppressed by the combined negative sections of medium-and large-scale density perturbations, so that their densities are less than the mean density, and thus during the evolution their densities do not increase.Conclusions. The phenomenon of large multi-scale voids in the cosmic web requires the presence of an extended spectrum of primordial density perturbations. The void phenomenon is due to the action of two processes: the synchronisation of density perturbations of medium and large scales, and the suppression of galaxy formation in low-density regions by the combined action of negative sections of medium-and large-scale density perturbations.
Introduction
The goal of this series of papers is to study the role of perturbations on various scales in the formation of the cosmic web. Einasto et al. (2011) used wavelet techniques to understand the formation of rich systems of galaxies - clusters and superclusters. They conclude that superclusters are objects where density waves of medium and large scales combine in similar phases to generate high density peaks. Similarly, voids are regions in space where medium- and large-scale density perturbations combine in similar under-density phases of waves. Suhhonenko et al. (2011) demonstrated that the properties of the cosmic web depend essentially on density perturbations of small and medium scales, whereas perturbations of large scale ≥ 100 h −1 Mpc modulate the richness of galaxy systems from clusters to superclusters, and make voids emptier. This paper is devoted to the study of the influence of medium- and large-scale density waves on the structure and evolution of voids in the cosmic web.
The cosmic web was first openly discussed at the IAU Symposium on Large Scale Structure of the Universe (Longair & Einasto, 1978). At this symposium, four groups reported results of studies of the three-dimensional distribution of galaxies in space, using the available data on the redshifts of galaxies. The presence of voids in the distribution of galaxies was reported by , Tarenghi et al. (1978), Tifft & Gregory (1978), and Tully & Fisher (1978) in the Perseus-Pisces, Hercules, Coma, and Local superclusters, respectively.
The main results reported in this symposium were that: (1) galaxies, groups, and clusters of galaxies are not randomly distributed but form chains, converging in superclusters; (2) the space between galaxy chains contains almost no galaxies and forms voids of diameters 20 . . . 70 h −1 Mpc; (3) superclusters are not isolated systems, but are connected by galaxy filaments to a connected network - the supercluster-void network (Einasto et al., 1980; Zeldovich et al., 1982; Oort, 1983) or the cosmic web (Bond et al., 1996). These early results were confirmed by the Second Harvard Sky Survey (de Lapparent et al., 1986; Geller & Huchra, 1989) and the discovery of the very large Bootes void by Kirshner et al. (1981, 1987).
The presence of long, essentially one-dimensional galaxy and group/cluster chains, and the elongated shape along the chain of the central cluster galaxies (often supergiant galaxies of type cD), suggests that galaxies and groups/clusters of the chain had formed within the chain simultaneously with the formation of the whole cosmic web. This occurred in the gaseous phase of the structure evolution, which allowed the dissipation and cancelling of velocities perpendicular to the chain axis. These data gave strong support to the Zeldovich (1970, 1978) pancake scenario of galaxy formation. The observed distribution of galaxies was quite similar to the distribution of simulation particles in a two-dimensional numerical simulation of the evolution of the structure of the Universe, prepared by Shandarin (1975, private communication) and published by Doroshkevich et al. (1980). In this simulation a network of high- and low-density regions was seen: high-density regions form cells that surround large under-dense regions. Subsequent three-dimensional simulations confirmed this picture (Klypin & Shandarin, 1983; Melott et al., 1983).
However, some important differences between the model and observations were evident. First of all, there exists a rarefied population of simulation particles in voids that is absent in real data. This was the first indication of physical biasing in galaxy formation (the term "biased galaxy formation" was introduced a few years later by Kaiser (1984), see also Bardeen et al. (1986)). A theoretical explanation of the absence of galaxies in voids was given by Einasto et al. (1980) (see also a more detailed discussion by Einasto et al. (1994a)). A simple analytical solution of the cosmological evolution of the density of matter shows that in over-dense regions the density increases until the matter collapses to form compact objects (pancaking by Zeldovich (1970)). In contrast, in under-dense regions the density decreases but never reaches a zero value -gravity cannot evacuate voids completely.
Even early studies proposed that the cosmic web has a hierarchical structure. The parts of the web formed by objects of different mass or luminosity have different characteristic sizes. As voids are defined by the web, the void assembly has several important properties: voids defined by more luminous (or massive) objects have larger diameters, and voids defined by clusters/galaxies of certain luminosity contain substructure formed by less luminous objects.
According to the present cosmological paradigm, all structural elements of the Universe were formed by the growth of initial small density perturbations created during the very early phase of the evolution of the Universe. To these elements belong galaxies, groups, clusters, and their systems, such as filaments and superclusters. Rich superclusters form a cellular distribution, with large voids surrounded by rich superclusters. The characteristic diameter of these supervoids is of the order of 100 h −1 Mpc (Kirshner et al., 1981; Einasto et al., 1994b, 1997b). Supervoids are not empty, but contain a hierarchy of voids (Einasto et al., 1989; Martel & Wasserman, 1990; van de Weygaert & van Kampen, 1993; Lindner et al., 1995; Müller et al., 2000; Gottlöber et al., 2003; Aragón-Calvo et al., 2007; von Benda-Beckmann & Müller, 2008; Aragon-Calvo et al., 2010; Jones et al., 2010).
Among the early studies of the void evolution we mention Hoffman & Shaham (1982), Hoffman et al. (1983), Peebles (1982), Icke (1984), Fillmore & Goldreich (1984), and Bertschinger (1987), among others. The study of the hierarchical evolution of voids was pioneered by Dubinski et al. (1993). Sahni et al. (1994) described the evolving void hierarchy within the context of the adhesion theory. Sheth & van de Weygaert (2004) developed the excursion set (extended Press-Schechter) description of a void hierarchy in the dark matter distribution, followed by Furlanetto & Piran (2006), who compared this with the void hierarchy in galaxy populations. Superclusters are connected by filaments of galaxies, i.e. voids have substructure. Observationally, this was already evident in early void studies (de Lapparent et al., 1986). Theoretical discussion of the void substructure has been presented by Regos & Geller (1989), Martel & Wasserman (1990), van de Weygaert & van Kampen (1993), Goldberg & Vogeley (2004), Goldberg et al. (2005), and many others. The skeleton of the cosmic web was discussed by Hahn et al. (2007), Forero-Romero et al. (2009), Sousbie et al. (2008), Bond et al. (2010a,b), Shandarin (2010), and Einasto et al. (2011).
It is generally accepted that the initial density perturbations had a smooth, extended power spectrum (quasi-flat in terms of metric perturbations n s ≈ 1) and a random (Gaussian) distribution of perturbation phases. The amplitude of perturbations (∆ ∝ k 3 P(k)) is larger at short wavelengths and per δ ln k, where k is the wavenumber, and P(k) is the power spectrum of perturbations. For this reason, small objects (mini-haloes in numerical simulations and dwarf galaxies in the real Universe) should form first. The early formation of dwarf galaxies is confirmed by observation of very distant galaxies (Beckwith et al., 2006). These early galaxies grow by the attraction of more primordial matter and by clustering, as originally suggested by Peebles (1971).
Initial small-scale perturbations were present everywhere, and this raises the question: why do voids not contain galaxies, even dwarf galaxies? This question was asked more specifically by Peebles (2001). The deepest voids in the simulated dark matter distribution are never completely empty; they still contain low-mass condensations of primordial matter -mini-haloes. Therefore, voids could be the environment in which faint dwarf galaxies are most likely to reside. Why is this not the case?
This "Peebles question" has stimulated a number of studies, both observational and theoretical, to find dwarf void galaxies and to study either analytically or by numerical simulations void regions of the Universe. Studies of void galaxies were made by Szomoru et al. (1996), Grogin & Geller (1999), Gottlöber et al. (2003), Rojas et al. (2004), Hoeft et al. (2006), Stanonik et al. (2009), Kreckel et al. (2011b, and others. Among recent large-scale surveys, we mention here , Croton et al. (2004), Conroy et al. (2005), Patiri et al. (2006), andTinker et al. (2007). Karachentsev et al. (2003Karachentsev et al. ( , 2004Karachentsev et al. ( , 2007 studied the Local Volume out to a distance 10 h −1 Mpc, where they found about 550 mainly very faint galaxies. A void galaxy survey (VGS) was initiated by van de Weygaert and collaborators in our local neighbourhood out to redshift z = 0.02 (Stanonik et al., 2009;Kreckel et al., 2011a,b;van de Weygaert et al., 2011). These studies suggest that extremely faint galaxies do not fill voids, but are located in the vicinity of brighter galaxies.
To investigate the properties of galaxies in voids, the halo occupation distribution model is used to determine the relationship between galaxies and dark matter haloes (see Seljak (2000), Cooray & Sheth (2002), Berlind & Weinberg (2002), Zehavi et al. (2004), Zehavi et al. (2005), Zheng et al. (2007), van den Bosch et al. (2007), Tinker et al. (2006, 2008a), White et al. (2007), and Padmanabhan et al. (2009)). The results obtained by analysing observational data are in good agreement with the results of semi-analytic models, hydrodynamic cosmological simulations, and high-resolution collisionless simulations (Kravtsov et al., 2004; Zheng et al., 2005; Conroy et al., 2006; von Benda-Beckmann & Müller, 2008; Tikhonov & Klypin, 2009). In particular, the simulations of Tikhonov & Klypin (2009) and Ishiyama et al. (2011) show the presence of very low-mass haloes in voids, but their mass is probably too low for star formation to be possible. The comparison of the distribution of model haloes and dwarf galaxies shows that haloes should have a circular velocity of at least ∼ 35 km/s in order to form a galaxy. Tinker et al. (2008b) showed that the smallest void haloes have so little mass that no galaxy formation is expected according to the model.
The absence of dwarf galaxies in voids can also be explained by gasodynamical processes. Using high-resolution hydrodynamical simulations, Hoeft et al. (2006), and Hoeft & Gottlöber (2010) showed that photoionisation by the UV radiation field is able to stop the cooling and collapse of gas in dwarf galaxy haloes. At the redshift z = 0, the characteristic mass scale of photo-evaporation corresponds to a circular velocity ∼ 27 km/s, in good agreement with other studies cited above.
All studies suggest that galaxy formation is a threshold phenomenon. In many of the studies cited above, it was demonstrated that galaxies do not form in voids when the void haloes have very low masses. However, these studies did not explain the physical reason for the difference between the typical halo masses in the void and supercluster environments. In this paper, we try to find an explanation for the difference between the masses of haloes in various global environments.
In the presence of an extended primordial perturbation spectrum, systems of galaxies are produced by an interplay of density perturbations on all scales. It is clear that the difference between the present distribution of galaxies and the very early distribution of protogalaxies must have something to do with perturbations of a typical scale-length that is larger than the scales responsible for the formation of primeval small protohaloes. This led us to the study of the influence of perturbations of various scale-lengths on the evolution of the structure. Preliminary results of this study demonstrated, very interestingly, that the whole supercluster-void network is caused by the interplay of medium- and large-scale perturbations. These preliminary results were reported at several conferences and summer schools (see the web-site of Tartu Observatory 1 ).
In the present paper, we attempt to understand the influence of perturbations of various scale on the evolution and structure of voids in the cosmic web. As in Suhhonenko et al. (2011) and Einasto et al. (2011), we use numerical simulations in boxes of various scale-lengths from 100 to 256 h −1 Mpc, calculated for power spectra cut off above different scales from 8 to 128 h −1 Mpc, to determine the influence of perturbations of various scales on the formation and internal structure of voids. We employ a wavelet technique to follow the evolution of underdense and over-dense regions in terms of density waves of various scales.
The paper is composed as follows. In the next section, we describe numerical models used in the study. In Section 3, we perform a wavelet analysis of our simulations. In Section 4, we describe our correlation analysis of wavelet-decomposed density fields. In Section 5, we investigate the structure of voids using haloes of various mass as objects which define voids. We discuss our results in Section 6, and in our last Section we present our conclusions.
Modelling the evolution of voids in the cosmic web
role. Thus, to understand the supercluster-void phenomenon correctly, we need to perform numerical simulation in a box containing large waves. On the other hand, most systems of galaxies in the Universe are groups of galaxies -there are almost no very isolated galaxies far away from groups. The characteristic scale of groups is 1 h −1 Mpc, thus the simulation must have at least a resolution similar to this scale.
To have both a high spatial resolution and the presence of density perturbations in a larger scale interval, we performed simulations in boxes of sizes 100 h −1 Mpc and 256 h −1 Mpc, and resolutions N 3 grid = 256 3 and N 3 grid = 512 3 . The main parameters of our series of models are given in Table 1, where L is the cube size, N part is the number of particles and cells used in simulations, and M part is the mass of a particle in units of 10 9 M ⊙ . We assumed the cosmological parameters (Seljak et al., 2005; Tegmark et al., 2004, 2006) Ω m = 0.28 for the matter density, Ω Λ = 0.72 for the dark energy density (in units of the critical cosmological density), and σ 8 = 0.84 for the initial amplitude parameter. We use the notation h for the present-day dimensionless Hubble parameter in units of 100 km s −1 Mpc −1 ; in simulations we used a value of h = 1.
As we are interested in the study of the role of perturbations on different scales to the evolution of voids, we used simulations with the full power spectrum, as well as with a power spectrum truncated at wave-numbers k cut , so that the amplitude of the power spectrum on large scales is zero: P(k) = 0, if k < k cut , wavelength λ cut = 2π/k cut . The cut scale in h −1 Mpc, λ cut = 2π/k cut , is given in Table 1. The amplitude of a spectrum was set to zero for k < k cut during the calculation of the initial density field, keeping all simulation parameters fixed across the full set of realisations. For the models of the M256 series, we used in simulations the AMIGA code (Knebe et al., 2001). This code uses an adaptive mesh technique in the regions where the density exceeds a fixed threshold. In this code, gravity is automatically softened adaptively, so that the softening length is near its optimum value in both high-and low-density regions. We chose a maximum level of eight refinements. For models with a 512 3 resolution, we used the GADGET-2 code with a gravitational softening length of 10 h −1 kpc (L100 models) and 20 h −1 kps (L256 models) (Springel et al., 2001;Springel, 2005). The simulation L100 was performed at the Leibniz-Institut für Astrophysik Potsdam, and simulations M256, and L256 at the High Performance Computing Centre of University of Tartu. The initial density fluctuation spectrum was generated using the COSMICS code by Bertschinger (1995) 2 ; to generate the initial data, we assumed the baryonic matter density Ω b = 0.044. Calculations started at an early epoch, z = 100. For every particle, we calculated the local density in units of the mean density, using positions of 27 nearby particles. All models of the same series have the same realisation, so the role of different waves in models can be easily compared. To allow different models to be compared every particle has an identification number, the same for all models of a series. Particle positions and velocities were extracted for seven epochs in the redshifts range z = 30, . . . , 0. Power spectra for the models of the L256 series are shown in Fig. 1 for an early epoch, z = 30, and for the present epoch z = 0.
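To make the truncation concrete, the following Python sketch (an added illustration, not the COSMICS/GADGET pipeline used for the paper) generates a Gaussian random field whose Fourier amplitudes are set to zero for k < k_cut; the grid size, the random seed, and the power-law stand-in for the true CDM spectrum are arbitrary assumptions:

import numpy as np

def truncated_gaussian_field(n=128, boxsize=100.0, lambda_cut=16.0, seed=42):
    # Gaussian random field with P(k) = 0 for k < k_cut = 2*pi/lambda_cut,
    # mimicking the cut of the initial power spectrum on large scales.
    k_cut = 2.0 * np.pi / lambda_cut
    rng = np.random.default_rng(seed)
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)        # h/Mpc
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d[: n // 2 + 1], indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    pk = np.zeros_like(k)
    nonzero = k > 0
    pk[nonzero] = k[nonzero] ** -2.0                            # stand-in spectrum
    pk[k < k_cut] = 0.0                                         # remove large-scale power
    white = np.fft.rfftn(rng.standard_normal((n, n, n)))        # white noise
    delta = np.fft.irfftn(white * np.sqrt(pk), s=(n, n, n))
    return delta / delta.std()                                  # unit-variance field

delta_cut = truncated_gaussian_field()

Fields built this way from the same white-noise realisation differ only in their large-scale power, which is the property exploited when comparing the cut and full models.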
For each particle, we also calculated the global density at the location of the particle (for details see Appendix A.1). To find the global density field, we used smoothing with the B 3 spline kernel of scale ∼ 8 h −1 Mpc, which is rather close to smoothing with an Epanechnikov kernel of the same scale. Smoothing of density fields with different kernels is discussed by Martínez & Saar (2002). These local and global density fields were calculated for all models and all epochs, and were used in the subsequent analysis to select particles belonging to a population with given properties, and to follow the density evolution of the model. Hence, for each particle we stored coordinates, local, and global density values. Particles were sorted, thus their number in the file serves as an ID number.
2 http://arcturus.mit.edu/cosmics
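A schematic version of this per-particle global density (illustrative only; nearest-grid-point assignment and a Gaussian kernel are used here as simple stand-ins for the actual assignment scheme and the B 3 spline kernel) could look as follows:

import numpy as np
from scipy import ndimage

def global_density_per_particle(pos, boxsize=100.0, ngrid=128, scale=8.0):
    # Grid the particles, smooth the density field on ~'scale' h^-1 Mpc,
    # and read the smoothed (global) density back at each particle position.
    cell = boxsize / ngrid
    idx = np.floor(pos / cell).astype(int) % ngrid
    counts = np.zeros((ngrid, ngrid, ngrid))
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    density = counts / counts.mean()                             # mean-density units
    smoothed = ndimage.gaussian_filter(density, sigma=scale / cell, mode="wrap")
    return smoothed[idx[:, 0], idx[:, 1], idx[:, 2]]

pos = np.random.default_rng(1).uniform(0.0, 100.0, size=(100_000, 3))
d_global = global_density_per_particle(pos)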
To see the effects of density waves of different scale and to understand the evolution of the density field, we use the wavelet technique. We use the à trous wavelet transform (for details, see Martínez & Saar (2002) and Appendix A.2). The field is decomposed into several frequency bands as follows. The high-resolution (zero level) density field was calculated with the B 3 spline kernel of width equal to the size of one cell of the field, and every next field was calculated with a kernel twice as large. Wavelets were found by subtracting higher level density fields from the previous level fields. In such a way, each wavelet band contains waves twice the size of the previous band, in the range ± √ 2 centered on the mean (central) wave. The sum of these bands restores the original density field. Using this technique, we calculated the density fields and wavelets up to index 5 (6 for models of the M256 series).
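The decomposition described above can be sketched in a few lines of Python (an editorial illustration on an arbitrary toy field, not the code used for the paper); the one-dimensional B 3 kernel (1, 4, 6, 4, 1)/16 is applied separably along each axis, with the spacing between its taps doubled at every level:

import numpy as np
from scipy import ndimage

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0                  # 1-D B3 spline kernel

def atrous_decomposition(field, nlevels=5):
    # Smooth repeatedly with the B3 kernel, doubling the tap spacing (holes)
    # at each level; the differences between successive smoothed fields are
    # the wavelet planes w1, w2, ...; their sum plus the final smooth field
    # restores the input exactly.
    smooth = field.astype(float)
    planes = []
    for level in range(nlevels):
        step = 2 ** level
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3
        next_smooth = smooth
        for axis in range(field.ndim):
            next_smooth = ndimage.convolve1d(next_smooth, kernel, axis=axis, mode="wrap")
        planes.append(smooth - next_smooth)
        smooth = next_smooth
    return planes, smooth

toy = np.random.default_rng(0).standard_normal((32, 32, 32))
planes, residual = atrous_decomposition(toy, nlevels=4)
assert np.allclose(sum(planes) + residual, toy)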
The high-resolution density fields of the model M256.256 for the epochs z = 0, 1, 5, 10 are shown in Fig. 2. The high-resolution density fields of models L100.100 and L100.016 for redshifts z = 0 and z = 2 are shown in Fig. 3. The scale of the cosmic web is rather different in models of different cutoff scale. The dependence of the scale of the web on the maximal wavelength of density perturbations was investigated in detail by Suhhonenko et al. (2011).
To investigate the spatial structure of both the cosmic web and voids, we found haloes using the adaptive Amiga Halo Finder (AHF) code developed by Knollmann & Knebe (2009), with the number of particles in a halo N p ≥ 20. Haloes and their parameters (masses, virial radii, positions, velocities etc.) were found for all models and simulation epochs.
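For readers without access to AHF, a friends-of-friends grouping gives a rough, simplified stand-in for such a halo catalogue (this sketch is an editorial illustration; AHF itself is an adaptive-mesh halo finder, and the linking length b = 0.2 below is a conventional assumption, not a parameter from the paper):

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import connected_components

def fof_groups(pos, boxsize, b=0.2, min_np=20):
    # Link particles closer than b times the mean interparticle separation
    # (periodic box) and keep only groups with at least min_np members.
    link = b * boxsize / len(pos) ** (1.0 / 3.0)
    tree = cKDTree(pos, boxsize=boxsize)
    graph = tree.sparse_distance_matrix(tree, link, output_type="coo_matrix")
    _, labels = connected_components(graph.tocsr(), directed=False)
    counts = np.bincount(labels)
    return [np.flatnonzero(labels == g) for g in np.flatnonzero(counts >= min_np)]

pos = np.random.default_rng(2).uniform(0.0, 100.0, size=(20_000, 3))
groups = fof_groups(pos, boxsize=100.0)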
Analysis of models
To understand the evolution of the density field, and to see the effects of density waves of different scale, we analyse the density fields and their wavelet decompositions for various simulation epochs and cut off scales. We focus on the evolution of the following properties:
1. the global patterns of the density field, using wavelets of different levels; 2. the fine structure of the density field for perturbations of various scales; 3. density perturbations of various scales; 4. the density distribution in void regions.
The evolution of the global patterns of density fields
To compare the patterns of density fields and their wavelet decompositions for various cosmic epochs, we use the model M256.256.
In Fig. 2, we plot the high-resolution density field and wavelets of the model M256.256 at the four redshifts: z = 0, 1, 5, 10. In the second column of Fig. 2, the wavelets of order five are shown. The characteristic scale of density perturbations for this wavelet is 64 h −1 Mpc. The upper density levels used in plotting for w5 are 1.2, 0.7, 0.2, and 0.1, for redshifts 0, 1, 5, and 10, respectively. The lower limits for these redshifts are −0.6, −0.35, −0.1, and −0.05. In wavelets of the order 4 and 3, a similar choice of colour limits is applied. This colour-coding of wavelets at different redshifts is chosen so that a certain colour corresponds approximately to the density level, corrected by the linear growth factor for that redshift. Blue wavelet colours correspond to under-dense regions of density waves, green colours to slightly over-dense regions, and red colours to highly over-dense regions. The border between the light blue and the dark green colours corresponds to the critical density D loc = 1.6, which separates low-density haloes and haloes collapsed during the Hubble time (Kaiser, 1984; Bardeen et al., 1986). Note that in both models and simulation epochs the majority of filaments in voids have densities below the critical density. Figure 2 shows that the pattern of the cosmic web on wavelet w5 is almost identical at all redshifts, only the amplitude of the density waves increasing approximately in proportion to the linear growth factor. This linear growth is expected for density waves of large scales, which are in the linear stage of growth. The pattern of the web of the wavelet w4 changes little, but the growth of the amplitude of density waves is more rapid. The pattern of the wavelet w3 changes much more during the evolution, and the amplitude of density waves increases more rapidly, but essential features remain unchanged, i.e. the locations of high-density peaks and low-density depressions are almost independent of the epoch. Figure 2 also shows that at all redshifts high-density peaks of wavelets of medium and large scales almost coincide. In other words, density perturbations of medium and large scales have a tendency of phase coupling or synchronisation at peak positions. Einasto et al. (2011) reached the same conclusion using models with a much broader scale interval. Figure 2 shows that the synchronisation of medium and large scales applies also to under-dense regions. The analysis below describes the differences between the synchronisation of over- and under-dense regions in quantitative terms.
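For reference, the linear growth factor used for this scaling can be evaluated numerically; the short sketch below (an added illustration using the paper's Ω m = 0.28 and Ω Λ = 0.72 and the standard growing-mode integral, not taken from the original analysis code) returns D(z) normalised to unity at z = 0:

import numpy as np
from scipy import integrate

def growth_factor(z, om=0.28, ol=0.72):
    # Linear growth factor D(z) for flat LambdaCDM, normalised to D(0) = 1:
    # D(a) ~ E(a) * integral_0^a da' / (a' E(a'))^3, with E = H/H0.
    def e_of_a(a):
        return np.sqrt(om / a**3 + ol)
    def unnormalised(a):
        integral, _ = integrate.quad(lambda x: 1.0 / (x * e_of_a(x)) ** 3, 1e-8, a)
        return e_of_a(a) * integral
    return unnormalised(1.0 / (1.0 + z)) / unnormalised(1.0)

for z in (0, 1, 5, 10, 30):
    print(z, growth_factor(z))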
The evolution of the fine structure in the density field
To follow the evolution of the fine structure in the density field, we compare high-resolution density fields of the models of L100 series (see Fig. 3). We show slices at the coordinate k = 51 of the full model L100.100, and of the strongly cut model L100.016, at epochs z = 0, and z = 2. The k coordinate is chosen so that the slice in the model L100.100 crosses a large under-dense region between a rich supercluster and several rich clusters. To see more clearly the differences between the density fields of models L100.100 and L100.016, we show in Fig. 3 only the zoom-in of the central 50 × 50 h −1 Mpc (256 × 256 pixels) region. To compare the present field with the initial density field, we use the smallest scale wavelet w1 for both models at the redshift z = 10, shown as a zoom-in plot in Fig. 4, which is similar to the plot for the present epoch in Fig. 3.
The power spectrum of density perturbations has the highest power on small scales. Thus, the influence of small-scale perturbations relative to large-scale perturbations is strongest in the early period of evolution of structure. For this reason, density fields and wavelets w1 at early epochs are qualitatively rather similar for the full model L100.100, and for the model cut on small scales, λ cut = 8 h −1 Mpc, called L100.008 (see Fig. 4).
However, there are small but important differences in the patterns of small-scale structures in the models L100.100 and L100.008 at z = 10. The density peaks of the model L100.008 have more or less equal heights throughout the whole simulation box, whereas in the model L100.100 in regions of future voids the peak heights are lower than in the future supercluster regions. This shows that already at the epoch z = 10 large-scale perturbations started to influence the density field on small scales.
The colour coding is identical in all panels of Figs. 3. Codes are chosen so that the density level D loc = 1.6 is clearly visible. Particle clouds of density above this limit can form collapsed haloes during the Hubble time, and below this limit they stay in pre-galactic more diffuse form. The actual collapse parameter is 1.69 (Kaiser, 1984;Bardeen et al., 1986). In both models, there exists a network of filaments. We see that in all models shown in Fig. 3 the majority of filaments have densities below this limit, i.e. that filaments consist of strings of primordial matter that have not yet formed compact haloes.
At the epoch z = 2, there are already strong differences between the density fields of models L100.100 and L100.016. In the model L100.016, the high-density knots are distributed more uniformly, and the whole pattern of filaments has a smaller scale. We discuss the distribution of mean void sizes for both models below. In the full model L100.100, the fraction of particles above the critical density D loc = 1.6 in regions of low global density (voids) is lower (see Fig. 7). Regions containing highdensity knots (marked by green and red) start to form superclusters. In the model L100.016, these regions are more uniformly distributed.
The comparison of distributions for the epochs z = 2 and z = 0 shows that filaments contract with time, as known from many earlier studies cited above. In the model L100.016, the pattern of filaments changes very little between the epochs z = 2 and z = 0. In contrast, for the model L100.100 density evolution can be clearly seen. At z = 2, small-scale filaments fill almost the whole space of voids between rich systems. At z = 0, most of these small filaments have merged leaving more space for very low-density regions.
The differences between the models L100.100 and L100.016 at the present epoch z = 0 are very well seen in Fig. 3. In the model L100.100, between the supercluster at the left corner and the cluster at the right edge there is a large low-density region. This region is crossed near the center by a filament that has several knots in the green and red colour. In the model L100.016, there are no rich superclusters, the whole region being covered by a web of small-scale filaments. In other words, large-scale perturbations present in the model L100.100, have suppressed the growth of the density of filaments in void regions.
There exists a low-density smooth background, seen in the Fig. 3 in deep dark-blue colour. The density of this background, D ≈ 0.1, is lower at the present epoch z = 0, i.e. the density of the smooth background decreases with time (see Fig. 7). Regions of very low density have much larger sizes in the model L100.100 than in the model L100.016.
The evolution of density perturbations of various scales
We now follow the evolution of density perturbations of various scales in the models of the series L100 and L256 in more detail. The evolution of density perturbations has several characteristics: the shapes of density perturbations of various scales and their change with time; synchronisation, amplification, and suppression of density perturbations of various scales; and the formation of regions of very low density.
To see the evolution of the density field and its wavelets, we show in Fig. 5 one-dimensional density distributions (beams) along horizontal lines of the plane in Fig. 3. The distributions are generated along the axis j = 222, k = 51, for the models L100.100 and L100.016 at the redshifts 0, 2, 5. Beams are taken along the i−coordinate; at each i value, all cells in the j− and k−coordinate within ±5 from the centre of the beam are counted, i.e. beams have sizes 11 × 11 cells (about 2.15 × 2.15 h −1 Mpc in the models of the L100 series). The average densities for a given i are found, and are given in the mean density units. We use overdensities D − 1 here, so the mean overdensity level is zero, which is similar to the mean value for wavelets. Wavelet amplitudes are divided by the linear growth factor, thus during linear growth their amplitudes should not change. The effective scale for the wavelet w5 of the model L100 is 12.5 h −1 Mpc, as seen also from the separation between the high-density peaks in Figs. 5.
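Such a beam can be extracted from a gridded density field with a few lines of Python (an added illustration on a toy lognormal field; the field itself, its size, and the beam position are arbitrary):

import numpy as np

def beam_overdensity(density, j0, k0, halfwidth=5):
    # Mean overdensity D - 1 along the i-axis, averaged over a
    # (2*halfwidth+1)^2 window of cells around (j0, k0), with periodic wrapping.
    jj = np.arange(j0 - halfwidth, j0 + halfwidth + 1) % density.shape[1]
    kk = np.arange(k0 - halfwidth, k0 + halfwidth + 1) % density.shape[2]
    window = density[:, jj][:, :, kk]
    return window.mean(axis=(1, 2)) - 1.0

rho = np.random.default_rng(3).lognormal(0.0, 0.5, size=(128, 128, 128))
rho /= rho.mean()
profile = beam_overdensity(rho, j0=100, k0=51)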
At high redshifts, perturbations of various scales have almost sinusoidal shapes, and the wave peaks have approximately equal heights. During the subsequent stages of evolution, Einasto et al. (2011) showed that perturbations of larger scales begin to affect the evolution. These perturbations amplify small-scale perturbations near maxima, and suppress small-scale perturbations near minima. Thereafter, still larger perturbations amplify smaller perturbations near their maxima, and suppress smaller perturbations near their minima, and so on.
For early stages, the density fields and wavelets are given for the models L256.256 and L256.008, for the epoch z = 30, in Fig. 6. This model also allows us to illustrate the influence of larger waves, because the characteristic scale for the wavelet w5 of the model L256 is 32 h −1 Mpc.
We see that in the model L256.008 wavelets of different order have approximately sinusoidal shapes. The wavelet w5 of this model has zero amplitude, and the amplitude of the wavelet w4 is small, because in this model only waves of scale up to 8 h −1 Mpc are present. The wavelet w3 of the effective scale, which is approximately equal to the cut scale of this model, has the highest amplitude. The maxima and minima of this wavelet are partly synchronised with the maxima and minima of the wavelets w2 and w1; synchronisation is better in the regions that happen to coincide with the maxima and minima of the wavelet w4.
In the model L256.256 at the early epoch z = 30, the wavelet w5 has the highest amplitude, the next highest being that of the wavelet w4. These two wavelets are only partly synchronised, i.e. the maxima of the wavelet w5 coincide in position only approximately with the maxima of the wavelet w4, and near the minima of the wavelet w5 there is a secondary maximum of the wavelet w4, i.e. the wavelet w4 behaves as the first overtone of the wavelet w5. The shape of small-scale wavelets is not sinusoidal. This is caused by large-scale waves that have started to change the shapes of small-scale waves.
Further evolution with time of the wavelets can be followed using the model L100 in Fig. 5. We discuss first the evolution of wavelets in the model L100.016 (see the right panels of the Figure). The largest wavelet w5 has for all redshifts approximately the same amplitude and a sinusoidal shape, suggesting that density perturbations of this scale are in the linear growth regime. The shape of the next wavelet w4 is very different from sinusoidal. In some regions of maxima of the wavelet w5, the wavelet w4 has very strong maxima. An example is the region near i ≈ 370, where all wavelets of smaller scale also have strong maxima, and wavelets of all scales up to w4 are very well-synchronised. The overall shape of the density profile is determined by the wavelet w3 that has a characteristic scale 3.1 h −1 Mpc. Most peaks seen in the density profile are due to the maxima of this wavelet. In most of these peaks, wavelets of smaller scale also have maxima, i.e. near the peaks small-scale wavelets are synchronised.
When we compare the density and wavelet distributions of this model at various epochs, we see little difference. This shows that in the model L100.016 the structure has rapidly evolved at early epochs and changes only a little later. The most important development is the decrease in the density in deep void regions, and the increase in the density in massive haloes.
We now consider the evolution of the full model L100.100, shown in the left columns of Fig. 5. We see that the evolution of the largest wavelet w5 is almost linear up to the epoch z ≥ 1 (the density and wavelet distributions for the epochs z = 1 and z = 2 are rather similar, only the amplitudes of wavelets up to w4 are smaller for z = 1). The shapes of the wavelet w5 for different epochs are almost sinusoidal and the heights of the maxima are approximately equal. The next wavelet w4 behaves as a first overtone of the wavelet w5 -near the minima of w5 there are maxima of w4, which have much lower heights than the maxima near the maxima of w5. Evidence of this phenomenon can also be clearly seen in the wavelet analysis of the Sloan Digital Sky Survey (see Fig. 6 of Einasto et al. (2011)). Near the joint maxima of w5 and w4, there are very strong maxima of all wavelets of smaller order; this is very well seen at locations i ≈ 290 and i ≈ 380.
Near the minima of w5 and the maxima of w4, wavelets of smaller order also have maxima, but these maxima get weaker at lower redshifts, as in the region around i ≈ 220. Here, smallscale wavelets are partly synchronised, and small-scale peaks of the density field are related to maxima of the wavelets w2 and w3.
The density and wavelet distributions of the model L100.100 for the present epoch z = 0 are completely different from the distributions at higher redshifts. In the present epoch the dominant feature is a large under-dense region in the interval 120 < i < 380. This large under-dense region is caused by large-scale density perturbations that are not shown as wavelets in the Figure. In this region, these large-scale density waves have their minima and the amplitudes of density waves of smaller scales, including w4 and w5, are suppressed. The amplitudes of wavelets w3 and lower orders are almost zero. Near the density maxima seen at higher redshifts at i ≈ 210 and i ≈ 290, there are very weak density peaks with maxima below the mean density level. These maxima are seen in the density field as weak filaments (see Fig. 3).
The most remarkable feature of the density field in the model L100.100 at the present epoch is the presence of large under-dense regions of very low density D ≈ 0.1, which can be clearly seen in Fig. 3 in deep-blue colour. At earlier epochs, the density in these regions was higher and there were numerous low-density peaks within the regions; by the present epoch most of these peaks are gone. This is caused by density perturbations on larger scales.
When we compare the evolution of density distributions of models L100.100 and L100.016, we see remarkable differences. These differences are solely due to the presence of density perturbations of scales larger than λ cut = 16 h −1 Mpc in the model L100.100. We note that both models were generated with identical "random amplitudes", i.e. the perturbations of scales λ ≤ λ cut = 16 h −1 Mpc are identical in both models.
The main result of the paper by Einasto et al. (2011) was that at all redshifts high-density peaks of wavelets of large and medium scales almost coincide. Figures 2, 3, and 5 show that the same conclusion is valid for positions of density depressions (deepest voids) of wavelets of large and medium scales. The other main conclusion of Einasto et al. (2011) was that positions of peaks of waves of different scale coincide, i.e. density waves of different scale are synchronised. A look at Figs. 2 and 5 shows that the synchronisation of density waves of different scales also concerns density depressions of large and medium scales. However, the synchronisation of the depressions of density waves, which is responsible for the formation of voids, is less pronounced than that of density peaks.
The evolution of density distributions in void regions
To investigate the evolution of the density distribution in void regions, we selected void particles in all models at the present epoch z = 0. We calculated the distributions of the global densities of the full models (i.e. models with no cuts in the power spectra), and selected the particles with the lowest global density values, about 10 % of all particles (13,390,895 particles in the model L100.100). The corresponding value of the threshold global density is 0.565 for the model L100.100 (in units of the mean density).
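The selection and the local-density histogram can be reproduced schematically as follows (an added Python sketch with mock lognormal densities; the 10% fraction matches the text, while the mock arrays and binning are arbitrary illustrations):

import numpy as np

def void_particle_histogram(d_global, d_local, frac=0.10, nbins=50):
    # Take the 'frac' fraction of particles with the lowest global density
    # (void particles) and histogram their local densities in logarithmic bins.
    threshold = np.quantile(d_global, frac)
    void = d_local[d_global < threshold]
    bins = np.logspace(np.log10(void.min()), np.log10(void.max()), nbins + 1)
    counts, edges = np.histogram(void, bins=bins)
    centres = np.sqrt(edges[:-1] * edges[1:])
    return threshold, centres, counts / counts.sum()

rng = np.random.default_rng(7)
d_glob = rng.lognormal(-0.2, 0.6, size=1_000_000)
d_loc = d_glob * rng.lognormal(0.0, 1.0, size=d_glob.size)
thr, x, f = void_particle_histogram(d_glob, d_loc)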
The evolution of the distributions of local densities of void particles for the full model L100.100 is given in the left panel of Fig. 7. The distributions of void particle local densities for the models of the L100 series for the present epoch are shown in the right panel of Fig. 7. Figure 7 shows that the initial distribution of particles at the redshift z = 10 is quite symmetrical in the log-log representation. The density distribution has a peak at D ≈ 0.8. As time goes on, the peak density decreases, and at the present epoch has a value D ≈ 0.2; the lowest density occurs close to D ≈ 0.1. Particles in overdense regions (D ≥ 1) form haloes; with decreasing redshift z, the peak densities increase, i.e. haloes in void regions become more massive and denser.
Density evolution depends strongly on the cutoff wavelength of the model. In the most strongly cut model L100.008 at the present epoch, the fraction of particles in very low-density regions (D ≈ 0.2) is about five times lower than in the full model L100.100. In the model L100.008, most particles in void regions form haloes of mean density D ≈ 100, and with the maximum density D ≈ 1000. With the increase in the cut wavelength λ cut , the fraction of particles in very low-density regions increases. The maximal density of haloes in void regions reaches the highest value, D ≈ 2500, in the model L100.016. In this model, density waves between the scales 8 and 16 h −1 Mpc amplify the density of haloes. If density perturbations of larger scale are included (models L100.032 and L100.100), then in the void regions still larger perturbations start to decrease the maximum masses of haloes. This depression is largest in the full model L100.100, where the maximum local densities in void haloes reach the values D ≈ 600.
Correlation analysis of wavelet-decomposed density fields
We now attempt to quantify some of the qualitative statements given earlier in the text. To this end, we perform the correlation analysis of wavelet-decomposed density fields. Our approach is analogous to the one presented in Einasto et al. (2011) with the exception that here instead of over-densities we focus on underdense regions. In the following, we present our results only for the model M256 since the other ones lead to very similar results. We consider six wavelet levels: w1, w2, . . ., w6 with the effective smoothing scales of 4, 8, . . ., 128 h −1 Mpc, respectively, and use the simulated density fields at five different redshifts z = 30, 10, 5, 1, 0. Quite generally, one can choose two redshifts, z i and z j , along with two wavelet levels, w m and w n , and calculate the correlators
r\left(w^{m}_{z_i}, w^{n}_{z_j}\right) = \frac{\left\langle \left(\delta^{w_m}_{z_i} - \left\langle \delta^{w_m}_{z_i} \right\rangle\right) \left(\delta^{w_n}_{z_j} - \left\langle \delta^{w_n}_{z_j} \right\rangle\right) \right\rangle}{\sqrt{\left\langle \left(\delta^{w_m}_{z_i} - \left\langle \delta^{w_m}_{z_i} \right\rangle\right)^{2} \right\rangle \, \left\langle \left(\delta^{w_n}_{z_j} - \left\langle \delta^{w_n}_{z_j} \right\rangle\right)^{2} \right\rangle}} \, , \qquad (1)
where δ w m z i corresponds to the wavelet-decomposed density field for level w m at redshift z i . The angle brackets represent an ensemble average, which under the ergodicity assumption is replaced by a simple spatial average. We note that we calculate the correlators for zero lag only, i.e. we do not shift one field with respect to the other.

We see that for the largest smoothing scale, i.e. w6, all the correlators stay quite close to r = 1, while later on, as the other redshift decreases below z = 30, the lines start to deviate from r = 1. Thus, on the largest scales the information is approximately preserved, while on the smallest scales the information gets gradually erased.
In what follows, the above correlators are not calculated for the full density fields, but instead masks are applied to separate under-dense void regions. If two wavelet fields δ w m z i and δ w n z j are correlated then the mask is always defined with the field that has larger smoothing scale, e.g., the field δ w m z i if w m ≥ w n . The masking level is taken such that only 10% of the most underdense cells end up inside the mask.
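A direct implementation of this masked, zero-lag correlator is straightforward (an added Python sketch; the toy fields below merely mimic two wavelet planes that share a common component and are not related to the simulation outputs):

import numpy as np

def masked_correlator(field_a, field_b, mask_frac=0.10):
    # Pearson correlation r at zero lag, evaluated only inside the mask_frac
    # most under-dense cells of field_a (the field with the larger smoothing scale).
    mask = field_a <= np.quantile(field_a, mask_frac)
    a = field_a[mask] - field_a[mask].mean()
    b = field_b[mask] - field_b[mask].mean()
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))

rng = np.random.default_rng(11)
common = rng.standard_normal((64, 64, 64))
w_large = common + 0.3 * rng.standard_normal(common.shape)
w_small = common + 1.0 * rng.standard_normal(common.shape)
r = masked_correlator(w_large, w_small)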
In the following, we use two types of correlators:
1. Fixed wavelet scale correlators, i.e. w m = w n , at different redshifts. 2. Correlators at fixed redshifts, i.e. z i = z j , for different wavelet levels.
In the first case, we take one of the density fields always at redshift z = 30, which is high enough for all of the scales of interest to be well in the linear regime. It is easy to understand that under the linear evolution, where the values of δ just get multiplied by the same scale-independent but time-varying factor, the correlation coefficient should always stay at the value r = 1, i.e. all of the initial information is well preserved. In Fig. 8, we show the behaviour of the correlation coefficient for different redshift pairs: (z i , z j ) ∈ {(30, 10); (30, 5); (30, 1); (30, 0)} for all the six wavelet levels. It is easy to see that for the largest smoothing scale, i.e. w6, all the correlators stay rather close to r = 1, while later on, as the other redshift gets smaller than z = 30, the lines start to decline from r = 1, especially on the smallest scales. Thus, on the largest scales the information is approximately conserved, while on the smallest scales the information gets erased.

Fig. 9. Correlators at fixed redshifts z i = z j = 30, 10, 5, 1, 0 from top to bottom (from light to dark blue) for the under-dense regions (correlation coefficient versus wavelet level). For w m = 1, the curves are peaked at w n = 1, while they drop gradually as w n is increased to higher values. Similarly, the curves for w m = 2 are peaked at w n = 2, and get reduced as the distance increases from this point. For the other w m and w n values, the behaviour is very similar. We see that the lower the output redshift, the narrower the coupling kernels, i.e. a nonlinear evolution in under-dense regions leads to additional decoupling of nearby wavelet modes.
The lower the redshift of the other density field the greater the loss of information. In practice, for the cases z = 10 and z = 5 the loss of information is relatively modest if the wavelet level w ≥ 3. For z = 1 and z = 0, the information is approximately saved only for the largest scales. The dashed lines in Fig. 8 show the corresponding results for the over-dense regions, in this case focusing on the 10% of the most over-dense cells. As one can see, the information loss in under-dense regions occurs more rapidly with time than for the over-densities.
In Fig. 9, we plot the correlators at fixed redshifts z_i = z_j = 30, 10, 5, 1, 0 from top to bottom (from light to dark blue). For w_m = 1, the curves are peaked at w_n = 1, while their amplitude decreases gradually as w_n increases to higher values. Similarly, the curves for w_m = 2 are peaked at w_n = 2, and decrease in amplitude as one moves to neighbouring wavelet levels. For the other w_m values, the behaviour is very similar. As long as the evolution proceeds in a linear manner, i.e. the growth depends only on redshift but is independent of the wavelet scale, the coupling kernels plotted in Fig. 9 should stay exactly the same. However, we see that the lower the output redshift, the narrower the coupling kernels, i.e. a nonlinear evolution in under-dense regions leads to the additional decoupling of the nearby wavelet modes.
The corresponding figure for the over-densities was given in Einasto et al. (2011) (see Fig. 7 there). The main difference between Fig. 7 by Einasto et al. (2011) and Fig. 9 is that in the case of over-densities the coupling kernels become broader because of nonlinear evolution, i.e. instead of desynchronisation we have increasing synchronisation of over-densities of different wavelet levels.
It is important to realise that even with only the linear evolution of the Gaussian density field the nearby wavelet levels at fixed redshift get significantly coupled, since the neighbouring levels tend to contain some of the common Fourier-space modes. However, assuming only linear evolution, it is clear that the coupling does not change with redshift.
The structure of voids
Cosmic voids are defined by the objects surrounding them - galaxies and clusters of galaxies of various luminosity (mass). In the case of models, it is customary to use dark matter haloes instead of galaxies or clusters. To investigate the influence of density perturbations of various scale on the void structure, we shall use our models of the L100 series, which have the highest resolution in mass and scale. To find haloes, we applied the Amiga halo finder (AHF) by Knollmann & Knebe (2009). We characterise the web and void structure by the dependence of the halo mass function on the scale of density perturbations for various epochs, and by the number and radii of voids defined by haloes of various mass. Our results are shown in Figs. 10 and 11. Fig. 10 shows the cumulative mass functions of the AHF haloes for all models of the L100 series, for three epochs, z = 0, 2, 5. We see that in the models where the large-scale waves have been cut off, the most massive haloes have much lower masses than in the full models. This effect can also be seen in Fig. 3. The differences between the models with various cutoff scales increase with time: at early epochs, halo masses are lower, and time is needed for the most massive haloes to grow.

Fig. 11. The left panels show the numbers of voids, defined by the AHF haloes for various threshold masses and models, as shown in Fig. 10. The right panels show the mean radii of voids, defined by the AHF haloes for different threshold masses, M_th, and for various cut-off scales. The upper, middle, and lower panels are for the redshifts z = 0, 2, 5, respectively.
To find voids, we used a simple void finder proposed by Einasto et al. (1989). For each vertex of the simulation grid, we first calculated its distance to the nearest AHF halo. The maxima of the void distance matrix correspond to the centres of voids, and their values are the void radii. The distribution of the AHF haloes is noisy, so there are many nearby local maxima in the distance matrix. We therefore define the position of a void centre as the location of the cell that has the largest distance to a halo within a box of ±3 grid elements.
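A rough sketch of such a distance-based void finder is given below, under the assumption that the halo catalogue and grid are available as NumPy arrays. The use of scipy's cKDTree and maximum_filter, the ±3-cell search window, the periodic toy halo catalogue and the helper name find_voids are illustrative choices, not the code used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import maximum_filter

def find_voids(halo_positions, box_size, n_grid, window=3):
    """Return grid-index centres and radii of voids defined by a halo distribution."""
    # distance of every grid vertex to the nearest halo (periodic box)
    edges = (np.arange(n_grid) + 0.5) * box_size / n_grid
    gx, gy, gz = np.meshgrid(edges, edges, edges, indexing="ij")
    grid_points = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    tree = cKDTree(halo_positions, boxsize=box_size)
    dist = tree.query(grid_points)[0].reshape(n_grid, n_grid, n_grid)

    # void centres: cells that are maxima of the distance field within a (2*window+1)^3 box
    local_max = maximum_filter(dist, size=2 * window + 1, mode="wrap")
    centres = np.argwhere(dist == local_max)
    radii = dist[dist == local_max]
    return centres, radii

# toy example: 500 random "haloes" in a 100 h^-1 Mpc box on a 64^3 grid
rng = np.random.default_rng(1)
haloes = rng.uniform(0.0, 100.0, size=(500, 3))
centres, radii = find_voids(haloes, box_size=100.0, n_grid=64)
print(len(radii), radii.max())
```

Repeating the search with halo catalogues cut at different threshold masses yields the void numbers and mean radii discussed next.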
The left panels in Fig. 11 show the number of voids found for various AHF halo mass thresholds, which correspond to systems of galaxies of different mass. The right panels of the figure show the mean radii of voids as a function of the AHF halo mass threshold. The void numbers and radii characterise the hierarchy of voids. We see that as the threshold halo mass used in the void search increases, the number of voids continuously decreases, and the void radii increase. This means that some filaments are fainter than the respective mass threshold limit and do not contribute to the void definition. Both parameters also depend strongly on the largest-scale density perturbations present in the models. Models in which the larger scales have been cut away have more voids, but their radii are smaller. This effect can be clearly seen at all simulation epochs.
In the models L100.100 and L100.032, the dependence of the number of voids and their mean radii is a more-or-less continuous function of the halo threshold mass. In contrast, in the strongly cut models L100.016 and L100.008, at higher mass thresholds, the number of voids decreases very rapidly and void radii also increase more rapidly than in the models L100.100 and L100.032. This effect is due to the very sharp decrease in the number of haloes of high mass (see Fig. 10). These rare haloes define very large voids. The sizes of these voids are not characteristic of the overall cosmic web pattern of the particular model.
The simulation box used in the present void structure study has the size L = 100 h−1 Mpc, thus the very large perturbations responsible for the formation of rich superclusters are absent in the model. For this reason, the largest voids for the highest halo mass thresholds have radii ≃ 10 h−1 Mpc. As shown by Suhhonenko et al. (2011), in models with cube sizes L = 256 and L = 768 h−1 Mpc the maximum void radii are much larger (see Fig. 6 of Suhhonenko et al. (2011)). This comparison shows that large-scale density perturbations are needed to form voids defined by superclusters.
Discussion
According to the current cosmological paradigm, the cosmic web with systems of galaxies of various scale and mass, from clusters to filaments and superclusters, and with voids between them, is formed from tiny density perturbations during the very early stage of the evolution of the Universe. For the formation of the web and of the voids between the various objects of the web, the presence of a continuous spectrum of density perturbations of various scales is essential. The power spectrum of density perturbations has the highest power on small scales. Thus, the influence of small-scale perturbations relative to large-scale perturbations is strongest in the early period of structure evolution. Small-scale systems, i.e. small haloes in simulations and dwarf galaxies in the real world, are the earliest compact objects to form. As shown by Suhhonenko et al. (2011), and confirmed in the present study, small-scale haloes form at early epochs everywhere. The wavelet analysis done by Einasto et al. (2011) shows that the wavelets w1 at redshift z = 30 are almost identical in the models L256.256 and L256.008 (see Fig. 8 there). The present study shows that this is also the case for the wavelets w1 of the models L100.100 and L100.008 at the epoch z = 10 (see Fig. 4).
Further evolution of the web depends on the presence of density perturbations on larger scales. In models where mediumscale density perturbations are absent, no systems of filaments and voids form (Suhhonenko et al., 2011). In models with density perturbation spectra cut on large scales, the cosmic web with filaments and voids has a characteristic scale of the largest scale present in the density perturbation field. The main quantitative characteristics of the web -the masses of haloes, the number and sizes of voids defined by haloes of various mass -depend on the largest scale perturbations present.
In the models with strongly cut power spectra (λ cut ≤ 16 h −1 Mpc), the maximum masses of haloes are lower than in the models with a full power spectrum. Their number is larger, but they define a cellular cosmic web with smaller mean void sizes (see Fig. 11). With the increase of the cut-off wavelength λ cut , the maximum masses of haloes increase, and they define a cellular web with larger cells but fewer voids.
The wavelet analysis described in previous Sections shows a very important property of the evolution of density waves with time: the synchronisation of the phases of density waves on various scales. Einasto et al. (2011) discussed this property of the evolution of over-density features -clusters and superclusters of galaxies in numerical simulations. In the present paper, we have followed the evolution of both over-and under-density regions. The analogy in the evolution of over-and under-density regions is expected, since in the early linear stage of the evolution of structure positive and negative parts of density waves were similar and symmetrical.
The wavelet analysis leads us to the conclusion that the properties of the large-scale cosmic web with filaments and voids depend on two connected properties of the evolution of density perturbations. The first property is the synchronisation of density waves of medium and large scales. Due to the synchronisation of density waves of different scales, positive amplitude regions of density waves add together to form rich systems of galaxies, and negative amplitude regions of density waves add together to decrease the mean overall density in voids. The amplification of density perturbations is another property of density evolution. Due to the addition of negative amplitudes of medium and large scale perturbations, there is no possibility for the growth of the initial small-scale positive density peaks in void regions. For this reason, small-scale protohaloes dissolve there. In the absence of medium and large-scale density perturbations, these peaks would contract to form haloes, which would also fill the void regions, i.e. there would be no void phenomenon as observed.
Simulations with truncated power spectra were performed by Little et al. (1991) and Einasto & Gramann (1993). Little et al. (1991) used three-dimensional simulations of resolution 128^3 with power spectra of the form P(k) ∼ k^−1 for k ≤ k_c and P(k) = 0 for k > k_c, i.e. the spectra were cut at small scales, in contrast to our study here. The cuts were scaled so that k = 1 represents the fundamental mode of the simulation box of size L = 64 h−1 Mpc. The authors used the cuts k_c = 2, 4, 8, 16, 32, 64. The main result of the study was that the structure of the cosmic web depends on density perturbations of larger scale than the cut-off scale, in accordance with our results. Einasto & Gramann (1993) made a two-dimensional simulation of resolution 512^2 with the full power spectrum, and with a power spectrum cut at the scale λ_t = L/4, where L is the size of the simulation box. The fine structure of filaments in both models was rather similar; only the location and strength of the filaments were slightly different. This result is also in agreement with our present findings.
As the analysis shows, the phase synchronisation of both positive and negative sections of density waves is stronger for density waves of larger scales, λ ≥ 32 h−1 Mpc. Scales larger than the sound horizon at recombination, ≈ 146 Mpc according to the most recent cosmological data by Jarosik et al. (2010), were outside the horizon most of the time. This scale, 105 h−1 Mpc for the presently accepted Hubble constant h = 0.72, is surprisingly close to the characteristic scale of the supercluster-void network (Einasto et al., 1997a, 2001). The skeleton of the supercluster-void network was created during the very early post-inflation stage of the evolution of the Universe (Kofman & Shandarin, 1988). This result is also true for large voids between superclusters of galaxies - the seeds for these supervoids were created in the very early Universe.
Conclusions
Our present study of the evolution of density perturbations of various scales has led to the following conclusions:
- The formation of the cosmic web with filaments and voids is due to the synchronisation of density waves of medium and large scales, and the amplification of both over- and under-dense regions.
- Voids are regions in space where medium- and large-scale density waves combine in similar under-density phases.
- Owing to phase synchronisation, the mean density of matter in void regions is below the mean density, thus initial small-scale perturbations cannot grow.
This kernel preserves the interpolation property (mass conservation) for all kernel widths that are integer multiples of the grid step, h = N. The 3-D box spline kernel K_B^(3) we use is given by the direct product of three one-dimensional kernels,

K_B(x; N) ≡ K_B^(3)(x; N) = K_B^(1)(x; N) K_B^(1)(y; N) K_B^(1)(z; N),    (A.3)

where x ≡ {x, y, z}. To calculate the high-resolution density field, we use the kernel of scale equal to the cell size of the particular simulation.
A.2. Wavelets
We use the à trous wavelet transform (for details see Starck et al., 1998; Starck & Murtagh, 2002). The field is decomposed into several frequency bands as follows. The high-resolution (zero level) density field was calculated with the B_3 spline kernel with width equal to the size of one cell of the field, every subsequent field being calculated with a kernel twice as wide. Wavelets were found by subtracting the higher-level density fields from the previous-level fields. In this way, each wavelet band contains waves twice the scale of the previous band, in the range ±√2 around the mean (central) scale. The sum of these bands restores the original density field.
The 'à trous' wavelet transform decomposes an n × n × n data set D as a superposition of the form

D = D_J + Σ_{j=1}^{J} w_j ,    (A.4)
where D_J is a J times smoothed version of the original data D, and w_j represents the structure of D at scale 2^j. The wavelet decomposition output consists of J three-dimensional mother fields D_j and wavelets w_j of size n × n × n. Following the traditional indexing convention, we mark the mother fields and wavelets of the finest scale with the index j = 1. The starting field of the decomposition, D_0, is the density field found with the kernel of scale equal to the cell size of the simulation, L/N_grid. The wavelets can be found in a recursive manner, but we also needed to evaluate the partial density fields (mother fields of different order) for our analysis. Thus, we found the mother fields D_j first, by convolving the field D_{j−1} with the B_3 kernel of twice the scale used for calculating the field D_{j−1}. We then found the wavelets of index j by subtracting the mother density fields,

w_j = D_{j−1} − D_j .    (A.5)
In this construction, a wavelet of index j describes density waves between the scales ∆_{j−1} = l_c × 2^{j−1/2} and ∆_j = l_c × 2^{j+1/2}. The scales are the diameters of the kernels used in calculating the density fields D_{j−1} and D_j.
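The decomposition described in this appendix can be sketched as follows, assuming a separable B_3 spline kernel that is dilated by a factor of two at every level. The function name atrous_decompose, the scipy convolution call and the toy density field are assumptions made for illustration only, not the code actually used for the models.

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 1-D B_3 spline kernel

def atrous_decompose(field, levels):
    """Return (mother fields, wavelets) of the 'a trous' decomposition, Eqs. (A.4)-(A.5)."""
    mothers = [field.astype(float)]
    wavelets = []
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)                          # dilate the kernel: doubles the scale per level
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3
        smoothed = mothers[-1]
        for axis in range(field.ndim):               # separable 3-D convolution, periodic box
            smoothed = convolve1d(smoothed, kernel, axis=axis, mode="wrap")
        wavelets.append(mothers[-1] - smoothed)      # w_j = D_{j-1} - D_j
        mothers.append(smoothed)
    return mothers, wavelets

# the last mother field plus the sum of wavelets restores the input field (Eq. A.4)
rng = np.random.default_rng(2)
density = rng.normal(size=(32, 32, 32))
mothers, wavelets = atrous_decompose(density, levels=4)
print(np.allclose(density, mothers[-1] + np.sum(wavelets, axis=0)))
```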
Fig. 1. The left and right panels show the power spectra for the models of the series L256 at the epochs z = 30 and z = 0, respectively.
Fig. 2. The high-resolution density fields of the model M256.256 are shown in the left column, at the k = 140 coordinate. The second, third, and fourth columns show the wavelet w5, w4, and w3 decompositions at the same k, respectively. The upper row gives data for the present epoch, z = 0, the second row for the redshift z = 1, the third row for the redshift z = 5, and the last row for the redshift z = 10. Densities are expressed on a linear scale. In the wavelet panels, green and red colours show the positive regions of the wavelets, and the blue colour shows negative (under-density) wavelet regions. Colour codes are plotted at the bottom of the figure for wavelet w3 at epoch z = 10.

4. the density distribution in void regions.
Fig. 3. Zoom-ins to the high-resolution density fields of the models L100.100 and L100.016, left and right columns, respectively. The zoom factor is 2; the central 50 × 50 h−1 Mpc (256 × 256 pixels) of all models are shown. The upper panels are for the present epoch z = 0, the lower panels for the epoch z = 2. All panels are at the k = 51 coordinate. Cross sections (beams) at coordinates j = 222 and k = 51 for both models are shown in Fig. 5 for three redshifts to see the evolution of the density field and its wavelets. In the upper left corner of the figure there is a rich supercluster in the model L100.100, absent in the model L100.016. Both models have a rich cluster at the right edge of the figure. This cluster is well seen in Fig. 5 at the i = 390 coordinate. Densities are expressed on a logarithmic scale; identical lower and upper limits for plotting with the SAO DS9 package are used.
Fig. 4. Wavelets w1 of the models L100.100 and L100.008 at redshift z = 10 are shown in the left and right panels, respectively, at the coordinate k = 51. Densities are expressed on a linear scale; only over-density regions are shown. As in Fig. 3, the central 256 × 256 pixels of the full models are shown. The characteristic scale of density perturbations corresponding to this wavelet is 0.4 h−1 Mpc. Note the weakening of peak densities of the model L100.100 in the region of the future large under-dense region, seen in Fig. 3.
Fig. 2 shows that at all redshifts the high-density peaks of wavelets of medium and large scales almost coincide. In other words, density perturbations of medium and large scales have a tendency of phase coupling or synchronisation at peak positions. Einasto et al. (2011) reached the same conclusion using models with a much broader scale interval. Figure 2 shows that the synchronisation of medium and large scales applies also to under-dense regions. The analysis below describes the differences between the synchronisation of over- and under-dense regions in quantitative terms.
Fig. 5. The evolution of the local density and wavelets of the models L100.100 (left panels) and L100.016 (right panels) in beams along the i-coordinate at j = 220, k = 51. The same k-coordinate was used in plotting the density field in Fig. 3. Data are shown for the epochs z = 0, 2, 5. To better see the details, only the region 144 ≤ i ≤ 400 of length 50 h−1 Mpc is shown. The characteristic scale of the wavelet w5 is 12.5 h−1 Mpc. Wavelets are divided by the factor f ∝ (1 + z)^−1.
Fig. 6. The local density and wavelets of the models L256.256 (left panels) and L256.008 (right panels) in beams along the i-coordinate at j = 90, k = 105 at the early epoch, z = 30. To enhance details, only the region 100 ≤ i ≤ 250 of length 75 h−1 Mpc is shown. The characteristic scale of the wavelet w5 is 32 h−1 Mpc.
Fig. 7. Left panel: the distribution of void particle local densities of the model L100.100 as a function of redshift. Right panel: the distributions of void particle local densities at the present epoch for the models of the series L100 with different cutoff wavelengths.
Fig. 8. The behaviour of the correlation coefficient for different redshift pairs (z_i, z_j) ∈ {(30, 10); (30, 5); (30, 1); (30, 0)} for all six wavelet levels. The solid and dashed lines correspond to the under- and over-densities, respectively (10% of the most under-/over-dense cells).
Fig. 10. The cumulative mass functions of the AHF haloes of the models of the L100 series with various cutoff scales. The upper, middle, and lower panels are for the redshifts z = 0, 2, 5, respectively.
Table 1. Parameters of models.

Model      L    λ_cut  N_part   M_part
           (1)  (2)    (3)      (4)
M256.256   256  256    256^3    77.72
M256.064   256   64    256^3    77.72
M256.032   256   32    256^3    77.72
M256.016   256   16    256^3    77.72
M256.008   256    8    256^3    77.72
L256.256   256  256    512^3    9.714
L256.128   256  128    512^3    9.714
L256.064   256   64    512^3    9.714
L256.032   256   32    512^3    9.714
L256.016   256   16    512^3    9.714
L256.008   256    8    512^3    9.714
L100.100   100  100    512^3    0.5583
L100.032   100   32    512^3    0.5583
L100.016   100   16    512^3    0.5583
L100.008   100    8    512^3    0.5583

Notes. Column 1: L - size of the simulation box in h^-1 Mpc; column 2: λ_cut - cut scale in h^-1 Mpc; column 3: number of particles; column 4: particle mass in 10^9 M_⊙.
As in Suhhonenko et al. (2011), we use the notations for our models whereby the first characters M and L designate models with resolutions of N_grid = 256 and N_grid = 512, respectively. The subsequent number gives the size of the simulation box, L, in h^-1 Mpc, and the next indicates the maximum wavelength used in the simulation in h^-1 Mpc. The locations of the cells inside the cubical density grid are marked by cell indices (i, j, k).
Previous analyses of the observational galaxy samples and numerical simulations have shown that in the formation of superclusters and voids, large-scale perturbations play an important role.
1 http://www.aai.ee/∼einasto/reports.php
Acknowledgements. We thank the referee for constructive suggestions. Our special thanks go to Rien van de Weygaert and other participants of the workshop "Cosmic Web Morphology and Topology", held in Warsaw 12-17 July 2011, for a detailed discussion of void structure problems. The present study was supported by the Estonian Science Foundation grants No. 7146 and 8005, and by the Estonian Ministry for Education and Science grant SF0060067s08. It has also been supported by ICRAnet through a professorship for Jaan Einasto, by the University of Valencia (Vicerrectorado de Investigación) through a visiting professorship for Enn Saar, and by the Spanish MEC projects "ALHAMBRA" (AYA2006-14056) and "PAU" (CSD2007-00060), including FEDER contributions. J.E., I.S., and E.T. thank Leibniz-Institut für Astrophysik Potsdam (using DFG-grant Mu 1020/15-1), where part of this study was performed. J.E. also thanks the Aspen Center for Physics and the Johns Hopkins University for hospitality where this project was started and continued. In plotting density fields and wavelets, we used the SAOImage DS9 program. A.A.S. acknowledges the RESCEU hospitality as a visiting professor. He was also partially supported by the Russian Foundation for Basic Research grant No. 11-02-00643 and by the Scientific Programme "Astronomy" of the Russian Academy of Sciences.

Appendix A: Density field and wavelets

A.1. Density field

For each particle, we calculated the global density at the location of the particle. For this purpose, we first found the high-resolution density field, using a B_3 spline kernel.
. M A Aragón-Calvo, B J T Jones, R Van De Weygaert, Van Der, J M Hulst, A&A. 474315Aragón-Calvo, M. A., Jones, B. J. T., van de Weygaert, R., & van der Hulst, J. M. 2007, A&A, 474, 315
. M A Aragon-Calvo, R Van De Weygaert, P A Araya-Melo, E Platen, A S Szalay, MNRAS. 40489Aragon-Calvo, M. A., van de Weygaert, R., Araya-Melo, P. A., Platen, E., & Szalay, A. S. 2010, MNRAS, 404, L89
. M A Aragón-Calvo, R Van De Weygaert, B J T Jones, MNRAS. 4082163Aragón-Calvo, M. A., van de Weygaert, R., & Jones, B. J. T. 2010, MNRAS, 408, 2163
. J M Bardeen, J R Bond, N Kaiser, A S Szalay, ApJ. 30415Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
. S V W Beckwith, M Stiavelli, A M Koekemoer, AJ. 1321729Beckwith, S. V. W., Stiavelli, M., Koekemoer, A. M., et al. 2006, AJ, 132, 1729
. A A Berlind, D H Weinberg, ApJ. 575587Berlind, A. A. & Weinberg, D. H. 2002, ApJ, 575, 587
. E Bertschinger, ApJL. 323103Bertschinger, E. 1987, ApJL, 323, L103
. E Bertschinger, ArXiv:astro-ph/9506070Bertschinger, E. 1995, ArXiv:astro-ph/9506070
. J R Bond, L Kofman, D Pogosyan, Nature. 380603Bond, J. R., Kofman, L., & Pogosyan, D. 1996, Nature, 380, 603
. N A Bond, M A Strauss, R Cen, MNRAS. 4061609Bond, N. A., Strauss, M. A., & Cen, R. 2010a, MNRAS, 406, 1609
. N A Bond, M A Strauss, R Cen, MNRAS. 409156Bond, N. A., Strauss, M. A., & Cen, R. 2010b, MNRAS, 409, 156
. C Conroy, A L Coil, M White, ApJ. 635990Conroy, C., Coil, A. L., White, M., et al. 2005, ApJ, 635, 990
. C Conroy, R H Wechsler, A V Kravtsov, ApJ. 647201Conroy, C., Wechsler, R. H., & Kravtsov, A. V. 2006, ApJ, 647, 201
. A Cooray, R Sheth, Phys. Rep. 3721Cooray, A. & Sheth, R. 2002, Phys. Rep., 372, 1
. D J Croton, M Colless, E Gaztañaga, MNRAS. 352828Croton, D. J., Colless, M., Gaztañaga, E., et al. 2004, MNRAS, 352, 828
. V De Lapparent, M J Geller, J P Huchra, ApJL. 3021de Lapparent, V., Geller, M. J., & Huchra, J. P. 1986, ApJL, 302, L1
. A G Doroshkevich, E V Kotok, A N Poliudov, MNRAS. 192321Doroshkevich, A. G., Kotok, E. V., Poliudov, A. N., et al. 1980, MNRAS, 192, 321
. J Dubinski, L N Da Costa, D S Goldwirth, M Lecar, T Piran, ApJ. 410458Dubinski, J., da Costa, L. N., Goldwirth, D. S., Lecar, M., & Piran, T. 1993, ApJ, 410, 458
. J Einasto, M Einasto, S Gottloeber, Nature. 385139Einasto, J., Einasto, M., Gottloeber, S., et al. 1997a, Nature, 385, 139
. J Einasto, M Einasto, M Gramann, MNRAS. 238155Einasto, J., Einasto, M., & Gramann, M. 1989, MNRAS, 238, 155
. J Einasto, M Gramann, ApJ. 407443Einasto, J. & Gramann, M. 1993, ApJ, 407, 443
. J Einasto, G Hütsi, E Saar, A&A. 53175Einasto, J., Hütsi, G., Saar, E., et al. 2011, A&A, 531, A75
. J Einasto, M Jõeveer, E Saar, MNRAS. 193353Einasto, J., Jõeveer, M., & Saar, E. 1980, MNRAS, 193, 353
. J Einasto, E Saar, M Einasto, W Freudling, M Gramann, ApJ. 429465Einasto, J., Saar, E., Einasto, M., Freudling, W., & Gramann, M. 1994a, ApJ, 429, 465
. M Einasto, J Einasto, E Tago, MNRAS. 269301Einasto, M., Einasto, J., Tago, E., Dalton, G. B., & Andernach, H. 1994b, MNRAS, 269, 301
. M Einasto, J Einasto, E Tago, V Müller, H Andernach, AJ. 1222222Einasto, M., Einasto, J., Tago, E., Müller, V., & Andernach, H. 2001, AJ, 122, 2222
. M Einasto, E Tago, J Jaaniste, J Einasto, H Andernach, A&AS. 123119Einasto, M., Tago, E., Jaaniste, J., Einasto, J., & Andernach, H. 1997b, A&AS, 123, 119
. J A Fillmore, P Goldreich, ApJ. 2811Fillmore, J. A. & Goldreich, P. 1984, ApJ, 281, 1
. J E Forero-Romero, Y Hoffman, S Gottlöber, A Klypin, G Yepes, MNRAS. 3961815Forero-Romero, J. E., Hoffman, Y., Gottlöber, S., Klypin, A., & Yepes, G. 2009, MNRAS, 396, 1815
. S R Furlanetto, T Piran, MNRAS. 366467Furlanetto, S. R. & Piran, T. 2006, MNRAS, 366, 467
. M J Geller, J P Huchra, Science. 246897Geller, M. J. & Huchra, J. P. 1989, Science, 246, 897
. D M Goldberg, T D Jones, F Hoyle, ApJ. 621643Goldberg, D. M., Jones, T. D., Hoyle, F., et al. 2005, ApJ, 621, 643
. D M Goldberg, M S Vogeley, ApJ. 6051Goldberg, D. M. & Vogeley, M. S. 2004, ApJ, 605, 1
. S Gottlöber, E L Łokas, A Klypin, Y Hoffman, MNRAS. 344715Gottlöber, S., Łokas, E. L., Klypin, A., & Hoffman, Y. 2003, MNRAS, 344, 715
. N A Grogin, M J Geller, AJ. 1182561Grogin, N. A. & Geller, M. J. 1999, AJ, 118, 2561
. O Hahn, C Porciani, C M Carollo, A Dekel, MNRAS. 375489Hahn, O., Porciani, C., Carollo, C. M., & Dekel, A. 2007, MNRAS, 375, 489
. M Hoeft, S Gottlöber, Advances in Astronomy. Hoeft, M. & Gottlöber, S. 2010, Advances in Astronomy, 2010
. M Hoeft, G Yepes, S Gottlöber, V Springel, MNRAS. 371401Hoeft, M., Yepes, G., Gottlöber, S., & Springel, V. 2006, MNRAS, 371, 401
. G L Hoffman, E E Salpeter, I Wasserman, ApJ. 268527Hoffman, G. L., Salpeter, E. E., & Wasserman, I. 1983, ApJ, 268, 527
. Y Hoffman, J Shaham, ApJL. 26223Hoffman, Y. & Shaham, J. 1982, ApJL, 262, L23
. F Hoyle, M S Vogeley, ApJ. 607751Hoyle, F. & Vogeley, M. S. 2004, ApJ, 607, 751
. V Icke, MNRAS. 2061Icke, V. 1984, MNRAS, 206, 1P
. T Ishiyama, J Makino, S Portegies Zwart, arXiv:1101.2020Ishiyama, T., Makino, J., Portegies Zwart, S., et al. 2011, arXiv:1101.2020
Large Scale Structures in the Universe. M Jõeveer, J Einasto, IAU Symposium. M. S. Longair & J. Einasto79Jõeveer, M. & Einasto, J. 1978, in IAU Symposium, Vol. 79, Large Scale Structures in the Universe, ed. M. S. Longair & J. Einasto, 241-250
. M Jõeveer, J Einasto, E Tago, MNRAS. 185357Jõeveer, M., Einasto, J., & Tago, E. 1978, MNRAS, 185, 357
. N Jarosik, C L Bennett, J Dunkley, arXiv:1001.4744Jarosik, N., Bennett, C. L., Dunkley, J., et al. 2010, arXiv:1001.4744
. B J T Jones, R Van De Weygaert, M A Aragón-Calvo, MNRAS. 408897Jones, B. J. T., van de Weygaert, R., & Aragón-Calvo, M. A. 2010, MNRAS, 408, 897
. N Kaiser, ApJL. 2849Kaiser, N. 1984, ApJL, 284, L9
. I D Karachentsev, V Karachentseva, W Huchtmeier, ArXiv:astro- ph/0710.0520Karachentsev, I. D., Karachentseva, V., Huchtmeier, W., et al. 2007, ArXiv:astro- ph/0710.0520
. I D Karachentsev, V E Karachentseva, W K Huchtmeier, D I Makarov, AJ. 1272031Karachentsev, I. D., Karachentseva, V. E., Huchtmeier, W. K., & Makarov, D. I. 2004, AJ, 127, 2031
. I D Karachentsev, D I Makarov, M E Sharina, A&A. 398479Karachentsev, I. D., Makarov, D. I., Sharina, M. E., et al. 2003, A&A, 398, 479
. R P Kirshner, A OemlerJr, P L Schechter, S A Shectman, ApJL. 24857Kirshner, R. P., Oemler, Jr., A., Schechter, P. L., & Shectman, S. A. 1981, ApJL, 248, L57
. R P Kirshner, A OemlerJr, P L Schechter, S A Shectman, ApJ. 314493Kirshner, R. P., Oemler, Jr., A., Schechter, P. L., & Shectman, S. A. 1987, ApJ, 314, 493
. A A Klypin, S F Shandarin, MNRAS. 204891Klypin, A. A. & Shandarin, S. F. 1983, MNRAS, 204, 891
. A Knebe, A Green, J Binney, MNRAS. 325845Knebe, A., Green, A., & Binney, J. 2001, MNRAS, 325, 845
. S R Knollmann, A Knebe, ApJS. 182608Knollmann, S. R. & Knebe, A. 2009, ApJS, 182, 608
. L A Kofman, S F Shandarin, Nature. 334129Kofman, L. A. & Shandarin, S. F. 1988, Nature, 334, 129
. A V Kravtsov, A A Berlind, R H Wechsler, ApJ. 60935Kravtsov, A. V., Berlind, A. A., Wechsler, R. H., et al. 2004, ApJ, 609, 35
. K Kreckel, P J E Peebles, J H Van Gorkom, R Van De Weygaert, Van Der, J M Hulst, AJ. 141204Kreckel, K., Peebles, P. J. E., van Gorkom, J. H., van de Weygaert, R., & van der Hulst, J. M. 2011a, AJ, 141, 204
. K Kreckel, E Platen, M A Aragón-Calvo, AJ. 1414Kreckel, K., Platen, E., Aragón-Calvo, M. A., et al. 2011b, AJ, 141, 4
. U Lindner, J Einasto, M Einasto, A&A. 301329Lindner, U., Einasto, J., Einasto, M., et al. 1995, A&A, 301, 329
. B Little, D H Weinberg, C Park, MNRAS. 253295Little, B., Weinberg, D. H., & Park, C. 1991, MNRAS, 253, 295
The large scale structure of the universe. M S Longair, J Einasto, Proceedings of the Symposium. the SymposiumTallinn, Estonian SSR79Longair, M. S. & Einasto, J., eds. 1978, IAU Symposium, Vol. 79, The large scale structure of the universe; Proceedings of the Symposium, Tallinn, Estonian SSR, September 12-16, 1977
. H Martel, I Wasserman, ApJ. 3481Martel, H. & Wasserman, I. 1990, ApJ, 348, 1
V J Martínez, E Saar, Statistics of the Galaxy Distribution. Chapman & Hall/CRCMartínez, V. J. & Saar, E. 2002, Statistics of the Galaxy Distribution (Chapman & Hall/CRC)
. A L Melott, J Einasto, E Saar, Physical Review Letters. 51935Melott, A. L., Einasto, J., Saar, E., et al. 1983, Physical Review Letters, 51, 935
. V Müller, S Arbabi-Bidgoli, J Einasto, D Tucker, MNRAS. 318280Müller, V., Arbabi-Bidgoli, S., Einasto, J., & Tucker, D. 2000, MNRAS, 318, 280
. J H Oort, ARA&A. 21373Oort, J. H. 1983, ARA&A, 21, 373
. N Padmanabhan, M White, P Norberg, C Porciani, MNRAS. 3971862Padmanabhan, N., White, M., Norberg, P., & Porciani, C. 2009, MNRAS, 397, 1862
. S G Patiri, J E Betancort-Rijo, F Prada, A Klypin, S Gottlöber, MNRAS. 369335Patiri, S. G., Betancort-Rijo, J. E., Prada, F., Klypin, A., & Gottlöber, S. 2006, MNRAS, 369, 335
. P J E Peebles, Physical cosmology (Princeton Series in Physics. Princeton University PressPeebles, P. J. E. 1971, Physical cosmology (Princeton Series in Physics, Princeton, N.J.: Princeton University Press, 1971)
. P J E Peebles, ApJ. 257438Peebles, P. J. E. 1982, ApJ, 257, 438
. P J E Peebles, ApJ. 557495Peebles, P. J. E. 2001, ApJ, 557, 495
. E Regos, M J Geller, AJ. 98755Regos, E. & Geller, M. J. 1989, AJ, 98, 755
. R R Rojas, M S Vogeley, F Hoyle, J Brinkmann, ApJ. 61750Rojas, R. R., Vogeley, M. S., Hoyle, F., & Brinkmann, J. 2004, ApJ, 617, 50
. V Sahni, B S Sathyaprakah, S F Shandarin, ApJ. 43120Sahni, V., Sathyaprakah, B. S., & Shandarin, S. F. 1994, ApJ, 431, 20
. U Seljak, MNRAS. 318203Seljak, U. 2000, MNRAS, 318, 203
. U Seljak, A Makarov, P Mcdonald, Phys. Rev. D. 71103515Seljak, U., Makarov, A., McDonald, P., et al. 2005, Phys. Rev. D, 71, 103515
. S F Shandarin, ArXiv: 1011.1924Shandarin, S. F. 2010, ArXiv: 1011.1924
. R K Sheth, R Van De Weygaert, MNRAS. 350517Sheth, R. K. & van de Weygaert, R. 2004, MNRAS, 350, 517
. T Sousbie, S Colombi, C Pichon, MNRAS. 393457Sousbie, T., Colombi, S., & Pichon, C. 2009, MNRAS, 393, 457
. T Sousbie, C Pichon, S Colombi, D Novikov, D Pogosyan, MNRAS. 3831655Sousbie, T., Pichon, C., Colombi, S., Novikov, D., & Pogosyan, D. 2008, MNRAS, 383, 1655
. V Springel, MNRAS. 3641105Springel, V. 2005, MNRAS, 364, 1105
. V Springel, N Yoshida, S D M White, New A. 679Springel, V., Yoshida, N., & White, S. D. M. 2001, New A, 6, 79
. K Stanonik, E Platen, M A Aragón-Calvo, ApJL. 6966Stanonik, K., Platen, E., Aragón-Calvo, M. A., et al. 2009, ApJL, 696, L6
J Starck, F Murtagh, Astronomical image and data analysis. Starck, J.-L. & Murtagh, FStarck, J. & Murtagh, F. 2002, Astronomical image and data analysis, ed. Starck, J.-L. & Murtagh, F.
Image Processing and Data Analysis. J Starck, F D Murtagh, A Bijaoui, J.-L Starck, F D Murtagh, A Bijaoui, I Suhhonenko, J Einasto, L J Liivamägi, A&A. 531149Starck, J., Murtagh, F. D., & Bijaoui, A. 1998, Image Processing and Data Analysis, ed. Starck, J.-L., Murtagh, F. D., & Bijaoui, A. Suhhonenko, I., Einasto, J., Liivamägi, L. J., et al. 2011, A&A, 531, A149
. A Szomoru, J H Van Gorkom, M D Gregg, M A Strauss, AJ. 1112150Szomoru, A., van Gorkom, J. H., Gregg, M. D., & Strauss, M. A. 1996, AJ, 111, 2150
M Tarenghi, W G Tifft, G Chincarini, H J Rood, L A Thompson, Large Scale Structures in the Universe. M. S. Longair & J. Einasto79263IAU SymposiumTarenghi, M., Tifft, W. G., Chincarini, G., Rood, H. J., & Thompson, L. A. 1978, in IAU Symposium, Vol. 79, Large Scale Structures in the Universe, ed. M. S. Longair & J. Einasto, 263
. M Tegmark, D J Eisenstein, M A Strauss, Phys. Rev. D. 74123507Tegmark, M., Eisenstein, D. J., Strauss, M. A., et al. 2006, Phys. Rev. D, 74, 123507
. M Tegmark, M A Strauss, M R Blanton, Phys. Rev. D. 69103501Tegmark, M., Strauss, M. A., Blanton, M. R., et al. 2004, Phys. Rev. D, 69, 103501
Large Scale Structures in the Universe. W G Tifft, S A Gregory, IAU Symposium. M. S. Longair & J. Einasto79267Tifft, W. G. & Gregory, S. A. 1978, in IAU Symposium, Vol. 79, Large Scale Structures in the Universe, ed. M. S. Longair & J. Einasto, 267
. A V Tikhonov, A Klypin, MNRAS. 3951915Tikhonov, A. V. & Klypin, A. 2009, MNRAS, 395, 1915
. J Tinker, A V Kravtsov, A Klypin, ApJ. 688709Tinker, J., Kravtsov, A. V., Klypin, A., et al. 2008a, ApJ, 688, 709
. J L Tinker, C Conroy, P Norberg, ApJ. 68653Tinker, J. L., Conroy, C., Norberg, P., et al. 2008b, ApJ, 686, 53
. J L Tinker, P Norberg, D H Weinberg, M S Warren, ApJ. 659877Tinker, J. L., Norberg, P., Weinberg, D. H., & Warren, M. S. 2007, ApJ, 659, 877
. J L Tinker, D H Weinberg, M S Warren, ApJ. 647737Tinker, J. L., Weinberg, D. H., & Warren, M. S. 2006, ApJ, 647, 737
Large Scale Structures in the Universe. R B Tully, J R Fisher, IAU Symposium. M. S. Longair & J. Einasto79214Tully, R. B. & Fisher, J. R. 1978, in IAU Symposium, Vol. 79, Large Scale Structures in the Universe, ed. M. S. Longair & J. Einasto, 214
. R Van De Weygaert, M A Aragon-Calvo, B J T Jones, E Platen, arXiv:0912.3448van de Weygaert, R., Aragon-Calvo, M. A., Jones, B. J. T., & Platen, E. 2009, arXiv:0912.3448
. R Van De Weygaert, K Kreckel, E Platen, arXiv:1101.4187arXiv:0912.2997MNRAS. 2631189MNRASvan de Weygaert, R., Kreckel, K., Platen, E., et al. 2011, arXiv:1101.4187 van de Weygaert, R. & Platen, E. 2009, arXiv:0912.2997 van de Weygaert, R. & van Kampen, E. 1993, MNRAS, 263, 481 van den Bosch, F. C., Yang, X., Mo, H. J., et al. 2007, MNRAS, 376, 841 von Benda-Beckmann, A. M. & Müller, V. 2008, MNRAS, 384, 1189
. M White, Z Zheng, M J I Brown, A Dey, B T Jannuzi, ApJL. 65569White, M., Zheng, Z., Brown, M. J. I., Dey, A., & Jannuzi, B. T. 2007, ApJL, 655, L69
. I Zehavi, D H Weinberg, Z Zheng, ApJ. 60816Zehavi, I., Weinberg, D. H., Zheng, Z., et al. 2004, ApJ, 608, 16
. I Zehavi, Z Zheng, D H Weinberg, ApJ. 6301Zehavi, I., Zheng, Z., Weinberg, D. H., et al. 2005, ApJ, 630, 1
. Y B Zeldovich, A&A. 584Zeldovich, Y. B. 1970, A&A, 5, 84
Large Scale Structures in the Universe. Y B Zeldovich, IAU Symposium. M. S. Longair & J. Einasto79Zeldovich, Y. B. 1978, in IAU Symposium, Vol. 79, Large Scale Structures in the Universe, ed. M. S. Longair & J. Einasto, 409-420
. Y B Zeldovich, J Einasto, S F Shandarin, Nature. 300407Zeldovich, Y. B., Einasto, J., & Shandarin, S. F. 1982, Nature, 300, 407
. Z Zheng, A A Berlind, D H Weinberg, ApJ. 633791Zheng, Z., Berlind, A. A., Weinberg, D. H., et al. 2005, ApJ, 633, 791
. Z Zheng, A L Coil, I Zehavi, ApJ. 667760Zheng, Z., Coil, A. L., & Zehavi, I. 2007, ApJ, 667, 760
| []
|
[
"Behavior of susceptible-vaccinated-infected-recovered epidemics with diversity in the infection rate of the individuals",
"Behavior of susceptible-vaccinated-infected-recovered epidemics with diversity in the infection rate of the individuals"
]
| [
"Chao-Ran Cai \nInstitute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina\n",
"Zhi-Xi Wu \nInstitute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina\n",
"Jian-Yue Guan \nInstitute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina\n"
]
| [
"Institute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina",
"Institute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina",
"Institute of Computational Physics and Complex Systems\nLanzhou University\n730000LanzhouGansuChina"
]
| []
| We study a susceptible-vaccinated-infected-recovered (SVIR) epidemic-spreading model with diversity of infection rate of the individuals. By means of analytical arguments as well as extensive computer simulations, we demonstrate that the heterogeneity in infection rate can either impede or accelerate the epidemic spreading, which depends on the amount of vaccinated individuals introduced in the population as well as the contact pattern among the individuals. Remarkably, as long as the individuals with different capability of acquiring the disease interact with unequal frequency, there always exist a cross point for the fraction of vaccinated, below which the diversity of infection rate hinders the epidemic spreading and above which expedites it. The overall results are robust to the SVIR dynamics defined on different population models; the possible applications of the results are discussed. | 10.1103/physreve.88.062805 | [
"https://arxiv.org/pdf/1312.0691v2.pdf"
]
| 18,898,052 | 1312.0691 | 4fcab79d5b0cb66dcbb4c9a49b16a2a37da331fc |
Behavior of susceptible-vaccinated-infected-recovered epidemics with diversity in the infection rate of the individuals
Chao-Ran Cai
Institute of Computational Physics and Complex Systems
Lanzhou University
730000LanzhouGansuChina
Zhi-Xi Wu
Institute of Computational Physics and Complex Systems
Lanzhou University
730000LanzhouGansuChina
Jian-Yue Guan
Institute of Computational Physics and Complex Systems
Lanzhou University
730000LanzhouGansuChina
Behavior of susceptible-vaccinated-infected-recovered epidemics with diversity in the infection rate of the individuals
PACS numbers: 05.65.+b, 87.23.Ge, 02.50.Le, 89.75.Fb
We study a susceptible-vaccinated-infected-recovered (SVIR) epidemic-spreading model with diversity of infection rate of the individuals. By means of analytical arguments as well as extensive computer simulations, we demonstrate that the heterogeneity in infection rate can either impede or accelerate the epidemic spreading, which depends on the amount of vaccinated individuals introduced in the population as well as the contact pattern among the individuals. Remarkably, as long as the individuals with different capability of acquiring the disease interact with unequal frequency, there always exist a cross point for the fraction of vaccinated, below which the diversity of infection rate hinders the epidemic spreading and above which expedites it. The overall results are robust to the SVIR dynamics defined on different population models; the possible applications of the results are discussed.
I. INTRODUCTION
Infectious diseases have always been a great enemy of human health. Historically, large epidemic outbreaks have posed a great threat to health and caused great losses for individuals. In some sense, the history of humans is a history of struggle with all kinds of diseases, from the Black Death in medieval Europe to the recently notorious severe acute respiratory syndrome [1][2][3], avian influenza [4,5], swine influenza [6,7], etc.
So far, vaccination is the most effective approach to preventing transmission of vaccine-preventable diseases, such as seasonal influenza and influenza like epidemics, as well as reducing morbidity and mortality [8]. In a voluntary vaccination program, the individuals are subject not only to social factors such as religious belief and human rights, but also to various other conditions such as risk of infection, prevalence of disease, coverage, and cost of vaccination.
Recently, a great deal of effort has been devoted to the investigation of the interplay between vaccine coverage, disease prevalence, and the vaccinating behavior of individuals by integrating game theory into traditional epidemiological models [8][9][10][11][12][13][14][15][16][17][18][19]. For brief reviews of this research topic, we refer the reader to Refs. [20,21] and references therein. Bauch et al. used game theory to explain the relationship between group interest and self-interest in smallpox vaccination policy [8,9] and found that voluntary vaccination was unlikely to reach the group-optimal level. Vardavas and co-workers investigated the effect of voluntary vaccination on the prevalence of influenza based on minority game theory and showed that severe epidemics could not be prevented unless vaccination programs offer incentives [10]. Zhang et al. studied epidemic spreading with a voluntary vaccination strategy on both Erdös-Rényi random graphs and Barabási-Albert scale-free networks [12]. They found that disease outbreaks can be more effectively inhibited on scale-free networks than on random networks, which is attributed to the fact that the hub nodes of scale-free networks are more inclined to get vaccinated after balancing the pros and cons. More recently, Fu and co-workers proposed a game-theoretic model to study the dynamics of vaccination behavior on lattices and complex networks [13,15]. They found that the population structure causes both advantages and problems for public health: it can promote voluntary vaccination to the high levels required for herd immunity when the cost for vaccination is sufficiently small, whereas small increases in the cost beyond a certain threshold will cause vaccination to plummet, and infection to rise, more dramatically than in well-mixed populations. Another research line studying the effect of human behavior on the dynamics of epidemic spreading considers mainly the coevolution of node dynamics and network structure (the so-called adaptive networks [22]), which can affect considerably the spreading of a disease [23][24][25].
In most classical epidemiological models [26,27], the individuals in the population are assumed to be identical, e.g., all susceptible individuals acquire the disease with the same probability whenever in contact with an infected individual, and all infected individuals recover, or go back to being susceptible, at the same rate. Such a consideration is, however, far from the actual situation. Generally, catching a disease can be caused by many complex factors, and there might be great differences among the individuals in the contact rate [28], the infection rate (or disease transmission rate) [29,30], the recovery rate, the cost when the individual is infected, and so forth. One example of such a scenario would be the case where the population is divided into a relatively wealthy class (e.g., representing urban residents), which is less susceptible to the infectious disease being considered due to better living conditions and/or health care, and a relatively impoverished class (e.g., representing rural residents), which is more susceptible to infection. An alternative view is to regard the whole population roughly as composed of two main groups, say, youths and adults, where the former are more resistant to disease than the latter, owing to their stronger physique and immune system.
In the present work, we relax the assumption of the identical nature of the individuals and take into account their heterogeneity in acquiring the disease when in contact with infectious individuals. To do this, we divide the whole population into two groups of the same size, youths (hereafter group A) and adults (group B) for simplicity, and assume that the individuals from group B are more likely to be infected than those from group A. For the sake of comparison, we presume that only the disease transmission rates for the individuals in the two groups are distinct, and that the other parameters, including the recovery rate, the cost of infection, and the cost of vaccination, are identical. By doing so we hope to capture any possible effects on disease prevalence and vaccination coverage caused by the variability of susceptibility. Our results presented below show that the heterogeneity in infection rate has a significant influence on disease spreading and hence cannot be ignored in the forecast of epidemic size and vaccination coverage.
Our paper is organized as follows. In Sec. II we define our model and give detailed information for the numerical simulation method and the parametrizations. In Sec. III we present and analyze the main results of our model. We summarize and discuss the results in Sec. IV.
II. MODEL DEFINITION
It is well known that the contact pattern among individuals dramatically impacts the spatiotemporal dynamics of epidemic spreading in a population [26,27]. In order to examine the robustness of the results of our model, we consider two types of population models, namely, a simple metapopulation model and a spatially structured population model, as illustrated in Fig. 1.
In the metapopulation model, the whole population is divided into two subpopulations of equal size, namely, group A and group B. Within each subpopulation, the individuals are assumed to be homogeneously mixed, that is, every individual has the same opportunity to be in contact with everyone else. Generally speaking, because of the diversity in social conditions or lifestyles, the individuals living in an urban area would be more likely to interact with those living in the same area and less likely to interact with those in the suburbs. Therefore, we consider a distinct contact pattern among the individuals to study its impact. This is done by assuming that any pair of individuals from different (the same) groups have an interaction frequency ε (1 − ε). Here ε is restricted to the interval [0, 0.5]. In the spatially structured population model, we consider two kinds of occupation of the individuals on a square lattice to introduce the diversity of interaction pattern among them. To be more specific, in the first case, the youths and the adults are arranged in a random way such that they can interact with the same frequency, which is similar to the case of ε = 0.5 in the metapopulation model. In the second case, the individuals are regularly prearranged to gather together with the same type of individuals [see Fig. 1(c)].
We implement our susceptible-vaccinated-infected-recovered epidemic-spreading dynamics in the following way. The epidemic strain infects an initial number of individuals I_0 and then spreads in the population according to the classical susceptible-infected-recovered (SIR) epidemiological model, with a per-day transmission rate r for each susceptible-infected contact and a recovery rate g at which each infected individual becomes immune to the disease. Whenever the vaccinated compartment is involved in the epidemiological model, a fraction f_V of individuals is randomly chosen from the whole population in the initial stage to get vaccinated. For simplicity, here we assume that vaccination grants perfect immunity to the infectious disease. The epidemic continues until there are no more newly infected individuals. As such, the unvaccinated susceptible individuals will either be infected or successfully escape infection by the end of each spreading season.
In realistic situations, whether to vaccinate or not is sometimes the business of the individuals themselves. Thus, besides the above case where the fraction of vaccinated individuals is introduced compulsorily, we also consider a voluntary vaccination program for preventing an influenza-like infectious disease, in which individuals need to decide whether or not to receive a vaccine each season based on their perceived risk of infection. Following previous studies [10,11,13,15], we model the vaccination dynamics as a two-stage game. At the first stage, each individual decides whether or not to get vaccinated, which incurs a cost C_V, including the immediate monetary cost of the vaccine and the potential risk of vaccine side effects. Individuals catching the epidemic suffer an infection cost C_I, which may account for disease complications, expenses for treatment, etc. Those individuals who escape infection are free riders and pay nothing. Without loss of generality, we set C_I = 1 and let c = C_V/C_I describe the relative cost of vaccination, whose value is restricted to the interval [0, 1] (otherwise, doing nothing would be better than getting vaccinated). The second stage is the same epidemic spreading process as described before. After each spreading season, the individuals are allowed to revise their vaccination choice based on a pairwise comparison rule (more details will be given below).
We carry out stochastic simulations for the above epidemiological (game-theoretic) processes in both population models, wherein each seasonal epidemiological process is implemented using the well-known Gillespie algorithm [31,32]. In particular, at any time t, we calculate each individual's transition rate λ_i(t). The rate for any susceptible individual to become infected is λ_i(t) = r × k_inf, where k_inf is the number of infected neighbors of the focal individual. The rate for any infected individual to recover is λ_i(t) = g. The recovered individuals do not change state, and the rate for them is therefore λ_i(t) = 0. Summing these up, we obtain the total transition rate ω(t) = Σ_i λ_i(t). With this value in hand, the time of the next transition event is t + ∆t, where ∆t is sampled from an exponential distribution with mean 1/ω(t) (if we generate a uniform random number u ∈ [0, 1), then the time interval is ∆t = −ln(1 − u)/ω(t)). The individual whose state is chosen to change is sampled with a probability proportional to λ_i(t). That is,
a uniform random number v ∈ [0, 1) is generated, and if Σ_{j=1}^{k−1} λ_j(t)/ω(t) < v < Σ_{j=1}^{k} λ_j(t)/ω(t)
, then individual k is chosen to change state. This elementary step is repeated until there are no infected individuals left in the population.
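A minimal sketch of one such Gillespie spreading season, written for a toy lattice population, is given below. The helper names, the von Neumann neighbourhood, the vaccinated fraction and the number of seeds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gillespie_sir_season(neighbors, state, r, g, rng):
    """One epidemic spreading season of the SIR dynamics via the Gillespie algorithm.

    neighbors: list of lists giving each individual's contacts;
    state: integer array with 0 = susceptible, 1 = infected, 2 = removed
    (recovered or vaccinated, both treated as immune in this sketch).
    """
    t = 0.0
    while np.any(state == 1):                          # run until no infected individuals remain
        rates = np.zeros(len(state))
        for i, s in enumerate(state):
            if s == 0:                                 # susceptible: r times number of infected contacts
                rates[i] = r * sum(state[j] == 1 for j in neighbors[i])
            elif s == 1:                               # infected: recovery at rate g
                rates[i] = g
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # waiting time to the next event
        k = rng.choice(len(state), p=rates / total)    # individual chosen proportionally to its rate
        state[k] = 1 if state[k] == 0 else 2           # S -> I or I -> R
    return state, t

# toy usage on a small periodic square lattice (von Neumann neighbourhood, an assumption)
L = 20
neighbors = []
for idx in range(L * L):
    x, y = divmod(idx, L)
    neighbors.append([((x + 1) % L) * L + y, ((x - 1) % L) * L + y,
                      x * L + (y + 1) % L, x * L + (y - 1) % L])
rng = np.random.default_rng(0)
state = np.zeros(L * L, dtype=int)
vacc = rng.choice(L * L, size=int(0.2 * L * L), replace=False)
state[vacc] = 2                                        # initially vaccinated (immune)
seeds = rng.choice(np.flatnonzero(state == 0), size=5, replace=False)
state[seeds] = 1                                       # initial infection seeds
final_state, duration = gillespie_sir_season(neighbors, state, r=0.46, g=1 / 3, rng=rng)
print("final epidemic size:", (final_state == 2).mean())
```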
III. ANALYSIS AND RESULTS
A. Metapopulation without vaccinated compartment
We first examine our model in metapopulations. For convenience, the two groups A and B are denoted by the subscripts a and b, respectively. According to the above illustrated scenario, the time evolution of population states for group A can be expressed as the following deterministic ordinary differential equations:
dS_a/dt = −(r_a/N) S_a [(1 − ε) I_a + ε I_b],    (1)
dI_a/dt = (r_a/N) S_a [(1 − ε) I_a + ε I_b] − g I_a,    (2)
dR_a/dt = g I_a.    (3)
As mentioned before, the parameter ε is the cross contact coefficient, which stands for the contact frequency between individuals from different groups. For the whole system, which includes groups A and B, we have the following equations:
dS/dt = −(r_a/N) S_a [(1 − ε) I_a + ε I_b] − (r_b/N) S_b [(1 − ε) I_b + ε I_a],    (4)
dI/dt = (r_a/N) S_a [(1 − ε) I_a + ε I_b] + (r_b/N) S_b [(1 − ε) I_b + ε I_a] − g I,    (5)
dR/dt = g (I_a + I_b) = g I.    (6)
In the limit ε → 0, the basic reproduction number (whose value identifies the expected number of secondary infections produced by an infected individual during that individual's infectious period within the entire susceptible population) of groups A and B can be approximately written as R_0a = r_a N/g and R_0b = r_b N/g, respectively. By taking the average over each group, we obtain the effective basic reproduction number of the infectious disease, R_0 = (r_a + r_b)N/(2g) = ⟨r⟩N/g [33], where ⟨r⟩ is the average value of the disease transmission rate of the whole population. By varying the values of r_a and r_b, we are able to introduce a difference in the transmission rate of the infectious disease for the individuals. For the sake of comparison, we keep the average value of the transmission rate fixed as ⟨r⟩ = (r_a + r_b)/2. Denoting r_a/r_b by x, the relative disease transmission rate for the two types of individuals, after some simple algebra we have
r_a = 2x⟨r⟩/(1 + x),  and  r_b = 2⟨r⟩/(1 + x).    (7)
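Eqs. (1)-(7) can be integrated numerically in a few lines; the sketch below uses scipy's solve_ivp, and the particular parameter values (N, g, ⟨r⟩, x, ε, f_V, I_0) are chosen only for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_group_sir(t, y, ra, rb, eps, g, N):
    """Right-hand sides of Eqs. (1)-(6) for two groups with cross contact coefficient eps."""
    Sa, Ia, Sb, Ib = y
    force_a = ra / N * ((1 - eps) * Ia + eps * Ib)    # infection pressure on group A
    force_b = rb / N * ((1 - eps) * Ib + eps * Ia)    # infection pressure on group B
    return [-Sa * force_a, Sa * force_a - g * Ia,
            -Sb * force_b, Sb * force_b - g * Ib]

# illustrative parameters: total size N, recovery rate g, mean rate <r>, relative rate x,
# cross contact eps, initially vaccinated fraction fV and seeds I0 per group
N, g, r_mean, x, eps, fV, I0 = 10000, 1 / 3, 1.0, 0.3, 0.1, 0.2, 10
ra, rb = 2 * x * r_mean / (1 + x), 2 * r_mean / (1 + x)        # Eq. (7)
y0 = [(1 - fV) * N / 2 - I0, I0, (1 - fV) * N / 2 - I0, I0]    # vaccinated removed from S
sol = solve_ivp(two_group_sir, (0, 500), y0, args=(ra, rb, eps, g, N), max_step=0.5)
Sa_end, _, Sb_end, _ = sol.y[:, -1]
fR = 1 - fV - (Sa_end + Sb_end) / N                            # final epidemic size
print("final epidemic size:", fR)
```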
When x is close to zero, there exists a great difference between the individuals in group A and those in group B in acquiring the disease (i.e., we consider the case where the youths are very resistant to the infection, while the adults are very vulnerable to the disease). As x goes to unity, the variation of the disease transmission rate between the two groups vanishes. Let us show in Fig. 2 the influence of the cross contact coefficient ε on the epidemic spreading in the population without a vaccinated compartment. In the case of the limit x → 1, we have r_a ≈ r_b, which means that the possibilities of acquiring the disease through susceptible-infected contact for the individuals from the two groups are almost the same. As a consequence, the final epidemic size f_R, i.e., the average fraction of recovered individuals in the whole population, does not change much as the parameter ε varies. Note that with the current parametrization settings the final epidemic size without vaccination is about 89.3% for x = 1.0 [13]. As x diminishes, f_R decreases considerably. This point can be understood by considering the case of ε → 0. In such a case, as demonstrated in Appendix V, due to the concavity of f_R as a function of R_0, the decrease of the epidemic size f_Ra in group A cannot be offset by the increase of f_Rb in group B, and consequently the final epidemic size of the whole system will decrease continuously as x decreases. In particular, when the value of x is less than 0.25, the value of R_0a will be smaller than unity, which means that the epidemic cannot spread throughout group A. Hence f_R of the whole population is mainly contributed by f_Rb and converges approximately to a value ≈ 0.5 for x < 0.25. With the increase of ε, the more frequent contact between the two groups will infect more individuals in group A, while the somewhat less frequent contact among the individuals from group B has only a slight impact on the final f_Rb (see Appendix V). The introduction of heterogeneity of the infection rate can greatly suppress the prevalence of the infectious disease.
B. Metapopulation with vaccinated compartment
We now incorporate the vaccinated compartment into the epidemic spreading in the metapopulation model. We denote by f V a the proportion of the population initially vaccinated in group A. In our work we assume the same fraction of initially vaccinated individuals for the two groups, that is,
f V a = f V b = f V .
For given values of f_V, x, and ε, we obtain the final epidemic size by implementing stochastic simulations as described in Sec. II. The simulation results are summarized in Fig. 3 and are in good agreement with those predicted by numerically solving Eqs. (4)-(6). The overall result is that, with the involvement of the vaccinated compartment, the final epidemic size gradually decreases with the increase of f_V, which is expected since vaccination provides perfect immunity to the infectious disease and a sufficiently large fraction of vaccinated individuals can completely prohibit the propagation of the infectious disease. Though the difference between f_R for x = 1.0 and that for x < 1.0 vanishes in the limit of large f_V, there exists a qualitative difference in the variation. When the individuals from the two groups interact quite frequently (ε = 0.5), the smaller the relative disease transmission rate x is, the smaller the final epidemic size f_R is. Such a dynamic scenario, however, changes when the interaction frequency among the individuals from distinct groups is decreased. Specifically, a crossover behavior of f_R as a function of f_V emerges as the parameter ε drops close to zero. We notice that there arises a critical value of f_V, say, f_Vc (whose value is about 0.45), below which the presence of heterogeneity in the infection rate for the individuals from different groups can hinder the epidemic spreading, while above which the opposite effect takes place (see Appendix VI for more details). It is worth pointing out that for sufficiently small ε, the individuals in the two groups almost exclusively interact with others within the same group, which leads to the clustering of susceptible individuals with a high infection rate of the disease (in group B). Consequently, the disease prevalence is enlarged as compared to the case of a homogeneous interaction pattern of the two groups [e.g., the curve for x = 0.02 in the case of ε = 0.1 is always above that in the case of ε = 0.5 (not shown here)].
C. Spatially structured population with vaccinated compartment
Now we study our model in a spatially structured population, where the individuals are located on a square lattice. For the sake of comparison, we calibrated the epidemic parameters to ensure that the infection risk in an unvaccinated population (without variation of infection) is equal across all population structures, that is, f R for x = 1 in the case of spatially structured population should be the same as f R for x = 1 in the case of a metapopulation. The simulation results are displayed in Fig. 4, from which we note that the final epidemic size f R decreases much more rapidly as compared to that in the metapopulation case when the vaccination level increases. When the two types of individuals are randomly prearranged, f R decreases monotonically as the variation of infection increases for each vaccination level. Noticeably, we find that the crossover behavior of f R as a function of f V still exists when the interaction frequency between the two types of individuals reduces to a very low level. From Fig. 4(b) we can see clearly that there is a crossing point near f V c = 0.1. For f V < f V c , the heterogeneity in infection can efficiently hinder the disease spreading, while it promotes the propagation for f V > f V c , similar to the results in Fig. 3(b) obtained for the metapopulation model.
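The two spatial arrangements compared in Fig. 4 can be generated in a few lines. The sketch below builds an L x L array of types, either assigned uniformly at random (exactly half of each type) or in a regular checkerboard pattern; the checkerboard is our guess at what "regularly arranged as in Fig. 1(c)" means, since the precise pattern is only shown graphically in the figure.

```python
import numpy as np

def lattice_types(L=100, mode="random", seed=0):
    """Return an L x L array with 0 = A-type and 1 = B-type individuals."""
    if mode == "random":
        rng = np.random.default_rng(seed)
        flat = np.array([0, 1] * (L * L // 2))   # exactly half A-type, half B-type
        rng.shuffle(flat)
        types = flat.reshape(L, L)
    elif mode == "regular":
        i, j = np.indices((L, L))
        types = (i + j) % 2                      # checkerboard of A- and B-type sites
    else:
        raise ValueError("mode must be 'random' or 'regular'")
    return types

print(lattice_types(6, "regular"))
```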
D. Spatially structured population with vaccination dynamics
In what follows we investigate how the vaccination dynamics (i.e., we allow the individuals to change their vaccination behavior based on previous experience [13]) affects the epidemic spreading in structured populations. In the initial state, we randomly choose half of the population to get vaccinated. At the end of each epidemic spreading season, we give the individuals a chance to update their strategies for vaccination before the new season starts. We implement a pairwise comparison process for the strategy updating. Specifically, whenever an individual i updates one's vaccination strategy, one just chooses an individual j randomly from one's nearest neighbors to compare their cost (or payoff) and then adopts the vaccination choice of j with a probability dependent on the payoff difference [34][35][36]
$$q_{ij} = \frac{1}{1+\exp[-\beta(P_j - P_i)]}, \qquad (8)$$
where P_i and P_j correspond to the payoffs of the two involved individuals and β denotes the strength of the selection. Unless otherwise specified, we select β = 1.0, implying that better-performing individuals are readily imitated, but it is not impossible to adopt the behavior of an individual performing worse. What we are interested in, in this case, is how many individuals are infected and what the vaccination coverage is in the final stable state. The results shown in Fig. 5 are the average of the last 1000 iterations among the total 5000 in 100 independent simulations.

We plot in Fig. 5 the epidemic size f_R and the vaccination level f_V in the steady state as a function of the relative cost for vaccination c for the two differently arranged populations on the square lattice. From Figs. 5(a)-5(c) we observe that as the value of x goes down, i.e., as the heterogeneity in the infection rate for the two types of individuals becomes more notable, the final epidemic level in the randomly arranged population (the open symbols) changes much more evidently than in the case of the regularly arranged population (the closed symbols). In particular, for x = 0.5, the final f_R in the randomly arranged population is always greater than that in the regularly arranged population as c increases, albeit the vaccination level in the former case is slightly larger than that in the latter case for c ≲ 0.25 [see Fig. 5(d)]. For x = 0.3, in the randomly arranged population, though the growth trend of f_R is more apparent for small c, it attains a smaller level for large enough values of c (when the vaccination level evolves to zero), which is comparable to the case of a regularly arranged population. As x decreases even to 0.1, f_R in the randomly arranged population can only grow to a much lower level as compared to that in the case of a regularly arranged population, despite the fact that the vaccination level is zero for most c values [see Fig. 5(f)]. The reason is that the A-type individuals are difficult to infect even though they did not receive a vaccine when x is very small, and as such they play the role of a natural obstructer preventing large-scale spreading of the disease in the population. In addition, those unvaccinated A-type individuals will attract other individuals to not get vaccinated, giving rise to a very low level of vaccination in the steady state [Figs. 5(e) and 5(f)]. For a regularly arranged population, however, since the B-type individuals are clustered together, they are very prone to the attack of the disease, and consequently the final epidemic can reach a rather large level.
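The pairwise comparison (Fermi) rule of Eq. (8) is straightforward to implement. The sketch below applies one asynchronous imitation sweep on a square lattice with von Neumann neighborhoods; the payoff values in the toy example are placeholders, since the cost structure of vaccination versus infection is specified elsewhere in the paper.

```python
import math
import random

def fermi_prob(P_i, P_j, beta=1.0):
    """Probability that individual i adopts the strategy of neighbor j, Eq. (8)."""
    return 1.0 / (1.0 + math.exp(-beta * (P_j - P_i)))

def update_sweep(strategy, payoff, L, beta=1.0, rng=random.Random(0)):
    """One imitation sweep; strategy and payoff are dicts keyed by lattice site (i, j)."""
    new_strategy = dict(strategy)
    for i in range(L):
        for j in range(L):
            ni, nj = rng.choice([((i+1) % L, j), ((i-1) % L, j),
                                 (i, (j+1) % L), (i, (j-1) % L)])
            if rng.random() < fermi_prob(payoff[(i, j)], payoff[(ni, nj)], beta):
                new_strategy[(i, j)] = strategy[(ni, nj)]
    return new_strategy

# toy example: 2x2 lattice, strategy 1 = vaccinate, 0 = not; payoff = -cost (placeholder values)
L = 2
strategy = {(i, j): (i + j) % 2 for i in range(L) for j in range(L)}
payoff = {(i, j): -0.5 if strategy[(i, j)] else -1.0 for i in range(L) for j in range(L)}
print(update_sweep(strategy, payoff, L))
```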
IV. CONCLUSION AND DISCUSSION
In summary, we have incorporated the heterogeneity in the infection rate of individuals and the vaccination dynamics into the traditional susceptible-infected-recovered compartmental epidemic model to study their potential effects on the disease prevalence and vaccination coverage. For this purpose, we have considered a more practical framework where the whole population is divided into two groups whose members are endowed with different capabilities of catching the disease. To keep things simple, the individuals within the same group are assumed to be identical in their infection rate. The proposed model has been investigated in a simple metapopulation and in spatially structured populations, with and without the involvement of vaccination, by using numerical simulations as well as analytical treatments.
We have shown that whether the introduction of heterogeneity in the infection rate of the individuals exerts positive or negative effects (i.e., hampers or expedites) on the epidemic spreading depends closely on both the extent of the heterogeneity of the disease transmission rate and the interaction frequency among the individuals from different groups. To be more specific, the heterogeneity in infection rate always gives rise to a decrease of the final epidemic size provided the individuals from different groups interact with equal likelihood. Nonetheless, as the individuals become more inclined to interact mainly with others from the same group, the heterogeneity in infection rate can hinder the epidemic spreading only when the fraction of vaccinated individuals is low enough. Very surprisingly, it instead facilitates the epidemic spreading in the regime with a large fraction of vaccinated individuals (but not large enough to eradicate the disease completely).
Our work is expected to provide some valuable instructions for the prediction and intervention of epidemic spreading in the real world. The results summarized in Figs. 2-5 suggest that when evaluating the seriousness of an epidemic, we should take into account both the diversity of the infection rate of the individuals and the interaction patterns among them simultaneously; otherwise we may overestimate or underestimate the spreading trend. Alternatively, without such considerations, we may overshoot or undershoot the desired amount of action when developing, regulating, and making vaccine policy. In addition, when individuals are allowed to change their vaccination decisions according to their experience and observations, we find that as the heterogeneity in the infection rate for the two types of individuals becomes more noticeable, the final epidemic level in a randomly arranged population changes much more evidently than in the case of a regularly arranged population, hence giving us a vital clue as to how to run an efficient vaccination campaign, namely, we should distribute the vaccine in the population as widely as possible so that the spreading path of the disease can be efficiently suppressed.
To summarize, our proposed model captures essential elements in real-world epidemic spreading which have not been fully discussed previously. Therefore, we believe our results will give some insights to policy makers. There are still many issues, such as diversity of the recovery rate, heterogeneous costs for infection and vaccination, and more complex contact-network structures, which are totally overlooked in the present work and deserve to be explored in the future. In addition, the spread of awareness of the epidemic and/or the vaccination sentiment would also impact greatly the vaccination behavior of the individuals and hence the epidemic outbreaks [37][38][39].
We hope our work could stimulate further work in this line of research.
ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (Grants No. 11005051, No. 11005052, and No. 11135001).
V. APPENDIX A
Here we present a theoretical analysis for the simple metapopulation model. For convenience, let us denote $\bar{r}N/g$ by C, which is kept constant. The combination of Eqs. (1)-(3) with Eq. (7) yields
$$\frac{dS_a}{dR_a} = -\frac{2x}{1+x}\,S_a C\left[(1-\epsilon) + \epsilon\,\frac{I_b}{I_a}\right], \qquad (9)$$
$$\frac{dS_a}{dR_b} = -\frac{2x}{1+x}\,S_a C\left[(1-\epsilon)\,\frac{I_a}{I_b} + \epsilon\right], \qquad (10)$$
$$\frac{dS_b}{dR_b} = -\frac{2}{1+x}\,S_b C\left[(1-\epsilon) + \epsilon\,\frac{I_a}{I_b}\right], \qquad (11)$$
$$\frac{dS_b}{dR_a} = -\frac{2}{1+x}\,S_b C\left[(1-\epsilon)\,\frac{I_b}{I_a} + \epsilon\right]. \qquad (12)$$
After eliminating I_a/I_b and I_b/I_a from these equations, we readily obtain
$$\frac{\epsilon}{S_a}\,dS_a - \frac{x(1-\epsilon)}{S_b}\,dS_b = \frac{2Cx(1-2\epsilon)}{1+x}\,dR_b, \qquad (13)$$
$$\frac{1-\epsilon}{S_a}\,dS_a - \frac{\epsilon x}{S_b}\,dS_b = -\frac{2Cx(1-2\epsilon)}{1+x}\,dR_a. \qquad (14)$$
Now we integrate these two equations with respect to time from 0 to ∞. By using the initial condition S a (0) = S b (0) ≈ 1 and R a (0) = R b (0) = 0 and the final state I a (∞) = I b (∞) = 0, we get the following two transcendental equations for the final epidemic size R a (∞) and R b (∞) for each group:
$$\ln\left[1 - R_a(\infty)\right] = -\frac{2Cx}{1+x}\left[(1-\epsilon)R_a(\infty) + \epsilon R_b(\infty)\right], \qquad (15)$$
$$\ln\left[1 - R_b(\infty)\right] = -\frac{2C}{1+x}\left[(1-\epsilon)R_b(\infty) + \epsilon R_a(\infty)\right]. \qquad (16)$$
What we want to figure out is the relationship between the final epidemic size f_R and the cross contact coefficient ε, so we take the derivative of Eqs. (15) and (16) with respect to ε and get
$$\frac{1}{R_a(\infty)-1}\frac{dR_a(\infty)}{d\epsilon} = -\frac{2Cx}{1+x}\left[-R_a(\infty) + (1-\epsilon)\frac{dR_a(\infty)}{d\epsilon} + R_b(\infty) + \epsilon\frac{dR_b(\infty)}{d\epsilon}\right], \qquad (17)$$
$$\frac{1}{R_b(\infty)-1}\frac{dR_b(\infty)}{d\epsilon} = -\frac{2C}{1+x}\left[-R_b(\infty) + (1-\epsilon)\frac{dR_b(\infty)}{d\epsilon} + R_a(\infty) + \epsilon\frac{dR_a(\infty)}{d\epsilon}\right]. \qquad (18)$$
After doing some algebra we obtain
$$\left[\frac{1}{1-R_a(\infty)} - \frac{2Cx}{1+x}\right]\frac{dR_a(\infty)}{d\epsilon} = -\left[\frac{x}{1-R_b(\infty)} - \frac{2Cx}{1+x}\right]\frac{dR_b(\infty)}{d\epsilon}. \qquad (19)$$
We can rewrite this equation as
$$\frac{dR_a(\infty)}{d\epsilon} = -K\,\frac{dR_b(\infty)}{d\epsilon}, \qquad (20)$$
where
$$K = \frac{\dfrac{x}{1-R_b(\infty)} - \dfrac{2Cx}{1+x}}{\dfrac{1}{1-R_a(\infty)} - \dfrac{2Cx}{1+x}}. \qquad (21)$$
In the case of ε = 0, it is easy to verify numerically that K > 1 for all our x values of interest (say, x > 0.01). More intuitively, for the SIR model in a well-mixed population, the final epidemic size is determined by the self-consistent equation $R(\infty) = 1 - e^{-R_0 R(\infty)}$. Figure 6 features the solutions, from which we note that f_R is a concave function of R_0. If we decrease the value of x such that in the limit of ε = 0 the variables R_0a and R_0b always satisfy the relationships R_0a < R_0b and (R_0a + R_0b)/2 = C = R_0, then, due to the concave curvature, the variation of the final epidemic size in group A will be more remarkable than that in group B, as illustrated in Fig. 6. For each fixed value of x, as ε increases, the increasingly frequent contact among the individuals from different groups has a greater effect on R_a(∞) than on R_b(∞) (as long as x is not too small), giving rise to the increase of f_R in the whole population. Since R_a(∞) increases and R_b(∞) decreases with the increment of ε, the value of K decreases monotonically according to Eq. (21), which is reflected correctly in Fig. 2.
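The coupled transcendental relations (15) and (16) can be solved numerically by simple fixed-point iteration, which is how the metapopulation curves discussed above can be reproduced in the well-mixed limit. The symbol eps below stands for the cross contact coefficient; initializing near 1 selects the nontrivial root when it exists, since R_a = R_b = 0 is always a (trivial) solution.

```python
import math

def final_sizes(x, eps, C=2.5, tol=1e-12, max_iter=100000):
    """Solve Eqs. (15)-(16) for R_a(inf), R_b(inf) by fixed-point iteration."""
    Ra, Rb = 0.999, 0.999
    for _ in range(max_iter):
        Ra_new = 1.0 - math.exp(-(2*C*x/(1+x)) * ((1-eps)*Ra + eps*Rb))
        Rb_new = 1.0 - math.exp(-(2*C/(1+x))   * ((1-eps)*Rb + eps*Ra))
        if abs(Ra_new - Ra) + abs(Rb_new - Rb) < tol:
            break
        Ra, Rb = Ra_new, Rb_new
    return Ra, Rb

for eps in [0.0, 0.1, 0.3, 0.5]:
    Ra, Rb = final_sizes(x=0.1, eps=eps)
    print(f"eps = {eps:.1f}:  f_Ra = {Ra:.3f}, f_Rb = {Rb:.3f}, f_R = {(Ra+Rb)/2:.3f}")
```

For x = 0.1 and eps = 0 the iteration gives f_R close to 0.5, in line with the plateau described in Sec. III A.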
VI. APPENDIX B
Here we demonstrate the existence of the crossover behavior for the curves of f_R as a function of f_V = y for different values of x. In a well-mixed population, we know that the final fraction of the recovered population for the SIR model satisfies the equation $R(\infty) = 1 - e^{-R_0 R(\infty)}$. When a proportion y of preemptive vaccination is introduced before the epidemic starts, we readily obtain $R(\infty) = (1-y)\left[1 - e^{-R_0 R(\infty)}\right]$. For our proposed model, we consider two limiting cases. The first case is x = 1, i.e., the individuals in the two groups are identical, and in such a case we have
$$R_b(\infty)\big|_{x=1} = (1-y)\left[1 - e^{-R_{0b}\,R_b(\infty)|_{x=1}}\right], \qquad (22)$$
$$R(\infty)\big|_{x=1} = R_a(\infty)\big|_{x=1} = R_b(\infty)\big|_{x=1}, \qquad (23)$$
where $R_{0b} = r_b N/g = \bar{r} N/g = C$.
The other limiting case is x → 0, which means that the disease transmission rate for the individuals in group A is nearly zero. By approximating $R_a(\infty)|_{x\to 0} = 0$ and combining Eq. (15) with Eq. (16) we have
$$R_b(\infty)\big|_{x\to 0} = (1-y)\left[1 - e^{-R_{0b}(1-\epsilon)\,R_b(\infty)|_{x\to 0}}\right], \qquad (24)$$
$$R(\infty)\big|_{x\to 0} = \frac{R_a(\infty)|_{x\to 0} + R_b(\infty)|_{x\to 0}}{2} = \frac{R_b(\infty)|_{x\to 0}}{2}, \qquad (25)$$
where $R_{0b} = r_b N/g = 2\bar{r} N/g = 2C$. We assume that the curves of f_R for the two cases have a crossing point, so that
$$R_b(\infty)\big|_{x=1} = \frac{R_b(\infty)\big|_{x\to 0}}{2}. \qquad (26)$$
Denoting $R_b(\infty)|_{x=1}$ by z, combining Eqs. (22), (24), and (26), and recalling that C = 2.5, we obtain
$$e^{-10(1-\epsilon)z} = 2e^{-2.5z} - 1. \qquad (27)$$
To validate the assumption, Eq. (27) must have an exact solution, which means that
$$\left.e^{-10(1-\epsilon)z}\right|_{z=0} < \left.\left(2e^{-2.5z}-1\right)\right|_{z=0}. \qquad (28)$$
Solving the inequality yields ε < 0.5. That is to say, the crossover behavior will always exist as long as ε is strictly smaller than one-half. From Eqs. (27) and (22) we have
$$\frac{d\epsilon}{dz} = \frac{1}{10}\left[-\frac{2.5\,e^{-2.5z}}{z\left(2e^{-2.5z}-1\right)} - \frac{\ln\!\left(2e^{-2.5z}-1\right)}{z^{2}}\right] < 0 \qquad (29)$$
and
$$\frac{dy}{dz} = \frac{e^{-2.5z} + 2.5z\,e^{-2.5z} - 1}{\left(e^{-2.5z}-1\right)^{2}} < 0. \qquad (30)$$
By dividing Eq. (30) by Eq. (29) we get dε/dy > 0, which indicates that the crossing point will move to the right (i.e., the curves intersect at larger values of y = f_V) with an increase of the cross contact coefficient ε. In Table I we summarize the crossing-point values (f_Vc, f_R) of the curves for x = 1.0 and 0.02 yielded by the stochastic simulations as well as those predicted by Eqs. (27), (22), and (24). We notice that the results obtained from the different methods match quite well with each other. The minor differences may be due to the finite-system-size effect. Specifically, with increasing ε the curves for x = 1.0 and 0.02 intersect at points with larger f_Vc.
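The crossing point discussed above can be located numerically from Eq. (27): bisect for the nontrivial root z, then recover the corresponding vaccination level from y = 1 − z/(1 − e^{−2.5z}), which follows from Eq. (22) with R_0b = C = 2.5. This is a sketch using only the limiting forms, so the values are approximate analogues of the entries of Table I.

```python
import math

def crossing_point(eps, z_lo=1e-3, z_hi=5.0, iters=200):
    """Solve Eq. (27) for the nontrivial root z by bisection (requires eps < 0.5)."""
    f = lambda z: math.exp(-10.0*(1.0-eps)*z) - (2.0*math.exp(-2.5*z) - 1.0)
    lo, hi = z_lo, z_hi                       # f(lo) < 0 < f(hi) for eps < 0.5
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    fVc = 1.0 - z / (1.0 - math.exp(-2.5*z))  # from Eq. (22) at the crossing
    return fVc, z                             # (crossing vaccination level, f_R there)

for eps in [0.1, 0.2, 0.3]:
    print(eps, crossing_point(eps))
```

For eps = 0.1 this gives f_Vc near 0.45, consistent with the value quoted in Sec. III B.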
FIG. 1. (Color online) Schematic illustration of the population models studied in the main text. (a) Simple metapopulation model composed of two subpopulations, each consisting of the same type of individuals. (b) Individuals from different groups are randomly arranged on a square lattice. (c) Individuals from different groups are regularly arranged on the lattice. The A-type and B-type individuals are indicated by blue (dark gray) and green (light gray) squares, respectively.
FIG. 2. (Color online) Epidemic spreading in the metapopulation model without the vaccinated compartment. The final epidemic size f_R is plotted as a function of the cross contact coefficient ε for several different values of the relative disease transmission rate x. The lines are the analytical predictions from Eqs. (4)-(6). The symbols are simulation results obtained by carrying out the Gillespie algorithm. The parameters are the total population size N = N_A + N_B = 10 000, average value of the disease transmission rate r̄ = 2.5/(3N) day⁻¹ person⁻¹, recovery rate g = 1/3 day⁻¹, and number of initial infection seeds I_a = I_b = 10. Simulation results are averaged over 100 independent runs.
FIG. 3. (Color online) Epidemic spreading in the metapopulation model with the vaccinated compartment. The final epidemic size f_R is plotted as a function of the fraction of vaccinated individuals f_V for several different values of the relative disease transmission rate x. The lines are analytical predictions from the deterministic equations and the symbols are obtained by simulations. The cross contact coefficient is (a) ε = 0.5 and (b) ε = 0.1. Other parameters are the same as in Fig. 2. Simulation results are averaged over 100 independent runs.
FIG. 4. (Color online) Epidemic spreading in spatially structured populations with the vaccinated compartment. The final epidemic size f_R is plotted as a function of the vaccination level f_V for several different values of the relative disease transmission rate x. The different types of individuals are (a) randomly arranged as in Fig. 1(b) and (b) regularly arranged as in Fig. 1(c). The parameters are the total population size N = 100×100, average value of the disease transmission rate r̄ = 0.46 day⁻¹ person⁻¹, recovery rate g = 1/3 day⁻¹, and number of initial infection seeds I_a = I_b = 10. Simulation results are averaged over 100 independent runs.
FIG. 5. (Color online) Epidemic spreading and vaccination dynamics in spatially structured populations. (a)-(c) The final epidemic size f_R and (d)-(f) the final vaccination coverage f_V are plotted as a function of the cost for vaccination c for three typical values of the relative disease transmission rate x. Open and closed symbols correspond to the results obtained for randomly and regularly arranged populations, respectively. Other parameters are the same as in Fig. 4. Simulation results are averaged over 100 independent runs. The lines are a guide to the eye.
FIG. 6. (Color online) Solutions of the equation $R(\infty) = 1 - e^{-R_0 R(\infty)}$; the final epidemic size f_R = R(∞) is shown as a function of the basic reproduction ratio R_0.
TABLE I. Crossing-point values (f_Vc, f_R) of the curves for x = 1.0 and x = 0.02. Superscript a: results obtained from stochastic simulations; superscript b: results predicted by the analytical treatment.
[1] J.-F. H. et al., Science 303, 1666 (2004).
[2] R. A. M. Fouchier et al., Nature 423, 240 (2003).
[3] M. Small and C. Tse, Physica A 351, 499 (2005).
[4] H. B. John et al., N. Engl. J. Med. 353, 1374 (2005).
[5] M. Small, D. M. Walker, and C. K. Tse, Phys. Rev. Lett. 99, 188702 (2007).
[6] C. Fraser et al., Science 324, 1557 (2009).
[7] B. Coburn, B. Wagner, and S. Blower, BMC Med. 7, 30 (2009).
[8] C. T. Bauch, A. P. Galvani, and D. J. D. Earn, Proc. Natl. Acad. Sci. USA 100, 10564 (2003).
[9] C. T. Bauch, Proc. R. Soc. B 272, 1669 (2005).
[10] R. Vardavas, R. Breban, and S. Blower, PLoS Comput. Biol. 3, e85 (2007).
[11] R. Breban, R. Vardavas, and S. Blower, Phys. Rev. E 76, 031127 (2007).
[12] H. Zhang, J. Zhang, C. Zhou, M. Small, and B. Wang, New J. Phys. 12, 023015 (2010).
[13] F. Fu, D. I. Rosenbloom, L. Wang, and M. A. Nowak, Proc. R. Soc. B 278, 42 (2011).
[14] N. Perra, D. Balcan, B. Gonçalves, and A. Vespignani, PLoS ONE 6, e23084 (2011).
[15] H. Zhang, F. Fu, W. Zhang, and B. Wang, Physica A 391, 4807 (2012).
[16] X.-T. Liu, Z.-X. Wu, and L. Zhang, Phys. Rev. E 86, 051132 (2012).
[17] X.-L. Peng, X.-J. Xu, X. Fu, and T. Zhou, Phys. Rev. E 87, 022813 (2013).
[18] H.-F. Zhang, Z.-X. Wu, X.-K. Xu, M. Small, L. Wang, and B.-H. Wang, Phys. Rev. E 88, 012813 (2013).
[19] A. Cardillo, C. Reyes-Suárez, F. Naranjo, and J. Gómez-Gardeñes, Phys. Rev. E 88, 032803 (2013).
[20] S. Funk, M. Salathé, and V. A. A. Jansen, J. R. Soc. Interface 7, 1247 (2010).
[21] C. T. Bauch and A. P. Galvani, Science 342, 47 (2013).
[22] T. Gross, C. J. D. D'Lima, and B. Blasius, Phys. Rev. Lett. 96, 208701 (2006).
[23] L. B. Shaw and I. B. Schwartz, Phys. Rev. E 81, 046120 (2010).
[24] N. Crokidakis and S. M. D. Queirós, J. Stat. Mech. 2012, P06003 (2012).
[25] B. Wang, L. Cao, H. Suzuki, and K. Aihara, J. Phys. A 44, 035101 (2011).
[26] R. M. Anderson and R. M. May, Infectious Diseases of Humans: Dynamics and Control (Oxford University Press, Oxford, 1991).
[27] M. J. Keeling and P. Rohani, Modeling Infectious Diseases in Humans and Animals (Princeton University Press, Princeton, 2008).
[28] E. Volz, Eur. Phys. J. B 63, 381 (2008).
[29] N. Crokidakis and M. A. de Menezes, J. Stat. Mech. 2012, P05012 (2012).
[30] C. Buono, F. Vazquez, P. A. Macri, and L. A. Braunstein, Phys. Rev. E 88, 022813 (2013).
[31] D. T. Gillespie, J. Comput. Phys. 22, 403 (1976).
[32] D. T. Gillespie, J. Phys. Chem. 81, 2340 (1977).
[33] Note that only in the limit case of ε → 0 can R_0 be approximately written as (r_a + r_b)N/2g. For any ε ≠ 0 we are unable to write out the explicit form of R_0, but just keep the quantity r̄ = (r_a + r_b)/2 constant.
[34] G. Szabó and C. Tőke, Phys. Rev. E 58, 69 (1998).
[35] A. Traulsen, J. M. Pacheco, and M. A. Nowak, J. Theor. Biol. 246, 522 (2007).
[36] A. Traulsen, D. Semmann, R. D. Sommerfeld, H.-J. Krambeck, and M. Milinski, Proc. Natl. Acad. Sci. USA 107, 2962 (2010).
[37] S. Funk, E. Gilad, C. Watkins, and V. A. A. Jansen, Proc. Natl. Acad. Sci. USA 106, 6872 (2009).
[38] Z. Ruan, M. Tang, and Z. Liu, Phys. Rev. E 86, 036117 (2012).
[39] E. Campbell and M. Salathe, Sci. Rep. 3, 1905 (2013).
| []
|
[
"REDUCTION OF THE WAVEPACKET: HOW LONG DOES IT TAKE? *",
"REDUCTION OF THE WAVEPACKET: HOW LONG DOES IT TAKE? *"
]
| [
"W H Zurek \nTheory Division\nMS B213\nInstitute for Theoretical Physics\nLos Alamos National Laboratory Los Alamos\nUniversity of California Santa\n87545, 93106BarbaraNew Mexico, California\n"
]
| [
"Theory Division\nMS B213\nInstitute for Theoretical Physics\nLos Alamos National Laboratory Los Alamos\nUniversity of California Santa\n87545, 93106BarbaraNew Mexico, California"
]
| []
| We show that the "reduction of the wavepacket" caused by the interaction with the environment occurs on a timescale which is typically many orders of magnitude shorter than the relaxation timescale τ . In particular, we show that in a system interacting with a "canonical" heat bath of harmonic oscillators decorrelation timescale of two pieces of the wave-packet separated by N thermal de Broglie wavelengths is approximately τ /N 2 . Therefore, in the classical limith → 0 dynamical reversibility (τ → ∞) is compatible with "instantaneous" coherence loss. | 10.1007/978-1-4613-2181-1_10 | [
"https://arxiv.org/pdf/quant-ph/0302044v1.pdf"
]
| 3,165,155 | quant-ph/0302044 | b851e0c48256720233ded61d4667fd3c84903d0e |
REDUCTION OF THE WAVEPACKET: HOW LONG DOES IT TAKE? *
5 Feb 2003
W H Zurek
Theory Division, MS B213, Los Alamos National Laboratory, Los Alamos, New Mexico 87545
Institute for Theoretical Physics, University of California, Santa Barbara, California 93106
REDUCTION OF THE WAVEPACKET: HOW LONG DOES IT TAKE? *
5 Feb 2003
We show that the "reduction of the wavepacket" caused by the interaction with the environment occurs on a timescale which is typically many orders of magnitude shorter than the relaxation timescale τ. In particular, we show that in a system interacting with a "canonical" heat bath of harmonic oscillators the decorrelation timescale of two pieces of the wavepacket separated by N thermal de Broglie wavelengths is approximately τ/N². Therefore, in the classical limit ħ → 0 dynamical reversibility (τ → ∞) is compatible with "instantaneous" coherence loss.
INTRODUCTION
It is sometimes argued that observables of macroscopic objects which obey, to a good approximation, reversible classical dynamics -i.e. their relaxation timescale t is, for all practical purposes, infinite -could not have lost coherence and become "classical" due to the interaction with the environment through environment-induced superselection. 1−5 For, the reasoning goes, relaxation rate is the measure of the strength of the coupling with the environment. In particular, when τ → ∞ one can neglect dissipation of energy. Consequently, one should be equally justified in neglecting any influence of the environment. We show that this argument is fallacious in an example of a free particle interacting with the environment of quantum oscillators in the high-temperature weak coupling limit. In particular, we show that the coherence between two pieces of the wave-packet ∆x apart is lost on a decorrelation timescale θ which is typically
$$\theta = \tau\,\frac{\hbar^{2}}{4mkT\,(\Delta x)^{2}}. \qquad (1)$$
Here, m is the mass of the particle, k is Boltzmann's constant, and T is temperature. For "canonical" classical systems (m ∼ 1 g, T ∼ 300 K) and standard "macroscopic" separations ∆x ∼ 1 cm, θ/τ ∼ 10⁻⁴⁰. Moreover, in the classical limit ħ → 0, θ/τ → 0. This enormous disparity between the two timescales can be regarded as the explanation of the apparent "instantaneous" collapse of the state vector of macroscopic objects, including distinguishable (i.e., separated by many de Broglie wavelengths* λ_dB = ħ/√(4mkT)) outcomes of measurements performed by a classical apparatus on a quantum system.
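The enormous separation of timescales quoted above is easy to check numerically. The sketch below evaluates the ratio θ/τ = (λ_dB/∆x)² with λ_dB = ħ/√(4mkT) for the "canonical" macroscopic values used in the text (m ∼ 1 g, T ∼ 300 K, ∆x ∼ 1 cm); the result is of order 10⁻⁴², within a couple of orders of magnitude of the 10⁻⁴⁰ quoted, with the residual factor depending on which definition of the thermal de Broglie wavelength one adopts.

```python
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K

def decoherence_to_relaxation_ratio(m, T, dx):
    """theta / tau = (lambda_dB / dx)**2 with lambda_dB = hbar / sqrt(4 m k T)."""
    lam_dB = hbar / math.sqrt(4.0 * m * kB * T)
    return (lam_dB / dx) ** 2

ratio = decoherence_to_relaxation_ratio(m=1e-3, T=300.0, dx=1e-2)
print(f"theta/tau ~ {ratio:.1e}")
```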
DECORRELATION OF A "FREE PARTICLE"
Consider an otherwise free particle of mass m interacting with the environment of many harmonic oscillators via the Hamiltonian:
$$H_{INT} = x\sum_i c_i q_i. \qquad (2)$$
Above, x is the coordinate of the free particle while q i are the coordinates of harmonic oscillators. This interaction Hamiltonian was used extensively in many earlier discussions of relaxation, 7,8 and, more recently it is being used in calculations of dephasing in a harmonic oscillator. 9
*The more popular definition of the thermal de Broglie wavelength is λ_T² = h²/(2πmkT). It differs by a factor π/2 (λ_dB² = (2/π)λ_T²) from the de Broglie wavelength λ_dB we shall use here.
We note that H_INT, Eq. (2), commutes with the position observable of the free particle:
$$[H_{INT}, x] = 0. \qquad (3)$$
Therefore, position can be regarded as pointer observable, 1,3,5 measured continuously by the environment of harmonic oscillators. In the absence of the self-Hamiltonian:
$$H_0 = -\frac{\hbar^{2}}{2m}\left(\frac{\partial}{\partial x}\right)^{2} \qquad (4)$$
x would be a constant of motion. One would then expect the combined system-environment state vector to evolve from an initial, uncorrelated state |φ_0⟩ = |ψ⟩|ε⟩ into the time-dependent, correlated state
$$|\phi_t\rangle \propto \int dx\,\psi(x)\,|x\rangle\,|\varepsilon_x\rangle.$$
Tracing out the environment after it has performed an idealized, perfect "measurement", i.e., after the states of the environment correlated with different positions become orthogonal, ⟨ε_x|ε_y⟩ ∼ δ(x − y), yields, for the system, the density matrix
$$\rho \propto \int dx\,|\psi(x)|^{2}\,|x\rangle\langle x|. \qquad (5)$$
This density matrix is diagonal in the pointer basis {|x⟩}.
In the more realistic case of finite H 0 the density matrix ρ will not achieve perfect diagonalization, Eq. (5). Rather, it will have a finite correlation length ∼ λ dB . Moreover, the distribution will become uniform, x|ρ|x = const., on a relaxation timescale. The estimate of the timescales of these two processes can be obtained from the effective master equation for the free particle. We shall use it in the form given by Caldeira and Leggett 7 . Its three consecutive terms correspond to the von Neumann's equation for the density matrix of a free particle, to the dissipation with viscosity η = 2mγ, and to the fluctuating force responsible for Brownian motion:
$$\dot{\rho} = \frac{i\hbar}{2m}\left(\frac{\partial^{2}}{\partial x^{2}} - \frac{\partial^{2}}{\partial y^{2}}\right)\rho - \gamma\,(x-y)\left(\frac{\partial}{\partial x} - \frac{\partial}{\partial y}\right)\rho - \frac{2m\gamma kT}{\hbar^{2}}\,(x-y)^{2}\,\rho. \qquad (6)$$
To compare relaxation and decorrelation timescales we consider an initial wavepacket of half-width δ. As we shall argue in the next section, this half-width will typically be of the order of the de Broglie wavelength. We now suppose that the initial wavepacket has been "split," coherently, into two pieces, |α⟩ and |β⟩, so that the free particle is described by the wave function
$$|\psi\rangle = \left(|\alpha\rangle + |\beta\rangle\right)/\sqrt{2}. \qquad (7)$$
Here we assume for simplicity
$$\langle x|\alpha\rangle = (2\pi\delta^{2})^{-1/4}\exp\!\left[-(x-\Delta x/2)^{2}/4\delta^{2}\right], \qquad (8a)$$
$$\langle x|\beta\rangle = (2\pi\delta^{2})^{-1/4}\exp\!\left[-(x+\Delta x/2)^{2}/4\delta^{2}\right]. \qquad (8b)$$
The resulting initial density matrix
$$\rho = |\psi\rangle\langle\psi| \qquad (9)$$
has, in the position representation, four extrema. Two of them occur on the diagonal: (1) x = y = ∆x/2; (2) x = y = −∆x/2. They are the maxima of |⟨x|α⟩|² and |⟨x|β⟩|². In addition, there are off-diagonal maxima of ⟨x|α⟩⟨β|y⟩ and of its Hermitian conjugate, which lie at (3) x = −y = ∆x/2 and (4) x = −y = −∆x/2. The size of these off-diagonal maxima provides a measure of the coherence between |α⟩ and |β⟩. The rate of change of the diagonal terms due to the interaction with the environment can be estimated by calculating, from Eq. (6),
$$\tau^{-1} = \langle\alpha_t|\dot\rho|\alpha_t\rangle \cong -\frac{\gamma}{2}\,\langle\alpha_t|(x-y)^{2}|\alpha_t\rangle\left(\frac{1}{\delta^{2}} + \frac{1}{\lambda_{dB}^{2}}\right). \qquad (10a)$$
Here |α_t⟩ = exp(−iH_0 t/ħ)|α⟩ was used to separate out the evolution due to the environment from the evolution induced by the self-Hamiltonian H_0. Similarly, the rate of change of the off-diagonal term is
$$\theta^{-1} = \langle\alpha_t|\dot\rho|\beta_t\rangle \cong -\frac{\gamma}{2}\,\langle\alpha_t|(x-y)^{2}|\beta_t\rangle\left(\frac{1}{\delta^{2}} + \frac{1}{\lambda_{dB}^{2}}\right). \qquad (10b)$$
The key and only difference between the two rates is then the size of the factor (x − y) 2 . For the diagonal terms it is given by
$$\langle\alpha_t|(x-y)^{2}|\alpha_t\rangle = \delta^{2} \sim \lambda_{dB}^{2}. \qquad (11a)$$
For the off-diagonal elements, it is, on the other hand
$$\langle\alpha_t|(x-y)^{2}|\beta_t\rangle = (\Delta x)^{2}. \qquad (11b)$$
Therefore, the ratio of the two rates is
$$\tau/\theta = (\Delta x/\delta)^{2} \sim (\Delta x/\lambda_{dB})^{2}, \qquad (12)$$
in accord with Eq. (1). For "macroscopic" values of ∆x, m, and T , this ratio is enormous and enforces environment-induced superselection. It is worth pointing out that qualitative conclusions of our discussion are in accord with more elaborate path integral treatment of the harmonic oscillator, given recently by Caldeira and Leggett. 9
DISCUSSION: A CLASSICAL LIMIT?
In the previous section we have shown that when δ ∼ λ dB , decorrelation is much more rapid than relaxation. The purpose of this section is to justify why, in the practical context, the assumption δ ∼ λ dB is natural. Moreover, we shall briefly point out consequences of the disparity between the two timescales for the interpretation of quantum mechanics.
Let us first consider a classic example of measurement, patterned after the one discussed by von Neumann. 10 We couple the measured system, initially in a state |φ , with the free particle measuring apparatus, so that their total Hamiltonian is
$$H = H_{SYSTEM} + H_0 - i\hbar\,\Delta x\,\delta(t-t_0)\,P\,\frac{\partial}{\partial x}. \qquad (13)$$
Here P is the measured operator which we assume has 0 and 1 as the eigenvalues, while x is the position of the free particle which will record the outcome of the measurement. Just before the observation the state of the apparatus must be determined with the accuracy better than ∆x. If the free particle apparatus is already in contact with the heat bath of temperature T , as discussed previously, then the measurement of its position with some accuracy σ, ∆x ≫ σ ≫ λ dB , will be a typical, sufficient preparation. Therefore, the apparatus will be left in an incoherent mixture of n = σ/λ dB wavelets. Such inexhaustive measurements may be not only "realistic," but also advantageous, as the resulting mixture will spread slower than the pure wavepacket of comparable width. 11 In the course of the interaction at t = t 0 , each of the de Broglie-sized wavepackets will be split into an "unmoved" |α portion, and into the shifted one: exp(−i∆x∂/∂x)|α = |β . Therefore, immediately after the observation, the state of the combination (system -free particle apparatus) is, in effect, described by a mixture of terms of the form:
$$|\Upsilon\rangle = |\alpha\rangle(1-P)|\phi\rangle + |\beta\rangle P|\phi\rangle, \qquad (14)$$
with all the |α⟩ contained within σ. Now the analysis of the decay of the pure state |Υ⟩ into the density matrix of the form
$$\rho = |\alpha\rangle\langle\alpha|\,(1-P)|\phi\rangle\langle\phi|(1-P) + |\beta\rangle\langle\beta|\,P|\phi\rangle\langle\phi|P \qquad (15)$$
can be conducted in accord with the discussion of the previous section. In particular, δ ∼ λ_dB will apply as long as the resolution σ of the measurement which prepares the free-particle apparatus is worse than λ_dB. Moreover, even if σ < λ_dB, our qualitative conclusions still hold, as in that case decorrelation will be even more rapid. The most intriguing corollary of our discussion is, perhaps, the possibility that in the classical limit of ħ → 0 the relaxation timescale may approach infinity,
$$\tau \to \infty, \qquad (16)$$
which allows the system to follow reversible, Newtonian dynamics, and yet the decorrelation timescale will remain arbitrarily short, or, indeed, it may approach zero:
$$\theta \to 0. \qquad (17)$$
We regard this limit as a true classical limit: Not only does it allow classical Newton's equations of motion, but it also prevents long-range quantum correlations, by imposing the environment-induced superselection. 1,2 It is worth stressing that the loss of coherence and the accompanying "irreversibility" is a consequence of the deliberate tracing out of the environment, which disposes of the mutual information, 5 and not of the approximations involved in the derivation of Eq. (6). This is particularly clearly demonstrated by the analogous results obtained by path-integral methods for harmonic oscillators. 9 I would like to thank Amir Caldeira and Dan Walls for discussions on the subject of this paper. This research was supported by the National Science Foundation under Grant No. PHY77-27084, supplemented by funds from the National Aeronautics and Space Administration.
[1] W. H. Zurek, "Pointer Basis of Quantum Apparatus: Into What Mixture Does the Wavepacket Collapse?," Phys. Rev. D 24, 1516 (1981).
[2] W. H. Zurek, "Environment-Induced Superselection Rules," Phys. Rev. D 26, 1862 (1982).
[3] E. P. Wigner, "Review of Quantum Mechanical Measurement Problem," p. 43 of Ref. 6.
[4] H. D. Zeh, "On the Irreversibility of Time and Observation in Quantum Theory," in Foundations of Quantum Mechanics, B. d'Espagnat, ed. (Academic Press, New York, 1971).
[5] W. H. Zurek, "Information Transfer in Quantum Measurements: Irreversibility and Amplification," p. 87 in Ref. 6.
[6] P. Meystre and M. O. Scully, eds., Quantum Optics, Experimental Gravitation and Measurement Theory, NATO ASI Series (Plenum Press, New York, 1983).
[7] A. O. Caldeira and A. J. Leggett, "Path Integral Approach to Quantum Brownian Motion," Physica 121A, 587 (1983), and references therein.
[8] A. J. Leggett, "The Superposition Principle in a Macroscopic System," p. 74 in Proceedings of the International Symposium on Foundation of Quantum Mechanics, S. Kamefuchi et al., eds. (Physical Soc. of Japan, Tokyo, 1983).
[9] A. O. Caldeira and A. J. Leggett, "Influence of Damping on Quantum Interference: An Exactly Soluble Model," Phys. Rev. A, submitted.
[10] J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, 1955).
[11] N. S. Krylov, Works on Foundations of Statistical Physics (Princeton University Press, Princeton, 1979).
| []
|
[
"Setting Up the Beam for Human-Centered Service Tasks",
"Setting Up the Beam for Human-Centered Service Tasks"
]
| [
"Utkarsh Patel \nElectrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH\n",
"Emre Hatay [email protected] \nElectrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH\n",
"Mike D ' Arcy \nElectrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH\n",
"Ghazal Zand [email protected] \nElectrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH\n",
"Pooyan Fazli [email protected] \nElectrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH\n"
]
| [
"Electrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH",
"Electrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH",
"Electrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH",
"Electrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH",
"Electrical Engineering and Computer Science Department\nCleveland State University\n44115ClevelandOH"
]
| []
| We introduce the Beam, a collaborative autonomous mobile service robot, based on SuitableTech's Beam telepresence system. We present a set of enhancements to the telepresence system, including autonomy, human awareness, increased computation and sensing capabilities, and integration with the popular Robot Operating System (ROS) framework. Together, our improvements transform the Beam into a low-cost platform for research on service robots. We examine the Beam on target search and object delivery tasks and demonstrate that the robot achieves a 100% success rate. | null | [
"https://arxiv.org/pdf/1710.06831v1.pdf"
]
| 40,222,190 | 1710.06831 | 8906e3922b15060c1ed62a7c2744c75c415bcaa6 |
Setting Up the Beam for Human-Centered Service Tasks
Utkarsh Patel
Electrical Engineering and Computer Science Department
Cleveland State University
44115ClevelandOH
Emre Hatay [email protected]
Electrical Engineering and Computer Science Department
Cleveland State University
44115ClevelandOH
Mike D ' Arcy
Electrical Engineering and Computer Science Department
Cleveland State University
44115ClevelandOH
Ghazal Zand [email protected]
Electrical Engineering and Computer Science Department
Cleveland State University
44115ClevelandOH
Pooyan Fazli [email protected]
Electrical Engineering and Computer Science Department
Cleveland State University
44115ClevelandOH
Setting Up the Beam for Human-Centered Service Tasks
Service Robots · Autonomy · Human Awareness and Interaction
We introduce the Beam, a collaborative autonomous mobile service robot, based on SuitableTech's Beam telepresence system. We present a set of enhancements to the telepresence system, including autonomy, human awareness, increased computation and sensing capabilities, and integration with the popular Robot Operating System (ROS) framework. Together, our improvements transform the Beam into a low-cost platform for research on service robots. We examine the Beam on target search and object delivery tasks and demonstrate that the robot achieves a 100% success rate.
Introduction
In the past decade, there has been a significant growth of research and development on service robots due to their wide range of applications in real life. Although the current working environment of service robots is mostly industrial, these robots are gradually moving from factories and labs to homes, offices, schools, and healthcare facilities. In order for service robots to become an intrinsic part of human environments, they need to be able to perform tasks autonomously and interact with people efficiently.
Autonomy can be challenging to implement on service robots. Unlike many industrial settings, where robots perform repetitive preprogrammed tasks, service robots must act autonomously in dynamic, uncertain, and multi-goal environments. In addition, to be convenient for human users, service robots need to be able to interact with humans in a natural way, by understanding speech and language and reacting to human instructions appropriately.
The Beam is a mobile telepresence system developed by SuitableTech and offers an impressive hardware array for its price. In this paper, we present our modifications to the Beam, which make it possible to use the system as a low-cost platform for research on all aspects of service robotics, including autonomy, human-robot interaction, and multi-robot collaboration and coordination. Our modifications include:
- Hardware Enhancement: We add more hardware resources for increased computational and sensing capabilities. These include a laptop to handle expensive computations and multiple depth cameras for increased sensing of the environment.
- Integration with Robot Operating System (ROS): We integrate the ROS middleware framework into the Beam to transform it into a programmable research platform.
- Autonomy: Using ROS, we make the Beam capable of navigating in indoor environments and charging its battery autonomously.
- Human Awareness and Interaction: We add human detection, recognition, and tracking capabilities to the Beam, incorporate a speech recognition system to allow the Beam to work with spoken input, and use the laptop's touchscreen to facilitate touch input. We also implement a web interface, shown in Figure 3, and an emailing system to enable remote human users to schedule tasks, monitor, or interact with the robot.
We investigate our improvements to the Beam in target search and object delivery tasks to verify their effectiveness.
Service Robot Platforms
Unlike industrial robots, which are usually designed to perform routine tasks, recent years have seen various service robots developed to assist human beings in complex, dynamic, and uncertain environments, such as healthcare facilities, hotels, homes, and offices. Nevertheless, these service robots still have limited functionalities. The lack of affordable and commercially available open platforms is a major obstacle in advancing the research on service robotics and has led many robotics researchers and developers to design custom-built platforms with specific capabilities to fit their research agenda. Below, we describe the features of several existing custom-built and commercially available research platforms. Our goal with the Beam is to produce a platform that is easier to set up than the custom platforms but more affordable than the commercial ones.
Custom-built Research Platforms
STanford AI Robot (STAIR) [8] is one of the earliest attempts at building a robot capable of doing service tasks in home and office environments. Research on STAIR was mainly focused on grasping previously unseen objects and opening elevators and new doors.
CoBot [11] is an autonomous service robot capable of performing tasks and interacting with humans robustly in a multi-floor building, and has serviced and traversed more than 1000 km to date. The robot was developed following a novel symbiotic autonomy approach, in which the robot is aware of its perceptual, physical, and reasoning limitations and is able to ask for help from humans proactively.
Herb 2.0 [10] is a robot built to work for and with humans, with a pair of Barrett WAM arms and a mobile base. It is made with a focus on home and office environments, and is therefore built with hardware and software safety features to prevent the robot from taking any unsafe actions. The programming of Herb 2.0 attempts to accomplish the assigned tasks without harming humans, the environment, or itself.
The STRANDS project [4] aims at deploying service robots in security and care scenarios for extended periods. In the security scenario, the robot monitors the environment and people to detect anomalous situations, and in the care scenario, the robot guides visitors and interacts with residents in an elderly care facility. The project focuses on long-term autonomy and learning of service robots in indoor human environments.
BWIBot [6] is a multi-robot platform capable of planning and reasoning in uncertain domains. It also interacts with humans, recognizes human activities, and understands natural language requests and instructions.
To evaluate the performance of domestic service robots, robots can compete in RoboCup@Home [5] challenges in semi-realistic home environments. This competition includes several challenges, each aimed at testing a particular service capability, such as human-robot interaction, manipulation, object recognition, speech recognition, or robust navigation and mapping. KeJia [3], the winner of RoboCup@Home in 2014, is a service robot capable of interacting with humans in natural language and performing manipulation tasks in indoor environments autonomously. This robot was developed following a cognitive approach based on open knowledge available as semistructured data.
Commercial Research Platforms
PR2 [1] is a mobile manipulation platform designed and built by Willow Garage based on Stanford's PR1 robot [14]. It has two 7-DOF arms, an adjustable-height torso, and several sensors, including three cameras and an accelerometer. PR2 is the ROS target platform and has been programmed to do many service tasks, including folding laundry, emptying a dishwasher, and opening doors.
Fetch [13], from Fetch Robotics, is a mobile service robot with a 7-DOF arm, differential drive base, depth camera on its head, and laser range sensor on its base. The robot can carry a 6 kg payload, making it suitable for tasks in warehouses and for a variety of research applications. Fetch is ROS-compatible and has a simulated version compatible with the Gazebo simulation software for easy prototyping.
TIAGo [9] is a mobile manipulator research robot by PAL robotics. In many ways, it is similar to Fetch, with an RGB-D camera, laser range finder sensor, and 7-DOF arm. Unlike Fetch, however, TIAGo defaults to having a five-fingered hand instead of a two-fingered gripper, and it can raise and lower its torso.
SoftBank Robotics' Pepper [7] is a wheeled humanoid robot. It has an omnidirectional base, two 6-DOF arms, and a 3-DOF leg. In addition, Pepper uses an Android tablet on its chest to communicate more effectively and express emotions. Its sensors include six laser sensors, two ultrasound sensors, two tactile sensors in its hands, a 3D camera, and two RGB cameras for human detection and recognition and identifying principal emotions in humans. SoftBank Robotics also makes Romeo [2], a bipedal humanoid robot for assisting elderly people and people with disabilities. Romeo can open doors, climb stairs and reach objects on a table.
Hardware Setup
The Beam has a 1.34 meter tall polymer body, and weighs about 17.7 kg. It can travel at up to 1 m/s with a differential wheeled system, in which the front two wheels, powered by motors, can rotate independently and are responsible for steering and moving forward, and the rear two wheels are non-powered swivel casters. The Beam's head consists of a 10-inch screen, four microphones, and a speaker. The head also has two wide-angle 480p HDR cameras: one facing forwards (useful for HRI) and one facing downwards (useful for navigation). Behind the screen is the internal computer, which runs an Intel Celeron 1037U 1.80 GHz dual-core processor and has 1 GB of RAM. The Beam is powered by a 240 Wh battery, which is enough for about two hours of normal teleoperation.
Computation
Because the Beam's internal computer is not powerful enough to run expensive computations in real time, we mounted a laptop on the wheel base to do most of the processing. The laptop has an Intel Core i7-6500U 2.50 GHz quad-core processor and 12 GB of DDR4 2133 MHz RAM.
In our setup, the ROS master node is running on the laptop, so the laptop needs to be able to communicate with the Beam's motor board to send the driving commands. We achieve this by connecting the laptop to the Beam's internal computer through the network and having the internal computer communicate with the motor board. It should be noted that the Beam only has Wifi connectivity exposed by default, so it may struggle in areas of low signal strength and be subject to high latency. We achieved a faster and more robust connection by opening the Beam's head to reveal the internal Ethernet port attached to the motherboard.
Perception
For navigation and localization, we attached three Orbbec Astra Pro cameras to the Beam's base. One faces forward for obstacle detection and localization, and the other two face the side and are used only for localization. For face detection, an Asus Xtion Pro camera is attached on top of the Beam's head, facing the front. All of the cameras are powered by their USB connections, so they can simply be connected to the laptop without a need for an additional power source.
Software Setup
We integrate the Beam with ROS, a middleware framework that provides a standardized interface for programming robots. ROS makes it possible to abstract hardware details away, so tasks such as navigation can typically be accomplished just by creating a custom backend to send driving commands to a new hardware platform.
The Beam's operating system is a modified version of Ubuntu, which runs a closedsource program called texclient to communicate with the motor board. Instead of reverse engineering the motor communication protocol from the texclient executable, we used the rosbeam 1 package to send driving commands to the motor board. We made modifications to account for an update to the Beam's OS that broke some features of the package, and the updated source code can be found on our GitHub repository 2 . The rosbeam node injects the driving commands by intercepting the read-write system calls between the texclient process and the serial port device connected to the motor board.
Autonomy
We add two capabilities to the Beam to establish its autonomy. First, we make it capable of navigating autonomously in indoor environments. Second, we add an autonomous charging routine, so the robot can accomplish long-term missions by charging itself when the battery is low.
Autonomous Navigation
To enable autonomous navigation on the Beam, we use the ROS navigation stack 3 . It is a set of algorithms provided by the ROS community that takes in the data from the sensor streams and odometry and outputs safe velocity commands that are sent to the mobile base. The odometry and the sensor streams are used to localize the robot in the map.
The software architecture for localization and navigation is presented in Figure 2. The ROS interface node running on the Beam's computer reads the wheel encoder information from the motor board and publishes it via a ROS topic. The odometry node running on the laptop subscribes to this wheel encoder topic and calculates the odometry, which is later used by the local planner in the core navigation stack. To localize the robot, we use the amcl 4 (Adaptive Monte Carlo Localization) package, which takes in the laser scans and the pre-built map and outputs the estimated position of the robot. Because the sensor nodes running on the laptop publish depth images, we convert the images to 2D laser scans using the depthimage_to_laserscan 5 package. The 2D map of the environment is constructed using the rtabmap 6 package, which uses an RGB-D graph-based SLAM approach.
Odometry requires tracking both the linear distance traveled and the orientation of the robot in real time. The linear distance is calculated based on the wheel encoders, but the raw encoder values are not reliable because the high inertia of the Beam's long neck causes the front wheels to lift from the ground during navigation. When the front wheels leave the ground, their speed increases for a short period of time, causing the encoders to register distance being traveled while the physical position of the robot does not actually change. To fix this, we defined a threshold for the encoder speed, and if the speed goes above the threshold then we stop increasing the encoder values. To calculate the orientation of the Beam, we used the readings from the built-in accelerometer on the Beam's motor board.
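A minimal version of the encoder filtering described here can be written as a small accumulator that ignores implausibly fast encoder jumps (which, on the Beam, occur when the front wheels briefly leave the ground). The threshold value and tick-to-metre conversion below are placeholders, not the values used on our robot.

```python
class FilteredOdometry:
    """Accumulate travelled distance from wheel-encoder ticks, rejecting spikes."""

    def __init__(self, metres_per_tick=0.0005, max_speed=1.2):
        self.metres_per_tick = metres_per_tick   # placeholder conversion factor
        self.max_speed = max_speed               # m/s; above this the reading is ignored
        self.distance = 0.0

    def update(self, delta_ticks, dt):
        """delta_ticks: encoder increment since the last reading; dt: elapsed seconds."""
        if dt <= 0.0:
            return self.distance
        speed = abs(delta_ticks) * self.metres_per_tick / dt
        if speed <= self.max_speed:              # keep only physically plausible motion
            self.distance += delta_ticks * self.metres_per_tick
        return self.distance

odom = FilteredOdometry()
for ticks, dt in [(40, 0.05), (2000, 0.05), (38, 0.05)]:   # the middle reading is a spike
    print(odom.update(ticks, dt))
```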
For obstacle detection, we only use the front camera, because running detection with all three cameras uses too much processing power for the laptop to handle well. For localization, however, we use both the front camera and the two side cameras. If only the front camera is used for localization, then the robot does not localize well when navigating through wide corridors. In such situations, the angle of view of the front camera is not large enough to see the side walls and thus the amcl node does not get enough data points to correctly localize the robot. The sensor streams from the two cameras facing sideways in opposite directions give amcl enough data points to localize the robot in wide corridors.
Autonomous Charging
With a fully charged battery, the Beam can navigate for up to 90 minutes. In order to have long-term autonomy, the Beam must be able to recharge without the need for human intervention. To this end, we implemented a self-charging system using AR (Augmented Reality) markers. AR markers are paper tags printed with a unique pattern of large black and white squares, making them easy for computer vision algorithms to recognize.
We attach AR markers to each charging base and predefine the coordinates of all the charging stations on the map. The Beam's battery status is monitored using the battery voltage provided by rosbeam, and if the battery is running low we set a navigation goal to the nearest charging station. When the robot comes within two meters of the charging station, it rotates to scan for the attached AR marker. As soon as the marker is detected, the robot uses its odometry and the coordinates of the marker to compute the velocity commands to dock itself. To charge the laptop mounted on the back of the robot, we made a circuit to convert and direct the ~12 V potential from the Beam's battery to the ~19 V needed by the laptop's battery, switching the voltage based on feedback from the laptop.
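The docking step can be reduced to a simple proportional controller that drives the measured AR-marker offset to zero. The sketch below is a hedged illustration: the gains, the stopping distance, and the battery threshold are assumptions, and on the real robot the marker pose would come from an AR-tag tracking node while the command would be published as a velocity message through ROS.

```python
import math

LOW_BATTERY_VOLTS = 11.5      # assumed threshold; the real value is robot-specific

def needs_charging(battery_volts):
    return battery_volts < LOW_BATTERY_VOLTS

def docking_command(marker_x, marker_y, k_lin=0.3, k_ang=1.0,
                    stop_distance=0.05, max_lin=0.2, max_ang=0.5):
    """Compute (linear, angular) velocity from the marker position in the robot frame.

    marker_x is the forward distance to the marker, marker_y the lateral offset (metres).
    """
    distance = math.hypot(marker_x, marker_y)
    if distance < stop_distance:
        return 0.0, 0.0                       # docked: stop and start charging
    heading_error = math.atan2(marker_y, marker_x)
    linear = max(-max_lin, min(max_lin, k_lin * distance))
    angular = max(-max_ang, min(max_ang, k_ang * heading_error))
    return linear, angular

print(needs_charging(11.2))
print(docking_command(marker_x=1.5, marker_y=0.2))
```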
Human Awareness and Interaction
We isolated four abilities that we believe the Beam should possess to enable interaction with humans. First, the Beam must be able to be aware of the presence of humans in its environment to know when human-sensitive actions are possible or necessary. It should also have a touchscreen, both to express information to people and to receive tactile inputs. However, limiting communication to touch can be inconvenient and unnatural, so the Beam should also be able to understand speech to enable natural communication with people. Finally, we created a website to allow people to schedule tasks with the robot and monitor its activities even when they are not physically nearby.
Human Detection, Recognition, and Tracking
To detect humans in the environment, we used the cob_people_detection package 7 , which implements the face detection algorithm proposed by Viola and Jones [12]. The package takes RGB-D image frames as input and outputs information about each detected face, including its pose, bounding box, and optionally a label if face recognition is enabled. We then transform the positions of detected people from the camera coordinate system to positions on the map. This approach allows us to detect, recognize, and track humans in the environment.
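The sketch below shows, under assumed frame names, how a single detection could be re-expressed in the map frame with tf; the actual node also handles labels and bounding boxes, which are omitted here.

```python
# Hedged sketch: project a face detection from the camera frame into the map
# frame with tf, so detections can be tracked in world coordinates.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

def face_pose_in_map(listener, face_pose_camera):
    """face_pose_camera: geometry_msgs/PoseStamped in the camera's optical frame."""
    listener.waitForTransform("map", face_pose_camera.header.frame_id,
                              face_pose_camera.header.stamp, rospy.Duration(0.5))
    return listener.transformPose("map", face_pose_camera)

# Typical use inside a node:
# rospy.init_node("face_to_map")
# listener = tf.TransformListener()
# then call face_pose_in_map(listener, pose) for every detection message.
```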
Screen and Touchscreen
The laptop we mounted on the Beam has a touchscreen that can be used for sending inputs and displaying information. In addition, we can display information on the Beam's built-in screen, although this is not a touchscreen. Because our current laptop mounting position on the base of the Beam is near the ground, which is not very convenient for users, we are working on adding a touch capability to the built-in screen using a touch overlay.
Speech Recognition and Synthesis
We added speech-to-text capabilities to the Beam to enable it to understand spoken commands. We created a ROS service that takes in audio and outputs plain text. This service is a thin wrapper around our core speech recognition code, which uses the Python SpeechRecognition module with the Wit Speech API 8 as the backend. In addition to taking speech as input, we enabled the Beam to produce speech as output by installing the festival speech synthesis system 9 .
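The core recognition step, stripped of the ROS service wrapper, would look roughly like the sketch below; the Wit.ai API key is a placeholder and the audio is read from a WAV file for simplicity.

```python
import speech_recognition as sr

def transcribe(wav_path, wit_key="WIT_AI_KEY_PLACEHOLDER"):
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)          # read the whole file
    try:
        return recognizer.recognize_wit(audio, key=wit_key)
    except sr.UnknownValueError:
        return ""                                  # speech was unintelligible
```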
Web Interface and Email
Figure 3 shows the Beam's web interface, which allows users to schedule tasks with the Beam and also to monitor it through its cameras. The web server is a separate computer in the building, running ROS for communication with the robot and NodeJS to serve the website. When a user selects a task from a drop-down menu on the website, along with a set of corresponding task parameters, the NodeJS server receives the request and constructs a ROS message with the task information. This message is sent to the laptop mounted on the Beam, which begins executing the task.
In addition to the website, we also added email capabilities to the robot. This allows it to receive task requests through email, and also to send status notifications when it has completed a task or if it encounters a problem while executing one. Our email system works through a Gmail account we set up for the robot, using IMAP to receive emails and SMTP to send them.
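A minimal sketch of such an email loop with Python's standard imaplib and smtplib is shown below; the account credentials are placeholders, and the real system parses full task requests rather than just subjects.

```python
import imaplib
import smtplib
from email.mime.text import MIMEText

USER, PASSWORD = "robot@example.com", "app-password-placeholder"   # placeholders

def fetch_unread_subjects():
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    subjects = []
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT)])")
        subjects.append(msg_data[0][1].decode(errors="replace").strip())
    imap.logout()
    return subjects

def send_status(to_addr, text):
    msg = MIMEText(text)
    msg["Subject"], msg["From"], msg["To"] = "Beam status", USER, to_addr
    smtp = smtplib.SMTP_SSL("smtp.gmail.com", 465)
    smtp.login(USER, PASSWORD)
    smtp.sendmail(USER, [to_addr], msg.as_string())
    smtp.quit()
```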
Experiments and Results
We extensively examined the Beam's ability to navigate and charge autonomously. In total, the Beam has traversed over 12 km across four buildings connected by bridges at Cleveland State University.
We additionally did proof-of-concept experiments investigating the Beam's performance in two service tasks: target search and object delivery. All experiments took place on the third floor of Fenn Hall Building at Cleveland State University, a map of which is shown in Figure 3.
Target Search
In the target search task, the Beam had to search the building for an object and report its location. To ensure that this would be a test of Beam's abilities and not of the object recognition library implementation, we simplified the object recognition component of this task by attaching AR markers to each object.
For each trial of this task, we hid an AR marker in a random location on the map and instructed the Beam to search for it. The search algorithm consists of the Beam visiting, in random order, a sequence of predefined possible item locations, illustrated by red stars on the map in Figure 3. Upon reaching a potential location, it rotates for 30 seconds to scan for AR markers. If it sees a marker, it says "I found the object" using its text-to-speech capability.
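In outline, the search behaviour reduces to the loop sketched below, where go_to, rotate_and_scan_for_marker, and say are hypothetical stand-ins for the navigation, AR-marker detection, and text-to-speech components described earlier.

```python
import random

SEARCH_LOCATIONS = [(5.0, 2.0), (12.0, 2.5), (20.0, 6.0)]   # example map coordinates

def search_for_object(go_to, rotate_and_scan_for_marker, say, scan_time=30.0):
    for x, y in random.sample(SEARCH_LOCATIONS, len(SEARCH_LOCATIONS)):
        go_to(x, y)                                  # navigate to a candidate location
        if rotate_and_scan_for_marker(scan_time):    # spin in place and look for AR tags
            say("I found the object")
            return (x, y)
    return None
```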
We conducted 10 trials of this experiment and found that the Beam successfully found the object in all 10.
Object Delivery
In the object delivery task, the Beam had to move objects from one location to another. For each object, we gave the Beam a pickup location and a delivery location, each of which was selected randomly from the map. One pickup location and one drop-off location are illustrated by cyan stars on the map in Figure 3. We then started the Beam from a random location. The task was for the Beam to travel to the pickup location, get the object, and then travel to the delivery location.
To facilitate carrying objects, we attached a small basket to the Beam, and a human was present at the pickup and delivery locations to load and unload the object. When the Beam arrives at the pickup location, it says "please load the object", after which the human loads the object and presses a button on the laptop touchscreen to indicate that the Beam can move on. Similarly, at the drop-off location the Beam says "please unload the object" and the human uses the touchscreen to indicate that the object has been unloaded.
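The hand-off step can be sketched as below; the confirmation topic name is an assumption, and on the robot it is the touchscreen GUI that publishes the button press.

```python
import subprocess
import rospy
from std_msgs.msg import Bool

def say(text):
    # pipe the text to the festival synthesiser installed on the laptop
    subprocess.run(["festival", "--tts"], input=text.encode())

def wait_for_confirmation(topic="/beam/touch_confirm"):   # assumed topic name
    rospy.wait_for_message(topic, Bool)

# At the pickup location:
#   say("please load the object"); wait_for_confirmation()
# At the delivery location:
#   say("please unload the object"); wait_for_confirmation()
```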
We ran 10 trials of the object delivery task and recorded the number of times the Beam was able to successfully complete the delivery. We found that the Beam was able to pick up and deliver the object in all 10 trials.
Conclusion and Future Work
We presented a set of modifications to SuitableTech's Beam telepresence system to transform it into a collaborative autonomous service robot, and made it usable as a low-cost platform for research on different aspects of service robotics, including long-term autonomy and lifelong learning, physical/cognitive/social human-robot interaction, multi-robot collaboration and coordination, and human multi-robot interaction. We are currently developing a Unified Robot Description Format (URDF) model of the Beam to make a simulated version available for the Gazebo simulator, which would make it much faster and easier to prototype new algorithms and operating environments.
Fig. 1. Hardware setup of the Beam.
Fig. 2. The Beam's navigation and localization module.
Fig. 3. The Beam's web interface. The laser map in the GUI shows the third floor of Fenn Hall building at Cleveland State University.
1 https://github.com/xlz/rosbeam
2 https://github.com/people-robots/rosbeam
3 http://wiki.ros.org/navigation
4 http://wiki.ros.org/amcl
5 http://wiki.ros.org/depthimage_to_laserscan
6 http://wiki.ros.org/rtabmap
7 http://wiki.ros.org/cob_people_detection
8 http://wit.ai
9 http://www.cstr.ed.ac.uk/projects/festival/
1. Overview of the PR2 robot. http://www.willowgarage.com/pages/pr2/overview
2. Project Romeo. https://projetromeo.com/
3. Chen, X., Ji, J., Jiang, J., Jin, G., Wang, F., Xie, J.: Developing high-level cognitive functions for service robots. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 989-996 (2010)
4. Hawes, N., Burbridge, C., Jovan, F., Kunze, L., Lacerda, B., Mudrová, L., Young, J., Wyatt, J.L., Hebesberger, D., Körtner, T., Ambrus, R., Bore, N., Folkesson, J., Jensfelt, P., Beyer, L., Hermans, A., Leibe, B., Aldoma, A., Faulhammer, T., Zillich, M., Vincze, M., Al-Omari, M., Chinellato, E., Duckworth, P., Gatsoulis, Y., Hogg, D.C., Cohn, A.G., Dondrup, C., Fentanes, J.P., Krajník, T., Santos, J.M., Duckett, T., Hanheide, M.: The STRANDS project: Long-term autonomy in everyday environments. arXiv:1604.04384v2 [cs.RO] (2016)
5. Holz, D., Iocchi, L., Van Der Zant, T.: Benchmarking intelligent service robots through scientific competitions: the RoboCup@Home approach. In: Proceedings of the AAAI Spring Symposium, Designing Intelligent Robots: Reintegrating AI II (2013)
6. Khandelwal, P., Zhang, S., Sinapov, J., Leonetti, M., Thomason, J., Yang, F., Gori, I., Svetlik, M., Khante, P., Lifschitz, V., Aggarwal, J.K., Mooney, R., Stone, P.: BWIBots: A platform for bridging the gap between AI and human-robot interaction research. The International Journal of Robotics Research, pp. 635-659 (2017)
7. Lafaye, J., Gouaillier, D., Wieber, P.B.: Linear model predictive control of the locomotion of Pepper, a humanoid robot with omnidirectional wheels. In: Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Humanoids, pp. 336-341 (2014)
8. Quigley, M., Berger, E., Ng, A.Y., et al.: STAIR: Hardware and software architecture. In: Proceedings of the AAAI Robotics Workshop (2007)
9. PAL Robotics: TIAGo mobile manipulator. http://tiago.pal-robotics.com/
10. Srinivasa, S., Berenson, D., Cakmak, M., Romea, A.C., Dogar, M., Dragan, A., Knepper, R.A., Niemueller, T.D., Strabala, K., Vandeweghe, J.M., Ziegler, J.: HERB 2.0: Lessons learned from developing a mobile manipulator for the home. Proceedings of the IEEE 100(8), 2410-2428 (2012)
11. Veloso, M., Biswas, J., Coltin, B., Rosenthal, S.: CoBots: Robust symbiotic autonomous mobile service robots. In: Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, pp. 4423-4429 (2015)
12. Viola, P., Jones, M.J.: Robust real-time face detection. International Journal of Computer Vision 57(2), 137-154 (2004)
13. Wise, M., Ferguson, M., King, D., Diehr, E., Dymesich, D.: Fetch and Freight: Standard platforms for service robot applications. In: Proceedings of the IJCAI Workshop on Autonomous Mobile Service Robots (2016)
14. Wyrobek, K.A., Berger, E.H., der Loos, H.F.M.V., Salisbury, J.K.: Towards a personal robotics development platform: Rationale and design of an intrinsically safe personal robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, pp. 2165-2170 (2008)
| [
"https://github.com/xlz/rosbeam",
"https://github.com/people-robots/rosbeam"
]
|
[
"Ultra low density of CdTe quantum dots grown by MBE",
"Ultra low density of CdTe quantum dots grown by MBE"
]
| [
"J Kobak \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"J-G Rousset \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"R Rudniewski \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"E Janik \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"T S Lupiński \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"P Kossacki \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"A Golnik \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n",
"W Pacuski \nInstitute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland\n"
]
| [
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland",
"Institute of Experimental Physics\nFaculty of Physics\nUniversity of Warsaw\nul. Hoża 6902-681WarsawPoland"
]
| []
| This work presents methods of controlling the density of self-assembled CdTe quantum dots (QDs) grown by molecular beam epitaxy. Two approaches are discussed: increasing the deposition temperature of CdTe and the reduction of the CdTe layer thickness. Photoluminescence (PL) measurements at low temperature confirm that both methods can be used for a significant reduction of the QD density from 10^10 QD/cm^2 to 10^7-10^8 QD/cm^2. For a very low QD density, identification of all QD lines observed in the spectrum is possible. | 10.1016/j.jcrysgro.2012.12.133 | [
"https://arxiv.org/pdf/1210.2946v2.pdf"
]
| 98,145,342 | 1210.2946 | 52f3789a1d0d669d26de948df40eb42c003c64a4 |
Ultra low density of CdTe quantum dots grown by MBE
J Kobak
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
J-G Rousset
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
R Rudniewski
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
E Janik
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
T S Lupiński
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
P Kossacki
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
A Golnik
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
W Pacuski
Institute of Experimental Physics
Faculty of Physics
University of Warsaw
ul. Hoża 6902-681WarsawPoland
Ultra low density of CdTe quantum dots grown by MBE
PACS numbers: 81.07.Ta, 78.67.Hc. Keywords: Atomic layer epitaxy, Molecular beam epitaxy, Cadmium compounds, Zinc compounds, Tellurides, Semiconducting II-VI materials
This work presents methods of controlling the density of self-assembled CdTe quantum dots (QDs) grown by molecular beam epitaxy. Two approaches are discussed: increasing the deposition temperature of CdTe and the reduction of the CdTe layer thickness. Photoluminescence (PL) measurements at low temperature confirm that both methods can be used for a significant reduction of the QD density from 10^10 QD/cm^2 to 10^7-10^8 QD/cm^2. For a very low QD density, identification of all QD lines observed in the spectrum is possible.
I. INTRODUCTION
The density of QDs in a sample is the crucial limiting factor for individual QD spectroscopy [1][2][3]. Typically, II-VI systems such as self-assembled CdTe/ZnTe QDs exhibit such a high density of dots 4-6 (of the order of 10^10 QD/cm^2) that the typical laser spot in a microscope system (φ = 2 µm) excites hundreds of dots simultaneously. Good spectral separation of a single QD is possible only for dots with particularly low emission energy [6][7][8][9][10]. In order to reduce the number of observed QDs one can use masks 11,12, etch mesa structures 13, or use near-field spectroscopy 10,14, but all such techniques affect optical properties such as polarization. A much better solution is to find growth conditions which result in the formation of QDs with very low density. This is the motivation of our study. We received good hints from Wojnar et al. 15, who showed that the QD density can be reduced by thermal annealing (but still hundreds of QDs per laser spot were observed). We developed growth conditions which result in an ultra low density of QDs, e.g. 1-3 QDs per laser spot. Our preliminary results showed that the CdTe QD density depends on growth temperature; therefore, we systematically investigated the influence of the deposition temperature of the CdTe layer and the ZnTe cap layer on the density of QDs. In order to distinguish between the effect of thermal annealing and the effect of CdTe desorption, we also made a reference series of samples grown with various thicknesses of the CdTe layer from which the dots were formed.
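As a back-of-the-envelope check of these figures, the snippet below computes how many dots a 2 µm diameter spot covers at a given areal density: roughly three hundred dots per spot at 10^10 QD/cm^2, and only a few dots at 10^8 QD/cm^2.

```python
import math

def dots_per_spot(density_per_cm2, spot_diameter_um=2.0):
    radius_cm = (spot_diameter_um / 2.0) * 1e-4     # 1 um = 1e-4 cm
    spot_area_cm2 = math.pi * radius_cm ** 2        # ~3.1e-8 cm^2 for a 2 um spot
    return density_per_cm2 * spot_area_cm2

print(dots_per_spot(1e10))   # ~314 dots: typical high-density sample
print(dots_per_spot(1e8))    # ~3 dots:  the ultra-low-density regime
```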
II. GROWTH
Samples were grown using molecular beam epitaxy (MBE) in growth chamber model SVT-35. We used GaAs:Si (100) substrates covered by 1 µm thick ZnTe buffer layers. The scheme of the structure is shown in Fig. 1.
In order to verify that our ZnTe buffer is smooth, the intensity of the RHEED (Reflection High Energy Electron Diffraction) signal was recorded as a function of time. After growth interruption, well-defined RHEED signal oscillations (Fig. 2) due to ZnTe growth were present for a relatively long time (about 50 s), which was considered a proof of a good-quality ZnTe buffer. The buffer was always grown at the same substrate temperature, T = 365 °C. Since in our system the thermocouple is in radiative thermal contact with the substrate, the substrate temperature calibration was obtained by determining the sublimation rate of CdTe and comparing it with data from Ref. 16, where the thermocouple was in contact with the substrate. This allowed us to determine the relation between the temperature shown by our thermocouple and the real substrate temperature. Consequently, for the temperature range 300-400 °C the precision of our substrate temperature measurement is ±5 °C. CdTe QDs were formed using the method of amorphous tellurium desorption proposed by Tinjod et al. 5. A thin CdTe layer was deposited by ALE (Atomic Layer Epitaxy), then the substrate was cooled down in the presence of a tellurium flux in order to deposit amorphous tellurium, and next the substrate was heated back to the growth temperature. The tellurium was evaporated and a ZnTe cap was grown. Two series of samples were grown with two approaches for the deposition of the thin CdTe layer. In the first one we used a fixed number (12) of ALE cycles of CdTe, which at 334 °C corresponds to 3 monolayers (MLs) of CdTe, and we varied the temperature during growth in order to influence the QD density. A low density of QDs was obtained when we strongly increased both the deposition temperature of the CdTe layer and the deposition temperature of the ZnTe cap. In each case the deposition temperature of the thin CdTe layer and of the ZnTe cap was the same. The second series of samples was obtained with various thicknesses of the CdTe layer. This parameter varied from 4 to 16 ALE cycles of CdTe. During each ALE cycle, 0.25 of a monolayer was deposited. Both the deposition temperature of the CdTe layer and of the ZnTe cap layer were fixed at 334 °C. We found that samples with a thinner CdTe layer exhibit a lower QD density in PL measurements. For each of the described series of samples, the characteristic transformation of the RHEED image from 2D to 3D, related to QD formation 5, was observed (Fig. 3).
III. MICROPHOTOLUMINESCENCE RESULTS
Low-temperature (T = 7 K) microphotoluminescence (µPL) measurements were conducted in a typical setup with a microscope objective giving a laser spot of about 2 µm diameter. Excitation was provided by a blue laser diode (405 nm). The spectral resolution was 0.08 meV. For both series of samples we observed sharp emission lines related to single QDs, which we consider a confirmation of the formation of QDs under all tested growth conditions (Figs. 4 and 5). The main difference between the results obtained for various samples was the number of sharp PL lines observed in the spectrum. In the limit of low excitation power, each QD gives only a few intense lines (Figs. 6 and 7): the neutral exciton line (X) and one or two trion lines (X^+ and/or X^−) 17. This allows us to estimate the number of emitting dots per area of the laser spot and, consequently, the QD density. Example spectra are presented in Fig. 4 for the series of samples with different CdTe deposition temperatures. Increasing the CdTe deposition temperature results in a decrease in the number of observed QD lines. The average emission energy of the QD ensemble also increases. This shows that for higher growth temperatures we observe a smaller QD density, and that the typical size of a dot is smaller or the typical potential depth is shallower. Such effects could be caused by temperature-induced evaporation of the CdTe layer. The decrease of the potential depth could be caused by temperature-induced mixing of the CdTe in the QDs with ZnTe from the barrier. Depending on the temperature conditions we obtained samples with various estimated densities of QDs: samples with typical 5 values of the QD density (10^10 QD/cm^2), a sample with a low density of QDs (10^9 QD/cm^2), and samples with an ultra-low density of QDs (10^7-10^8 QD/cm^2). We obtained very similar results for the samples in which the QD density was controlled by the CdTe layer thickness (µPL spectra presented in Fig. 5). Decreasing the thickness of the CdTe layer results in a reduction of the QD density and an increase of the average photon emission energy. We interpret this effect in the following way: a thinner CdTe layer results in a smaller size and number of QDs.
Both approaches to reducing the QD density, increasing the CdTe deposition temperature and decreasing the amount of deposited CdTe, result in a similar evolution of the optical properties of the QD ensemble (Figs. 4 and 5). We conclude that the key parameter in both cases is the amount of CdTe material overgrown by the ZnTe cap. The main impact of the increased temperature during CdTe deposition is therefore the desorption of CdTe material. We note here that both methods of controlling the QD density were combined with the growth method using amorphous tellurium 5. Other methods of CdTe/ZnTe QD formation generally result in higher QD densities 9, and reducing the QD density in those cases requires a separate study.
IV. NEW OPPORTUNITIES COMING FROM LOW-DENSITY OF DOTS
The reduction of the number of QDs allows easy identification (Fig. 6) and study of the properties 17,18 of almost all emission lines in a broad µPL spectrum. This is not possible in usual samples without additional treatments such as mesa etching. The QDs we grew with ultra low density exhibit the typical properties of CdTe QDs: a characteristic pattern of emission lines associated with different excitonic optical transitions in a single QD. We identified strong lines related to the neutral exciton at the highest energy, trions at intermediate energies, the biexciton at the lowest energy, and weaker lines associated with higher charged states 9,18,19. Emission lines were identified by a combination of various methods: measurement of luminescence intensity as a function of excitation power, linear polarization anisotropy measurements (Fig. 7), and the Zeeman effect. Studying QDs with ultra low density opened for us the possibility to study QDs with various sizes and emission energies, including QDs with typical dimensions, whose PL lies in the middle of the QD ensemble.
V. CONCLUSIONS
We developed two methods of obtaining samples with ultra low density of QDs: increasing CdTe deposition temperature and decreasing CdTe layer thickness. Both methods give expected results -tunable QDs density. It is difficult to distinguish which method is better. Ultra low density of QDs allows the identification of almost all emission lines in whole PL spectra and the QDs show typical properties reported before for CdTe/ZnTe system. Successful control of QDs density opens new perspectives for spectroscopic studies of QDs.
VI. ACKNOWLEDGEMENTS
We wish to acknowledge helpful discussions with Piotr Wojnar. The project was carried out with the use of CePT, CeZaMat and NLTK infrastructures financed by the European Union (the European Regional Development Fund) within the Operational Programme "Innovative economy" for 2007-2013 (NCN projects DEC-2011/01/B/ST3/02406 and DEC-2011/02/A/ST3/00131 and NCBiR project LIDER/30/13/L-2/10/NCBiR/2011).
FIG. 1: (color online) Structure of samples: CdTe/ZnTe QDs on GaAs (100) substrates.
FIG. 2: (color online) RHEED signal intensity oscillations observed after growth interruption of the ZnTe buffer.
FIG. 3: (color online) Typical RHEED image of the CdTe layer a) before and b) after desorption of amorphous tellurium, which leads to the characteristic transformation from a 2D to a 3D image.
FIG. 4: (color online) PL spectra of CdTe/ZnTe QDs for various CdTe deposition temperatures. Influence of the deposition temperature on the density of QDs and optical properties. Estimated numbers of QDs excited by the laser spot (φ = 2 µm) are given.
FIG. 6: (color online) Low density of QDs allows the identification of almost all emission lines in PL spectra. We identified emission lines from three QDs: neutral excitons (X, XX) and charged trions (X^+, X^−, X^{2−}, XX^−).
FIG. 7: (color online) Emission lines show typical anisotropy properties: the neutral exciton and biexciton have opposite linear polarizations and trions are not linearly polarized.
1. J. Brown, F. Wu, P. M. Petroff, and J. S. Speck, Appl. Phys. Lett. 84, 690 (2004).
2. A. Dousse, J. Suffczyński, A. Beveratos, O. Krebs, A. Lemaître, I. Sagnes, J. Bloch, P. Voisin, and P. Senellart, Nature 466, 217 (2010).
3. R. Rödel, A. Bauer, S. Kremling, S. Reitzenstein, S. Höfling, M. Kamp, L. Worschech, and A. Forchel, Nanotechnology 23, 015605 (2012).
4. G. Karczewski, S. Maćkowski, M. Kutrowski, T. Wojtowicz, and J. Kossut, Appl. Phys. Lett. 74, 3011 (1999).
5. F. Tinjod, B. Gilles, S. Moehl, K. Kheng, and H. Mariette, Appl. Phys. Lett. 82, 4340 (2003).
6. H. S. Lee, A. Rastelli, M. Benyoucef, F. Ding, T. W. Kim, H. L. Park, and O. G. Schmidt, Nanotechnology 20, 075705 (2009).
7. J. Suffczyński, T. Kazimierczuk, M. Goryca, B. Piechal, A. Trajnerowicz, K. Kowalik, P. Kossacki, A. Golnik, K. Korona, M. Nawrocki, and J. A. Gaj, Phys. Rev. B 74, 085319 (2006).
8. J. A. Gaj, T. Kazimierczuk, M. Goryca, M. Koperski, A. Golnik, P. Kossacki, M. Nawrocki, P. Wojnar, and G. Karczewski, Acta Phys. Pol. A 116, 795 (2009).
9. J. Kobak, W. Pacuski, T. Jakubczyk, T. Kazimierczuk, A. Golnik, K. Frank, A. Rosenauer, C. Kruse, D. Hommel, and J. A. Gaj, Acta Phys. Pol. A 119, 627 (2011).
10. R. E. Pimpinella, A. M. Mintairov, X. Liu, T. H. Kosel, J. L. Merz, J. K. Furdyna, and M. Dobrowolska, J. Vac. Sci. Technol. B 29, 03C119 (2011).
11. R. J. Warburton, C. Schaflein, D. Haft, F. Bickel, A. Lorke, K. Karrai, J. M. Garcia, W. Schoenfeld, and P. M. Petroff, Nature 405, 926 (2000).
12. A. Zrenner, J. Chem. Phys. 112, 7790 (2000).
13. M. Bayer, O. Stern, P. Hawrylak, S. Fafard, and A. Forchel, Nature 405, 923 (2000).
14. M. Brun, S. Huant, J. Woehl, J.-F. Motte, L. Marsal, and H. Mariette, Solid State Communications 121, 407 (2002).
15. P. Wojnar, G. Karczewski, T. Wojtowicz, and J. Kossut, Acta Phys. Pol. A 112, 283 (2007).
16. S. Tatarenko, B. Daudin, and D. Brun, Appl. Phys. Lett. 65, 734 (1994).
17. T. Kazimierczuk, M. Goryca, M. Koperski, A. Golnik, J. A. Gaj, M. Nawrocki, P. Wojnar, and P. Kossacki, Phys. Rev. B 81, 155313 (2010).
18. T. Kazimierczuk, T. Smoleński, M. Goryca, L. Kłopotowski, P. Wojnar, K. Fronc, A. Golnik, M. Nawrocki, J. A. Gaj, and P. Kossacki, Phys. Rev. B 84, 165319 (2011).
19. C. Kruse, W. Pacuski, T. Jakubczyk, J. Kobak, J. A. Gaj, K. Frank, M. Schowalter, A. Rosenauer, M. Florian, F. Jahnke, and D. Hommel, Nanotechnology 22, 285204 (2011).
| []
|
[
"INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY",
"INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY",
"INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY",
"INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY"
]
| [
"António J G Bento ",
"Nicolae Lupa ",
"ANDMihail Megan ",
"César M Silva ",
"António J G Bento ",
"Nicolae Lupa ",
"ANDMihail Megan ",
"César M Silva "
]
| []
| []
| We give necessary integral conditions and sufficient ones for a general concept of dichotomy for evolution operators which includes as particular cases the well-known concepts of nonuniform exponential dichotomy and nonuniform polynomial dichotomy and also contains new situations. 2010 Mathematics Subject Classification: 47D06, 34D09. | 10.3934/dcdsb.2017163 | [
"https://arxiv.org/pdf/1405.2946v3.pdf"
]
| 119,567,722 | 1405.2946 | d72f33e0085c0348724b5d3ad4a4d8669f857a5b |
INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY
12 May 2014
António J G Bento
Nicolae Lupa
Mihail Megan
César M Silva
INTEGRAL CONDITIONS FOR NONUNIFORM µ-DICHOTOMY
12 May 2014
We give necessary integral conditions and sufficient ones for a general concept of dichotomy for evolution operators which includes as particular cases the well-known concepts of nonuniform exponential dichotomy and nonuniform polynomial dichotomy and also contains new situations. 2010 Mathematics Subject Classification: 47D06, 34D09.
Introduction
The notion of exponential dichotomy is a fundamental tool in the study of stability of difference and differential equations and can be traced back to the work of Perron [17] on the stability of ordinary differential equations, and of Li [13] for discrete time systems. We also refer to the book of Chicone and Latushkin [11] for important results in infinite-dimensional spaces.
In some situations, in particular for nonautonomous systems, the concept of exponential dichotomy is too restrictive and it is important to look for more general hyperbolic behavior.
We can identify, at least, two ways to generalize this concept: allow some loss of hyperbolicity along the trajectories, a path leading to notions similar to Pesin's nonuniform hyperbolicity [18,20,19], and consider asymptotic behavior that is not necessarily exponential, an approach followed by Naulin and Pinto in [16,21], where the authors considered uniform dichotomies with asymptotic behavior given by general growth rates.
In recent years, a large number of papers study different aspects of the dynamical behavior of systems with nonuniform exponential dichotomies, a type of dichotomic behavior where some exponential loss of hyperbolicity along the trajectories is allowed (see for example the work of Barreira and Valls [4] and papers [14,22,24]). Also, several results were obtained in [1,2,3,5,6,7,8,9,10] for dichotomic behaviors that are both nonuniform and not necessarily exponential.
One of the most important results in the stability theory of evolution operators is due to Datko [12], who gave an integral characterization of uniform exponential stability. This characterization is used to obtain a necessary and sufficient condition for uniform exponential stability in terms of Lyapunov functions. Preda and Megan extended Datko's theorem to uniform exponential dichotomy [23]. Generalizations of this result in the case of nonuniform exponential dichotomy are given in [15,14,22]. For more details and history about Datko's theorem we refer the reader to [25].
In this paper, we consider a notion of dichotomy which is both nonuniform and not necessarily exponential in the general context of evolution operators, with the purpose of obtaining necessary conditions and sufficient ones in the spirit of Datko's results. We emphasize that this type of dichotomy includes as particular cases the notions of nonuniform exponential dichotomy and nonuniform polynomial dichotomy, respectively. We show that our results extend previous theorems and also contain new situations. Also, we note that we do not need to assume the invertibility of the evolution operators on the whole space, which allow us to apply our results to compact operators defined in infinite dimensional spaces.
Notions and preliminaries
Let X be a Banach space and let B(X) be the Banach algebra of all bounded linear operators on X. We denote by Id the identity operator of B(X). Throughout this paper, we also denote by R_0^+ the set of non-negative real numbers and we consider the set ∆ defined by ∆ = {(t, s) ∈ R_0^+ × R_0^+ : t ≥ s}. We first recall the definition of an evolution operator:
Definition 2.1. An operator valued function U : ∆ → B(X) is said to be an evolution operator if (1) U(t, t) = Id for every t ≥ 0;
(2) U (t, τ )U (τ, s) = U (t, s) for all t ≥ τ ≥ s ≥ 0;
(3) (t, s) → U (t, s)x is continuous for every x ∈ X.
Definition 2.2. We say that an increasing function µ : R + 0 → [1, +∞) is a growth rate if µ(0) = 1 and lim t→+∞ µ(t) = +∞. Lemma 2.3. Let µ : R + 0 → [1, +∞) be a differentiable growth rate and K > 0. The following statements are equivalent:
(i) sup_{t≥0} µ′(t)/µ(t) ≤ K; (ii) µ(t) ≤ µ(t_0) e^{K(t−t_0)} for every t ≥ t_0 ≥ 0; (iii) sup_{t≥0} µ(t + δ)/µ(t) ≤ e^{Kδ} for every δ > 0. Proof. (i) ⇒ (ii). Let t ≥ t_0 ≥ 0 and τ ∈ [t_0, t]. We have that µ′(τ)/µ(τ) ≤ K,
which is equivalent to
(d/dτ) log µ(τ) ≤ K, τ ∈ [t_0, t].
Integrating from t 0 to t in the last inequality, we deduce that
log µ(t) − log µ(t 0 ) ≤ K(t − t 0 ).
This implies that (ii) holds. Implication (ii) ⇒ (iii) is straightforward.
(iii) ⇒ (i). Let t ≥ 0. By (iii) we have
(µ(t + δ) − µ(t)) / (δ µ(t)) = (µ(t + δ)/µ(t) − 1) / δ ≤ (e^{Kδ} − 1) / δ,
for all δ > 0. Setting δ → 0 in the relation above and using the fact that µ is a differentiable function, we obtain
µ′(t)/µ(t) ≤ K, for all t ≥ 0,
and hence (i) holds.
Definition 2.4. A strongly continuous function P : R_0^+ → B(X) is called a projection valued function if P^2(t) = P(t) for every t ≥ 0.
Given a projection valued function P : R + 0 → B(X), we denote by Q the complementary projection valued function, that is Q(t) = Id−P (t) for every t ≥ 0. Definition 2.5. We say that a projection valued function P : R + 0 → B(X) is compatible with an evolution operator U : ∆ → B(X) if, for all t ≥ s ≥ 0, we have (1) P (t)U (t, s) = U (t, s)P (s);
(2) the restriction U (t, s)| Q(s)X : Q(s)X → Q(t)X is an isomorphism and we denote its inverse by U Q (s, t).
Remark 2.6. If P : R + 0 → B(X) is a projection valued function compatible with an evolution operator U : ∆ → B(X), then for every (t, s) ∈ ∆, it follows
(i) U (t, s)U Q (s, t)Q(t) = Q(t); (ii) U Q (s, t)U (t, s)Q(s) = Q(s). Moreover, for all t ≥ τ ≥ s ≥ 0, we have that U Q (s, τ )U Q (τ, t) = U Q (s, t).
Definition 2.7. Given a growth rate µ : R + 0 → [1, +∞) and a projection valued function P : R + 0 → B(X) compatible with an evolution operator U : ∆ → B(X), we say that U has a nonuniform µ-dichotomy with projection valued function P if there exist constants a, b > 0, ε ≥ 0 and N 1 , N 2 ≥ 1 such that, for all t ≥ s ≥ 0, we have
(1) ‖U(t, s)P(s)‖ ≤ N_1 (µ(t)/µ(s))^{−a} µ(s)^ε; (2) ‖U_Q(s, t)Q(t)‖ ≤ N_2 (µ(t)/µ(s))^{−b} µ(t)^ε.
When ε = 0, we say that U has a uniform µ-dichotomy with projection valued function P .
In the following we consider particular cases of the notion of nonuniform µ-dichotomy:
(1) if µ(t) = e t , then we recover the notion of nonuniform exponential dichotomy (in the sense of Barreira-Valls) [4] and in particular (when ε = 0) the classical notion of uniform exponential dichotomy; (2) if µ(t) = t + 1, then we recover the notion of nonuniform polynomial dichotomy [5,7,8].
Example 2.8. Let µ : R_0^+ → [1, +∞) be a continuous growth rate and ε ≥ 0 a non-negative real number. On X = R^2 endowed with the norm ‖(x_1, x_2)‖ = max{|x_1|, |x_2|}, we consider the projection valued function
P(t)(x_1, x_2) = (x_1 + (µ(t)^ε − 1)x_2, 0)
and its complementary projection valued function
Q(t)(x_1, x_2) = ((1 − µ(t)^ε)x_2, x_2).
Obviously, we have that
‖P(t)‖ = µ(t)^ε and ‖Q(t)‖ = max{µ(t)^ε − 1, 1} ≤ µ(t)^ε. (1)
Given a, b > 0, we consider the evolution operator U : ∆ → B(R^2),
U(t, s) = (µ(t)/µ(s))^{−a} P(s) + (µ(t)/µ(s))^{b} Q(t).
Since P(t)P(s) = P(s), Q(t)Q(s) = Q(t) and Q(t)P(s) = 0, we have that P is a projection valued function compatible with U. Moreover, it follows that
U(t, s)P(s) = (µ(t)/µ(s))^{−a} P(s) and U_Q(s, t)Q(t) = (µ(t)/µ(s))^{−b} Q(s).
By (1) and using the relations above, we deduce that
‖U(t, s)P(s)‖ = (µ(t)/µ(s))^{−a} µ(s)^ε and ‖U_Q(s, t)Q(t)‖ ≤ (µ(t)/µ(s))^{−b} µ(s)^ε ≤ (µ(t)/µ(s))^{−b} µ(t)^ε,
for t ≥ s ≥ 0, which shows that the evolution operator U has a nonuniform µ-dichotomy with projection valued function P. We will now show that for ε > 0 the nonuniform µ-dichotomy with projection valued function P is not a uniform µ-dichotomy with projection valued function P. Assume that U has a uniform µ-dichotomy with projection valued function P; then there exist ν > 0 and N ≥ 1 such that
‖U(t, s)P(s)‖ ≤ N (µ(t)/µ(s))^{−ν}, for all t ≥ s ≥ 0,
which is equivalent to
(µ(t)/µ(s))^{−a} µ(s)^ε ≤ N (µ(t)/µ(s))^{−ν}, for all t ≥ s ≥ 0. (2)
Setting t = s in (2) we have
µ(s)^ε ≤ N for all s ≥ 0
and this is absurd because lim t→+∞ µ(t) = +∞. Therefore, when ε > 0 the evolution operator U does not have a uniform µ-dichotomy with projection valued function P .
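As an illustrative numerical spot check of Example 2.8 (not part of the argument), one can fix a concrete growth rate and constants and verify the two dichotomy estimates directly, using the fact that for the max-norm on R^2 the induced operator norm of a 2x2 matrix is its largest absolute row sum; the choices µ(t) = e^t, a = 1, b = 2, ε = 0.5 below are arbitrary.

```python
import numpy as np

def mu(t):
    return np.exp(t)          # any growth rate works for this spot check

a, b, eps = 1.0, 2.0, 0.5     # arbitrary constants with a, b > 0 and eps >= 0

def P(t):
    return np.array([[1.0, mu(t)**eps - 1.0], [0.0, 0.0]])

def Q(t):
    return np.array([[0.0, 1.0 - mu(t)**eps], [0.0, 1.0]])

def U_P(t, s):                # U(t,s)P(s) = (mu(t)/mu(s))^(-a) P(s)
    return (mu(t) / mu(s))**(-a) * P(s)

def UQ_inv(s, t):             # U_Q(s,t)Q(t) = (mu(t)/mu(s))^(-b) Q(s)
    return (mu(t) / mu(s))**(-b) * Q(s)

def op_norm(M):               # induced norm for the max-norm: largest absolute row sum
    return np.linalg.norm(M, ord=np.inf)

for s, t in [(0.0, 1.0), (1.0, 3.0), (2.0, 2.5)]:
    assert op_norm(U_P(t, s)) <= (mu(t)/mu(s))**(-a) * mu(s)**eps + 1e-12
    assert op_norm(UQ_inv(s, t)) <= (mu(t)/mu(s))**(-b) * mu(t)**eps + 1e-12
print("dichotomy bounds hold on the sampled (s, t) pairs")
```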
The main results
For a given evolution operator U : ∆ → B(X) and a projection valued function P : R + 0 → B(X) compatible with U , we denote the Green function associated to the evolution operator U and the projection valued function P compatible with U by
G(t, s) := U(t, s)P(s) for t > s ≥ 0, and G(t, s) := −U_Q(t, s)Q(s) for s > t ≥ 0.
We have the following result.
Theorem 3.1. Let p > 0 and µ : R_0^+ → [1, +∞) be a differentiable growth rate. If the evolution operator U : ∆ → B(X) has a nonuniform µ-dichotomy with a dichotomy projection valued function P : R_0^+ → B(X), then for every positive constant γ < min{a, b} it follows that
∫_0^{+∞} (µ′(τ)/µ(τ)) (µ(τ)/µ(t))^{pγ sign(τ−t)} ‖G(τ, t)x‖^p dτ ≤ D µ(t)^{pε} ‖x‖^p, (3)
for every (t, x) ∈ R_0^+ × X, where
D = N_1^p / (p(a − γ)) + N_2^p / (p(b − γ)). (4)
Proof. For (t, x) ∈ R + 0 × X and 0 < γ < min{a, b}, we have
+∞ 0 µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ sign(τ −t) G(τ, t)x p dτ = +∞ t µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ U (τ, t)P (t)x p dτ + t 0 µ ′ (τ ) µ(τ ) µ(t) µ(τ ) pγ U Q (τ, t)Q(t)x p dτ ≤ N p 1 +∞ t µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ µ(τ ) µ(t) −pa µ(t) pε x p dτ + N p 2 t 0 µ ′ (τ ) µ(τ ) µ(t) µ(τ ) pγ µ(t) µ(τ ) −pb µ(t) pε x p dτ = N p 1 µ(t) pε µ(t) p(a−γ) +∞ t µ(τ ) −p(a−γ)−1 µ ′ (τ )dτ x p + N p 2 µ(t) pε µ(t) −p(b−γ) t 0 µ(τ ) p(b−γ)−1 µ ′ (τ )dτ x p ≤ N p 1 µ(t) pε µ(t) p(a−γ) µ(t) −p(a−γ) p(a − γ) x p + N p 2 µ(t) pε µ(t) −p(b−γ) µ(t) p(b−γ) p(b − γ) x p = Dµ(t) pε x p ,
and this proves the result.
The following result is a partial converse of Theorem 3.1 and it can be considered a Datko type theorem [12,14] for the existence of nonuniform µ-dichotomy.
Theorem 3.2. Let µ : R + 0 → [1, +∞) be a differentiable growth rate such that
K_µ := sup_{t≥0} µ′(t)/µ(t) < +∞. (5)
Assume that U : ∆ → B(X) is an evolution operator and P : R + 0 → B(X) is a projection valued function compatible with U such that
‖G(t, s)‖ ≤ M (µ′(s)/µ(s)) (µ(t)/µ(s))^{ω sign(t−s)} µ(s)^α, for t, s ≥ 0, t ≠ s, (6)
for some ω > 0, α ≥ 0 and M ≥ 1. If (3) holds for some p ≥ 1, γ > α, ε ≥ 0 and D > 0, then U has a nonuniform µ-dichotomy with projection valued function P .
Proof. Let x ∈ X. By (6), (5) and Lemma 2.3, if t ≥ s + 1, we have
U (t, s)P (s)x p = t t−1 U (t, s)P (s)x p dτ ≤ M p t t−1 µ ′ (τ ) µ(τ ) p µ(t) µ(τ ) pω µ(τ ) pα U (τ, s)P (s)x p dτ ≤ M p K p−1 µ µ(t) pα × t t−1 µ ′ (τ ) µ(τ ) µ(t) µ(τ ) pω µ(τ ) µ(s) −pγ µ(τ ) µ(s) pγ U (τ, s)P (s)x p dτ = M p K p−1 µ µ(t) pα µ(t) µ(s) −pγ × t t−1 µ ′ (τ ) µ(τ ) µ(t) µ(τ ) p(ω+γ) µ(τ ) µ(s) pγ U (τ, s)P (s)x p dτ ≤ M p K p−1 µ e Kµ(ω+γ)p µ(t) pα µ(t) µ(s) −pγ × t t−1 µ ′ (τ ) µ(τ ) µ(τ ) µ(s) pγ U (τ, s)P (s)x p dτ ≤ M p K p−1 µ e Kµ(ω+γ)p µ(t) pα µ(t) µ(s) −pγ × ∞ s µ ′ (τ ) µ(τ ) µ(τ ) µ(s) pγ U (τ, s)P (s)x p dτ ≤ DM p K p−1 µ e Kµ(ω+γ)p µ(t) pα µ(t) µ(s) −pγ µ(s) pε x p = DM p K p−1 µ e Kµ(ω+γ)p µ(t) µ(s) −p(γ−α) µ(s) p(ε+α) x p(7)
and, if s ≤ t < s + 1, we get
U (t, s)P (s)x ≤ M µ ′ (s) µ(s) µ(t) µ(s) ω µ(s) α x ≤ M K µ e Kµ(ω+γ−α) µ(t) µ(s) −(γ−α) µ(s) ε+α x .(8)
On the other hand, for t ≥ s + 1, we have
U Q (s, t)Q(t)x p = s+1 s U Q (s, t)Q(t)x p dτ ≤ M p s+1 s µ ′ (τ ) µ(τ ) p µ(τ ) µ(s) pω µ(τ ) pα U Q (τ, t)Q(t)x p dτ ≤ M p K p−1 µ µ(s + 1) pα × s+1 s µ ′ (τ ) µ(τ ) µ(τ ) µ(s) pω µ(t) µ(τ ) −pγ µ(t) µ(τ ) pγ U Q (τ, t)Q(t)x p dτ ≤ M p K p−1 µ e Kµαp µ(s) pα µ(t) µ(s) −pγ × s+1 s µ ′ (τ ) µ(τ ) µ(τ ) µ(s) p(ω+γ) µ(t) µ(τ ) pγ U Q (τ, t)Q(t)x p dτ ≤ M p K p−1 µ e Kµ(α+ω+γ)p µ(s) pα µ(t) µ(s) −pγ × s+1 s µ ′ (τ ) µ(τ ) µ(t) µ(τ ) pγ U Q (τ, t)Q(t)x p dτ ≤ M p K p−1 µ e Kµ(α+ω+γ)p µ(s) pα µ(t) µ(s) −pγ × t 0 µ ′ (τ ) µ(τ ) µ(t) µ(τ ) pγ U Q (τ, t)Q(t)x p dτ ≤ DM p K p−1 µ e Kµ(α+ω+γ)p µ(s) pα µ(t) µ(s) −pγ µ(t) pε x p = DM p K p−1 µ e Kµ(α+ω+γ)p µ(t) µ(s) −p(γ+α) µ(t) p(ε+α) x p(9)
and, for s ≤ t < s + 1, we get
U Q (s, t)Q(t)x ≤ M µ ′ (t) µ(t) µ(t) µ(s) ω µ(t) α x ≤ M K µ e Kµ(ω+γ+α) µ(t) µ(s) −(γ+α) µ(t) ε+α x .(10)
By (7), (8), (9) and (10) we obtain the result.
In the particular case when µ(t) = e^t, we recover Theorem 1 in [14]: If there exist p ≥ 1, γ > α, ε ≥ 0 and D > 0 such that
∫_0^{+∞} e^{pγ|τ−t|} ‖G(τ, t)x‖^p dτ ≤ D e^{pεt} ‖x‖^p,
for every (t, x) ∈ R + 0 × X, then U has a nonuniform exponential dichotomy with projection valued function P .
A similar result to the one above can be obtained in the case of nonuniform polynomial dichotomy: Corollary 3.4. We assume that U : ∆ → B(X) is an evolution operator and P : R + 0 → B(X) is a projection valued function compatible with U such that there exist constants ω > 0, α ≥ 0 and M ≥ 1 with
‖G(t, s)‖ ≤ M (1/(s + 1)) ((t + 1)/(s + 1))^{ω sign(t−s)} (s + 1)^α, for t, s ≥ 0, t ≠ s.
If there exist p ≥ 1, γ > α, ε ≥ 0 and D > 0 such that
∫_0^{+∞} (1/(τ + 1)) ((τ + 1)/(t + 1))^{pγ sign(τ−t)} ‖G(τ, t)x‖^p dτ ≤ D (t + 1)^{pε} ‖x‖^p,
for every (t, x) ∈ R + 0 × X, then U has a nonuniform polynomial dichotomy with projection valued function P .
In the following we consider an evolution operator that has a nonuniform µ-dichotomy for a given growth rate µ, different from both exponential and polynomial functions.
Example 3.5. Let µ(t) = t + √ t 2 + 1, t ≥ 0. Given a, b > 1 and α ≥ 0 with α + 1 < min{a, b}, we consider the evolution operator U : ∆ → B(R 2 ),
U (t, s)(x 1 , x 2 ) = (U 1 (t, s)x 1 , U 2 (t, s)x 2 ) , where U 1 (t, s)x 1 = µ ′ (s) µ ′ (t) µ(t) µ(s) −a e α sin 2 s log µ(s)−α sin 2 t log µ(t) x 1 , U 2 (t, s)x 2 = µ ′ (s) µ ′ (t) µ(s) µ(t) −b e α sin 2 t log µ(t)−α sin 2 s log µ(s) x 2 .
Obviously µ is a differentiable growth rate such that
µ′(t) = 1 + t/√(t² + 1) ≥ 1 and µ′(t)/µ(t) = 1/√(t² + 1) ≤ 1, for all t ≥ 0. (11)
Using Theorem 3.2, we will prove that U has a nonuniform µ-dichotomy with the projection valued function P (t)(x 1 , x 2 ) = (x 1 , 0). Indeed, by (11), we have that
‖G(t, s)‖ ≤ (µ′(s)/µ(s)) (µ(t)/µ(s))^{ω sign(t−s)} µ(s)^{α+1}, for t, s ≥ 0, t ≠ s,
for each ω > 0.
Furthermore, proceeding in a similar manner to the proof of Theorem 3.1, we obtain
∫_0^{+∞} (µ′(τ)/µ(τ)) (µ(τ)/µ(t))^{γ sign(τ−t)} ‖G(τ, t)x‖ dτ ≤ D µ(t)^{α+1} ‖x‖, for (t, x) ∈ R_0^+ × R^2,
for γ ∈ (α + 1, min{a, b}) and D = 1/(a − γ) + 1/(b − γ),
, which shows that U has a nonuniform µ-dichotomy with projection valued function P . such that
‖H(t)x‖ ≤ (µ′(t)/µ(t))^{1/p} µ(t)^γ ‖P(t)x‖ + (µ′(t)/µ(t))^{1/p} µ(t)^{−γ} ‖Q(t)x‖, (12)
for every t ≥ 0 and every x ∈ X.
Theorem 3.6. Let p ≥ 1 and µ : R + 0 → [1, +∞) be a differentiable growth rate. If the evolution operator U : ∆ → B(X) has a nonuniform µ-dichotomy with a dichotomy projection valued function P : R + 0 → B(X), then for every positive constant γ < min{a, b} and every H ∈ H µ γ,p (P ) there is a function L :
R_0^+ × X → R such that, for all (t, s, x) ∈ ∆ × X, we have (i) L(t, U(t, s)x) + ∫_s^t ‖H(τ)U(τ, s)x‖^p dτ ≤ L(s, x); (ii) L(t, P(t)x) ≥ 0 and L(t, Q(t)x) ≤ 0; (iii) µ(t)^{−pγ} L(t, P(t)x) − µ(t)^{pγ} L(t, Q(t)x) ≤ 2^{p−1} D µ(t)^{pε} ‖x‖^p;
where the constant D is given by (4).
Proof. Let
L(t, x) = 2 p−1 +∞ t H(τ )U (τ, t)P (t)x p dτ − t 0 H(r)U Q (τ, t)Q(t)x p dτ .
We have
L(t, U (t, s)x) + t s H(τ )U (τ, s)x p dτ = 2 p−1 +∞ t H(τ )U (τ, s)P (s)x p dτ − 2 p−1 t 0 H(τ )U Q (τ, t)Q(t)U (t, s)x p dτ + t s H(τ )U (τ, s)x p dτ.(13)
We first compute
t 0 H(τ )U Q (τ, t)Q(t)U (t, s)x p dτ = s 0 H(τ )U Q (τ, t)Q(t)U (t, s)x p dτ + t s H(τ )U Q (τ, t)Q(t)U (t, s)x p dτ = s 0 H(τ )U Q (τ, s)Q(s)x p dτ + t s H(τ )U (τ, s)Q(s)x p dτ.
On the other hand, using the inequality
x + y p ≤ 2 p−1 x p + 2 p−1 y p , for x, y ∈ X, it follows that
Now, we have, by (13) and (14),
L(t, U (t, s)x) + t s H(τ )U (τ, s)x p dτ ≤ 2 p−1 +∞ s H(τ )U (τ, s)P (s)x p dτ − 2 p−1 s 0 H(τ )U Q (τ, s)Q(s)x p dτ = L(s, x),
for all (t, s, x) ∈ ∆ × X. Clearly L(t, P (t)x) ≥ 0 and L(t, Q(t)x) ≤ 0. Moreover, by Theorem 3.1, we deduce that
µ(t) −pγ L(t, P (t)x) − µ(t) pγ L(t, Q(t)x) = 2 p−1 µ(t) −pγ +∞ t H(τ )U (τ, t)P (t)x p dτ + 2 p−1 µ(t) pγ t 0 H(τ )U Q (τ, t)Q(t)x p dτ ≤ 2 p−1 +∞ 0 µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ sign(τ −t) G(τ, t)x p dτ ≤ 2 p−1 Dµ(t) pε x p ,
for all (t, x) ∈ R + 0 × X. This ends the proof. Theorem 3.7. Let µ : R + 0 → [1, +∞) be a differentiable growth rate that verifies (5). Assume that U : ∆ → B(X) is an evolution operator and P : R + 0 → B(X) is a projection valued function compatible with U such that (6) holds. If for some p ≥ 1, γ > α and every H ∈ H µ γ,p (P ) there is a function L : R + 0 × X → R that satisfies conditions (i)-(iii) in Theorem 3.6 for some ε ≥ 0 and D > 0, then U has a nonuniform µ-dichotomy with projection valued function P .
Proof. Let
H(t)x = µ ′ (t) µ(t) 1/p µ(t) γ P (t)x + µ ′ (t) µ(t) 1/p µ(t) −γ Q(t)x.
Then H ∈ H µ γ,p (P ) and, by (i) and (ii) in Theorem 3.6, we have
u t µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ U (τ, t)P (t)x p dτ = µ(t) −pγ u t H(τ )U (τ, t)P (t)x p dτ ≤ µ(t) −pγ [L(t, P (t)x) − L(u, U (u, t)P (t)x)]
≤ µ(t) −pγ L(t, P (t)x), for every u ≥ t, which implies
+∞ t µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ U (τ, t)P (t)x p dτ ≤ µ(t) −pγ L(t, P (t)x).(15)
On the other hand, ≤ µ(t) pγ |L(t, Q(t)x)|.
By (15), (16) and using (iii) in Theorem 3.6, it follows
+∞ 0 µ ′ (τ ) µ(τ ) µ(τ ) µ(t) pγ sign(τ −t)
G(τ, t)x p dτ ≤ µ(t) −pγ L(t, P (t)x) − µ(t) pγ L(t, Q(t)x)
≤ 2 p−1 Dµ(t) pε x p .
By Theorem 3.2 we deduce now that U has a nonuniform µ-dichotomy with projection valued function P .
If X is a Hilbert space, we obtain the following result:
Corollary 3.8. Let µ : R + 0 → [1, +∞) be a differentiable growth rate that verifies (5). Assume that U : ∆ → B(X) is an evolution operator on a Hilbert space X and P : R + 0 → B(X) is a projection valued function compatible with U such that (6) holds. If for some γ > α and every H ∈ H µ γ,2 (P ) there is a strongly continuous operator valued function W : R + 0 → B(X) with W (t) * = W (t), which satisfies for (t, s, x) ∈ ∆ × X, U (t, s) * W (t)U (t, s)x + t s U (τ, s) * H(τ ) * H(τ )U (τ, s)x dτ, x ≤ W (s)x, x and, for (t, x) ∈ R + 0 × X, verifies W (t)P (t)x, P (t)x ≥ 0, W (t)Q(t)x, Q(t)x ≤ 0 and µ(t) −2γ W (t)P (t)x, P (t)x − µ(t) 2γ W (t)Q(t)x,Q(t)x ≤ Dµ(t) 2ε x 2 , for some ε ≥ 0 and D > 0, then U has a nonuniform µ-dichotomy with projection valued function P .
Proof. It follows from Theorem 3.7 for p = 2, setting L(t, x) = W (t)x, x , for (t, x) ∈ R + 0 × X.
Corollary 3. 3 .
3Assume that U : ∆ → B(X) is an evolution operator and P : R + 0 → B(X) is a projection valued function compatible with U such that there exist constants ω > 0, α ≥ 0 and M ≥ 1 with G(t, s) ≤ M e αs e ω|t−s| , for t, s ≥ 0, t = s.
Corollary 3. 3 ,
3Corollary 3.4 and Example 3.5 lead to the conclusion that the assumption (6) in Theorem 3.2 is not to restrictive. Given a differentiable growth rate µ : R + 0 → [1, +∞), a projection valued function P : R + 0 → B(X) and constants γ > 0 and p ≥ 1, we denote by H µ γ,p (P ) the set of all strongly continuous operator-valued functions H : R + 0 → B(X)
τ )U (τ, s)P (s)x + H(τ )U (τ, s)Q(s)x p dτ τ )U (τ, s)Q(s)x p dτ.
H
(τ )U Q (τ, t)Q(t)x p dτ ≤ µ(t) pγ [L(0, U Q (0, t)Q(t)x) − L(t, Q(t)x)]
Acknowledgments. The work of A. Bento and C.
On (h, k)-dichotomies for nonautonomous linear difference equations in Banach spaces. M G Babuţia, M Megan, I.-L Popa, 10.1155/2013/761680ID 761680Int. J. Differ. Equ. 7M. G. Babuţia, M. Megan, I.-L. Popa, On (h, k)-dichotomies for nonautonomous linear difference equations in Banach spaces, Int. J. Differ. Equ. (2013) Art. ID 761680, 7 pages. URL http://dx.doi.org/10.1155/2013/761680
Lyapunov functions for general nonuniform dichotomies. L Barreira, J Chu, C Valls, 10.1007/s00032-013-0198-yMilan J. Math. 811L. Barreira, J. Chu, C. Valls, Lyapunov functions for general nonuniform dichotomies, Milan J. Math. 81 (1) (2013) 153-169. URL http://dx.doi.org/10.1007/s00032-013-0198-y
L Barreira, C Valls, 10.3934/dcds.2008.22.509Growth rates and nonuniform hyperbolicity. 22L. Barreira, C. Valls, Growth rates and nonuniform hyperbolicity, Discrete Contin. Dyn. Syst. 22 (3) (2008) 509-528. URL http://dx.doi.org/10.3934/dcds.2008.22.509
L Barreira, C Valls, 10.1007/978-3-540-74775-8Stability of nonautonomous differential equations. BerlinSpringer1926L. Barreira, C. Valls, Stability of nonautonomous differential equations, vol. 1926 of Lecture Notes in Mathematics, Springer, Berlin, 2008. URL http://dx.doi.org/10.1007/978-3-540-74775-8
Polynomial growth rates. L Barreira, C Valls, 10.1016/j.na.2009.04.005Nonlinear Anal. 7111L. Barreira, C. Valls, Polynomial growth rates, Nonlinear Anal. 71 (11) (2009) 5208- 5219. URL http://dx.doi.org/10.1016/j.na.2009.04.005
A J G Bento, C Silva, Nonautonomous equations, generalized dichotomies and stable manifolds. A. J. G. Bento, C. Silva, Nonautonomous equations, generalized dichotomies and stable manifolds, ArXiv e-prints. URL http://arxiv.org/abs/0905.4935
Stable manifolds for nonuniform polynomial dichotomies. A J G Bento, C Silva, 10.1016/j.jfa.2009.01.032J. Funct. Anal. 2571A. J. G. Bento, C. Silva, Stable manifolds for nonuniform polynomial dichotomies, J. Funct. Anal. 257 (1) (2009) 122-148. URL http://dx.doi.org/10.1016/j.jfa.2009.01.032
António J. G. Bento, Departamento de Matemática, Universidade da Beira Interior, 6201-001 Covilhã, Portugal. E-mail address: [email protected]
Nicolae Lupa, Department of Mathematics, "Politehnica" University of Timişoara, Victoriei Square 2, 300006 Timişoara, Romania. E-mail address: [email protected]
Mihail Megan, Academy of Romanian Scientists, Independenţei 54, 050094 Bucharest, Romania. Current address: Faculty of Mathematics and Computer Science, West University of Timişoara, Blvd. V. Pârvan 4, 300223 Timişoara, Romania. E-mail address: [email protected]
César M. Silva, Departamento de Matemática, Universidade da Beira Interior, 6201-001 Covilhã, Portugal. E-mail address: [email protected]
| []
|
[
"THE MAXIMUM SIZE OF A PARTIAL SPREAD II: UPPER BOUNDS",
"THE MAXIMUM SIZE OF A PARTIAL SPREAD II: UPPER BOUNDS"
]
| [
"Esmeralda Nȃstase \nMATHEMATICS DEPARTMENT\nILLINOIS STATE UNIVERSITY NORMAL\nXAVIER UNIVERSITY CINCINNATI\n45207, 61790OHIO, ILLINOISUSA, USA\n"
]
| [
"MATHEMATICS DEPARTMENT\nILLINOIS STATE UNIVERSITY NORMAL\nXAVIER UNIVERSITY CINCINNATI\n45207, 61790OHIO, ILLINOISUSA, USA"
]
| []
| Let n and t be positive integers with t < n, and let q be a prime power. A partial (t − 1)-spread of PG(n − 1, q) is a set of (t − 1)-dimensional subspaces of PG(n − 1, q) that are pairwise disjoint. Let r ≡ n (mod t) with 0 ≤ r < t, and let Θi = (q i − 1)/(q − 1). We essentially prove that if 2 ≤ r < t ≤ Θr, then the maximum size of a partial (t − 1)-spread of PG(n − 1, q) is bounded from above by (Θn − Θt+r)/Θt + q r − (q − 1)(t − 3) + 1. We actually give tighter bounds when certain divisibility conditions are satisfied. These bounds improve on the previously known upper bound for the maximum size partial (t − 1)-spreads of PG(n − 1, q); for instance, when ⌈ Θr 2 ⌉ + 4 ≤ t ≤ Θr and q > 2. The exact value of the maximum size partial (t − 1)-spread has been recently determined for t > Θr by the authors of this paper (see ). | 10.1016/j.disc.2017.02.001 | [
"https://arxiv.org/pdf/1606.09208v2.pdf"
]
| 27,906,579 | 1606.09208 | 318e270e6928c3726652fc297dd9abe81d13ebc1 |
THE MAXIMUM SIZE OF A PARTIAL SPREAD II: UPPER BOUNDS
3 Jul 2017
Esmeralda Nȃstase
MATHEMATICS DEPARTMENT, XAVIER UNIVERSITY, CINCINNATI, OHIO 45207, USA
MATHEMATICS DEPARTMENT, ILLINOIS STATE UNIVERSITY, NORMAL, ILLINOIS 61790, USA
THE MAXIMUM SIZE OF A PARTIAL SPREAD II: UPPER BOUNDS
3 Jul 2017. arXiv:1606.09208v2 [math.CO]. Keywords: Galois geometry, partial spreads, subspace partitions, subspace codes. Mathematics Subject Classification: 51E23, 05B25, 94B25.
Let n and t be positive integers with t < n, and let q be a prime power. A partial (t − 1)-spread of PG(n − 1, q) is a set of (t − 1)-dimensional subspaces of PG(n − 1, q) that are pairwise disjoint. Let r ≡ n (mod t) with 0 ≤ r < t, and let Θi = (q i − 1)/(q − 1). We essentially prove that if 2 ≤ r < t ≤ Θr, then the maximum size of a partial (t − 1)-spread of PG(n − 1, q) is bounded from above by (Θn − Θt+r)/Θt + q r − (q − 1)(t − 3) + 1. We actually give tighter bounds when certain divisibility conditions are satisfied. These bounds improve on the previously known upper bound for the maximum size partial (t − 1)-spreads of PG(n − 1, q); for instance, when ⌈ Θr 2 ⌉ + 4 ≤ t ≤ Θr and q > 2. The exact value of the maximum size partial (t − 1)-spread has been recently determined for t > Θr by the authors of this paper (see ).
Introduction
Let n and t be positive integers with t < n, and let q be a prime power. Let PG(n − 1, q) denote the (n − 1)-dimensional projective space over the finite field F q . A partial (t − 1)-spread S of PG(n − 1, q) is a collection of (t − 1)-dimensional subspaces of PG(n − 1, q) that are pairwise disjoint. If S contains all the points of PG(n − 1, q), then it is called a (t − 1)-spread. It is well-known that a (t − 1)-spread of PG(n − 1, q) exists if and only if t divides n (e.g., see [3, p. 29]). Besides their traditional relevance to Galois geometry [6,11,13,17], partial (t − 1)-spreads are used to build byte-correcting codes (e.g., see [7,16]), 1-perfect mixed error-correcting codes (e.g., see [15,16]), orthogonal arrays (e.g., see [4]), and subspace codes (e.g., see [8,10,18]).
Convention: For the rest of the paper, we assume that q is a prime power, and n, t, and r are integers such that n > t > r ≥ 0 and r ≡ n (mod t). We also use µ q (n, t) to denote the maximum size of any partial (t − 1)-spread of PG(n − 1, q).
The problem of determining µ_q(n, t) is a long-standing open problem. Currently, the best general upper bound for µ_q(n, t) is given by the following theorem of Drake and Freeman [4].

Theorem 1. If r > 0, then µ_q(n, t) ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − ⌊ω⌋ − 1, where 2ω = √(4q^t(q^t − q^r) + 1) − (2q^t − 2q^r + 1).
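To make the bound easy to evaluate for small parameters, the following Python sketch (illustrative only; the helper names theta and drake_freeman_bound are our own, not from [4]) computes the upper bound of Theorem 1 exactly with integer arithmetic.

import math

def theta(q, i):
    # Theta_i = (q^i - 1)/(q - 1), the number of points of PG(i-1, q).
    return (q**i - 1) // (q - 1)

def drake_freeman_bound(q, n, t):
    # Upper bound of Theorem 1 on mu_q(n, t), valid when r = n mod t > 0.
    r = n % t
    assert 0 < r < t
    # 2*omega = sqrt(4 q^t (q^t - q^r) + 1) - (2 q^t - 2 q^r + 1);
    # floor(omega) can be computed exactly with an integer square root.
    disc = 4 * q**t * (q**t - q**r) + 1
    floor_omega = (math.isqrt(disc) - (2 * q**t - 2 * q**r + 1)) // 2
    return (q**n - q**(t + r)) // (q**t - 1) + q**r - floor_omega - 1

# For example, q = 2, n = 8, t = 3 gives 34.
print(drake_freeman_bound(2, 8, 3))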
The following result is attributed to André [1] and Segre [22] for r = 0. For r = 1, it is due to Hong and Patel [16] when q = 2, and Beutelspacher [2] when q > 2.
Theorem 2. If 0 ≤ r < t, then µ_q(n, t) ≥ (q^n − q^{t+r})/(q^t − 1) + 1, and equality holds if r ∈ {0, 1}.

In light of Theorem 2, it was later conjectured (e.g., see [5,16]) that the value of µ_q(n, t) is given by the lower bound in Theorem 2. However, this conjecture was disproved by El-Zanati, Jordon, Seelinger, Sissokho, and Spence [9], who proved the following result.

Theorem 3. If n ≥ 8 and n mod 3 = 2, then µ_2(n, 3) = (2^n − 2^5)/(2^3 − 1) + 2.

Recently, Kurz [19] proved the following theorem, which upholds the lower bound for µ_q(n, t) when q = 2, r = 2, and t > 3.
Theorem 4. If n > t > 3 and n mod t = 2, then µ_2(n, t) = (2^n − 2^{t+2})/(2^t − 1) + 1.

For any integer i ≥ 1, let
(1) Θ_i = (q^i − 1)/(q − 1).
Still recently, the authors of this paper affirmed the conjecture (e.g., see [5,16]) on the value of µ q (n, t) for t > Θ r and any prime power q, by proving the following general result (see [21]).
Theorem 5. If t > Θ_r, then µ_q(n, t) = (q^n − q^{t+r})/(q^t − 1) + 1.

In light of Theorem 5, it remains to determine the value of µ_q(n, t) for 2 ≤ r < t ≤ Θ_r. In this paper, we apply the hyperplane averaging method that we devised in [21] to prove the following results.¹ The rest of the paper is devoted to their proofs.
Theorem 6. Let c_1 ≡ (t − 2) (mod q) with 0 ≤ c_1 < q, and let c_2 = q if q^2 | ((q − 1)(t − 2) + c_1), and c_2 = 0 if q^2 ∤ ((q − 1)(t − 2) + c_1). If 2 ≤ r < t ≤ Θ_r, then
µ_q(n, t) ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 2) − c_1 + c_2.
Consequently,
µ_q(n, t) ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 3) + 1.
Remark 7. The best possible bound in Theorem 6 is obtained when t ≡ aq + 1 (mod q^2), 1 ≤ a ≤ q − 1 (equivalently, when t ≡ 1 (mod q) but t ≢ 1 (mod q^2)). In this case, we can check that c_1 = q − 1 and c_2 = 0, which implies that
µ_q(n, t) ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 1).
This was already noted in [21, Lemma 10 and Remark 11] for r ≥ 2 and t = Θ_r = (q^r − 1)/(q − 1).
Corollary 8. Let f_q(n, t) denote the upper bound for µ_q(n, t) in Theorem 1 and let g_q(n, t) denote the upper bound for µ_q(n, t) in Theorem 6. Let c_1 and c_2 be as defined in Theorem 6. If r ≥ 2 and 2r ≤ t ≤ Θ_r, then
g_q(n, t) − f_q(n, t) = ⌊q^r/2⌋ − (q − 1)(t − 2) − c_1 + c_2.
Consequently, for ⌈Θ_r/2⌉ + 4 ≤ t ≤ Θ_r with q > 2, and for ⌈Θ_r/2⌉ + 5 ≤ t ≤ Θ_r with q = 2, we have g_q(n, t) − f_q(n, t) < 0, and thus the upper bound for µ_q(n, t) given in Theorem 6 is tighter than the Drake-Freeman bound in Theorem 1.
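As a quick numerical illustration of Corollary 8 (not part of the paper), the following Python sketch compares the bound of Theorem 6 with the Drake-Freeman bound; it assumes the function drake_freeman_bound from the sketch after Theorem 1, and the name new_upper_bound is our own.

def new_upper_bound(q, n, t):
    # Upper bound of Theorem 6 on mu_q(n, t), intended for 2 <= r < t <= Theta_r.
    r = n % t
    c1 = (t - 2) % q
    c2 = q if ((q - 1) * (t - 2) + c1) % q**2 == 0 else 0
    return (q**n - q**(t + r)) // (q**t - 1) + q**r - (q - 1) * (t - 2) - c1 + c2

# For q = 3, r = 3 (so Theta_r = 13), t = 12 and n = 27, the hypothesis of
# Corollary 8 holds, so the difference below should be negative.
q, t, n = 3, 12, 27
print(new_upper_bound(q, n, t) - drake_freeman_bound(q, n, t))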
In Section 2, we present some auxiliary results from the area of subspace partitions, and in Section 3 we prove Theorem 6 and Corollary 8.
Subspace partitions
Let V = V (n, q) denote the vector space of dimension n over F q . For any subspace U of V , let U * denote the set of nonzero vectors in U . A d-subspace of V (n, q) is a d-dimensional subspace of V (n, q); this is equivalent to a (d − 1)-subspace in PG(n − 1, q).
A subspace partition P of V , also known as a vector space partition, is a collection of nontrivial subspaces of V such that each vector of V * is in exactly one subspace of P (e.g., see Heden [13] for a survey on subspace partitions). The size of a subspace partition P, denoted by |P|, is the number of subspaces in P.
Suppose that there are s distinct integers, d_s > · · · > d_1, that occur as dimensions of subspaces in a subspace partition P, and let n_i denote the number of i-subspaces in P. Then the expression [d_s^{n_{d_s}}, . . . , d_1^{n_{d_1}}] is called the type of P.

Remark 9. A partial (t − 1)-spread of PG(n − 1, q) of size n_t is a partial t-spread of V(n, q) of size n_t. This is equivalent to a subspace partition of V(n, q) of type [t^{n_t}, 1^{n_1}], where n_1 = Θ_n − n_t Θ_t. We will use this subspace partition formulation in the proof of Lemma 14.
Also, we will use the following theorem due to Heden [12] in the proof of Lemma 14.
Theorem 10. [12, Theorem 1] Let P be a subspace partition of V(n, q) of type [d_s^{n_{d_s}}, . . . , d_1^{n_{d_1}}], where d_s > . . . > d_1. Then,
(i) if q^{d_2 − d_1} does not divide n_{d_1} and if d_2 < 2d_1, then n_{d_1} ≥ q^{d_1} + 1;
(ii) if q^{d_2 − d_1} does not divide n_{d_1} and d_2 ≥ 2d_1, then either n_{d_1} = (q^{d_2} − 1)/(q^{d_1} − 1) or n_{d_1} > 2q^{d_2 − d_1};
(iii) if q^{d_2 − d_1} divides n_{d_1} and d_2 < 2d_1, then n_{d_1} ≥ q^{d_2} − q^{d_1} + q^{d_2 − d_1};
(iv) if q^{d_2 − d_1} divides n_{d_1} and d_2 ≥ 2d_1, then n_{d_1} ≥ q^{d_2}.
To state the next lemmas, we need the following definitions. Recall that for any integer i ≥ 1,
Θ_i = (q^i − 1)/(q − 1).
Then, for i ≥ 1, Θ_i is the number of 1-subspaces in an i-subspace of V(n, q). Let P be a subspace partition of V = V(n, q) of type [d_s^{n_{d_s}}, . . . , d_1^{n_{d_1}}]. For any hyperplane H of V, let b_{H,d} be the number of d-subspaces in P that are contained in H, and set b_H = [b_{H,d_s}, . . . , b_{H,d_1}]. Define the set B of hyperplane types as follows:
B = {b_H : H is a hyperplane of V}.
For any b ∈ B, let s_b denote the number of hyperplanes of V of type b.
We will also use Lemma 11 and Lemma 12 by Heden and Lehmann [14] in the proof of Lemma 14.

Lemma 11. [14, Equation (1)] Let P be a subspace partition of V(n, q) of type [d_s^{n_{d_s}}, . . . , d_1^{n_{d_1}}]. If H is a hyperplane of V(n, q) and b_{H,d} is as defined above, then
|P| = 1 + Σ_{i=1}^{s} b_{H,d_i} q^{d_i}.

Lemma 12. [14, Equation (2) and Corollary 5] Let P be a subspace partition of V(n, q), and let B and s_b be as defined above. Then Σ_{b∈B} s_b = Θ_n, and for 1 ≤ d ≤ n − 1, we have
Σ_{b∈B} b_d s_b = n_d Θ_{n−d}.
Proofs of the main results
Recall that q is a prime power, and n, t, and r are integers such that n > t > r ≥ 0, and r ≡ n (mod t). To prove our main result, we first need to prove the following two technical lemmas.
Lemma 13. Let x be an integer such that 0 < x < q^r. For any positive integer i, let δ_i = q^i · ⌈x q^{−i} Θ_i⌉ − x Θ_i. Then the following properties hold:
(i) ⌈x q^{−t} Θ_t⌉ = ⌈x/(q − 1)⌉;
(ii) for 1 ≤ i ≤ t, we have 0 ≤ δ_i < q^i, q | (x + δ_{i+1}), and δ_i = q^{−1}(x + δ_{i+1}) mod q^i;
(iii) δ_i = 0 if and only if q^i | x.
Proof. Let α and β be integers such that x = α(q − 1) + β, α ≥ 0, and 0 ≤ β < q − 1. Since 0 < x < q r and r < t hold by hypothesis, it follows that (2) 0 ≤ α < x < q r < q t and α(q − 1) ≤ x < q r < q t .
If β = 0, then by (2), we obtain
xq −t Θ t = α(q t − 1) q t = α − α q t = α = x q − 1 .(3)
Now suppose 1 ≤ β < q − 1. First, since β ≥ 1, it follows from (2) that
xq −t Θ t = [α(q − 1) + β](q t − 1) q t (q − 1) ≥ [α(q − 1) + 1](q t − 1) q t (q − 1) = α + (q t − 1) − α(q − 1) q t (q − 1) = α + 1.(4)
Second, since β < q − 1, it follows from (2) and the properties of the ceiling function that
xq −t Θ t = [α(q − 1) + β](q t − 1) q t (q − 1) ≤ (α + 1)(q t − 1) q t = α + 1 − α + 1 q t = α + 1.(5)
Then (4) and (5) imply that for 1 ≤ β < q − 1,
⌈xq −t Θ t ⌉ = α + 1 = x q − 1 ,
which completes the proof of (i).
We now prove (ii). Since 0 ≤ ⌈a⌉ − a < 1 holds for any real number a, we have
0 ≤ ⌈q −i xΘ i ⌉ − q −i xΘ i < 1 =⇒ δ i = q i ⌈xq −i Θ i ⌉ − xΘ i < q i and δ i ≥ 0.
By the definition of δ i , we have that
x + δ i+1 = x + q i+1 · ⌈xq −i−1 Θ i+1 ⌉ − xΘ i+1 = q(q i · ⌈xq −i−1 Θ i+1 ⌉ − xΘ i ),
and thus,
q −1 (x + δ i+1 ) ≡ q i · ⌈xq −i−1 Θ i+1 ⌉ − xΘ i ≡ −xΘ i ≡ q i · ⌈xq −i Θ i ⌉ − xΘ i ≡ δ i (mod q i ).(6)
Finally, we prove (iii). Since gcd(q i , Θ i ) = 1 for any positive integer i, we have
δ i = q i · ⌈xq −i Θ i ⌉ − xΘ i = 0 ⇐⇒ ⌈xq −i Θ i ⌉ = xq −i Θ i ⇐⇒ q i |x.
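As an informal sanity check on Lemma 13 (not part of the proof), the following Python sketch verifies properties (i)-(iii) by brute force for small parameters; the helper names are our own, and exact integer ceiling division is used to avoid floating-point error.

def theta(q, i):
    # Theta_i = (q^i - 1)/(q - 1).
    return (q**i - 1) // (q - 1)

def ceil_div(a, b):
    # Exact ceiling of a/b for positive integers.
    return -(-a // b)

def delta(q, x, i):
    # delta_i = q^i * ceil(x * q^(-i) * Theta_i) - x * Theta_i.
    return q**i * ceil_div(x * theta(q, i), q**i) - x * theta(q, i)

for q in (2, 3):
    for r in (2, 3):
        for t in (r + 1, r + 2):
            for x in range(1, q**r):
                # Property (i).
                assert ceil_div(x * theta(q, t), q**t) == ceil_div(x, q - 1)
                for i in range(1, t + 1):
                    d_i, d_next = delta(q, x, i), delta(q, x, i + 1)
                    # Property (ii).
                    assert 0 <= d_i < q**i
                    assert (x + d_next) % q == 0
                    assert d_i == ((x + d_next) // q) % q**i
                    # Property (iii).
                    assert (d_i == 0) == (x % q**i == 0)
print("Lemma 13 checks passed for the sampled parameters")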
We now prove our main lemma.

Lemma 14. Let x be a positive integer such that q | x and q^2 ∤ x. Let ℓ = (q^{n−t} − q^r)/(q^t − 1). If r ≥ 2 and t ≥ Θ_r − ⌈x/(q − 1)⌉ + 2, then µ_q(n, t) ≤ ℓq^t + x.
Proof. If x ≥ q r , then Theorem 1 implies the nonexistence of a partial t-spread of size ℓq t + x. Thus, we can assume that x < q r .
Recall that Θ i = (q i − 1)/(q − 1) for any integer i ≥ 1. For an integer i, with 2 ≤ i ≤ t, let
(7) δ i = q i · ⌈xq −i Θ i ⌉ − xΘ i .
Applying Lemma 13(i), we let
(8) h := ⌈q −t xΘ t ⌉ = x q − 1 .
The proof is by contradiction. So assume that µ q (n, t) > ℓq t + x. Then PG(n − 1, q) has a (t − 1)-partial spread of size ℓq t + 1 + x. Thus, it follows from Remark 9 that there exists a subspace partition P 0 of V (n, q) of type [t nt , 1 n 1 ], with n t = ℓq t + 1 + x, and
n 1 = q t Θ r − xΘ t = q t (Θ r − ⌈q −t xΘ t ⌉) + (q t ⌈q −t xΘ t ⌉ − xΘ t ) = q t (Θ r − h) + δ t ,(9)
where h is given by (8) and δ t is given by (7).
We will prove by induction that for each integer j with 0 ≤ j ≤ t − 2, there exists a subspace partition P j of H j ∼ = V (n − j, q) of type (10) [
t m j,t , (t − 1) m j,t−1 , . . . , (t − j) m j,t−j , 1 m j,1 ],
where m j,t , . . . , m j,t−j are nonnegative integers such that
(11) t i=t−j m j,i = n t = ℓq t + 1 + x,
and where m j,1 and c j are integers such that
(12) m j,1 = c j q t−j + δ t−j , and 0 ≤ c j ≤ max{Θ r − h − j, 0}.
The base case, j = 0, holds since P 0 is a subspace partition of H 0 = V (n, q) with type [t nt , 1 n 1 ], and letting m 0,t = n t and m 0,1 = n 1 , P 0 is of type given in (10), and it satisfies the properties given in (11) and (12). For the inductive step, suppose that for some j, with 0 ≤ j < t − 2, we have constructed a subspace partition P j of H j ∼ = V (n − j, q) of the type given in (10), and with the properties given in (11) and (12). We then use Lemma 12 to determine the average, b avg,1 , of the values b H,1 over all hyperplanes H of H j . We have
b avg,1 := m j,1 Θ n−1−j Θ n−j = c j q t−j + δ t−j q n−1−j − 1 q n−j − 1 < (c j q t−j + δ t−j )q −1 = c j q t−j−1 + q −1 δ t−j .(13)
It follows from (13) that there exists a hyperplane H j+1 of H j with (14) b
H j+1 ,1 ≤ b avg,1 < c j q t−j−1 + q −1 δ t−j .
Next, we apply Lemma 11 to the subspace partition P j and the hyperplane H j+1 of H j to obtain:
1 + b H j+1 ,1 q + t i=t−j b H j+1 ,i q i = |P j | = n t + m j,1 = ℓq t + 1 + x + c j q t−j + δ t−j ,(15)
where 0 ≤ c j ≤ max{Θ r − h − j, 0}. Simplifying (15) yields
b H j+1 ,1 + t i=t−j b H j+1 ,i q i−1 = ℓq t−1 + c j q t−j−1 + q −1 (x + δ t−j ).(16)
Then, it follows from Lemma 13(ii) and (16) that
(17) b H j+1 ,1 ≡ q −1 (x + δ t−j ) ≡ δ t−j−1 (mod q t−j−1 ).
Since 0 ≤ q −1 δ t−j < q t−j−1 by Lemma 13(ii), it follows from (14) and (17) that there exists a nonnegative integer c j+1 such that
b H j+1 ,1 = c j+1 q t−j−1 + δ t−j−1 and 0 ≤ c j+1 ≤ max{c j − 1, 0} ≤ max{Θ r − h − j − 1, 0}.(18)
Let P j+1 be the subspace partition of H j+1 defined by:
P j+1 = {W ∩ H j+1 : W ∈ P j },
and by the definition made in (18), let m j+1,1 = b H j+1 ,1 . Since t − j > 2 and dim(W ∩ H j+1 ) ∈ {dim W, dim W − 1} for each W ∈ P j , it follows that P j+1 is a subspace partition of H j+1 of type (19) [
t m j+1,t , (t − 1) m j+1,t−1 , . . . , (t − j − 1) m j+1,t−j−1 , 1 m j+1,1 ],
where m j+1,t , m j+1,t−1 , . . . , m j+1,t−j−1 are nonnegative integers such that
(20) t i=t−j−1 m j+1,i = t i=t−j m j,i = n t .
The inductive step follows since P j+1 is a subspace partition of H j+1 ∼ = V (n − j − 1, q) of the type given in (19), which satisfies the conditions in (18) and (20).
Thus far, we have shown that the desired subspace partition P_j of H_j exists for any integer j such that 0 ≤ j ≤ t − 2. Since q^2 ∤ x by hypothesis, Lemma 13(iii) implies that δ_{t−j} ≠ 0 for j ∈ [0, t − 2]. Thus, m_{j,1} = c_j q^{t−j} + δ_{t−j} ≠ 0 for j ∈ [0, t − 2]. If j ∈ [Θ_r − h, t − 2], then it follows from (12) that c_j = 0, and thus, m_{j,1} = δ_{t−j} ≠ 0. In particular, since t ≥ Θ_r − h + 2, we have c_{t−2} = 0 and m_{t−2,1} = δ_2 ≠ 0. For the final part of the proof, we set j = t − 2, and then show that the existence of the subspace partition P_{t−2} of H_{t−2} leads to a contradiction.
It follows from the above observations and Lemma 13(ii) that
(21) m_{t−2,1} = δ_2 = q^2⌈x q^{−2} Θ_2⌉ − x Θ_2 and 0 < δ_2 < q^2.
Since m_{t−2,1} > 0, the smallest dimension of a subspace in P_{t−2} is 1. So let s ≥ 2 be the second smallest dimension of a subspace in P_{t−2}. (Note that the existence of s follows from (11).) To derive the final contradiction, we consider the following cases.
Case 1: s ≥ 3. Then by applying Theorem 10(ii)&(iv) to the subspace partition P t−2 with d 2 = s and d 1 = 1, we obtain m t−2,1 ≥ min{(q s − 1)/(q − 1), 2q s−1 , q s } > q 2 , which contradicts the fact that m t−2,1 < q 2 given by (21).
Case 2: s = 2.
Since q | x by hypothesis, it follows from (21) that q | m t−2,1 . Thus, by applying Theorem 10(iv) to P t−2 with d 2 = s = 2 and d 1 = 1, we obtain m t−2,1 ≥ q 2 , which contradicts the fact that m t−2,1 < q 2 given by (21).
We are now ready to prove Theorem 6 and Corollary 8.
Proof of Theorem 6. Recall that (22)
c 1 ≡ t − 2 (mod q), 0 ≤ c 1 < q, and c 2 = q if q 2 | ((q − 1)(t − 2) + c 1 ) , 0 if q 2 ∤ ((q − 1)(t − 2) + c 1 ) . Define (23) x := q r − (q − 1)(t − 2) − c 1 + c 2 .
Since r ≥ 2, it follows from (22) and (23) that:
(a) If q^2 | ((q − 1)(t − 2) + c_1), then c_2 = q, and also, q^2 | (q^r − (q − 1)(t − 2) − c_1). Thus, x ≡ q ≢ 0 (mod q^2).
(b) If q^2 ∤ ((q − 1)(t − 2) + c_1), then c_2 = 0, and also, q^2 ∤ (q^r − (q − 1)(t − 2) − c_1). Thus, x = q^r − (q − 1)(t − 2) − c_1 ≢ 0 (mod q^2).
Thus, q^2 ∤ x holds in all cases.
Also, since c 1 ≡ t − 2 (mod q) by (22), we have t − 2 = αq + c 1 for some nonnegative integer α. Thus, it follows from (23) that
(24) x = q r − αq(q − 1) − c 1 q + c 2 .
Since c 2 ∈ {0, q} by (22), it follows from (24) that q | x. Moreover, since 0 ≤ c 1 ≤ q − 1 and c 2 ∈ {0, q}, we obtain
x = q^r − (q − 1)(t − 2) − c_1 + c_2 ≥ q^r − (q − 1)(t − 2) − (q − 1)
=⇒ x/(q − 1) ≥ (q^r − 1)/(q − 1) + 1/(q − 1) − t + 1
=⇒ ⌈x/(q − 1)⌉ ≥ (q^r − 1)/(q − 1) − t + 2
=⇒ t ≥ Θ_r − ⌈x/(q − 1)⌉ + 2.   (25)
Since the hypothesis of Lemma 14 holds by the above observations, Lemma 14 yields
µ_q(n, t) ≤ ℓq^t + x = (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 2) − c_1 + c_2.
Moreover, since −q + 1 ≤ −c_1 + c_2 ≤ q, it follows that
µ_q(n, t) ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 2) − c_1 + c_2 ≤ (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 2) + q = (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 3) + 1,
which concludes the proof of Theorem 6.
Proof of Corollary 8. Let f_q(n, t) and g_q(n, t) be as defined in the statement of the corollary. Then
(26) g_q(n, t) = (q^n − q^{t+r})/(q^t − 1) + q^r − (q − 1)(t − 2) − c_1 + c_2,
where c_1 and c_2 are as in (22), and
(27) f_q(n, t) = (q^n − q^{t+r})/(q^t − 1) + q^r − ⌊ω⌋ − 1, where 2ω = √(4q^t(q^t − q^r) + 1) − (2q^t − 2q^r + 1).
If r ≥ 1 and t ≥ 2r, then it is straightforward to show that (e.g., see [19, Lemma 2])
(28) ⌊ω⌋ = ⌊(q^r − 2)/2⌋ = ⌊q^r/2⌋ − 1.
Now it follows from (26)-(28) that if t ≥ 2r, then
(29) g_q(n, t) − f_q(n, t) = ⌊q^r/2⌋ − (q − 1)(t − 2) − c_1 + c_2.
We now prove the second part of the corollary for q > 2. If ⌈Θ_r/2⌉ + 4 ≤ t ≤ Θ_r, then by applying (29) with 0 ≤ c_1 < q and c_2 ∈ {0, q}, we obtain
g_q(n, t) − f_q(n, t) ≤ ⌊q^r/2⌋ − (q − 1)(t − 2) + q
≤ ⌊q^r/2⌋ − (q − 1)(Θ_r/2 + 2) + q
= ⌊q^r/2⌋ − (q − 1)(q^r − 1)/(2(q − 1)) − q + 2
≤ q^r/2 − (q^r − 1)/2 − q + 2
= 5/2 − q < 0 (since q > 2).
If q = 2, then by doing the same analysis as above with t ≥ ⌈Θ_r/2⌉ + 5 instead of t ≥ ⌈Θ_r/2⌉ + 4, we obtain g_q(n, t) − f_q(n, t) < 0. This completes the proof of the corollary.
¹ Also see [20] for a recent preprint in this area.
Acknowledgement: We thank the referees for their detailed comments, suggestions, and corrections, which have greatly improved the paper.
[1] J. André, Über nicht-Desarguessche Ebenen mit transitiver Translationsgruppe, Math. Zeit. 60 (1954), 156-186.
[2] A. Beutelspacher, Partial spreads in finite projective spaces and partial designs, Math. Zeit. 145 (1975), 211-229.
[3] P. Dembowski, Finite Geometries, Springer Classics in Mathematics, 1997.
[4] D. Drake and J. Freeman, Partial t-spreads and group constructible (s, r, µ)-nets, J. Geom. 13 (1979), 211-216.
[5] J. Eisfeld and L. Storme, (Partial) t-spreads and minimal t-covers in finite spaces, Lecture notes from the Socrates Intensive Course in Finite Geometry and its Applications, Ghent, April 2000. Published electronically at http://www.maths.qmul.ac.uk/~leonard/partialspreads/eisfeldstorme.ps.
[6] J. Eisfeld, L. Storme, and P. Sziklai, On the spectrum of the sizes of maximal partial line spreads in PG(2n, q), n ≥ 3, Des. Codes Cryptogr. 36 (2005), 101-110.
[7] T. Etzion, Perfect byte-correcting codes, IEEE Trans. Inf. Theory 44 (1998), 3140-3146.
[8] T. Etzion and A. Vardy, Error-correcting codes in projective space, IEEE Trans. Inf. Theory 57 (2011), 1165-1173.
[9] S. El-Zanati, H. Jordon, G. Seelinger, P. Sissokho, and L. Spence, The maximum size of a partial 3-spread in a finite vector space over GF(2), Des. Codes Cryptogr. 54 (2010), 101-107.
[10] E. Gorla and A. Ravagnani, Partial spreads in random network coding, Fin. Fields Appl. 26 (2014), 104-115.
[11] A. Gács and T. Szönyi, On maximal partial spreads in PG(n, q), Des. Codes Cryptogr. 29 (2003), 123-129.
[12] O. Heden, On the length of the tail of a vector space partition, Discrete Math. 309 (2009), 6169-6180.
[13] O. Heden, A survey of the different types of vector space partitions, Disc. Math. Algo. Appl. 4 (2012), 1-14.
[14] O. Heden and J. Lehmann, Some necessary conditions for vector space partitions, Discrete Math. 312 (2012), 351-361.
[15] M. Herzog and J. Schönheim, Group partition, factorization and the vector covering problem, Canad. Math. Bull. 15(2) (1972), 207-214.
[16] S. Hong and A. Patel, A general class of maximal codes for computer applications, IEEE Trans. Comput. C-21 (1972), 1322-1331.
[17] D. Jungnickel and L. Storme, A note on maximal partial spreads with deficiency q + 1, q even, J. Combin. Theory Ser. A 102 (2003), 443-446.
[18] R. Kötter and F. Kschischang, Coding for errors and erasures in random network coding, IEEE Trans. Inf. Theory 54 (2008), 3575-3591.
[19] S. Kurz, Improved upper bounds for partial spreads, Des. Codes Cryptogr., DOI: 10.1007/s10623-016-0290-8 (2016).
[20] S. Kurz, Upper bounds for partial spreads, https://arxiv.org/pdf/1606.08581.pdf.
[21] E. Nȃstase and P. Sissokho, The maximum size of a partial spread in a finite projective space, http://arxiv.org/pdf/1605.04824. Submitted.
[22] B. Segre, Teoria di Galois, fibrazioni proiettive e geometrie non desarguesiane, Ann. Mat. Pura Appl. 64 (1964), 1-76.
| []
|
[
"Transparency and granularity in the SP Theory of Intelligence and its realisation in the SP Computer Model *",
"Transparency and granularity in the SP Theory of Intelligence and its realisation in the SP Computer Model *"
]
| [
"J Gerard Wolff [email protected] \nCognitionResearch.org\nMenai BridgeUK\n"
]
| [
"CognitionResearch.org\nMenai BridgeUK"
]
| []
| This chapter describes how the SP System, meaning the SP Theory of Intelligence, and its realisation as the SP Computer Model, may promote transparency and granularity in AI, and some other areas of application. The chapter describes how transparency in the workings and output of the SP Computer Model may be achieved via three routes: 1) the program provides a very full audit trail for such processes as recognition, reasoning, analysis of language, and so on. There is also an explicit audit trail for the unsupervised learning of new knowledge; 2) knowledge from the system is likely to be granular and easy for people to understand; and 3) there are seven principles for the organisation of knowledge which are central in the workings of the SP System and also very familiar to people (eg chunkingwith-codes, part-whole hierarchies, and class-inclusion hierarchies), and that kind of familiarity in the way knowledge is structured by the system, is likely to be important in the interpretability, explainability, and transparency of that knowledge. Examples from the SP Computer Model are shown throughout the chapter. | 10.1007/978-3-030-64949-4_7 | [
"https://arxiv.org/pdf/2009.06370v2.pdf"
]
| 221,655,237 | 2009.06370 | 2c2a700d9923bfc28ae8ab4b5f74857cf51b2b81 |
Transparency and granularity in the SP Theory of Intelligence and its realisation in the SP Computer Model *
9 May 2021
J Gerard Wolff [email protected]
CognitionResearch.org
Menai BridgeUK
Transparency and granularity in the SP Theory of Intelligence and its realisation in the SP Computer Model *
9 May 2021. DOI: 10.1007/978-3-030-64949-4. * Published in the book Interpretable Artificial Intelligence: A Perspective of Granular Computing, Witold Pedrycz and Shyi-Ming Chen (editors), Springer: Heidelberg, 2021. Keywords: transparency, granularity, SP Theory of Intelligence, SP Computer Model, information compression, SP-multiple-alignment.
This chapter describes how the SP System, meaning the SP Theory of Intelligence, and its realisation as the SP Computer Model, may promote transparency and granularity in AI, and some other areas of application. The chapter describes how transparency in the workings and output of the SP Computer Model may be achieved via three routes: 1) the program provides a very full audit trail for such processes as recognition, reasoning, analysis of language, and so on. There is also an explicit audit trail for the unsupervised learning of new knowledge; 2) knowledge from the system is likely to be granular and easy for people to understand; and 3) there are seven principles for the organisation of knowledge which are central in the workings of the SP System and also very familiar to people (eg chunkingwith-codes, part-whole hierarchies, and class-inclusion hierarchies), and that kind of familiarity in the way knowledge is structured by the system, is likely to be important in the interpretability, explainability, and transparency of that knowledge. Examples from the SP Computer Model are shown throughout the chapter.
1 Introduction

2 Introduction to transparency
In the words of the 'call for chapters' for this book: "It is desirable that the models of AI are transparent so that the results being produced have to be easily interpretable and explainable." (emphasis added). Thus 'transparency' in an AI program means that processing by the program and results from it are comprehensible by people, and thus interpretable and explainable. Hence, the main emphasis in this chapter is on transparency rather than the more specific concepts of interpretable and explainable, but see Section 8.
Transparency in AI systems is a matter of concern, chiefly because of shortcomings in deep neural networks (DNNs). Despite their striking successes in several different areas of application, it is normally difficult to understand how DNN results are achieved.
Transparency is particularly important when there is a need to diagnose what has gone wrong when there are outright failures of DNNs, and these can be dramatic. For example, a DNN may fail to recognise something in a picture that, to a person, would be obvious.
Information compression
Right from the beginning of this research, a unifying theme, which has proved its value in spades, is that IC in the SP System is likely to be part of the solution to the goal of simplification and integration across a broad canvass. This is largely because of an accumulation of evidence from many studies, beginning with pioneering research by Fred Attneave [8] and Horace Barlow [9,10], and others, showing the importance of IC in HLPC [11].
The letters 'S' and 'P' in the name 'SP' may be seen to stand for 'Simplicity' and 'Power'. This is because: 1) a good theory should combine conceptual 'Simplicity' with explanatory or descriptive 'Power'; and 2) IC, which is central in the organisation and workings of the SP System, may be seen as a process which increases the 'Simplicity' of a body of information, I, by the extraction of unnecessary repetition or redundancy in I, and at the same time retains as much as possible of its explanatory and descriptive 'Power'.
There is more detail about IC in the SP System in Sections 4.4, 4.5, and 5.
Abstract view of the SP System
At a high level of abstraction, the SP System may be seen to be like a brain which takes in New information (with a capital 'N') through its senses and stores all or part of it in a repository of Old information (with a capital 'O'), as shown schematically in Figure 1.
Basic structures in the SP System for representing knowledge
In the SP System, SP-patterns are the vehicle for storing all kinds of knowledge.
Here an SP-pattern is an array of atomic SP-symbols in one or two dimensions. The SP Computer Model has not yet been developed to process two-dimensional SP-patterns, but the aim is for that to be possible in later versions of the program. An SP-symbol is simply a mark that can be matched with any other symbol to yield a 'same' or 'different' answer. The meaning of any SP-symbol is provided exclusively by its association with one or more other SP-symbols. There is nothing like + or × in arithmetic, where the meaning of each of those two symbols is hidden from view. As described in Section 4.4, below, SP-symbols gain expressive power via their roles in SP-multiple-alignments.
An SP-pattern that is 'New' is raw information from the system's 'environment', brought in via its 'senses'. An example of such a New SP-pattern is the sentence 'f o r t u n e f a v o u r s t h e b r a v e' in row 0 in Figure 3, below.
Each SP-pattern that is Old has 'ID' SP-symbols at the beginning and end which are used in the building of 'SP-multiple-alignments' and in the encoding process, as outlined in Section 4.4, below.
An example of an Old SP-pattern as just described is the SP-pattern 'N 4 f o r t u n e #N' in row 4 in Figure 3, below. Here, the SP-symbols 'N', '4', and '#N' are examples of the 'ID' SP-symbols mentioned above. The ID SP-symbols 'N' and '#N' serve in marking the start and finish of the SP-pattern and also in classifying the SP-pattern as a noun. The ID SP-symbol '4' distinguishes this noun from others in the set of stored Old SP-patterns.
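A minimal Python sketch of how SP-patterns and SP-symbols might be represented is given below. This is purely illustrative and is not the SP Computer Model's own data structure; in particular, the convention used here for spotting ID SP-symbols is an assumption made only for the example.

# A New SP-pattern: raw information, a plain sequence of SP-symbols.
new_pattern = "f o r t u n e f a v o u r s t h e b r a v e".split()

# An Old SP-pattern with ID SP-symbols 'N', '4', and '#N' at its boundaries,
# marking the pattern and classifying it as noun number 4.
old_pattern = ["N", "4", "f", "o", "r", "t", "u", "n", "e", "#N"]

def id_symbols(pattern):
    # Illustrative convention only: treat symbols that are digits, begin
    # with '#', or are upper-case letters as ID SP-symbols.
    return [s for s in pattern if s.isdigit() or s.startswith("#") or s.isupper()]

def content_symbols(pattern):
    # The remaining symbols carry the pattern's 'content'.
    ids = set(id_symbols(pattern))
    return [s for s in pattern if s not in ids]

print(id_symbols(old_pattern))       # ['N', '4', '#N']
print(content_symbols(old_pattern))  # ['f', 'o', 'r', 't', 'u', 'n', 'e']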
Although SP-patterns and SP-symbols are very simple, they gain expressive power via their roles in SP-multiple-alignments, (see, Section 4.4,next).
Provided that SP-patterns have been created via unsupervised learning that achieves high levels of IC (Section 4.5), it seems likely that they would be amongst the groupings recognised as 'granules', and also 'chunks' (Section 5.5.1) and 'objects' or 'entities' (Section 5.5.2).
The concept of SP-multiple-alignment
A central part of the SP Computer Model is the concept of SP-multiple-alignment (SPMA). It is a concept that has been adapted from the concept of 'multiple sequence alignment' in bioinformatics.
This concept is responsible for most of the existing and potential versatility of the SP System in all areas of AI except unsupervised learning, but even in unsupervised learning it has a major role to play. Existing and potential strengths of the SP System are summarised in Sections 4.6 and 4.6.2.
Multiple sequence alignment
As an introduction to the concept of SPMA, Figure 2 shows an example of a multiple sequence alignment. (Caption of Figure 3: an SP-multiple-alignment with a New SP-pattern representing a sentence to be parsed and a set of Old SP-patterns supplied by the user (including those in rows 1 to 9, one Old SP-pattern per row), each of which represents a grammatical category, and that includes words. Reproduced from Figure 2 in [12], with permission.)
The SP-patterns in rows 1 to 9 of the figure, one SP-pattern per row, are Old SP-patterns, drawn from a much larger repository of Old SP-patterns.
The main features that distinguish an SPMA from a multiple sequence alignment are described in [5,Section 4] and [6, Sections 3.4 to 3.7].
The creation of SP-multiple-alignments
As with the building of multiple sequence alignments, it is necessary to use heuristic search in the building of SPMAs to obtain reasonably good results in a reasonable time.
The creation of an SPMA like the one shown in Figure 3 begins with the New SP-pattern shown in row 0 of that figure and the repository of Old SP-patterns mentioned above which includes the ones shown in rows 1 to 9 of the figure.
At first, each of the Old SP-patterns in the repository is matched with the New SP-pattern as outlined in Section 5.1, below, including the kinds of discontinuous matching outlined in Section 5.2. Each match is evaluated in terms of its potential to compress the New SP-pattern. From the best of those matches, an SPMA is created from the New SP-pattern and one Old SP-pattern.
In subsequent processing, each newly-created SPMA is treated as if it was a single SP-pattern. As such, it may be matched with the New SP-pattern, any of the Old SP-patterns, and any of the other SPMAs in the current run of the program. As before, the best matches are selected and corresponding SPMAs are created, and then the cycle is repeated until no more good matches can be found.
If a good match is found between two 'parent' SPMAs, the 'child' SPMA that is formed from that match includes all the SP-patterns in both parents. Likewise for a good match between an SPMA and an SP-pattern. In this way, SPMAs with many SP-patterns can build up quickly.
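The following Python sketch gives a highly simplified picture of that iterative, heuristic search. It is not the SP Computer Model itself: the functions match and combine are assumed, user-supplied callables, and only matching against the New SP-pattern is shown, whereas the model also matches structures against one another.

def build_alignments(new_pattern, old_patterns, match, combine,
                     cycles=5, beam_width=10):
    # `match(new, s)` returns a compression score (0 or less means no
    # useful match); `combine(new, s)` builds a new alignment from the
    # New SP-pattern and a matched structure.
    structures = list(old_patterns)   # Old SP-patterns and, later, alignments
    best = []
    for _ in range(cycles):
        scored = [(match(new_pattern, s), s) for s in structures]
        scored = [(score, s) for score, s in scored if score > 0]
        if not scored:
            break
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # Heuristic pruning: keep only the most promising candidates.
        best = [combine(new_pattern, s) for _, s in scored[:beam_width]]
        # Each newly formed alignment is treated as a structure that can
        # itself be matched in the next cycle.
        structures.extend(best)
    return best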
Versatility of the SP-multiple-alignment construct
The SPMA concept is largely responsible for the versatility of the SP System, which is outlined in Section 4.6, below. In all areas except unsupervised learning (Section 4.5), it is almost exclusively responsible for that versatility, but it also has major role in unsupervised learning, together with other processing.
Unsupervised learning
In broad terms, unsupervised learning in the SP System means compressing a relatively large body of New SP-patterns from the system's environment to create a smaller body of Old SP-patterns which may be added to the repository of Old SP-patterns, in keeping with the schematic view of the SP System shown in Figure 1, Section 4.2. For a given body of New SP-patterns, that smaller body of Old SP-patterns is called its SP grammar.
It should be mentioned that, although some useful results have been achieved with unsupervised learning in the SP Computer Model (see [6,Chapter 9]), there are some unsolved problems with unsupervised learning in the program, noted in [5,Section 3.3]. For those reasons, with the example in Figure 3 and examples shown in later sections, it has been necessary to provide the model with appropriate SP-patterns rather than allowing the model to learn those SP-patterns for itself.
As we shall see in Section 5.4, the SP System, via unsupervised learning, can bootstrap a knowledge of granular structures such as words, and grammatical rules from samples of an English-like artificial language in which all punctuation and spaces between words have been removed.
Existing and potential strengths of the SP System
In keeping with the aim of simplifying and integrating observations and concepts across a broad canvass (mentioned at the beginning of Section 4), the SP System has strengths and potential in several different areas, as summarised in [13,Section 3.7].
In brief, the strengths and potential of the SP Computer Model in AI include unsupervised learning, pattern recognition, several kinds of reasoning, the processing of natural language, planning, problem solving, and more. Likewise, it has strengths in the representation of several different kinds of knowledge. And because these things all flow largely from the SPMA construct, there is clear potential for the seamless integration of different aspects of AI and different kinds of knowledge, in any combination.
On the strength of this evidence, and evidence summarised in the next two subsections, it seems fair to say that the SP System provides a relatively promising foundation for the development of artificial general intelligence.
What is said in this chapter about transparency and granularity is likely to apply to the evidence summarised in this section, in the next subsection, and in Section 4.6.2.
Potential to help solve AI-related problems
Apart from the existing and potential strengths just described, the SP System has clear potential to help solve several problems in AI research. Many of these have been described by leading researchers in AI in interviews with science writer Martin Ford and, after any corrections by the interviewees, reported in Ford's book Architects of Intelligence [14]. The potential of the SP System to help solve many of those problems, and some others, is described in [15].
Areas of application apart from AI
Apart from AI, the SP System has clear potential in other areas. Relevant papers may be downloaded via links in www.cognitionresearch.org/sp.htm. They include: the management of big data [16]; computer vision and the understanding of natural vision [17]; the development of intelligent databases [18]; medical diagnosis [19]; and more.
SP-Neural
The SP System has been developed largely as an abstract model, with well-known features of HLPC as its main touchstones of success. But it is a matter of some interest to discover whether the main features of the SP System may be reproduced with neural tissue, and if so how.
In the SP programme of research, a first tentative model in this area is called SP-Neural, described and discussed in [12].
It seems that a case can be made for: modelling SP-symbols with single neurons or, more plausibly, with small clusters of neurons; SP-patterns may be modelled with arrays of neural symbols; and, very tentatively, spike potentials travelling along axons connecting neural SP-symbols and neural SP-patterns may achieve the effect of building SPMAs, and perhaps, unsupervised learning.
As with the non-neural SP System, it seems likely that the creation of a computer model of SP-Neural will help to clarify issues where there is uncertainty at present.
Future developments
It is envisaged that the SP Theory of Intelligence and the SP Computer Model will be developed into a highly parallel "SP-Machine", as described in [20], and shown schematically in Figure 4.
It is envisaged that this will provide a foundation for further work by researchers anywhere, singly or in teams, towards the development of a system with the strength and robustness for large-scale applications.
(Figure 4: a schematic showing the SP Theory and SP Computer Model being developed into the SP Machine.)
Information compression and the representation and processing of knowledge in the SP System
Before getting on to transparency and granularity in the SP System (Sections 6, 7, and 8), something needs to be said about IC and how it relates to the representation of knowledge in the SP System, and how it is processed.
Information compression via the matching and unification of patterns
A working hypothesis in the SP programme of research is that IC may always be understood as the product of a search for patterns that match each other and the merging or 'unifying' of patterns that are the same. The expression "Information Compression Via the Matching and Unification of Patterns" will be abbreviated as 'ICMUP'. This idea is illustrated in the upper part of Figure 5. In this instance, there is compression of information because two instances of 'INFORMATION' are reduced to one. The rest of the figure is considered in Section 5.3.2, below.
To achieve IC via ICMUP in a given body of raw information, I, the patterns that are to be unified must be relatively frequent in I and no less frequent than would occur by chance.
That figure for chance varies inversely with the size of the pattern, so that, with a given I, a large pattern like this: 'pneumonoultramicroscopicsilicovolcanoconiosis' may exceed the threshold with a frequency as low as 2, while smaller patterns like 'si' may only reach the threshold with a higher frequency.
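A toy Python illustration of this frequency-versus-chance criterion is given below. It is not the algorithm used in the SP Computer Model; the uniform-alphabet model of 'chance' and the function name candidate_chunks are simplifying assumptions made for the example.

from collections import Counter

def candidate_chunks(text, length, alphabet_size=26):
    # Count every substring of the given length.
    counts = Counter(text[i:i + length] for i in range(len(text) - length + 1))
    # Crude estimate of how often any one string of this length would be
    # expected to occur by chance in a uniformly random text; the implied
    # frequency threshold grows as `length` shrinks.
    expected = (len(text) - length + 1) / (alphabet_size ** length)
    # Keep substrings that occur at least twice and more often than chance.
    return {s: c for s, c in counts.items() if c >= 2 and c > expected}

# 'information' (length 11) occurs twice, far above chance, so it is kept.
print(candidate_chunks("informationtheoryinformationcompression", 11))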
Discontinuous patterns
An important point to mention in connection with ICMUP is that the concept of 'pattern' in the SP programme of research includes patterns that are 'discontinuous' in the sense that they may be interwoven with other information.
For example, a pattern like 'ABC' may be seen in 'LMANOPBQCSTU', and likewise in many other sequences.
From prominent features of HLPC, such as the way we can recognise a familiar pattern like a car despite interfering patterns such as iron railings or the branches of a plant, it has been understood from the beginning of the SP research that there would be a need for that kind of capability in the SP Computer Model.
A first goal was to create a system that could recognise good full and partial matches between sequences (where 'good' means good in terms of IC), and could deliver two or more such solutions where they exist. The method which has been adopted, and incorporated in successive versions of the SP Computer Model, is described in [6, Appendix A].
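For the simplest part of that capability, checking whether a pattern appears as a discontinuous subsequence of another, a few lines of Python suffice. This sketch only tests containment; the SP Computer Model's actual method, described in [6, Appendix A], also finds and scores good partial matches, which this sketch does not attempt.

def is_discontinuous_match(pattern, sequence):
    # True if the symbols of `pattern` occur in `sequence` in the same
    # order, possibly interleaved with other symbols.
    it = iter(sequence)
    return all(symbol in it for symbol in pattern)

print(is_discontinuous_match("ABC", "LMANOPBQCSTU"))  # True
print(is_discontinuous_match("ABC", "LMCNOPBQASTU"))  # False (wrong order)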
Seven variants of ICMUP
ICMUP is an intrinsically simple idea, but it comes in seven main variants which add a lot in terms of its descriptive and explanatory value. The variants are described in [13,Section 5] and [11,Sections 6,7,and 8], and are outlined more briefly here.
In general, these structures would arise from IC via unsupervised learning, as outlined in Section 4.5.
In general, these are structures that are very widely used and are likely to be familiar to most people. Some comments to that effect are in Section 7.
Basic ICMUP
Basic ICMUP is our first variant of ICMUP, essentially what is described at the beginning of Section 5.1: if two or more patterns match each other, they may be unified to create one copy, with a corresponding compression of information.
Chunking-with-codes and ICMUP
A problem with Basic ICMUP is that, for each set of unified patterns, information is lost about their locations in I, except for the unified pattern itself, assuming it retains a place in I.
A solution to this problem, called Chunking-with-codes, is to assign a relatively short identifier or code to the unified pattern-which is commonly referred to as a chunk of information-and then to place a copy of the chunk, with its code, in a separate 'dictionary' of patterns. Then replace each copy of the chunk in I with the code for the chunk. This is illustrated in the lower part of Figure 5, where the chunk of information is 'INFORMATION' and the code for that chunk is 'w62'.
Because in general the code should be smaller than the chunk it represents, there should be an overall compression of I.
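A minimal Python sketch of chunking-with-codes follows. It is illustrative only, echoing the 'INFORMATION'/'w62' example of Figure 5, and it assumes that the chosen code does not itself occur in the text.

def chunk_with_code(text, chunk, code):
    # Keep one copy of the chunk, with its code, in a 'dictionary', and
    # replace every occurrence of the chunk in the text with the code.
    dictionary = {code: chunk}
    encoded = text.replace(chunk, code)
    return dictionary, encoded

def decode(dictionary, encoded):
    # Reverse the encoding by expanding each code back into its chunk.
    for code, chunk in dictionary.items():
        encoded = encoded.replace(code, chunk)
    return encoded

text = "INFORMATION, in this example, INFORMATION is compressed"
dictionary, encoded = chunk_with_code(text, "INFORMATION", "w62")
assert decode(dictionary, encoded) == text
print(encoded)  # the two chunks are now represented by the short code 'w62'
print(len(text), len(encoded) + len("INFORMATION") + len("w62"))  # size before and after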
Schema-plus-correction and ICMUP
An interesting variant of ICMUP is known as Schema-plus-correction. Perhaps the best-known example is a menu of dishes that are available in a cafe or restaurant.
The menu itself may be seen as special kind of 'chunk' of information which, as with chunking-with-codes, has a relatively short name, identifier, or 'code'something like 'Menu', 'Your choices', 'As you like it', and so on.
What makes it different from an ordinary example of chunking-with-codes is that it provides the means of introducing variations into the chunk.
A typical menu offers three or more places where variations may be introduced. These would be parts of the menu such as 'starter', 'main course', and 'pudding'. With each of these there would be a selection of dishes that the diner may choose, such as 'soup', 'antipasto', and so on for the starter, 'vegan chickpea curry', 'shepherd's pie', and so on for the main course, and 'ice cream', 'apple crumble', and so on for the pudding.
Run-length encoding and ICMUP
Run-length encoding may be applied where there is a sequence of two or more matching patterns, each one contiguous with next one. Then a sequence like 'INFORMATIONINFORMATIONINFORMATIONINFORMATIONINFORMATION' may be reduced to something like 'INFORMATION ×5', or, more generally as 'INFORMATION*', where the '*' indicates that the given pattern is repeated but without specifying how many times it is repeated.
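A corresponding Python sketch of run-length encoding for repeated adjacent patterns (illustrative only; the function names are our own):

def run_length_encode(symbols):
    # Collapse each run of identical adjacent symbols into a (symbol, count) pair.
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

def run_length_decode(runs):
    # Expand each (symbol, count) pair back into the original run.
    return [s for s, count in runs for _ in range(count)]

sequence = ["INFORMATION"] * 5
encoded = run_length_encode(sequence)   # [('INFORMATION', 5)]
assert run_length_decode(encoded) == sequence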
Part-whole hierarchies and ICMUP
A fifth variant of ICMUP is the way things can be organised as part-whole hierarchies. A car may be seen to be divided to engine, wheels, body, and so on. And each of these things may be divided into parts and subparts, and so on.
Economies arise because, at any one level in a part-whole hierarchy, all the alternatives at that level share the same place in the hierarchy, which saves having to repeat that information for every one of the alternatives.
For example, someone buying a particular model of a car may be offered a choice of two or three different engines. Each of the alternatives may be described without the need, for each alternative, to describe the rest of the car. Hence, those several copies of the context of 'engine' have been merged into a single copy, in accordance with ICMUP.
Class-inclusion hierarchies and ICMUP
One of the meanings of the word 'class' is that it is a collection of things that share certain features. So the class 'table' applies to things that have a horizontal top that may be used as a temporary place to put things, especially plates, knives and forks and so on at meal times, often with four legs, often made of wood, and so on.
This may be seen as an example of ICMUP because, across all the many examples of tables, the features that they have in common have been seen to match each other and have been unified to create the list of features for the class 'table'.
It is true that there are many exceptions and special cases-for example, not all tables are made of wood-but that does not alter the great economies that can be achieved, in both thinking and communication, from the use of classes like 'table'. The class saves having to describe all the features of a table every time one wants to talk about tables or simply remember something about tables, such as the way a table may be used to help in the changing of a light bulb.
From this idea of a class, it is a short step to the idea of a hierarchy of subclasses, subsubclasses, and so on. At each level in the hierarchy, there are features that are inherited by all the higher levels.
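The same economy can be sketched directly in Python, where a class hierarchy stores each shared feature once and lets the levels below inherit it. This is a toy example, not drawn from the SP literature.

class Furniture:
    # Features shared by all furniture are stated once, at this level.
    is_movable = True

class Table(Furniture):
    # A table adds only what is specific to tables...
    has_flat_top = True
    typical_legs = 4
    typical_material = "wood"

class DiningTable(Table):
    # ...and a dining table adds only what is specific to dining tables,
    # inheriting everything else from Table and Furniture.
    used_at_meal_times = True

print(DiningTable.is_movable, DiningTable.has_flat_top, DiningTable.typical_legs)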
SP-multiple-alignment and ICMUP
The last of the seven variants of ICMUP is the concept of SPMA that has been described already in Section 4.4.
Out of the seven variants of ICMUP, it appears that SPMA can be, with appropriate data, the most effective means of compressing information, largely because the matching and unification of patterns may occur at several different levels, not just one level.
And this seventh variant of ICMUP has a special status amongst the seven variants because SPMA may be seen to encompass all of the other six variants, and, within any one SPMA, there can be a seamless integration of the other six variants.
It appears that it is this versatility which is largely responsible for the versatility of SPMAs in modelling diverse aspects of intelligence, in the representation of diverse kinds of knowledge, and in the seamless integration of aspects of intelligence and kinds of knowledge, in any combination (Section 4.6).
The DONSVIC principle
An idea which is fundamental in the workings of the SP System is the 'DONSVIC' principle, meaning the "Discovery Of Natural Structures Via Information Compression". It is described quite fully in [5, Section 5.2].
It seems that the reason that IC does not normally have the effect of revealing 'natural' structures is that, largely because of the low power of early computers, most systems for IC have been designed to be 'quick and dirty', sacrificing accuracy for speed on those low-powered computers. Now that computers are more powerful, one can be more ambitious.
The same section of the paper [5, Section 5.2] describes how the MK10 program for unsupervised learning of segmental structures-with IC as its driving principle-may discover structures in natural language such as words and phrases, and this without any prior knowledge of any of those structures, and without any markers in the raw data such as punctuation and spaces between words to show the beginnings and ends of segmental structures [21]. And in a similar way, the SNPR program for unsupervised learning of grammars-with IC as its central principle-demonstrates successful learning of the grammars of English-like artificial languages (ibid).
In that connection, there is evidence that a first language can be learned by children without 'reinforcement' as normally understood, or any other kind of explicit teaching or the correction of errors (see, for example, [22,23,24]). It seems likely that the same applies to the learning of non-syntactic structures as well.
As a rough generalisation, structures that may be discovered via the DONSVIC principle from a given body of information, I, are ones that are useful in compressing I and are likely to be useful in compressing any later body of information that has a similar structure.
These observations are in accord with substantial evidence for the significance of IC in HLPC [11].
The DONSVIC principle and granularity
It is assumed in this research that, in HLPC, the DONSVIC principle applies to the unsupervised learning of the great majority of entities, structures, or events, that we recognise 'naturally', including the kinds of SP-patterns shown in rows 1 to 9 of Figure 3.
If it is accepted that most of the "entities, structures, or events, that we recognise 'naturally' " would also qualify as 'granules', we should also accept that granules and the ways in which they are structured are likely to emerge via learning processes that conform to the DONSVIC principle, either in human brains, or in artificial unsupervised learning of the future.
The DONSVIC principle and familiarity
In the same way that the DONSVIC principle suggests that information granules and their structures emerge from a search for patterns with an optimum combination of size and frequency, it is likely that the way in which those granules are structured (as described in Section 5.3) will also be familiar to people.
The familiarity of those kinds of structures-such as chunking-with-codes, runlength coding, part-whole hierarchies, and class-inclusion hierarchies-will clearly be important in ensuring the interpretability, explainability, and transparency of knowledge created via unsupervised learning in the SP System, and via the building of SPMAs.
Ideas related to the concept of a granule
This section briefly discusses two ideas that appear to be relevant to the concept of a granule, and also to key ideas in the SP System.
The concept of a chunk of information
As we have seen in Sections 5.3.2 and 5.3.3, and elsewhere above, the concept of 'chunk' can be useful in describing any small coherent body of information. As such, it is similar to the concept of a 'granule'.
It appears that the concept of a 'chunk' in cognition-related research, was first introduced in George Miller's much-quoted paper on "The magical number seven, plus or minus two" [25] where he argued that:
"... we must recognize the importance of grouping or organizing [information] into units or chunks. Since the memory span [of a typical person] is a fixed number of chunks, we can increase the number of bits of information that it contains simply by building larger and larger chunks, each chunk containing more information than before." [25, p. 93].
In keeping with that description, Section 5.1, above, suggests how chunks of information may be discovered via the matching and unification of patterns.
Since Miller's seminal paper, the concept of a chunk of information has been and still is widely used in the academic literature in cognitive science and cognitive psychology. Now the word 'chunk', apparently without reference to Miller's concept, has been associated with the word 'granule' like this: "Granular computing ... is a research area focused on representing, reasoning, and processing basic chunks of information, namely granules." [26, p. 1835], emphasis added.
Apart from that connection between 'chunk' and 'granule', a search of the literature suggests that there is at present little interest in examining possible synergies between the two areas of research.
As noted in Section 4.3, the concept of an 'SP-pattern' in the SP System appears to capture much of what is meant by the concepts of 'granule' and 'chunk'.
Object-oriented programming
Another thing with much of the flavour of the concepts of 'granule', 'SP-pattern', and 'chunk', is the concept of a discrete entity or object in object-oriented programming. From small beginnings in Norway [27], this paradigm for programming has grown to be a widely-adopted feature in the design of programming languages, and in the design of software systems.
For readers not already familiar with OO-programming and OO-design, the neat idea is that a software system should be a model of the system it is to serve, with a discrete software 'object' for each entity or object in the system to be modelled, and with hierarchies of 'classes' of object and with 'inheritance' of 'attributes' of objects from higher levels to lower levels (Section 5.3.6).
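For readers who would like a concrete picture of the idea just described, here is a minimal Python sketch (class and attribute names are invented for illustration) of objects, classes, and inheritance of attributes from higher levels to lower levels:

class Vehicle:
    """High-level class: attributes defined here are inherited by subclasses."""
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"{type(self).__name__} with {self.wheels} wheels"

class Car(Vehicle):
    """Subclass: adds attributes without repeating what Vehicle already states."""
    def __init__(self, doors):
        super().__init__(wheels=4)
        self.doors = doors

my_car = Car(doors=5)       # one software object per entity being modelled
print(my_car.describe())    # 'describe' is inherited from Vehicle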
Some connections have been made between concepts of granularity and object-oriented programming (e.g. [28]) but it appears not to be a live issue.
Tying things together?
In view of what has been said earlier about information granules (Section 3), about SP-patterns (Section 4.3), about information chunks (Section 5.5.1), and about entities or objects in object-oriented programming (Section 5.5.2), there seems to be a case for tying these concepts together, perhaps within the framework of information compression. It seems likely that the SP System could accommodate them all.
Transparency via audit trails
Compared with DNNs, the SP System has the striking advantage that it provides a full audit trail of what it is doing, in a form that people can understand. That advantage applies to unsupervised learning in the SP System, which means the creation of SP-grammars (Section 4.5), and it also applies to the building of SPMAs by the SP System (Section 4.4.3), which provides the means of modelling all the other AI-related things that the SP System can do, such as the processing of natural language, recognition of entities, several forms of reasoning, and so on, as summarised in Section 4.6. Figure 6 shows a bare-bones audit trail for the creation of the SPMA shown in Figure 3, to illustrate how, for each SPMA that is created by the SP Computer Model, the program provides detailed information about how the SPMA is built. The caption to Figure 6 describes how it should be interpreted.
Fig. 6. An audit trail for the creation of the SPMA shown in Figure 3. It should be interpreted as follows: the figure should be read from the bottom to the top, starting with a row about the SPMA in Figure 3; at the beginning of each row there is an identifier for an SPMA or an SP-pattern; in each row that describes an SPMA, there are two identifiers to the right, referencing the two structures that were matched and unified to create the given SPMA; those two structures might be one SP-pattern with another SP-pattern (or different parts of itself), or an SPMA with an SP-pattern (in either order), or an SPMA with an SPMA; in most cases, there is an arrow from each of the two identifiers to the SPMA or SP-pattern that it refers to; but with the SP-patterns 'ID1' and 'ID3', each of their identifiers (shown in colour) appears more than once in the figure, so to avoid undue clutter, in each of those two cases only one arrow is shown.
The information in Figure 6 is only an extract from the much fuller information that the SP Computer Model provides. With each SPMA referenced in the figure, and many others on paths that turned out to be blind alleys, the following information is provided:
• The full structure of the SPMA.
• The pairing that produced the given SPMA: an SP-pattern with an SP-pattern, or an SPMA with an SP-pattern (in either order), or an SPMA with an SPMA (see Section 4.4.3).
• A full evaluation of each SPMA in terms of IC, described in detail in [6, Section 3.5] and [5, Section 4.1].
• Absolute and relative probabilities associated with each SPMA, calculated as shown in [6, Section 3.7] and [5, Section 4.4].
A similar level of detail is provided for the creation of SP-grammars by the SP Computer Model.
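The kind of audit trail shown in Figure 6 can be pictured as a simple data structure in which each SPMA records the identifiers of the two structures from which it was built. The following Python sketch (the identifiers and the table are invented for illustration, not taken from the SP Computer Model's actual output) prints the ancestry of a given SPMA:

# Each SPMA is recorded with the identifiers of the two structures
# (SP-patterns or earlier SPMAs) that were matched and unified to create it.
trail = {
    "ID2075": ("ID673", "ID227"),
    "ID673":  ("ID321", "ID405"),
    "ID227":  ("ID92",  "ID51"),
}

def print_ancestors(node, depth=0):
    """Recursively print the audit trail (ancestors) of an SPMA."""
    print("  " * depth + node)
    for parent in trail.get(node, ()):   # SP-patterns have no recorded parents
        print_ancestors(parent, depth + 1)

print_ancestors("ID2075")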
As noted above, this kind of transparency in the workings of the SP Computer Model contrasts with the considerable obscurity in the workings of DNNs (Section 2).
Transparency via granularity and familiarity
As we have seen in Section 5.4, it appears that the concepts of granularity and the DONSVIC principle are closely related. For that reason, they will be discussed together in subsections below which describe how the SP System exhibits granularity associated with each of the seven variants of ICMUP described in Section 5.3.
In addition to granularity, it seems that, because they are so widely used, each of those variants of ICMUP is likely to strike a chord of familiarity with most people.
Thus, because of both granularity and familiarity, each of those seven variants of ICMUP is likely to contribute to interpretability and explainability, and a consequent transparency, in the operation of the SP System.
Granularity, familiarity, and Basic ICMUP
Although Basic ICMUP, as described in Section 5.3.1, is indeed remarkably basic and simple, it is also surprisingly powerful, providing examples of both granularity and the DONSVIC principle. It may, for example, be seen to provide the main mechanism for our perception of the world as being populated by three-dimensional objects.
In case this seems obscure, the learning and perception of 3D objects is a development envisioned for the SP System, described in [17, Sections 6.1 and 6.2], but it has not yet been implemented in the SP Computer Model.
In brief, the basic idea is that, with any kind of object that is new to us, we may view it from several different angles so that there is overlap between neighbouring fields of view. Then our brains can piece together a three-dimensional model of the object by merging the overlapping areas (via Basic ICMUP), much as a panoramic view of a scene may be made with a digital camera from several overlapping views of the scene. In a similar way, we may recognise 3D objects that we already know, and refine our knowledge of them.
The idea of creating 3D models from two or more views, called 'photogrammetry', is the basis of commercial and free systems that are available for creating 3D models from photographs. 1 The same kind of process is at work in the creation of Google's 'Streetview'. Here, overlapping digital photographs taken of streets and junctions all over the world are pieced together to create very useful 3D digital models of those many streets and junctions.
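As a toy, one-dimensional analogue of piecing together overlapping views, the following Python sketch (the fragments and the function are illustrative, not part of the SP Computer Model) merges two fragments by finding the longest overlap between the end of one and the start of the other and unifying that shared region, in the spirit of Basic ICMUP:

def merge_overlapping(left, right, min_overlap=3):
    """Unify the longest suffix of 'left' that matches a prefix of 'right'."""
    best = 0
    for size in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-size:] == right[:size]:
            best = size
            break
    return left + right[best:]

view_a = "the house has a red door and"
view_b = "a red door and two windows"
print(merge_overlapping(view_a, view_b))
# -> "the house has a red door and two windows"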
Because objects are such a familiar aspect of how we perceive the world, Basic ICMUP contributes to familiarity as well as granularity, and both of them contribute to transparency in results produced by the SP System.
Even without objects, a little reflection will show that Basic ICMUP is something we do all the time. Whenever we recognise something that we know already, we are employing Basic ICMUP. And whenever we see something new but see a similarity to something we know already, we are employing Basic ICMUP.
Granularity, familiarity, and chunking-with-codes
From the perspective of the chunking-with-codes principle for achieving IC (Section 5.3.2), it seems that a 'chunk' of information qualifies as an information granule, and as an example of the DONSVIC principle at work.
It appears that chunking-with-codes is widespread in HLPC, especially in our use of language. For example, a word like 'house' may be understood as a relatively short identifier or code for the much larger chunk of information which is the meaning of 'house'. It matters not that the chunk may be relatively generalised so that it accommodates many of the different kinds of houses that people live in. Regardless of the complexity of that concept, the word 'house' serves as a short identifier for the concept.
A little reflection shows that, in any natural language, every noun, adjective, verb, and adverb, is in effect a short code for the much larger chunk of information which is what the word means.
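A minimal Python sketch of chunking-with-codes (a toy illustration, not the SP Computer Model): a recurring chunk such as 'house' is stored once in a dictionary and every occurrence in the text is replaced by a short code:

def chunk_with_code(text, chunk, code):
    """Store the chunk once and replace each occurrence with a short code."""
    dictionary = {code: chunk}
    encoded = text.replace(chunk, code)
    return dictionary, encoded

def decode(dictionary, encoded):
    for code, chunk in dictionary.items():
        encoded = encoded.replace(code, chunk)
    return encoded

text = "the red house and the blue house are next to the green house"
dictionary, encoded = chunk_with_code(text, "house", "%h")
print(encoded)                       # chunk replaced by the code '%h'
assert decode(dictionary, encoded) == text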
Because this method for the economical encoding of information is so simple and effective, it is likely that non-verbal aspects of our thinking would be encoded in a similar way-although it is more difficult to obtain direct evidence for this than it is with the surface forms of natural languages.
As with Basic ICMUP, chunking-with-codes is an extremely common feature of how we conceive the world, and thus it exhibits familiarity as well as granularity, and both of them contribute to transparency.
How the SP-grammar shown in Figure 7 may be used in practice may be seen in the SPMA shown in Figure 8. A prominent difference between this SPMA and the one shown earlier in Figure 3 is that, in the earlier one, SP-patterns are arranged horizontally, while in the later one, they are arranged vertically. This has no theoretical significance and is purely a matter of which arrangement is the best fit for the page.
The SPMA shown in Figure 8 is the best of several alternatives created by the SP Computer Model, starting with the New SP-pattern 'MU 0 4 1 #MU' (shown in column 0), and the SP-grammar shown in Figure 7. Old SP-patterns selected from that grammar appear in columns 1 to 4 in the figure, one SP-pattern per column.
The New SP-pattern, 'MU 0 4 1 #MU', is an economical description of a meal: the SP-symbols 'MU' and '#MU' at the top and bottom show that the New SP-pattern is about the menu, the full version of which is shown in the Old SP-pattern in column 1; the SP-symbol '0' in column 0 shows that the starter is a dish of mussels; the SP-symbol '4' shows that the main course is a salad; and the SP-symbol '1' shows that the pudding is apple crumble.
Things like menus in cafes and restaurants are not quite as familiar as chunking-with-codes but they are very much part of everyday life, so we may say that they contribute to familiarity as well as granularity, both of them promoting transparency in the workings of the SP System.
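A minimal Python sketch of the schema-plus-correction idea behind Figure 7 (dish names and codes beyond those mentioned above are invented for illustration): a fixed schema for a three-course meal is 'corrected' by short codes that select the particular dishes:

# The schema fixes the overall structure of a meal; the 'corrections'
# are short codes that pick out the starter, main course, and pudding.
menu = {
    "starter": {"0": "mussels", "1": "soup"},
    "main":    {"4": "salad",   "5": "lasagne"},
    "pudding": {"1": "apple crumble", "2": "ice cream"},
}

def expand(corrections):
    """Turn a compact description such as ('0', '4', '1') into a full meal."""
    starter, main, pudding = corrections
    return (menu["starter"][starter], menu["main"][main], menu["pudding"][pudding])

print(expand(("0", "4", "1")))   # -> ('mussels', 'salad', 'apple crumble')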
Granularity, familiarity, and run-length encoding
With the repeated instances of 'INFORMATION' in Section 5.3.4, which illustrate the run-length coding concept, the repetition of the pattern 'INFORMATION' would in itself suggest that it conforms to the DONSVIC principle, and would thus qualify as an information granule.
Whenever a sports coach, for example, says "keep doing push-ups until I say stop", he or she is employing run-length coding. It is very widely used and may thus contribute to familiarity and transparency for users of an SP System.
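For completeness, a minimal Python sketch of run-length coding, in which each run of repeated symbols is replaced by one instance of the symbol plus a count:

from itertools import groupby

def run_length_encode(sequence):
    """Replace each run of repeats with a (symbol, count) pair."""
    return [(symbol, sum(1 for _ in run)) for symbol, run in groupby(sequence)]

def run_length_decode(pairs):
    return "".join(symbol * count for symbol, count in pairs)

encoded = run_length_encode("aaaabbbcca")
print(encoded)                          # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
assert run_length_decode(encoded) == "aaaabbbcca"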
Granularity, familiarity, and part-whole hierarchies
A part-whole hierarchy is similar in some respects to the schema-plus-correction example shown in Figures 7 and 8. Perhaps the main difference is that a part-whole hierarchy would normally have more levels, as in the SPMA in Figure 9.
Fig. 9. The SPMA shown here is the best of several that the SP Computer Model has created, beginning with several one-SP-symbol SP-patterns (in column 0) that describe some features of a car, and a grammar of Old patterns, some of which are the SP-patterns shown in columns 1 to 8. These SP-patterns include one for 'mycar', in column 4, and other SP-patterns that describe parts and sub-parts of 'mycar'.
With many simplifications, this SPMA shows how the SP Computer Model may create an analysis of 'mycar' into a part-whole hierarchy when it is presented with some features of 'mycar' in column 0 and a repository of Old SP-patterns which include those SP-patterns in columns 1 to 8 of Figure 9.
As with other examples in this chapter, it is clear that there is granularity in the SP-patterns shown in the figure because economies can be achieved as described in Section 5.3.5, and thus the SPMA is likely to conform to the DONSVIC principle as described in Section 5.4.
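The part-whole structure in Figure 9 can be pictured as a nested data structure. In the following Python sketch (part names are abbreviated from the figure and the levels are simplified), each part is stored once inside the whole to which it belongs, and all parts are listed recursively:

# A much-simplified part-whole hierarchy for 'mycar':
# each part appears once, inside the whole to which it belongs.
mycar = {
    "engine": {"crankshaft": {"counterweights": {}}, "pistons": {}, "valves": {}},
    "wheels": {"wheel1": {}, "wheel2": {}},
    "body":   {"seats": {"seat1": {}, "seat2": {}}, "doors": {"door1": {}}},
}

def list_parts(whole, depth=0):
    """Print the part-whole hierarchy, one level of indentation per level of parts."""
    for part, subparts in whole.items():
        print("  " * depth + part)
        list_parts(subparts, depth + 1)

list_parts(mycar)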
As with other variants of ICMUP, part-whole hierarchies are very familiar in everyday life, and that familiarity is likely to contribute to transparency in results from the SP System.
Granularity, familiarity, and class-inclusion hierarchies
Although the categories used by botanists to classify plants have a formal status, it is likely that they have a foundation in what seems 'natural', which is itself one facet of the DONSVIC principle (Section 5.4). More generally, categories like that may be seen as information granules. In the SPMA shown in Figure 10, column 0 shows some New SP-patterns that represent features of a plant that has not yet been identified, while the SP-pattern in column 1 shows that the unknown plant is probably a Meadow Buttercup (the name is shown near the bottom of the column), and the SP-patterns in columns 2 to 6 show higher-level categories such as the genus (column 6), the family (column 5), and so on.
The way in which IC is served by Old SP-patterns like the ones shown can be seen in the way they make it possible to avoid unnecessary repetition of information. For example, the SP-pattern representing the high-level category 'Plants' (column 2) has the features 'has-chlorophyll' and 'photosynthesises'. As can be seen from the SPMA, there is no need to repeat that information in the lower-level category 'Angiospermae' (column 3), or in the next category below, the category 'Ranunculales' (column 4), and so on down to the level of the Meadow Buttercup (column 1).
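The way in which a class-inclusion hierarchy avoids repetition can be sketched in a few lines of Python (the categories are taken from Figure 10, but the assignment of features to levels is simplified for illustration): each feature is stored at the highest category to which it applies, and lower levels inherit it rather than repeating it:

# Each category stores only the features that are new at its level;
# everything else is inherited from the category above.
hierarchy = {
    "Plants":            (None,            {"has-chlorophyll", "photosynthesises"}),
    "Angiospermae":      ("Plants",        {"flowers"}),
    "Ranunculales":      ("Angiospermae",  {"numerous-stamens"}),
    "Meadow Buttercup":  ("Ranunculales",  {"yellow-petals", "meadows"}),
}

def all_features(category):
    """Collect a category's own features plus those inherited from ancestors."""
    features = set()
    while category is not None:
        parent, own = hierarchy[category]
        features |= own
        category = parent
    return features

print(all_features("Meadow Buttercup"))
# includes 'has-chlorophyll', inherited from 'Plants' without being repeated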
Much the same can be said about class-inclusion hierarchies as was said about part-whole hierarchies. They promote granularity, they are very familiar, and thus likely to contribute to transparency in the results from the SP System.
Granularity, familiarity, and SP-multiple-alignments
As described in Section 5.3.7, the concept of SPMA is a generalisation of the other six versions of ICMUP described in Section 5.3. As such, it is likely to exhibit the same levels of granularity and familiarity as the other six, with corresponding benefits for transparency.
Interpretability and explainability
Although interpretability and explainability fall under the heading of transparency, considered in Sections 6 and 7, above, this section describes some recent studies that are more specific to those two concepts, with some brief comments from an SP perspective.
Quanshi Zhang and Song-Chun Zhu describe a survey of visual interpretability for deep learning [30]. Like other authors, they emphasise achievements with DNNs but lament how interpretability is always their Achilles' heel. Concentrating on convolutional neural networks (CNNs), they examine methods for discovering representations of pre-trained CNNs, including methods for 'disentangling' representations of pre-trained CNNs, and they examine how learning by CNNs may be achieved with 'disentangled' representations, and how 'middle-to-end' learning may be achieved with 'model interpretability'. Finally, they suggest that "In the future, we believe the middle-to-end learning will continuously be a fundamental research direction." [30, p. 37] In addition, they suggest that, "based on the semantic hierarchy of an interpretable network, debugging CNN representations at the semantic level will create new visual applications." (ibid.).
David Alvarez-Melis and Tommi Jaakkola describe research towards the development of neural networks that are interpretable, self-explaining, and robust [31]. In that connection, they propose three desirable features for neural networks: 'explicitness', 'faithfulness', and 'stability', and they show that, in general, existing methods do not satisfy them. Starting with linear classifiers, they have developed self-explaining models in stages, progressively generalizing them to meet their criteria of success. They say that experimental results show that the framework they have developed shows promise for reconciling the complexity of models and their interpretability.
Alejandro Barredo Arrieta and colleagues [32] present an overview of studies in "eXplainable Artificial Intelligence (XAI)", which they classify in two different categories: 1) "[Machine learning] models that feature some degree of transparency, [which are] thereby interpretable to an extent by themselves"; and 2) "post-hoc XAI techniques devised to make ML models more interpretable." (p. 108). They introduce a new classification of DNNs "giving rise to an alternative taxonomy that connects more closely with the specific domains in which explainability can be realized for Deep Learning models." (ibid.). Also: "Our reflections about the future of XAI, conveyed in the discussions held throughout this work, agree on the compelling need for a proper understanding of the potentiality and caveats opened up by XAI techniques. It is our vision that model interpretability must be addressed jointly with requirements and constraints related to data privacy, model confidentiality, fairness and accountability." (ibid.).
Ruth Byrne [33] discusses how 'counterfactuals' (what would have happened if circumstances had been different) may provide evidence in support of explainable AI. In particular in this connection, she considers which kinds of counterfactual are most useful: "... to maximize their effectiveness, it will be useful for XAI to incorporate information from psychological experiments about the way people create and comprehend counterfactuals, for counterfactuals of different structure and content, and with various relations." [33, p. 6280].
David Gunning and colleagues [34] discuss how "... for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners" (p. 1). They describe several issues associated with explainability but do not reach conclusions.
Randy Goebel and colleagues [35] note the problems with explainability in the results normally produced by DNNs. They suggest that one possible way forward is to develop DNNs that can create explanations in parallel with their main processing. Another possibility is some kind of hybrid process that leverages human intelligence in conjunction with machine intelligence.
These studies are only a small fraction of activity in the areas of interpretability and explainability. The impression one gains from these studies and others is that it is likely to be a struggle to develop DNNs, or varieties thereof, which provide what is needed in terms of transparency, interpretability, and explainability.
By contrast, the SP System has clear strengths in terms of 'transparency via audit trails' (Section 6), and in terms of 'transparency via granularity and familiarity' (Section 7).
Conclusion
This chapter describes how the SP System-which is the SP Theory of Intelligence and the SP Computer Model-may promote transparency and granularity in AI, and perhaps also in other areas of computing.
The SP System is introduced (Section 4), together with an account of the significance of IC in the representation and processing of knowledge in the SP System (Section 5). It seems that much of this IC can be seen as "IC via the matching and unification of patterns" (ICMUP, Section 5.1).
An important part of ICMUP in this context is the matching of patterns that are 'discontinuous', meaning that any given pattern may be interspersed with other information (Section 5.2).
Seven variants of ICMUP are described in Section 5.3. Amongst those seven variants, the most important is the concept of SP-multiple-alignment (SPMA) (Sections 5.3.7 and 4.4), a version that generalises the six other versions (Section 5.3).
Another important idea associated with the SP System is the concept of "Discovery Of Natural Structures Via Information Compression" ('DONSVIC') (Section 5.4). As described in Section 5.4, the DONSVIC principle seems to provide a basis for the concept of granularity in AI.
There may be a case for exploring what appears to be some common ground amongst such concepts as 'information granule', 'SP-pattern', 'information chunk', and 'entity' or 'object' in object-oriented programming (Section 5.6).
The main conclusions of this chapter are:
• Transparency via audit trails. For the creation of any given SPMA, the SP System provides very full information about how, via heuristic search, that SPMA has been created, with full information about all the false trails that were followed in that search (Section 6). For any given SPMA it is possible to plot an audit trail of all its ancestors.
The fact that such an audit trail can be created confirms the existence of clear, granular structures in the system's processing. There is very full information about all the SPMAs created on the path to 'good' SPMAs, and all the SPMAs created on false trails away from 'good' SPMAs.
• Transparency via granularity. The SP Computer Model has already demonstrated the unsupervised learning of words and grammatical classes from an English-like artificial language without any punctuation or spaces to mark where one word ends and the next one begins. There is clear potential for further development along these lines. There is also potential for the unsupervised learning of 3D objects.
In general, the SP System, via the DONSVIC principle, has potential to bootstrap 'natural' structures in its knowledge, and thus to bootstrap granularity in that knowledge.
• Transparency via familiarity. Owing to the organisation and workings of the SP System, the seven variants of ICMUP described in Section 5.3 will be the mainstay of how its knowledge is organised.
In view of evidence that the same principles have a role to play in brains and nervous systems [11], the kinds of structures created by the SP System as it matures are likely to be similar to structures that people use themselves: chunking-with-codes, part-whole hierarchies, class-inclusion hierarchies, and more. Consequently, the kinds of structures created by the SP System are likely to be familiar to people, helping to make those structures relatively easy to interpret and to explain, and correspondingly transparent.
Many recent studies of interpretability and explainability (Section 8) suggest that, in the quest for transparency in those two areas, it is likely to prove difficult to overcome fundamental weaknesses in DNNs. It may be better to make a fresh start with the SP System as the basis for development, especially since there is evidence that the SP System provides a relatively promising foundation for the development of artificial general intelligence (Section 4.6).
Since the SP System has potential in several areas apart from AI (Section 4.6.2), there is potential for the advantages just described to be seen in those areas as well.
Fig. 1. Schematic representation of the SP System. Adapted from Figure 1 in [5], with permission.
Fig. 3. An SPMA produced by the SP Computer Model with a New SP-pattern (in row 0)
Fig. 4. Schematic representation of the development and application of the SP Machine. Adapted from Figure 2 in [5], with permission.
Here, in some 'raw data' shown at the top of the figure, two examples of the pattern 'INFORMATION' are merged or unified to create a single instance, shown immediately below.
Fig. 5. How two instances of the pattern 'INFORMATION' in a body of raw data may be merged to form a single 'unified' pattern or 'chunk' of information, shown below the 'raw data'. The rest of the figure is considered later. Adapted from Figure 2.3 in [6], with permission.
Fig. 8. The best SPMA created by the SP Computer Model with the New SP-pattern, 'MU 0 4 1 #MU', and the set of Old SP-patterns shown in Figure 7. Adapted from Figure 3 in [29].
Figure 10 shows an SPMA created by the SP Computer Model which, via classes and subclasses of plants, illustrates the concept of a class-inclusion hierarchy, as described in Section 5.3.6.
Fig. 10. An SP-multiple-alignment created by the SP Computer Model. It is the best of several alternatives that the program creates, starting with a set of New SP-patterns (in column 0) which are a description of an unknown plant, and an SP-grammar which includes Old SP-patterns shown in columns 1 to 6, which describe different categories of plant and a selection of their attributes. From Figure 16 in [5], reproduced with permission.
See, for example, Agisoft (www.agisoft.com/), All3DP (all3dp.com/), Sculpteo (www.sculpteo.com), and more.
Granularity, familiarity, and schema-plus-correction
With regard to schema-plus-correction as a means of achieving IC (Section 5.3.3), granularity and the DONSVIC principle may be seen at work at two main levels: the schema itself and the 'chunks' of information which serve as 'corrections' to the schema. As we saw in Section 5.3.3, a menu in a restaurant or cafe is a good example of the schema-plus-correction means of achieving IC. This is illustrated in Figure 7, which is a relatively simple example of an SP-grammar composed of SP-patterns.
Fig. 7. An SP-grammar composed of SP-patterns that represent a three-course meal. Each SP-pattern has a comment to the right which explains what it is about, with the marker '|' at the beginning of each comment. Adapted from Figure 2 in [29].
[1] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. Technical report, Google Inc. and others (2014). arXiv:1312.6199v4 [cs.CV] 19 Feb 2014, bit.ly/1elzRGM (PDF).
[2] Nguyen, A., Yosinski, J., and Clune, J. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR 2015), pages 427-436 (2015). doi:10.1109/CVPR.2015.7298640.
[3] Zadeh, L. A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 90, 111-127 (1997).
[4] Pedrycz, W. Granular computing for data analytics: a manifesto of human-centric computing. IEEE/CAA Journal Of Automatica Sinica 5(6), 1025-1034 (2018).
[5] Wolff, J. G. The SP Theory of Intelligence: an overview. Information 4(3), 283-341 (2013). doi:10.3390/info4030283. arXiv:1306.3888 [cs.AI], bit.ly/1NOMJ6l.
[6] Wolff, J. G. Unifying Computing and Cognition: the SP Theory and Its Applications. CognitionResearch.org, Menai Bridge (2006). ISBNs: 0-9550726-0-3 (ebook edition), 0-9550726-1-1 (print edition). Distributors, including Amazon.com, are detailed on bit.ly/WmB1rs.
[7] Kuhn, T. S. The Structure of Scientific Revolutions. University of Chicago Press, Chicago and London, fourth, Kindle edition (2012).
[8] Attneave, F. Some informational aspects of visual perception. Psychological Review 61, 183-193 (1954).
[9] Barlow, H. B. Sensory mechanisms, the reduction of redundancy, and intelligence. In HMSO, editor, The Mechanisation of Thought Processes, pages 535-559. Her Majesty's Stationery Office, London (1959).
[10] Barlow, H. B. Trigger features, adaptation and economy of impulses. In Leibovic, K. N., editor, Information Processes in the Nervous System, pages 209-230. Springer, New York (1969).
[11] Wolff, J. G. Information compression as a unifying principle in human learning, perception, and cognition. Complexity 2019, 38 pages (2019). doi:10.1155/2019/1879746. Article ID 1879746. viXra:1707.0161v3, hal-01624595 v2.
[12] Wolff, J. G. Information compression, multiple alignment, and the representation and processing of knowledge in the brain. Frontiers in Psychology 7, 1584 (2016). ISSN 1664-1078. doi:10.3389/fpsyg.2016.01584. arXiv:1604.05535 [cs.AI], bit.ly/2esmYyt.
[13] Wolff, J. G. Mathematics as information compression via the matching and unification of patterns. Complexity 2019, 25 pages (2019). doi:10.1155/2019/6427493. Article ID 6427493. Archives: vixra.org/abs/1912.0100 and hal.archives-ouvertes.fr/hal-02395680.
[14] Ford, M. Architects of Intelligence: the Truth About AI From the People Building It. Packt Publishing, Birmingham, UK, Kindle edition (2018).
[15] Wolff, J. G. Problems in AI research and how the SP System may help to solve them (2020). Download: tinyurl.com/y48m84t5, submitted for publication.
[16] Wolff, J. G. Big data and the SP Theory of Intelligence. IEEE Access 2, 301-315 (2014). doi:10.1109/ACCESS.2014.2315297. arXiv:1306.3890 [cs.DB], bit.ly/2qfSR3G. This paper, with minor revisions, is reproduced in Fei Hu (Ed.), Big Data: Storage, Sharing, and Security, Taylor & Francis LLC, CRC Press, 2016, Chapter 6, pp. 143-170.
[17] Wolff, J. G. Application of the SP Theory of Intelligence to the understanding of natural vision and the development of computer vision. SpringerPlus 3(1), 552-570 (2014). doi:10.1186/2193-1801-3-552. arXiv:1303.2071 [cs.CV], bit.ly/2oIpZB6.
[18] Wolff, J. G. Towards an intelligent database system founded on the SP theory of computing and cognition. Data & Knowledge Engineering 60, 596-624 (2007). doi:10.1016/j.datak.2006.04.003. arXiv:cs/0311031 [cs.DB], bit.ly/1CUldR6.
[19] Wolff, J. G. Medical diagnosis as pattern recognition in a framework of information compression by multiple alignment, unification and search. Decision Support Systems 42, 608-625 (2006). doi:10.1016/j.dss.2005.02.005. arXiv:1409.8053 [cs.AI], bit.ly/1F366o7.
[20] Palade, V. and Wolff, J. G. A roadmap for the development of the 'SP Machine' for artificial intelligence. The Computer Journal 62, 1584-1604 (2019). doi:10.1093/comjnl/bxy126. arXiv:1707.00614, bit.ly/2tWb88M.
[21] Wolff, J. G. Learning syntax and meanings through optimization and distributional analysis. In Levy, Y., Schlesinger, I. M., and Braine, M. D. S., editors, Categories and Processes in Language Acquisition, pages 179-215. Lawrence Erlbaum, Hillsdale, NJ (1988). bit.ly/ZIGjyc.
[22] Lenneberg, E. H. Understanding language without the ability to speak: a case report. Journal of Abnormal and Social Psychology 65, 419-425 (1962).
[23] Brown, C. My Left Foot. Vintage Digital, London, Kindle edition (2014). First published in 1954.
[24] Chater, N. and Vitányi, P. 'Ideal learning' of natural language: positive results about learning from positive evidence. Journal of Mathematical Psychology 51(3), 135-163 (2007). doi:10.1016/j.jmp.2006.10.002.
[25] Miller, G. A. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review 63, 81-97 (1956).
[26] Fujita, H., Gaeta, A., Loia, V., and Orciuoli, F. Resilience analysis of critical infrastructures: a cognitive approach based on granular computing. IEEE Transactions on Cybernetics 49(5), 1835-1848 (2018). doi:10.1109/TCYB.2018.2815178.
[27] Birtwistle, G. M., Dahl, O.-J., Myhrhaug, B., and Nygaard, K. Simula Begin. Studentlitteratur, Lund (1973).
[28] Lee, S.-Y. and Liou, R.-L. A multi-granularity locking model for concurrency control in object-oriented database systems. IEEE Transactions on Knowledge and Data Engineering 8(1), 144-156 (1996).
[29] Wolff, J. G. Software engineering and the SP Theory of Intelligence. Technical report, CognitionResearch.org (2017). Submitted for publication. arXiv:1708.06665 [cs.SE], bit.ly/2w99Wzq.
[30] Zhang, Q. and Zhu, S.-C. Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering 19, 27-39 (2018).
[31] Alvarez-Melis, D. and Jaakkola, T. S. Towards robust interpretability with self-explaining neural networks. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada (2018).
[32] Arrieta, A. B., Díaz-Rodríguez, N., Ser, J. D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82-115 (2020).
[33] Byrne, R. M. J. Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), pages 6276-6282 (2019).
[34] Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., and Yang, G.-Z. XAI-Explainable Artificial Intelligence. Science Robotics 4(37), eaay7120 (2019). doi:10.1126/scirobotics.aay7120.
[35] Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P., and Holzinger, A. Explainable AI: The New 42? In CD-MAKE 2018, 27-30 Aug 2018, Hamburg, Germany, Lecture Notes in Computer Science, volume 11015, pages 295-303 (2018).
| []
|
[
"Demystifying Inductive Biases for β-VAE Based Architectures",
"Demystifying Inductive Biases for β-VAE Based Architectures",
"Demystifying Inductive Biases for β-VAE Based Architectures",
"Demystifying Inductive Biases for β-VAE Based Architectures"
]
| [
"Dominik Zietlow [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n",
"Michal Rolínek [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n",
"Georg Martius [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n",
"Dominik Zietlow [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n",
"Michal Rolínek [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n",
"Georg Martius [email protected] \nMax Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany\n"
]
| [
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany",
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany",
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany",
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany",
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany",
"Max Planck Institute for Intelligent Systems\nMax-Planck-Ring 472076TübingenGermany"
]
| []
| The performance of β-Variational-Autoencoders (β-VAEs) and their variants on learning semantically meaningful, disentangled representations is unparalleled. On the other hand, there are theoretical arguments suggesting the impossibility of unsupervised disentanglement. In this work, we shed light on the inductive bias responsible for the success of VAE-based architectures. We show that in classical datasets the structure of variance, induced by the generating factors, is conveniently aligned with the latent directions fostered by the VAE objective. This builds the pivotal bias on which the disentangling abilities of VAEs rely. By small, elaborate perturbations of existing datasets, we hide the convenient correlation structure that is easily exploited by a variety of architectures. To demonstrate this, we construct modified versions of standard datasets in which (i) the generative factors are perfectly preserved; (ii) each image undergoes a mild transformation causing a small change of variance; (iii) the leading VAE-based disentanglement architectures fail to produce disentangled representations whilst the performance of a nonvariational method remains unchanged. The construction of our modifications is nontrivial and relies on recent progress on mechanistic understanding of β-VAEs and their connection to PCA. We strengthen that connection by providing additional insights that are of stand-alone interest. | null | [
"https://arxiv.org/pdf/2102.06822v1.pdf"
]
| 231,924,564 | 2102.06822 | ff8e6ef95d24d7796ffa9174ec3b79e5ab4703a5 |
Demystifying Inductive Biases for β-VAE Based Architectures
Dominik Zietlow [email protected]
Max Planck Institute for Intelligent Systems
Max-Planck-Ring 472076TübingenGermany
Michal Rolínek [email protected]
Max Planck Institute for Intelligent Systems
Max-Planck-Ring 472076TübingenGermany
Georg Martius [email protected]
Max Planck Institute for Intelligent Systems
Max-Planck-Ring 472076TübingenGermany
Demystifying Inductive Biases for β-VAE Based Architectures
The performance of β-Variational-Autoencoders (β-VAEs) and their variants on learning semantically meaningful, disentangled representations is unparalleled. On the other hand, there are theoretical arguments suggesting the impossibility of unsupervised disentanglement. In this work, we shed light on the inductive bias responsible for the success of VAE-based architectures. We show that in classical datasets the structure of variance, induced by the generating factors, is conveniently aligned with the latent directions fostered by the VAE objective. This builds the pivotal bias on which the disentangling abilities of VAEs rely. By small, elaborate perturbations of existing datasets, we hide the convenient correlation structure that is easily exploited by a variety of architectures. To demonstrate this, we construct modified versions of standard datasets in which (i) the generative factors are perfectly preserved; (ii) each image undergoes a mild transformation causing a small change of variance; (iii) the leading VAE-based disentanglement architectures fail to produce disentangled representations whilst the performance of a nonvariational method remains unchanged. The construction of our modifications is nontrivial and relies on recent progress on mechanistic understanding of β-VAEs and their connection to PCA. We strengthen that connection by providing additional insights that are of stand-alone interest.
Introduction
The task of unsupervised learning of interpretable data representations has a long history, ranging from classical approaches using linear algebra, e.g. via Principal Component Analysis (PCA) Pearson (1901), or statistical methods such as Independent Component Analysis (ICA) Comon (1994), all the way to more recent approaches that rely on deep learning architectures.
The cornerstone architecture is the Variational Autoencoder Kingma and Welling (2014) (VAE), which clearly demonstrates both high semantic quality as well as good performance in terms of disentanglement. If we hold the overloaded term disentanglement to the highest of its aspirations, as the ability to recover the true generating factors of data, fundamental problems arise. As explained by Locatello et al. (2019), already the concept of generative factors is compromised from a statistical perspective: two (or in fact infinitely many) sets of generative factors can generate statistically indistinguishable datasets. Yet, the scores on the disentanglement benchmarks are high and continue to rise. This apparent contradiction stems from biases present in the datasets, metrics, and architectures used. It was concluded in Locatello et al. (2020) that [...] future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision [...], which did not happen for the majority of existing approaches. We close this gap for VAE-based architectures on the two most common datasets, namely dSprites Matthey et al. (2017) and Shapes3d Burgess and Kim (2018).
The main hypothesis of this work is that all unsupervised, VAE-based disentanglement architectures are successful because they exploit the same structural bias in the data. The ground truth generating factors are well aligned with the nonlinear principal components that VAEs strive for. This bias can be reduced by introducing a small change to the local correlation structure of the input data, which, however, perfectly preserves the set of generative factors. We evaluate a set of approaches on slightly modified versions of the two leading datasets in which each image undergoes a modification inducing little variance. We report drastic drops of disentanglement performance on the altered datasets. On a technical level, we build on the findings by Rolinek et al. (2019), who argued that VAEs recover the nonlinear principal components of the data. In other words, they recover a set of scalars that embody the sources of variance through a nonlinear mapping, similarly to PCA in the linear setting. We extend their argument by an additional finding that further strengthens this connection. The small modifications of the datasets we propose aim to change the leading principal components by adding modest variance to a set of alternative candidates. The "to-be" leading principal components are specific to each dataset, but they are automatically determined in a consistent fashion.
Related work
The related work can be categorized into three research questions: i) defining disentanglement and metrics capturing the quality of latent representations; ii) architecture development for unsupervised learning of disentangled representations; and iii) understanding the inner workings of existing architectures, as for example of β-VAEs. This paper is built upon results from all three lines of work.
Defining disentanglement. Defining the term disentangled representation is an open question Higgins et al. (2018). The presence of learned representations in machine learning downstream tasks, such as object recognition, natural language processing, and others, created the need to "disentangle the factors of variation" Bengio et al. (2013) early on. This vague interpretation of disentanglement is inspired by the existence of a low dimensional manifold that captures the variance of higher dimensional data. As such, finding a factorized, statistically independent representation became a core ingredient of disentangled representation learning and dates back to classical ICA models Comon (1994); Bell and Sejnowski (1995). For some tasks, the desired feature of a disentangled representation is that it is semantically meaningful. Prominent examples can be found in computer vision Shu et al. (2017); Liao et al. (2020) and in research addressing the interpretability of machine learning models Adel et al. (2018); Kim (2019). Based on group theory and symmetry transformations, Higgins et al. (2018) provides the "first principled definition of a disentangled representation". Closely related to this concept is also the field of causality in machine learning (Schölkopf, 2019; Suter et al., 2019), more specifically the search for causal generative models Besserve et al. (2018, 2020). In terms of implementable metrics, a variety of quantities have been introduced, such as the β-VAE score Higgins et al. (2017), SAP score Kumar et al. (2017), DCI scores Eastwood and Williams (2018) and the Mutual Information Gap (MIG, Chen et al. (2018)).
Architecture development. The leading architectures for disentangled representation learning are based on VAEs Kingma and Welling (2014). Despite originally developed as a generative modeling architecture, its variants have proven to excel at representation learning tasks. In particular, the β-VAE performs remarkably well. It exposes the trade-off between reconstruction and regularization via an additional hyperparameter. Other architectures have been proposed that additionally encourage statistical independence in the latent space, e.g. Factor-VAE (Kim and Mnih, 2018b) and β-TC-VAE (Chen et al., 2018). The DIP-VAE (Kumar et al., 2017) suggests using moment-matching to close the distribution gap introduced in the original VAE paper. Using data with auxiliary labels, e.g. time indices of time series data, for which the conditional prior latent distribution is factorized, allowed Khemakhem et al. (2020) to circumvent the unidentifiability of previous models. Similarly, Klindt et al. (2021) used a sparse temporal prior to develop an identifiable model that also performs well on natural data. In this work, we also compare against representations learned by Permutation Contrastive Learning (PCL) Hyvarinen and Morioka (2017). This non-variational method conducts nonlinear ICA also assuming temporal dependencies between the sources of variance. The PCL objective is based on logistic regression.
Understanding inner workings. With the rising success and development of VAE based architectures, the question of understanding their inner working principles became dominant in the community. One line of work tries to answer the question of why these models disentangle at all. Another closely related line of work showed the tight connection between the vanilla (β-)VAE objective and (probabilistic) PCA (Tipping and Bishop, 1999) (Rolinek et al., 2019; Lucas et al., 2019). Building on these findings, novel approaches for model selection were proposed (Duan et al., 2020), emphasizing the value of thoroughly understanding these methods. On a less technical side, Locatello et al. (2019) conducted a broad set of experiments, questioning the relevance of the specific model architecture compared to the choice of hyperparameters and the variance over restarts. They also formalized the necessity of inductive biases as a strict requirement for unsupervised learning of disentangled representations. Our experiments are built on their code-base.
Background
Quantifying Disentanglement
Among the different viewpoints on disentanglement, we follow the recent literature and focus on the connection between the discovered data representation and a set of generative factors. Multiple metrics have been proposed to quantify this connection. Most of them are based on the understanding that, ideally, each generative factor is encoded in precisely one latent variable. This was captured concisely by Chen et al. (2018), who proposed the Mutual Information Gap (MIG): the mean difference (over the $N_w$ generative factors) between the two highest values of mutual information between a latent coordinate and a single generating factor, normalized by that factor's entropy. For the entropy $H(w_i)$ of a generating factor and the mutual information $I(w_i; z_k)$ between a generating factor and a latent coordinate, the MIG is defined as
$$\frac{1}{N_w}\sum_{i=1}^{N_w} \frac{1}{H(w_i)}\left( \max_{k} I(w_i; z_k) - \max_{k \neq k^*} I(w_i; z_k) \right), \quad (1)$$
where $k^* = \arg\max_{\kappa} I(w_i, z_\kappa)$. More details about MIG, its implementation, and an extension to discrete variables can be found in (Chen et al., 2018; Rolinek et al., 2019).
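The computation in (1) can be sketched in a few lines of NumPy, assuming that the matrix of mutual informations and the factor entropies have already been estimated (the estimation itself, e.g. by discretising the latents, is not shown here):

import numpy as np

def mutual_information_gap(mi, entropies):
    """mi[i, k]: mutual information between factor w_i and latent z_k;
    entropies[i]: entropy H(w_i) of factor w_i."""
    sorted_mi = np.sort(mi, axis=1)[:, ::-1]          # per factor, MI in descending order
    gaps = (sorted_mi[:, 0] - sorted_mi[:, 1]) / entropies
    return gaps.mean()

# toy example: 2 factors, 3 latent dimensions
mi = np.array([[0.9, 0.1, 0.0],
               [0.2, 0.7, 0.1]])
entropies = np.array([1.0, 1.0])
print(mutual_information_gap(mi, entropies))          # -> (0.8 + 0.5) / 2 = 0.65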
Variational Autoencoders and the Mystery of a Specific Alignment
Variational autoencoders hide many intricacies and attempting to compress their exposition would not do them justice. For this reason, we limit ourselves to what is crucial for understanding this work: the objective function. For a well-presented description of VAEs, we refer the reader to Doersch (2016). As is common in generative models, VAEs aim to maximize the log-likelihood objective
$$\sum_{i=1}^{N} \log p\big(x^{(i)}\big), \quad (2)$$
in which $\{x^{(i)}\}_{i=1}^{N} = X$ is a dataset consisting of $N$ i.i.d. samples $x^{(i)}$ of a multivariate random variable $X$ that follows the true data distribution. The quantity $p(x^{(i)})$ captures the probability density of generating the training data point $x^{(i)}$ under the current parameters of the model. This objective is, however, intractable in its general form. For this reason, Kingma and Welling (2014) follow the standard technique of variational inference and introduce a tractable Evidence Lower Bound (ELBO):
$$\mathbb{E}_{q(z \mid x^{(i)})}\big[\log p\big(x^{(i)} \mid z\big)\big] - D_{\mathrm{KL}}\big(q(z \mid x^{(i)}) \,\|\, p(z)\big). \quad (3)$$
Here, $z$ are the latent variables used to generate samples from $X$ via a parameterized stochastic decoder $p(x^{(i)} \mid z)$. The fundamental question of "How do these objectives promote disentanglement?" was first asked by . This is indeed far from obvious; in disentanglement the aim is to encode a fixed generative factor in precisely one latent variable. From a geometric viewpoint, this requires the latent representation to be axis-aligned (one axis corresponding to one generative factor). This question becomes yet more intriguing after noticing (and formally proving) that both objective functions (2) and (3) are invariant under rotations for rotationally symmetric latent space priors, such as the ubiquitous $p(z) = \mathcal{N}(0, 1)$ (Rolinek et al., 2019). In other words, any rotation of a fixed latent representation results in the same value of the objective function, and yet β-VAEs consistently produce representations that are axis-aligned and in effect isolate the generative factors into individual latent variables.
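To make the reconstruction-versus-regularization trade-off concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper) of the β-VAE loss that a diagonal-Gaussian posterior and a standard normal prior give rise to; the same decomposition appears in closed form as Eq. (4) below. The toy encoder outputs and the identity 'decoder' are stand-ins for arbitrary parameterized networks.

import numpy as np

def beta_vae_loss(x, mu, log_var, decode, beta=4.0, rng=np.random.default_rng(0)):
    """Reconstruction term plus beta-weighted KL divergence between the
    diagonal-Gaussian posterior N(mu, sigma^2) and the prior N(0, 1)."""
    sigma = np.exp(0.5 * log_var)
    z = mu + sigma * rng.standard_normal(mu.shape)     # reparameterization trick
    reconstruction = np.sum((decode(z) - x) ** 2)       # squared-error reconstruction loss
    kl = 0.5 * np.sum(mu ** 2 + sigma ** 2 - log_var - 1.0)
    return reconstruction + beta * kl

# toy stand-ins for encoder outputs and a decoder
x = np.array([0.5, -0.3])
mu, log_var = np.array([0.1, 0.0]), np.array([-1.0, -1.0])

def decode(z):
    # identity 'decoder': a placeholder for a neural network decoder
    return z

print(beta_vae_loss(x, mu, log_var, decode))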
Resolution via Nonlinear Connections to PCA
A mechanistic answer to the question raised in the previous subsection was given by Rolinek et al. (2019). The formal argument showed that under specific conditions which are typical for β-VAEs (called polarized regime), the datapoint-wise linearization of the model performs PCA in the sense of aligning the "sources of variance" with the local axes. The resulting alignment often coincides with finding the components of the datasets ground truth generating factors. Fig. 1 illustrates the difference between local and global PCA. Note that the principal directions of a non-degenerate uniform distribution are the Cartesian axes. PCA as a linear transformation is aligning the embedding following the overall (global) variance. Nonlinear VAEs are aligning the latent space according to the local structure (the local principal components of the almost uniform clusters). This behavior stems from the convenient but uninformed choice of a diagonal posterior, which breaks the symmetry of (2) and (
Linear vs. Nonlinear Embeddings
One less obvious observation is that the "isolation" of different sources of variance relies on the non-linearity of the decoder. The region in which the linearization of the decoder around a fixed µ (i) (x (i) ) is a reasonable approximation suggests a certain radius of the relevant local structure. Since in many datasets the local principal components are well aligned with the intuitively chosen generating factors, β-VAEs recover sound global principal components. If, however, the local structure obeys a different "natural" alignment, the VAE could prefer it, and in return not disentangle the ground truth generating factors.
Methods
We first tighten the connection between VAEs and PCA, secondly introduce the general data generation scheme of commonly used disentanglement datasets, and lastly turn this understanding into an experimental setup that allows for empirical confirmation that the success of VAE based architectures mostly relies on the local structure of the data.
Connection to PCA
The argument established by Rolinek et al. (2019) is technically incomplete to justify the equivalence of linear VAEs and PCA. Strictly speaking, the core message of that work is that VAE decoders tend to be locally orthogonal. The actual alignment of the latent space is insufficiently described by that finding. However, Lucas et al. (2019) argue for the similarity of linear VAEs to probabilistic PCA. We now show a more technical connection between classical PCA and linear VAEs which allows for easier understanding of the consequent subsections. We try to stay close to the language of Rolinek et al.
(2019) and partially reuse their arguments.
The canonical implementation of the β-VAE uses a normal posterior with diagonal covariance matrix and a rotationally symmetric p(z) = N (0, 1) latent prior. This, together with a Gaussian decoder model, turns the ELBO (3) into the tractable loss function
L = E i L (i) rec + βL (i) KL (4) L rec = Dec θ (Enc ϕ (x (i) )) − x (i) 2 L KL = 1 2 j µ (i) 2 j + σ (i) 2 j − log(σ (i) 2 j ) − 1
for an encoder Enc ϕ parameterized by ϕ, a decoder Dec θ parameterized by θ, and
z (i) = Enc ϕ (x (i) ) = µ (i) (x (i) ) + ε (i) , ε (i) ∼ N (0, σ (i) 2 (x (i) )). Since z (i) is z 1 z 2 Latent (local) PCA (local) Nonlinear VAE x 1 x 2 Input Data z 1 z 2 Latent (local) PCA (local)
Linear VAE PCA Figure 1: Distribution of latent encodings for an input distributed as depicted in the middle (data dimensionality equals latent dimensionality). The linear VAE's encoding matches the PCA encoding remarkably well (right); both focus on aligning with axes based on the global variance. The nonlinear VAE (left) is, however, more sensitive to local variance. It picks up on the natural axis alignment of the microscopic structure. The insets show the enlarged area and PCA performed only on the local subset of the point cloud. Our argument in this work is that misaligning the microscopic structure with respect to the ground truth generating factors leads to decreased convenient bias in the data.
unbiased around µ (i) (x (i) ), we find that
L rec = E i L µ rec (x (i) ) + L stoch rec (x (i) ) (5) L stoch rec (x (i) ) = Dec θ (Enc ϕ (x (i) )) − Dec θ (µ (i) ) 2 L µ rec (x (i) ) = Dec θ (µ (i) ) − x (i) 2 .
We assume linear models
µ (i) = M E x (i) , Dec θ (z (i) ) = M D z (i) and denote the SVD decomposition of M D as M D = U ΣV .
We can now state a constraint optimization problem of a simplified VAE objective as
min Σ,U,V E i U ΣV ε (i) 2 (6) s.t. E i L (i) ≈KL = c ≈KL .(7)
where only the stochastic part of the reconstruction loss is minimized and c ≈KL is a constant. The term L ≈KL is the KL loss in the polarized regime, where σ (i) 2 − log(σ (i) 2 ) (element-wise):
L ≈KL = j µ (i) 2 j − log(σ (i) 2 j ) .(8)
The 'decoder matrix' of the classical PCA contains the eigenvectors of the covariance matrix C. By SVD decomposing the zero-mean data matrix X = U X Σ X V X , we find
C = X X = V X Σ 2 X V X .(9)
For encoding data with PCA, the eigenvectors of V X are typically sorted according to their eigenvalue by a permutation matrix P , which leads to the PCA decoder as
M PCA = V X Σ 2 X P.(10)
To tighten the connection between VAEs and PCA, we
compare M D = U ΣV to M PCA = V X Σ 2 X P .
Theorem 1 (Linear VAEs perform PCA). For any X ∈ R n×m , the solution to (6, 7) fulfils
Σ , U , V = arg min Σ,U,V E i U ΣV ε (i) 2 ,(11)
V is a signed permutation matrix,
U = V X .
It was known for long that linear autoencoders, trained on L 2 reconstruction loss, span the same space as PCA Bourlard and Kamp (1988); Baldi and Hornik (1989). The additional similarity that VAEs produce orthogonal mappings, like PCA, was presented by Rolinek et al. (2019). With the final connection presented here, even the alignment of the embedding is shown to be identical. For the sake of brevity, the proofs of the statements can be found in the supplementary material. Although this does not directly translate to a universal statement about the linearization of a nonlinear model, it provides an intuition for that case as well. An important observation is that the alignment of the latent Figure 2: Illustrations for linear and nonlinear embeddings. From left to right: (i) a 3 dimensional point cloud and the corresponding two-dimensional PCA manifold (blue surface) with the canonical principal components (red/blue curves), (ii) a nonlinear two-dimensional manifold with a latent traversal, (iii) a locally perturbed two-dimensional manifold with its principal components which are rotated with respect to (ii), (iv) the goal of our modifications is to move each datapoint closer to this entangled manifold.
(i) (ii) (iii) (iv) →
space is mostly driven by the distribution of the latent noise. When generalizing this statement to the linearization of a nonlinear decoder, the effect of the noise stays local. As a consequence, local changes of the data distribution can potentially lead to a disruptive change in the latent alignments, without inducing large global variance. This idea is depicted in Fig. 2.
The Generative Process
The standard datasets for evaluating disentanglement all have an explicit generation procedure. Each data point x (i) ∈ X is an outcome of a generative process g applied to input w (i) ∈ W. Imagine that g is a function rendering a simple scene from its specification w containing as its coordinates the background color, foreground color, object shape, object size, etc. By design, the individual generative factors are statistically independent in W. All in all,
the dataset X = x (1) , x (2) , . . . , x (n) is constructed with x (i) = g(w (i) ),
where g is a mapping from the generative factors to the corresponding data points.
In this paper, we design a modification g of the generative procedure g that changes the local structure of the dataset X , whilst barely distorts each individual data point. In particular, for each x (i) ∈ X , we have under some distance measure d(·, ·), that
d x (i) , g(w (i) ) ≤ ε.(12)
How to design g such that despite an ε-small modification, VAE-based architectures will create an entangled representation? Following the intuition from Sec. 3.3, Fig. 1 and Fig. 2, we misalign the local variance with respect to the generating factors in order to promote an alternative (entangled) latent embedding. This is precisely the step from (iii) to (iv) in Fig. 2.
To avoid hand-crafting this process, we can exploit the following observation. VAE-based architectures suffer from large performance variance over e.g. different random initializations. This hints at an existing ambiguity: two or more candidates for the latent coordinate system are competing minima of the optimization problem. Some of these solutions perform well, others are "bad" in terms of disentanglement -they correspond to (ii) and (iii) in Fig. 2 respectively. Below, we elaborate on how to foster the entangling and diminish the disentangling solutions. Our modifications are not an implementation of (Locatello et al., 2019, Theorem 1). We do not modify the set of generative factors, but slightly alter the generating process to target a specific subtlety in the inner working of VAEs. Given any dataset, our modification process has three steps:
(i) Find the most disentangled and the most entangled latent space alignment that a β-VAE produces over multiple restarts.
(ii) Optimize a generator that manipulates images to foster and diminish their suitability for the entangled and disentangled model respectively.
(iii) Apply the manipulation to the whole dataset and compare the performance of models trained on the original and the modified dataset.
w 1 w 2 w 3 w 4 w 5 x m ψ (w) x e ϕ dis d θ dis e ϕent d θent x dis x ent ψ = arg min ψ L m L m = L ent − L dis θ dis = arg min θ dis L dis L dis = x dis − x 2 θ ent = arg min θent L ent L ent = x ent − x 2 z dis ∼ N (µ dis , σ 2 dis ) z ent ∼ N (µ ent ,
Choice of Fostered Latent Coordinate System
Over multiple restarts of β-VAE, we pick the model with the lowest MIG score. This gives us an entangled alignment that is expressible by the architecture. Although any choice of metric is valid for this model selection (e.g. UDR Duan et al. (2020)), we chose MIG for the sake of simplicity. The latent variables of each of the models capture the nonlinear principal components of the data. Similarly to PCA, we can order them according to the variance they induce. The order is inversely reflected by the magnitude of the latent noise values. We find the j'th principal components s
(i) j as s (i) j x (i) = enc x (i) k (j)(13)k (j) = arg min l ∈{k (0) ,k (1) ,...,k (j−1) } σ 2 l .(14)
This procedure of sorting the most important latent coordinates is consistent with Higgins et al. (2017) and
Rolinek et al. (2019). The analogy to PCA is that the mapping s (j) (x (i) ) gives the j'th coordinate of x (i) in the new (nonlinear) coordinate system.
Dataset Manipulations
We will now describe the modification procedure assuming the data points are r × r images. The manipulated datapoint
x (i) is of the form x (i) = x (i) +εm w (i)
where the mapping f : R → R r ×R r is constrained by m(w (i) ) ∞ ≤ 1 for every w (i) . Then inequality (12) is naturally satisfied for the maximum norm.
The abstract idea of how to achieve a change of the latent embedding coordinate systems can be visualized using the intuition following from Eq. (14). We can think of two VAE latent spaces where one is considered disentangled ({µ
(i) dis , σ(i)
dis }) and the other is entangled ({µ
(i) ent , σ(i)
ent }), as two sets of nonlinear principal directions, and the variance each of the dimensions capture is reflected in the magnitude of σ (i) . We are aiming to alter the dataset such that its entangled representation is superior over the disentangled representation, in the sense of being cheaper to decode with respect to the reconstruction loss. In other words, projecting the dataset to the manifold supported by z (i) ent should result in a lower loss in Eq. (5) than projecting it to the manifold supported by z (i) dis . A naive way of doing so is by moving each image closer to its projections on the first principal components of the entangled representation and further away from those of the disentangled representation. Instead of hand-crafting this operation, we can optimize for it directly. This idea can be turned into an end-to-end trainable architecture as depicted in Fig. 3. We want to change the dataset such that it is more convenient to encode it in an entangled way. Starting with two pretrained models, we fix their encoders and keep feeding them the original images. This ensures that the latent encoding stay unchanged, as we want to compare their suitability for reconstruction. The decoders are trained to minimize the reconstruction loss given the entangled representation:
θ ent = arg min θent L ent rec x (i) , z (i) , θ dis = arg min θ dis L dis rec x (i) , z (i) .
We initialize this network with the parameters of the disentangled model θ dis , ϕ dis and the entangled model θ ent , ϕ ent respectively. We introduce a network to learn the additive manipulation, m ψ . It is trained to minimize the reconstruction loss of the entangled VAE and to increase the loss of the disentangled VAE via its effect on the dataset:
ψ = arg min ψ L ent rec x (i) , z (i) − L dis rec x (i) , z (i) .
It is worth noting that both latent spaces were suitable for reconstructing the images of the original dataset. The major play that the network m ψ has, is to utilize the different ways the noise was distributed across the latent space.
Experiments
In order to experimentally validate the soundness of the manipulations, we need to demonstrate the following: 1. Effectiveness of manipulations. Disentanglement metrics should drop on the altered datasets across VAE-based architectures. We do not expect to see changes on non variational methods.
2. Comparison to a trivial modification. Instead of the proposed method, we modify with uniform noise of the same magnitude. The disentanglement scores for the algorithms on the resulting datasets should not drop significantly, as this change does not alleviate the existing bias.
3.
Robustness. The new datasets should be hard to disentangle even after retuning hyperparameters of the original architectures.
Effectiveness of Manipulations
We deploy the suggested training for the manipulations on two datasets: Shapes3D and dSprites, leading to manipulations as depicted in Fig. 4. In terms of models, we evaluated four VAE-based architectures Higgins et al. Whilst being a perk in real world application scenarios, this behaviour can lead to over-or under-pruning and thereby cloak the actual difference in the alignment of the latent space. The resulting MIG scores are listed in Tab. 1, other metrics are listed in the supplementary materials. Over all variational models, the disentanglement quality is significantly reduced. Interestingly even for SlowVAE, an architecture that supposedly circumvents the non-identifiability problem by deploying a sparse temporal prior, the disentanglement reduces. This indicates β-VAE 0.23 ± 0.08 0.07 ± 0.09 0.14 ± 0.07 0.60 ± 0.31 0.09 ± 0.14 0.66 ± 0.05
Fac. VAE 0.27 ± 0.11 0.20 ± 0.12 0.16 ± 0.08 0.27 ± 0.18 0.07 ± 0.05 0.33 ± 0.20 TC-β-VAE 0.25 ± 0.08 0.14 ± 0.10 0.20 ± 0.04 0.58 ± 0.20 0.24 ± 0.16 0.60 ± 0.11
Slow-VAE 0.39 ± 0.08 0.27 ± 0.08 0.37 ± 0.09 0.53 ± 0.19 0.13 ± 0.08 0.60 ± 0.10 PCL 0.21 ± 0.03 0.24 ± 0.07 0.24 ± 0.07 0.44 ± 0.06 0.47 ± 0.08 0.40 ± 0.07 that the architecture still builds upon the local data structure more than the temporal sparsity. PCL, as a non variational method, performs equivalently well on the original and the modified architecture, which is a strong indicator that due to the constraint (12), the main sources of global variance remain unaltered. The modifications indeed only attack the subtle bias VAEs exploit.
Noisy Datasets
We replace our modification by contaminating each image with uniform pixel-wise noise [−ε, ε]. The value of ε is fixed to the level of the presented manipulations (0.1 for dSprites and 0.175 for Shapes3D). The results are also listed in Tab. 1. The lack of structure in the contamination does not affect the performance in a guided way and leads to very little effect on Shapes3D. The impact on dSprites is, however, noticeable. Due to the comparatively small variance among dSprites images, the noise conceals the variance from the less important generating factors (such as e.g. orientation).
Robustness over Hyperparameters
We run a line search over the primary hyperparameter for each architecture. The results are illustrated in Fig. 5. Overall our modifications seem mostly robust for adjusted hyperparameters. Significant increase in the regularization strength allowed for some recovery. More thorough analysis revealed that this effect starts only once the models reach a level of over-pruning, which is a behavior well known to practitioners. We discard the runs that over pruned the latent space (number of active coordinates, i.e. E σ 2 i < 0.8, sinks below the dimensionality of the ground truth generating factors). This effect goes along with decreased reconstruction quality and intrinsically prevents the models from recovering all true generating factors and as such renders these cases uninteresting.
Conclusion
We have shown that the success of β-VAE based architectures is mostly based on the structured nature of the datasets they are being evaluated on. Small perturbations of the dataset can alleviate this structure and decrease the bias that such architectures exploit. Interestingly even architectures that are proven to be identifiable, like the Slow-VAE, still owe their success to the same bias. PCL however, as a non-variational method, was unaffected by the small perturbation.
It remains an open question whether the same local structure can reliably be found in real world data on which such architectures could be deployed. If so, fostering the sensitivity of future architectures towards the natural alignment of data could result in a transparent advance of unsupervised representation learning. It would be interesting to investigate and compare the different nonlinear embeddings VAE based architectures find. There are hints of clearly distinct local minima of the optimization problem; their suitability for downstream applications remains unexplored.
Duan, S., Matthey, L., Saraiva, A., Watters, N., Burgess, C., Lerchner, A., and Higgins, I. (2020). Unsupervised model selection for variational disentangled representation learning. In International Conference on Learning Representations. Figure 6: The SVD decomposition of a VAE decoder (top) and an alternative decoder (bottom) which decodes the same dataX, complies with V = I, and also shares diag Z Z = 1. The difference lies in the rotation induced by U , which for VAEs (and PCA) aligns the directions of largest variance inX with the cartesian axes.
Supplementary Material Demystifying Inductive Biases for (Beta-)VAE Based Architectures
Z diag Z Z = 1X V Σ U V Σ U
A Proofs
A.1 The Formal Setting
The simplified objective stated in this paper as
min Σ,U,V E i U ΣV ε (i) 2 (15) s.t. E i L (i) ≈KL = c ≈KL .(16)
resembles the minimization problem (20) and (21) Which is equivalent to V being a signed permutation matrix (Proposition 1 of Rolinek et al. (2019)). Without loss of generality, we assume V = I and rearrange the elements of Σ in ascending order and those of ε (i) in descending order with respect to σ (i) 2 .
In the setting of Theorem (1), we consider the mean latent representation Z to be constrained only by the condition diag Z Z = 1, which reads as "each active latent variable has unit variance". Even though, this statement is unsurprising in the context of VAEs, we offer a quick proof of how this follows directly from the KL loss in Lemma 1. Additionally, we fully fix the matrixX, which contains the reconstruction of all data-points. The remaining freedom in U and Σ has the following nature: for each fixed U (which rotatesX), the nonzero singular values of Σ (scaling factors along individual axes in the latent space) are fully determined by the diag Z Z = 1 requirement. We minimize objective (15) under these constraints.
Remark Notice that fixing the reconstructed datapoints ensures that the observed effect is entirely independent of the deterministic loss. The deterministic loss, is known to have some PCA-like effects, as it is basically a MSE loss of a deterministic autoencoder. The additional (and in fact stronger) effects of the stochastic loss are precisely the novelty of the following theoretical derivations.
For technical reasons regarding the uniqueness of SVD, we additionally inherit the assumption of Rolinek et al. (2019) that the random variables ε (i) have distinct variances. Finally, the orthnormal matrix U acts isometrically and can be removed from the objective (15), even though it still plays a vital role in how the problem is constrained. The reduced objective is further conveniently rewritten as a trace as:
min Σ E i Σε (i) 2 = min Σ E i tr EΣ ΣE ,(17)
where E is the diagonal matrix induced by the vector ε.
A visualization of the role of U , Σ and V in the decoding process is illustrated in Fig. 6.
A.2 Proof of Theorem 1
We rewrite the objective in order to introduce U ,X, and Z and make use of the constraints diag(Z Z) = 1 and X = ZΣU . We have
EΣ ΣE = EΣ (Z Z + M )ΣE,(18)
where M = I − Z Z is a matrix with diag(M ) = 0. Also, we can expand
Σ Z ZΣ = U U Σ Z (ZΣU ) U = UX X U(19)
By combining (18) and (19), we learn that
EΣ ΣE − EUX X U E = EΣ M ΣE.(20)
By repeating Lemma 2, we learn that diag(EΣ M ΣE) = 0, which allows us to use Lemma 2 yet again, this time on the left-hand side of (20) and obtain a key intermediate conclusion:
tr EΣ ΣE = tr EUX X U E(21)
This has a lower bound according to a classical trace inequality (see Proposition 1), as EUX X U E is positive semi-definite.
tr EUX X U E ≥ n det EUX X U E 1/n (22) = n det EX X E 1/n(23)
with equality if and only if
EUX X U E = λI.(24)
For the SVD decompositionX = U X Σ X V X , we see that X X = V X Σ 2 X V X and with U = U V X we arrive at
U Σ 2 X U = λE −2 .(25)
The left-hand side gives an SVD decomposition of the diagonal matrix E −2 . The SVD decomposition of a diagonal matrix is unique up to a signed permutation matrix. The conclusion of Theorem 1 now follows.
A.3 Auxiliary Statements
In the following lemma, the vectors x and y correspond to the mean latent µ and the noise standard deviation σ respectively. We allow for scaling the latent space and find that the KL loss is minimal for unit standard deviation of the means.
Lemma 1. For vectors x = (x 0 , . . . , x n ) ∈ R n , y = (y 0 , . . . , y n ) ∈ R n and c = arg min c∈R i
c 2 x i 2 − log c 2 y i 2 , it holds that c = i (x i 2 )(26)
Proof. It is easy to inspect that the minimum of i c 2 x i 2 − log c 2 y i 2 with respect to c fulfils the statement.
Proposition 1 (Trace Inequality). For a positive semidefinite M ∈ R n×n , that is M 0, it holds that
tr(M ) ≥ n det(M ) 1/n(27)
with equality if and only if M = λ · I for some λ ≥ 0.
Proof. Let λ 1 , . . . , λ n denote the eigenvalues of M , then tr(M ) = i λ i and det(M ) = i λ i . Since M 0, we have λ i ≥ 0 for every i = 1, . . . , n. Then, due to the classical AM-GM inequality, we have
tr(M ) = i λ i ≥ n · i λ i 1/n = n det(M ) 1/n ,(28)
with equality precisely if all eigenvalues are equal to the same value λ ≥ 0. Then by the definition of eigenvalues, the M − λI has zero rank, and equals to zero as required.
Lemma 2 ("Empty diagonal absorbs"). Let D ∈ R m×m be a diagonal matrix and let M ∈ R m×m be a matrix with zero elements on the diagonal, that is diag(M ) = 0. Then diag(M D) = diag(DM ) = 0 and consequently also tr(M D) = tr(DM ) = 0.
Proof. Follows immediately from the definition of matrix multiplication.
B Experimental Details
B.1 Architecture for Manipulations
The model implemented for m(w) has almost the same architecture as the CNN decoder as it is implemented in the Disentanglement Library Locatello et al. (2019). The only differences lies in the input MLP which was extended by a single neuron hidden layer. This enforces a compression of the generating factors w (i) to some scalar value based on which the modifications are rendered. Both m and the decoders were trained with Adam (β 1 = 0.9, β 2 = 0.999, = 10 −7 ) and 10 −4 learning rate. To ensure training stability, we train the decoders on three times more batches as the manipulation network and reconstruct five latent samples per image to get a better estimate of the stochastic losses. We achieved a better result on Shapes3D when using an ensemble of four disentangling and four entangling encoder-decoder pairs instead of single models. In order to stay in the same value range as the original images, we ensured normalization of the manipulated images x (i) = x (i) + m(w (i) ) by
x (i) norm = x (i) − 2ReLu(x (i) − 1) + 2ReLu(−x (i) ).
C Additional Experiments
C.1 Evaluation on Different Metrics
We have evaluated all architectures on three additional metrics. See Tables (3, 4, 5) for the resulting DCI-, FactorVAE-and SAP-Scores. Figures (7, 8, 9) show the scores for a line search of the primary hyperparameter of each architecture. The hyperparameters are listed in Table 2. We used the implementations of the Disentanglement Library.
Architecture dSprites Shapes3D β-VAE (β) 8 32 TC-β-VAE (β) 6 32 Factor-VAE (γ) 35 7 Slow-VAE (β) 1 1
C.2 Inspection of Entangled and Disentangled Latent Embeddings
Over multiple restarts of β-VAE trainings on the unmodified dataset, we used the runs that achieved highest and lowest MIG scores. Exemplary, Fig. 10 and Fig. 11 show two dimensional latent traversals of four disentangled and four entangled β-VAE representation respectively. The dimension of the latent traversal were hand-picked to encode for the wall hue and the orientation. Interestingly, the disentangled models reliably encode the color in the same way (e.g. starting from green to cyan). The entangled models reliably mix the two generating factors in a very similar way: The color is encoded as the angular component of the two latent dimensions and the orientation as the radial component. Table 4: FactorVAE Scores for unmodified, modified and noisy datasets. We report the mean and standard deviation over 10 distinct random seeds for each setting. PCL is the only disentangling non-variational model.
Until today, derivates of VAEs Higgins et al. (2017); Kim and Mnih (2018a); Chen et al. (2018); Kumar et al. (2017); Klindt et al. (2021) excel over other architectures in terms of disentanglement metrics. The extent of the VAE's success even prompted recent deeper analyses of its inner work-ings Rolinek et al. (2019); Burgess et al. (2018); Chen et al. (2018); Mathieu et al. (2018).
3). This connection with PCA was also reported by Stuehmer et al. (2020), alternatively formalized by Lucas et al. (2019) and converted into performance improvements in an unsupervised setting by Duan et al. (2020). Strictly speaking, the formal statements of Rolinek et al. (2019) are limited and only claim that β-VAEs strive for local orthogonality which, in the linear case, is a strong similarity to PCA.
Figure 3 :
3A schematic visualization of the image generation process. Starting from ground truth generating factors w, two β-VAE encoder-decoder pairs are initialized such that one (top) produces entangled and the other (bottom) disentangled representations. Another decoder-like network m is trained to produce additive manipulations to the original images x. The encoder of the entangling model is frozen and fed with the original images. The set of ground truth generating factors w stays untouched by the modification.
Figure 4 :
4From left to right: Original images, additive manipulations and the altered images. Top row shows an example of dSprites, the bottom for Shapes3D.
(2017); Kim and Mnih (2018a); Chen et al. (2018); Klindt et al. (2021), a regular autoencoder Hinton and Salakhutdinov (2006), and, as a non-variational method, PCL Hyvarinen and Morioka (2017), on both the original and manipulated datasets. We used the regularization strength reported in the literature (or better tuned values), and took the other hyperparameters from the disentanglement library Locatello et al. (2019). For the sake of simplicity and clarity, we restricted the latent space dimension to be equal to the number of ground truth generative factors. Most of the architectures have been shown to be capable of pruning the latent space as a consequence of their intrinsic regularization Stuehmer et al. (2020).
Figure 5 :
5MIG scores for scaled literature hyperparameters over 10 restarts for Shapes3D. Overpruning runs with fewer active units than generating factors were discarded
Eastwood, C. and Williams, C. K.(2018). A framework for the quantitative evaluation of disentangled representations. In International Conference on Learning Representations.Higgins, I., Amos, D., Pfau, D., Racaniere, S., Matthey, L.,Rezende, D., and Lerchner, A. (2018). Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230.Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A.(2017). β-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, ICLR. Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. science, 313(5786):504-507. Hyvarinen, A. and Morioka, H. (2017). Nonlinear ica of temporally dependent stationary sources. In Artificial Intelligence and Statistics, pages 460-469. PMLR. Khemakhem, I., Kingma, D., Monti, R., and Hyvarinen, A. (2020). Variational autoencoders and nonlinear ica: A unifying framework. In Chiappa, S. and Calandra, R., editors, Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 2207-2217. PMLR. Kim, H. (2019). Interpretable Models in Probabilistic Machine Learning. PhD thesis, University of Oxford. Kim, H. and Mnih, A. (2018a). Disentangling by factorising. In Dy, J. and Krause, A., editors, Proc. ICML, volume 80, pages 2649-2658. PMLR. Kim, H. and Mnih, A. (2018b). Disentangling by factorising. volume 80 of Proceedings of Machine Learning Research, pages 2649-2658, Stockholmsmässan, Stockholm Sweden. PMLR. Kingma, D. P. and Welling, M. (2014). Auto-Encoding Variational Bayes. ICLR. Klindt, D. A., Schott, L., Sharma, Y., Ustyuzhaninov, I., Brendel, W., Bethge, M., and Paiton, D. (2021). Towards nonlinear disentanglement in natural data with temporal sparse coding. In International Conference on Learning Representations.Kumar, A., Sattigeri, P., and Balakrishnan, A.(2017). Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848.Liao, Y., Schwarz, K., Mescheder, L., and Geiger, A.(2020). Towards unsupervised learning of generative models for 3d controllable image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5871-5880. Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. (2019). Challenging common assumptions in the unsupervised learning of disentangled representations. In international conference on machine learning, pages 4114-4124. Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. (2020). A sober look at the unsupervised learning of disentangled representations and their evaluation. Journal of Machine Learning Research, 21(209):1-62. Lucas, J., Tucker, G., Grosse, R. B., and Norouzi, M. (2019). Don't blame the elbo! a linear vae perspective on posterior collapse. In Advances in Neural Information Processing Systems, pages 9408-9418. Mathieu, E., Rainforth, T., Siddharth, N., and Whye Teh, Y. (2018). Disentangling disentanglement in variational auto-encoders. ArXiv e-prints, abs/1812.02833. . Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572. Rolinek, M., Zietlow, D., and Martius, G. (2019). 
Variational autoencoders pursue PCA directions (by accident). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12406-12415. Schölkopf, B. (2019). Causality for machine learning. arXiv preprint arXiv:1911.10500. Shu, Z., Yumer, E., Hadap, S., Sunkavalli, K., Shechtman, E., and Samaras, D. (2017). Neural face editing with intrinsic image disentangling. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5541-5550. Stuehmer, J., Turner, R., and Nowozin, S. (2020). Independent subspace analysis for unsupervised learning of disentangled representations. volume 108 of Proceedings of Machine Learning Research, pages 1200-1210. PMLR. Suter, R., Miladinovic, D., Schölkopf, B., and Bauer, S. (2019). Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness. In International Conference on Machine Learning, pages 6056-6065. Tipping, M. E. and Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611-622.
from Rolinek et al. (2019). They only optimize for distributing the latent noise σ (i) and the orthogonal matrix V of the SVD decomposition of the whole linear decoder and conclude that for M = U ΣV In every global minimum, the columns of M are orthogonal.
Figure 7 :Figure 8 :Figure 9 :
789DCI scores for scaled literature hyperparameters over 10 restarts for Shapes3D. Overpruning runs with fewer active units than generating FactorVAE scores for scaled literature hyperparameters over 10 restarts for Shapes3D. Overpruning runs with fewer active units than gener-SAP scores for scaled literature hyperparameters over 10 restarts for Shapes3D. Overpruning runs with fewer active units than generating factors were discarded
Figure 10 :
10Latent traversals along two latent dimensions for four different disentangled representations. They encode the wall hue and orientation separately. The latent coordinates were flipped to match the same alignment.
Figure 11 :
11Latent traversals along two latent dimensions for four different disentangled representations. They encode a mixture of wall hue and orientation.
Table 1 :
1MIG Scores for unmodified, modified and noisy datasets. We report the mean and standard deviation over 10 distinct random seeds for each setting. The regular autoencoder serves as a baseline (random alignment). PCL is the only disentangling non-variational model. The modification leads to a significant drop in all variational methods.dSprites
Shapes3d
orig.
mod.
noise
orig.
mod.
noise
AE 0.09 ± 0.06
-
-
0.06 ± 0.03
-
-
Table 2 :
2Primary hyperparameters, for other parameters we used the defaults in the Disentanglement Library or literature values.
Table 3 :
3DCI Disentanglement Scores for unmodified, modified and noisy datasets. We report the mean and standard deviation over 10 distinct random seeds for each setting. PCL is the only disentangling non-variational model. β-VAE 0.11 ± 0.03 0.08 ± 0.11 0.14 ± 0.07 0.73 ± 0.14 0.43 ± 0.06 0.56 ± 0.06 Fac. VAE 0.37 ± 0.10 0.27 ± 0.11 0.24 ± 0.09 0.39 ± 0.18 0.25 ± 0.08 0.57 ± 0.20 TC-β-VAE 0.34 ± 0.06 0.19 ± 0.10 0.27 ± 0.03 0.67 ± 0.08 0.41 ± 0.05 0.59 ± 0.09 Slow-VAE 0.47 ± 0.07 0.40 ± 0.07 0.47 ± 0.08 0.65 ± 0.10 0.33 ± 0.08 0.73 ± 0.09 PCL 0.28 ± 0.03 0.30 ± 0.03 0.29 ± 0.06 0.70 ± 0.06 0.67 ± 0.09 0.71 ± 0.07dSprites
Shapes3d
orig.
mod.
noise
orig.
mod.
noise
TC-β-VAE 0.68 ± 0.09 0.53 ± 0.15 0.60 ± 0.12dSprites
Shapes3d
orig.
mod.
noise
orig.
mod.
noise
β-VAE 0.47 ± 0.07 0.38 ± 0.13 0.50 ± 0.10
0.80 ± 0.17
0.54 ± 0.10
0.71 ± 0.06
Fac. VAE 0.67 ± 0.11 0.62 ± 0.14
0.60 ± 0.11
0.63 ± 0.15
0.48 ± 0.05
0.71 ± 0.15
0.76 ± 0.07
0.57 ± 0.07
0.71 ± 0.06
Slow-VAE 0.77 ± 0.03 0.77 ± 0.04 0.76 ± 0.07
0.87 ± 0.10
0.62 ± 0.06
0.85 ± 0.08
PCL 0.77 ± 0.09 0.82 ± 0.05 0.77 ± 0.08
0.80 ± 0.06
0.77 ± 0.07
0.80 ± 0.06
Table 5 :
5SAP Scores for unmodified, modified and noisy datasets. We report the mean and standard deviation over 10 distinct random seeds for each setting. PCL is the only disentangling non-variational model.dSprites
Shapes3d
orig.
mod.
noise
orig.
mod.
noise
β-VAE 0.04 ± 0.01 0.02 ± 0.02 0.03 ± 0.03
0.16 ± 0.08
0.03 ± 0.03
0.09 ± 0.02
Fac. VAE 0.07 ± 0.03 0.06 ± 0.03
0.08 ± 0.01
0.07 ± 0.04
0.04 ± 0.01
0.08 ± 0.03
TC-β-VAE 0.08 ± 0.01 0.06 ± 0.03 0.05 ± 0.02
0.08 ± 0.02
0.04 ± 0.02
0.06 ± 0.03
Slow-VAE 0.08 ± 0.01 0.07 ± 0.01 0.07 ± 0.01
0.09 ± 0.04
0.04 ± 0.01
0.09 ± 0.05
PCL 0.07 ± 0.03 0.10 ± 0.03 0.10 ± 0.03
0.07 ± 0.01
0.07 ± 0.01
0.07 ± 0.01
AcknowledgementsWe thank Maximilian Seitzer and Lukas Schott for the fruitful and invaluable discussions. Also, we thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting DZ.
Discovering interpretable representations for both deep generative and discriminative models. T Adel, Z Ghahramani, A Weller, International Conference on Machine Learning. Adel, T., Ghahramani, Z., and Weller, A. (2018). Dis- covering interpretable representations for both deep generative and discriminative models. In International Conference on Machine Learning, pages 50-59.
Neural networks and principal component analysis: Learning from examples without local minima. P Baldi, K Hornik, Neural networks. 21Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without local minima. Neural networks, 2(1):53-58.
An informationmaximization approach to blind separation and blind deconvolution. A J Bell, T J Sejnowski, Neural computation. 76Bell, A. J. and Sejnowski, T. J. (1995). An information- maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129-1159.
Y Bengio, A Courville, P Vincent, Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence. 35Bengio, Y., Courville, A., and Vincent, P. (2013). Rep- resentation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828.
Group invariance principles for causal generative models. M Besserve, N Shajarisales, B Schölkopf, Janzing , D , PMLRInternational Conference on Artificial Intelligence and Statistics. Besserve, M., Shajarisales, N., Schölkopf, B., and Janzing, D. (2018). Group invariance principles for causal gener- ative models. In International Conference on Artificial Intelligence and Statistics, pages 557-565. PMLR.
A theory of independent mechanisms for extrapolation in generative models. M Besserve, R Sun, D Janzing, B Schölkopf, arXiv:2004.00184arXiv preprintBesserve, M., Sun, R., Janzing, D., and Schölkopf, B. (2020). A theory of independent mechanisms for extrapolation in generative models. arXiv preprint arXiv:2004.00184.
Auto-association by multilayer perceptrons and singular value decomposition. H Bourlard, Y Kamp, Biological cybernetics. 594Bourlard, H. and Kamp, Y. (1988). Auto-association by multilayer perceptrons and singular value decomposi- tion. Biological cybernetics, 59(4):291-294.
. C Burgess, H Kim, 3d shapes datasetBurgess, C. and Kim, H. (2018). 3d shapes dataset. https://github.com/deepmind/3dshapes-dataset/.
Understanding disentangling in β-vae. C P Burgess, I Higgins, A Pal, L Matthey, N Watters, G Desjardins, A Lerchner, ArXiv e-prints, abs/1804.03599Burgess, C. P., Higgins, I., Pal, A., Matthey, L., Wat- ters, N., Desjardins, G., and Lerchner, A. (2018). Un- derstanding disentangling in β-vae. ArXiv e-prints, abs/1804.03599.
Isolating sources of disentanglement in variational autoencoders. R T Chen, X Li, R B Grosse, D K Duvenaud, Advances in Neural Information Processing Systems. Chen, R. T., Li, X., Grosse, R. B., and Duvenaud, D. K. (2018). Isolating sources of disentanglement in varia- tional autoencoders. In Advances in Neural Information Processing Systems, pages 2610-2620.
Independent component analysis. P Comon, Signal Processing. 363Higher Order StatisticsComon, P. (1994). Independent component analysis, a new concept? Signal Processing, 36(3):287 -314. Higher Order Statistics.
C Doersch, arXiv:1606.05908Tutorial on variational autoencoders. arXiv preprintDoersch, C. (2016). Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908.
| [
"https://github.com/deepmind/3dshapes-dataset/."
]
|
[
"Variability in quasar broad absorption line outflows II. Multi-epoch monitoring of Si iv and C iv BAL variability",
"Variability in quasar broad absorption line outflows II. Multi-epoch monitoring of Si iv and C iv BAL variability"
]
| [
"D M Capellupo \nDepartment of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFL\n",
"⋆ ",
"F Hamann \nDepartment of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFL\n",
"J C Shields \nDepartment of Physics & Astronomy\nOhio University\n45701AthensOH\n",
"P Rodríguez Hidalgo \nDepartment of Astronomy and Astrophysics\nPennsylvania State University\n16802University ParkPA\n",
"T A Barlow \nInfrared Processing and Analysis Center\nCalifornia Institute of Technology\n91125PasadenaCA\n"
]
| [
"Department of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFL",
"Department of Astronomy\nUniversity of Florida\n32611-2055GainesvilleFL",
"Department of Physics & Astronomy\nOhio University\n45701AthensOH",
"Department of Astronomy and Astrophysics\nPennsylvania State University\n16802University ParkPA",
"Infrared Processing and Analysis Center\nCalifornia Institute of Technology\n91125PasadenaCA"
]
| [
"Mon. Not. R. Astron. Soc"
]
| Broad absorption lines (BALs) in quasar spectra indicate high-velocity outflows that may be present in all quasars and could be an important contributor to feedback to their host galaxies. Variability studies of BALs help illuminate the structure, evolution, and basic physical properties of the outflows. Here we present further results from an ongoing BAL monitoring campaign of a sample of 24 luminous quasars at redshifts 1.2 < z < 2.9. We directly compare the variabilities in the C iv λ1549 and Si iv λ1400 absorption to try to ascertain the cause(s) of the variability. We find that Si iv BALs are more likely to vary than C iv BALs. When looking at flow speeds >−20 000 km s −1 , 47 per cent of quasars exhibited Si iv variability while 31 per cent exhibited C iv variability. Furthermore, ∼50 per cent of the variable Si iv regions did not have corresponding C iv variability at the same velocities, while nearly all occurrences of C iv variability had corresponding changes in Si iv. We do not find any correlation between the absolute change in strength in C iv and in Si iv, but the fractional change in strength tends to be greater in Si iv than in C iv. When both C iv and Si iv varied, those changes always occurred in the same sense (either getting weaker or stronger). We also include our full data set so far in this paper, which includes up to 10 epochs of data per quasar. The multi-epoch data show that the BAL changes were not generally monotonic across the full ∼5 to ∼8 yr time span of our observations, suggesting that the characteristic time-scale for significant line variations, and (perhaps) for structural changes in the outflows, is less than a few years. Coordinated variabilities between absorption regions at different velocities in individual quasars seems to favor changing ionization of the outflowing gas as the cause of the observed BAL variability. However, variability in limited portions of broad troughs fits naturally in a scenario where movements of individual clouds, or substructures in the flow, across our lines-of-sight cause the absorption to vary. The actual situation may be a complex mixture of changing ionization and cloud movements. Further discussion of the implications of variability, e.g., in terms of the size and location of the outflowing gas, will be presented in a forthcoming paper. | 10.1111/j.1365-2966.2012.20846.x | [
"https://arxiv.org/pdf/1203.1051v1.pdf"
]
| 30,133,734 | 1203.1051 | e6abd81d407ef4e796d0462e8507df7f51836c1b |
Variability in quasar broad absorption line outflows II. Multi-epoch monitoring of Si iv and C iv BAL variability
2002. May 2014
D M Capellupo
Department of Astronomy
University of Florida
32611-2055GainesvilleFL
⋆
F Hamann
Department of Astronomy
University of Florida
32611-2055GainesvilleFL
J C Shields
Department of Physics & Astronomy
Ohio University
45701AthensOH
P Rodríguez Hidalgo
Department of Astronomy and Astrophysics
Pennsylvania State University
16802University ParkPA
T A Barlow
Infrared Processing and Analysis Center
California Institute of Technology
91125PasadenaCA
Variability in quasar broad absorption line outflows II. Multi-epoch monitoring of Si iv and C iv BAL variability
Mon. Not. R. Astron. Soc
0002002. May 2014(MN L A T E X style file v2.2)galaxies: active -quasars:general -quasars:absorption lines
Broad absorption lines (BALs) in quasar spectra indicate high-velocity outflows that may be present in all quasars and could be an important contributor to feedback to their host galaxies. Variability studies of BALs help illuminate the structure, evolution, and basic physical properties of the outflows. Here we present further results from an ongoing BAL monitoring campaign of a sample of 24 luminous quasars at redshifts 1.2 < z < 2.9. We directly compare the variabilities in the C iv λ1549 and Si iv λ1400 absorption to try to ascertain the cause(s) of the variability. We find that Si iv BALs are more likely to vary than C iv BALs. When looking at flow speeds >−20 000 km s −1 , 47 per cent of quasars exhibited Si iv variability while 31 per cent exhibited C iv variability. Furthermore, ∼50 per cent of the variable Si iv regions did not have corresponding C iv variability at the same velocities, while nearly all occurrences of C iv variability had corresponding changes in Si iv. We do not find any correlation between the absolute change in strength in C iv and in Si iv, but the fractional change in strength tends to be greater in Si iv than in C iv. When both C iv and Si iv varied, those changes always occurred in the same sense (either getting weaker or stronger). We also include our full data set so far in this paper, which includes up to 10 epochs of data per quasar. The multi-epoch data show that the BAL changes were not generally monotonic across the full ∼5 to ∼8 yr time span of our observations, suggesting that the characteristic time-scale for significant line variations, and (perhaps) for structural changes in the outflows, is less than a few years. Coordinated variabilities between absorption regions at different velocities in individual quasars seems to favor changing ionization of the outflowing gas as the cause of the observed BAL variability. However, variability in limited portions of broad troughs fits naturally in a scenario where movements of individual clouds, or substructures in the flow, across our lines-of-sight cause the absorption to vary. The actual situation may be a complex mixture of changing ionization and cloud movements. Further discussion of the implications of variability, e.g., in terms of the size and location of the outflowing gas, will be presented in a forthcoming paper.
widths >2000 km s −1 at depths >10% below the continuum (Weymann et al. 1991), and they appear in the spectra of ∼10-15% of quasars (Reichard et al. 2003;Trump et al. 2006;Gibson et al. 2009). Since BALs are seen in just a fraction of quasar spectra, their presence could represent a phase in the evolution of a quasar and/or particular orientations where the outflow lies between us and the quasar emission sources.
The location and three-dimensional structure of quasar outflows are poorly understood. Sophisticated models predict these outflows as arising from a rotating accretion disk, with acceleration to high speeds by radiative and/or magneto-centrifugal forces (Murray et al. 1995;Proga & Kallman 2004;Proga 2007;Everett 2005). Improved observational constraints are necessary to test these models and to estimate mass-loss rates, kinetic energy yields, and the role of quasar outflows in feedback to the surrounding environment.
One way to obtain constraints on quasar outflows is to study the variability in their absorption lines, which can provide information on the structure and dynamics of the outflowing gas. Two possible causes of this observed variability are movement of gas across our line of sight to the quasar and changes in ionization (Barlow 1993;Wise et al. 2004;Misawa et al. 2005;Lundgren et al. 2007;Gibson et al. 2008;Hamann et al. 2008). Variability on shorter time-scales can place constraints on the distance of the absorbing material from the central SMBH. Shorter variability time-scales indicate smaller distances, based on nominally shorter crossing times for moving clouds (Hamann et al. 2008;Capellupo et al. 2011) or the higher densities required for shorter recombination times . Measurements of variability on longer (multi-year) time-scales provide insight into the homogeneity and stability of the outflowing gas. If no variability is detected on long time-scales, then this indicates a smooth flow with a persistent structure. Overall, results of variability studies provide information on the size, kinematics, and internal makeup of sub-structures within the outflows. Furthermore, variability studies can address the evolution of these outflows as absorption lines have been observed to appear and disappear (Hamann et al. 2008;Leighly et al. 2009;Krongold et al. 2010;Rodríguez Hidalgo et al. 2011), or they can evolve from one type of outflow feature to another (e.g., from a mini-BAL to a BAL or vice versa; Gibson et al. 2010;Rodríguez Hidalgo et al. in preparation;also this work).
Most of the existing work on BAL variability has focused on variability in C iv λ1549 over two epochs (e.g., Barlow 1993;Lundgren et al. 2007;Gibson et al. 2008). Gibson et al. 2008 detected C iv BAL variability in 12 out of 13 BAL quasars (92 per cent) over multi-year time-scales. None of these studies found clear evidence for acceleration in the BALs. Gibson et al. (2010) reports on variability on multi-month to multi-year rest-frame time-scales, using 3-4 epochs of data for 9 BALQSOs and found that BALs generally do not vary monotonically over time. Their study also makes comparisons between variability in Si iv λ1400 absorption and variability in C iv, and their results include a correlation between fractional change in EW in Si iv and C iv.
This work is the second paper in a series on BAL vari-ability. The first paper, Capellupo et al. (2011; hereafter, Paper 1), introduced our ongoing monitoring programme of a sample of 24 BAL quasars. We began with a sample of BAL quasars from Barlow (1993), which includes spectra of the C iv absorption region and, in most cases, coverage of the Si iv absorption region as well. We have re-observed these quasars to provide a longer time baseline over which to study variability, as well as to obtain multiple epochs of data per object. We currently have up to 10 epochs of data per quasar up to March 2009, covering rest-frame time intervals (∆t) from 15 days to 8.2 yr 1 . Paper 1 focused on a subset of the data from this monitoring programme to look for basic trends in the data between variability and other properties of the absorbers, as well as to directly compare short-term and long-term variability within the same sample of quasars. Paper 1 took a novel approach to studying BAL variability by introducing a measurement of BAL strength within portions of a trough, instead of using equivalent width (EW) measurements. Paper 1 discusses variability in just two different time intervals: a short-term interval of 0.35−0.75 yr and a long-term interval of 3.8−7.7 yr. We found that 39 per cent (7/18) of the quasars varied in the short-term, whereas 65 per cent (15/23) varied in the long-term data. The variability most often occurred in just portions of a BAL trough, which is similar to the findings of Gibson et al. (2008). We found that the incidence of variability was greater at higher velocities and in weaker portions of BAL troughs. Similarly, in Lundgren et al. (2007), the strongest occurrences of BAL variability occurred at velocities <−12 000 km s −1 and in features with smaller equivalent widths. Overall, the results of Paper 1 are broadly consistent with previous work on BAL variability.
In this paper, we extend the analysis of Paper 1 by looking at variability in Si iv and comparing it to the variability results for C iv. Expanding our study to include Si iv absorption can help constrain theories on the cause(s) of BAL variability. C and Si have different abundances, if solar abundances are assumed, and they have different ionization properties (e.g. Hamann et al. 2008Hamann et al. , 2011. By examining if C iv and Si iv have different variability properties, and how they differ, coupled with these differences in abundances and ionization properties, we can gain new insight into the cause(s) of BAL variability.
Our dataset is uniquely suited to this study because we have coverage of the Si iv line for nearly our entire sample (22 out of 24 quasars). In addition to the larger sample size, we go beyond existing work by adopting a method of measuring the absorption strength in portions of BALs, instead of EW. Equivalent width measurements apply to an entire feature and are less sensitive to changes in small portions of troughs. Our method of measuring portions of troughs also allows more direct comparisons between the behavior of C iv and Si iv variability.
We also include the entire dataset so far to look at variability in C iv and Si iv over multiple epochs. This work contains up to 10 epochs of data per quasar, and including all of these epochs will provide better insight into the characteristics of BAL variability. Increasing the number of epochs provides new information on whether BALs change monotonically over time or whether they can vary and then return to an earlier state. In Paper 1, we reported that typically only portions of BALs varied. Multi-epoch data can tell us whether variability only occurs in those specific velocity intervals or if the velocity range over which variability occurs can change over time. We also highlight several individual interesting cases of variability that can further help us understand BAL outflows. Section 2 below reviews the quasar sample and analysis introduced in Paper 1, Section 3 describes our results, Section 4 summarizes the results so far from Paper 1 and the current work, and Section 5 discusses the results and their implications.
DATA AND ANALYSIS
Observations and quasar sample
In this work, we use the same sample of 24 BAL quasars introduced in Paper 1. This sample is based on the set of BALQSOs studied in Barlow (1993). The sample selection and general characteristics are described in Paper 1. These data were obtained from the Lick Observatory 3-m Shane Telescope, using the Kast spectrograph. Most of the spectra we use from that data set have a resolution of R ≡ λ/∆λ ≈ 1300 (230 km s −1 ). For epochs where this resolution is not available, we use data taken at R ≈ 600 (530 km s −1 ). BALs are defined to have a width of at least 2000 km s −1 , so either of these resolutions is sufficient to measure the lines and study their variabilities. The wavelength coverage of each spectrum covers at least the Si iv through C iv emission lines, and most cover at least the Lyα λ1216 through C iv emission lines.
We have been re-observing 23 of the BALQSOs from Barlow (1993) at the MDM Observatory 2.4-m Hiltner telescope, using the CCDS spectrograph with a resolution of R ≈ 1200 (250 km s −1 ). The observations used in this work were taken in January and February 2007; January, April, and May 2008;and January and March 2009. We used the same spectrograph setup each time, varying only the wavelength range in order to observe each quasar at roughly the same rest wavelength range, from Lyα through C iv emission. One exception is 0946+3009, which has a redshift too low for the Si iv emission to appear in our spectra.
We supplement our data with spectra from the SDSS Data Release 6 (Adelman- McCarthy et al. 2008) for 8 of the quasars in our sample, for which the resolution is R ≈ 2000 (150 km s −1 ). These spectra cover the observed wavelength range 3800 to 9200Å, and we only include spectra that cover at least the Si iv through C iv emission. Table 1 summarizes the full dataset presented in this work, including the emission redshift, zem, 2 and the 'balnicity index' (BI) for each object (as calculated in Paper 1). Any uncertainty in the redshift will not affect our comparisons between Si iv and C iv or any of our other results.
The balnicity index, defined by Weymann et al. (1991), is a measure of the strength of the BAL absorption and is calculated as an EW in units of velocity. It quantifies blue-shifted C iv absorption between −25 000 and −3 000 km s −1 that reaches at least 10 per cent below the continuum across a region at least 2000 km s −1 in width. The next four columns list the number of observations taken for each object at each observatory and then the total overall number of observations. The final column lists the range in ∆t covered for each quasar.
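For concreteness, the sketch below shows one way to evaluate the balnicity-index integral just described for a single spectrum. It is a minimal illustration under the definition above (continuum-normalized flux on a velocity grid), not the code actually used for Table 1, and the function and variable names are our own.

```python
import numpy as np

def balnicity_index(velocity, fnorm):
    """Balnicity index (Weymann et al. 1991) in km/s.

    velocity : outflow velocity in km/s (negative values are blueshifted)
    fnorm    : continuum-normalized flux on the same grid

    Flux deficits below 90 per cent of the continuum are integrated between
    outflow speeds of 3000 and 25 000 km/s, but only after the deficit has
    persisted over a contiguous 2000 km/s.
    """
    speed = -np.asarray(velocity, dtype=float)   # positive outflow speeds
    flux = np.asarray(fnorm, dtype=float)
    sel = (speed >= 3000.0) & (speed <= 25000.0)
    order = np.argsort(speed[sel])
    s, f = speed[sel][order], flux[sel][order]

    bi, trough_start = 0.0, None
    for i in range(1, len(s)):
        if f[i] < 0.9:
            if trough_start is None:
                trough_start = s[i - 1]
            # only count the deficit once the trough is wider than 2000 km/s
            if s[i] - trough_start > 2000.0:
                bi += (1.0 - f[i] / 0.9) * (s[i] - s[i - 1])
        else:
            trough_start = None
    return bi
```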
Two of the quasars in our sample have BI=0, so they are not BAL quasars based on the balnicity index. They both contain broad absorption, but this absorption falls outside the velocity range, −25 000 to −3 000 km s −1 , used to define BI. As noted in Paper 1 and discussed further in Section 3 below, including these two objects in our sample does not affect any of our main results.
Measuring BALs and their variability
In Fig. 1, we plot spectra for all 24 objects, showing the long-term comparisons (∆t = 3.8−7.7 yr) between a Lick spectrum and an MDM spectrum. The one exception is 2225-0534, for which we only have short-term Lick data. For each object, we plot the C iv absorption region in the top panel and the corresponding Si iv absorption region in the bottom panel. The velocity scale is based on the wavelengths of C iv and Si iv in the observed frame calculated from the redshifts given in Table 1. In order to compare the C iv and Si iv absorption regions, we use the bluer line of both the C iv and Si iv doublets for the zero-points of the velocity scales, i.e., 1548.20Å for C iv and 1393.76Å for Si iv.
We adopt the velocity ranges over which C iv BAL absorption occurs defined in Paper 1. These regions were defined based on the definition of BI, i.e. they must contain contiguous absorption that reaches 10 per cent below the continuum across 2000 km s −1 . We apply the same definition when defining the velocity ranges of Si iv BAL absorption.
In Paper 1, we defined a pseudo-continuum fit for the fiducial Lick observation used in the long-term analysis for each quasar by fitting a power-law to regions of the spectrum free of absorption and emission. The preferred spectral regions for the fits were 1270−1350Å and 1680−1800Å, but these windows were adjusted when necessary to avoid emission and absorption features or to accommodate the limits of the wavelength coverage. We then fit the C iv emission lines, using between 1 and 3 Gaussians to define the line profile. In Paper 1, we also fit the Si iv emission in cases where C iv absorption overlaps with the Si iv emission. For this work, we additionally fit the Si iv emission in the fiducial Lick observation for all the quasars. For the long-term comparisons below, we fit the Si iv emission for the MDM spectrum in cases where the emission line varied. Some special cases where there were difficulties with fitting the Si iv emission, such as 1011+0906 and 1309−0536, are discussed further in Section 3.3 below.
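As an illustration of this step, a hypothetical sketch of the fitting ingredients is given below: a power-law pseudo-continuum anchored in line-free windows and a 1-3 component Gaussian emission-line model. The window defaults, the 1450Å normalization wavelength, and all names are assumptions for the example, not the exact procedure of Paper 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_pseudo_continuum(wave, flux,
                         windows=((1270.0, 1350.0), (1680.0, 1800.0))):
    """Fit f = a * (wave/1450)^b to the given rest-frame windows and
    return the power-law evaluated over the full wavelength array."""
    sel = np.zeros(wave.size, dtype=bool)
    for lo, hi in windows:
        sel |= (wave >= lo) & (wave <= hi)

    def powerlaw(w, a, b):
        return a * (w / 1450.0) ** b

    (a, b), _ = curve_fit(powerlaw, wave[sel], flux[sel],
                          p0=[np.median(flux[sel]), -1.5])
    return powerlaw(wave, a, b)

def multi_gaussian(wave, *params):
    """Sum of Gaussians; params are (amplitude, centre, sigma) triples."""
    model = np.zeros_like(wave, dtype=float)
    for amp, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
        model += amp * np.exp(-0.5 * ((wave - mu) / sig) ** 2)
    return model
```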
Figure 1. Spectra of all 24 quasars in our sample, showing the long-term comparisons (∆t = 3.8−7.7 yr) between a Lick Observatory spectrum (bold curves) and a recent MDM spectrum (thin curves). For each quasar, the C iv region is displayed in the top panel and the corresponding Si iv region is shown in the bottom panel. For 2225-0534, we only have short-term Lick data (see Table 1). The vertical flux scale applies to the Lick data, and the MDM spectrum has been scaled to match the Lick data in the continuum. The dashed curves show our pseudo-continuum fits. The horizontal bars indicate intervals of BAL absorption included in this study, and the shaded regions indicate intervals of variation within the BALs. We used binomial smoothing to improve the presentation of the spectra. The formal 1σ errors are shown near the bottom of each panel. (The two variability intervals defined for 1524+5147 were labeled as one interval in Paper 1.)

When comparing multiple epochs, we scaled all the spectra to the fiducial Lick spectrum used for the pseudo-continuum fit. We only fit the power-law continuum to one spectrum for each object, so any errors in this continuum fit will not affect our main variability results. To match the individual epochs for each quasar, we adopt a simple vertical scaling that matches the spectra along the continuum redwards of the C iv emission line (i.e. from 1560Å to the limit of the wavelength coverage), between the Si iv and C iv emission (∼1425-1515Å), and between the Lyα and the Si iv emission (∼1305-1315Å). For the few cases where a simple scaling did not produce a good match and there were disparities in the overall spectral shape between the comparison spectra, we fit either a linear function (for 0903+1734, 1413+1143, 1423+5000, and 1524+5147) or a quadratic function (for 0019+0107 and 1309-0536) to the ratio of the two spectra across regions that avoid the BALs. We then multiplied this function by the SDSS or MDM spectrum to match the fiducial Lick data. With the spectra for each object matched, we used visual inspection to identify velocity intervals with a width of at least 1200 km s −1 that varied. We identify intervals of variability separately for C iv and Si iv absorption. We then calculate the average flux and associated error for each candidate variable interval in each of the two epochs being compared. The error on the average flux is given by
\[
\sigma_f^2 = \frac{1}{n^2}\sum_{i=1}^{n}\sigma_i^2 ,
\]
where σi is the error on an individual pixel, taken from the error arrays as displayed in Fig. 1, and n is the number of pixels in the candidate variable interval. We then calculate the flux difference between the two spectra and place an error on this flux difference using the error on the average flux from each epoch. We include all intervals of variability where the flux differences are at least 4σ. Any interval which varied by at least 4σ was readily identified by our initial inspection procedure. However, photon statistics alone are not sufficient for defining real variability, so we took a conservative approach, described in more detail in Paper 1, whereby we omit ambiguous cases of variability, even if they meet the 4σ threshold. Flux calibrations, a poorly constrained continuum placement, and underlying emission-line variability can all add additional uncertainty to identifying variability. For example, in 1011+0906 and 1309−0536, there might be BAL absorption, and variability, on top of the Si iv emission line (see Fig. 1). However, it is too ambiguous to be included in this study. See Paper 1 for further examples of intervals of potential variability that were not included because of additional uncertainties and Section 3.3 below, where we comment further on certain individual quasars. We include narrow intervals of variability such as the shaded region in Si iv in 0043+0048 and the shaded region in C iv in 1246−0542 in Fig. 1, where the flux differences are 6.3σ and 5.6σ, respectively. These regions meet the aforementioned thresholds, and the comparison spectra match well in regions of the continuum free of emission and absorption and on either side of the variability interval. We also include regions such as those shaded in 1011+0906 in Fig. 1 because even though the errors are slightly higher in the MDM spectra shown, the flux differences are still 7.7σ for the variable region in C iv and 6.3σ and 7.9σ for the two regions of variability in Si iv.
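A minimal sketch of this interval test is given below, assuming each epoch is supplied as arrays of flux and 1σ pixel errors over one candidate interval; only the error formula and the 4σ default come from the text, and the names are illustrative. As emphasized above, passing this photon-statistics threshold is a necessary but not sufficient condition, and ambiguous cases are still excluded by hand.

```python
import numpy as np

def mean_flux_and_error(flux, err):
    """Average flux in an interval and its error, sigma_f = sqrt(sum sigma_i^2) / n."""
    n = flux.size
    return flux.mean(), np.sqrt(np.sum(err ** 2)) / n

def interval_is_variable(flux1, err1, flux2, err2, threshold=4.0):
    """True if the mean fluxes of two epochs differ by at least `threshold` sigma."""
    m1, s1 = mean_flux_and_error(flux1, err1)
    m2, s2 = mean_flux_and_error(flux2, err2)
    return abs(m2 - m1) >= threshold * np.hypot(s1, s2)
```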
Overall our approach is designed to be conservative; we try to exclude marginal cases of variability to avoid overestimating the true variability fractions.
We calculated the absorption strength, A, of the BALs in our sample, where A is the fraction of the normalized continuum flux removed by absorption (0 ≤ A ≤ 1) within a specified velocity interval. These calculations are described in Paper 1. Very briefly, we divide each interval of variability and absorption, as defined above, into equal-sized bins of width 1000 to 2000 km s −1 , with the final bin size depending on the total velocity width of the specified interval. Then, for each quasar, we adopt the same bin size for the epochs being compared and calculate A and ∆A in every individual bin.
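The binned absorption-strength measurement can be sketched as follows, assuming the spectrum has already been divided by the pseudo-continuum fit; the default bin width and the names are illustrative. ∆A between two epochs then follows from differencing the A arrays computed with identical bins.

```python
import numpy as np

def absorption_strength_bins(velocity, fnorm, v_lo, v_hi, bin_width=1500.0):
    """Absorption strength A = 1 - mean(normalized flux) in equal bins over [v_lo, v_hi].

    velocity  : velocity grid in km/s
    fnorm     : flux divided by the pseudo-continuum fit
    bin_width : target bin size in km/s (1000-2000 km/s in the text)
    Returns bin centres and A values clipped to [0, 1].
    """
    nbins = max(1, int(round((v_hi - v_lo) / bin_width)))
    edges = np.linspace(v_lo, v_hi, nbins + 1)
    centres, strengths = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (velocity >= lo) & (velocity < hi)
        centres.append(0.5 * (lo + hi))
        strengths.append(1.0 - np.mean(fnorm[in_bin]))
    return np.array(centres), np.clip(strengths, 0.0, 1.0)
```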
One of the difficulties in directly comparing C iv to Si iv absorption is the wider doublet separation in Si iv (500 km s −1 in C iv versus 1900 km s −1 in Si iv). This can cause the Si iv absorption intervals to be wider than those in C iv. The only effect this should have on the variability results in Section 3 below is that there may be portions of Si iv absorption that are detected as variable but the corresponding velocity intervals in C iv may be too narrow to pass our variability threshold. We comment further on this in Section 3.1. We mark the regions defined as BAL absorption (horizontal bars) and variability (shaded rectangles) in Fig. 1. The absorption and variability regions for C iv were defined in Paper 1. We defined the absorption and variability regions for Si iv independently from what we found for C iv. See Section 3.1 for a full discussion of these results.
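For reference, the doublet separations quoted above follow from ∆v ≃ c ∆λ/λ applied to the doublet rest wavelengths; the red components (1550.77Å for C iv and 1402.77Å for Si iv) are standard values that we supply here, since only the blue components are listed in Section 2.2:

\[
\Delta v_{\rm C\,IV} \simeq c\,\frac{1550.77-1548.20}{1548.20} \approx 500~{\rm km\,s^{-1}}, \qquad
\Delta v_{\rm Si\,IV} \simeq c\,\frac{1402.77-1393.76}{1393.76} \approx 1900~{\rm km\,s^{-1}}.
\]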
In Fig. 2, we plot the relationship between the absorption strength, A, in C iv and A in Si iv for the long-term spectra shown in Fig. 1. Each point represents a different absorption bin in an individual quasar, such that individual quasars contribute multiple points to this plot. The diagonal line through the plot represents equal strength in both lines. Since we fix the velocity scale based on the bluer doublet member in C iv and in Si iv and the doublet separation is wider in Si iv, the edges of the Si iv absorption troughs tend to extend redward of the edges of the corresponding C iv troughs (see, for example, 0302+1705 in Fig. 1). As mentioned above, to calculate the A values, we divide each BAL into bins. We thus remove the redmost bin for each absorption trough because Si iv may have a greater strength than C iv in those bins due to the wider doublet separation and not necessarily because the Si iv absorption is actually stronger than C iv. We also plot 1σ error bars, calculated by using the error spectra shown in Fig. 1 and averaging over the velocity interval for each bin. We point out that non-statistical errors, e.g., from the continuum fitting, can increase the error in A measurements by up to 0.05 to 0.1. It is clear from Fig. 2 that absorption in C iv is roughly as strong as or stronger than the corresponding Si iv absorption. In some cases, the C iv has no detectable corresponding Si iv absorption at all.
There are almost no well-measured cases of C iv absorption weaker than Si iv. We did find a few absorption bins where Si iv appeared to be stronger than C iv, but some of these bins might not actually have absorption due to Si iv, or due solely to Si iv. In about 10% of BAL spectra, there are lower ionization lines, such as C ii λ1335 and Al iii λ1855,1863 (Trump et al. 2006). The wavelength of C ii places any C ii absorption line at a velocity of <−13 500 km s −1 in Si iv velocity space. In order to look for interloping C ii lines, we checked all of our objects for another low ionization species, Al iii, which exists redward of C iv, in a region of the spectrum relatively uncontaminated by other lines. We found that in two cases (1232+1325, 1331−0108), the velocity of the Al iii line puts the corresponding C ii line within a Si iv BAL trough. We therefore removed the bins affected by C ii from Fig. 2, and we exclude the contaminated Si iv regions in these two quasars from the analysis below. There are still a few points below the one-to-one line in Fig. 2. However, these few points are mostly within 3σ of the one-to-one line and therefore are consistent with equal strength changes in C iv and Si iv. The one point that is just beyond 3σ from the line corresponds to the interval −3100 to −1200 km s −1 in 1303+3048.
RESULTS
Variability in Si iv versus C iv BALs
In this section, we directly compare the variability of Si iv to C iv in the "long-term" dataset from Paper 1. This involves 2 epochs of data for 23 quasars separated by 3.8 to 7.7 yrs. Our main goal is to discriminate between the possible causes of BAL variability. We begin by looking at what fractions of quasars exhibited C iv and Si iv variability to determine if Si iv varies more or less often than C iv. In Paper 1, we found a correlation between the incidence of C iv BAL variability and both outflow velocity and absorption strength. Here we investigate whether similar trends exist for Si iv BALs. We then look at the relationship between the incidence of C iv variability and Si iv strength. Last, we compare the change in strength for the two lines when they both vary.
In the long-term dataset in Paper 1, 15 out of 23 quasars (65 per cent) exhibited C iv BAL variability and 11 out of 19 (58 per cent) exhibited Si iv BAL variability, at any measured velocity. We do not have data covering the Si iv region for 2 of our quasars, and another 2 quasars do not have Si iv BALs. This comparison between C iv and Si iv is complicated because the absorption in C iv is not always accompanied by corresponding absorption (e.g., at the same velocities) in Si iv (Figs. 1 and 2). In addition, we are observationally less sensitive to absorption and variability at high velocities in Si iv, compared to C iv, because those wavelengths can have poorer signal-to-noise ratios and larger uncertainties in the continuum placement caused by blends with underlying broad emission lines. Altogether this means we are more sensitive to variability in C iv than Si iv in our data set.
To make a fair comparison between the incidence of variability in the two lines, we recalculate the above fractions while considering only velocities >−20 000 km s −1 (i.e., outflow speeds below 20 000 km s −1 ). We adopt this velocity as the cutoff because, in some spectra, there is emission due to O i at ∼−20 500 km s −1 in the Si iv absorption region (see also Gibson et al. 2010). With this additional restriction, we find that Si iv is more likely to vary than C iv. In particular, 35 per cent (8/23) of quasars exhibited C iv variability and 47 per cent (9/19) exhibited Si iv variability. The dramatic reduction in the C iv variability recorded this way, compared to the 65 per cent quoted above, is due to i) consideration of a narrower velocity range, and ii) the specific exclusion of high velocities, which are the most likely to show variability (Paper 1). Nearly half of the occurrences of C iv variability detected in our data set are at high velocities, i.e., v <−20 000 km s −1 . The incidence of C iv variability further reduces to 31 per cent (6/19) if we only include the 19 quasars which have complete spectral coverage across Si iv and have a Si iv BAL. The further decline in the C iv variability in this case probably occurs because the two quasars excluded for having no Si iv BAL have weak C iv lines, and weak C iv lines are more likely to vary than strong ones (Paper 1). Thus, we again removed C iv BALs that are more likely to vary.
Overall, it is important to realize that a number of factors can affect the measured incidence of BAL variability. Our comparisons show that, over matching velocity ranges, Si iv BALs have a significantly higher incidence of variability than C iv BALs. This difference is probably related to the different line strengths. In particular, the Si iv BALs are generally weaker than C iv BALs (Fig. 2), and weaker lines tend to be more variable (Paper 1 and Figs. 4 and 5 below).
To more directly compare Si iv and C iv BAL variability, we examine the individual velocity intervals over which the variability occurs. We consider intervals at all velocities here and in the remainder of this section. Variability in C iv occurred in a total of 20 velocity intervals. Ten of these intervals have measurable Si iv absorption. Nine of these 10 intervals, or 90 per cent, showed Si iv variations in the same sense (either getting stronger or weaker) as the C iv changes. There is only one interval (in 0119+0310) that exhibits variability in C iv, without corresponding variability at the same velocities in Si iv. Conversely, we find long-term Si iv BAL variability in a total of 22 velocity intervals. All of the variable Si iv intervals have significant corresponding C iv absorption, and 10 of these intervals showed C iv variations in the same sense as the Si iv (45 per cent).
As mentioned in Section 2.2, Si iv has a wider doublet separation than C iv, causing some of the Si iv absorption and variability intervals to be wider than the corresponding C iv intervals. For the intervals of Si iv variability without corresponding C iv variability, we looked for any evidence of variability in C iv that was not included because the width of the candidate varying region was too narrow to meet our variability threshold (see Section 2.2). There is only one case where we detect a marginal narrow variability region in C iv corresponding to a region in Si iv classified as variable (in 0903+1734). Even if we were to count this as a variable C iv interval, still only 50 per cent of Si iv variability intervals would have corresponding C iv variability. Therefore, while 90 per cent of the intervals of C iv variability with measurable Si iv absorption had corresponding Si iv variability, only ∼50 per cent of the intervals of Si iv variability had corresponding C iv variability. These results reinforce our main conclusion above, that Si iv BALs are more likely to vary than their C iv counterparts.
Next we examine the dependence of Si iv variability on velocity and absorption strength, matching our analysis of C iv BALs in Paper 1. Figure 3 shows the incidence of Si iv absorption and Si iv variability versus velocity. For comparison, this figure also shows the corresponding data for C iv taken directly from fig. 3 in Paper 1 (dashed curves). The top two panels display the number of quasars with Si iv BAL absorption and with Si iv BAL variability at each velocity (solid curves). The third panel is the second panel divided by the top one, which gives the fraction of Si iv BALs that varied at each velocity. The top panel shows clearly that there are more C iv BALs than Si iv BALs at higher velocities.
In Paper 1, we showed that the incidence of C iv variability increases significantly with increasing velocity. This trend is not evident in the Si iv data. Figure 3 displays 1σ error bars based on Wilson (1927) and Agresti & Coull (1998). These errors are based on counting statistics for the number of quasars with absorption and variability at each velocity. We performed a least-squares fit, and the slope of the Si iv data is (−6.58 ± 4.33) × 10 −6 , in units of fraction per km s −1 . The slope is non-zero at just a 1.5σ significance. Therefore, while there might be a weak tendency for more variability in Si iv at higher velocities (Fig. 3), the trend is not statistically significant.
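The 1σ error bars on these binned fractions can be reproduced with the Wilson (1927) score interval; the sketch below is a generic implementation (z = 1 for an approximately 68 per cent interval), not necessarily identical to the calculation used for Figs. 3-5.

```python
import numpy as np

def wilson_interval(k, n, z=1.0):
    """Wilson (1927) score interval for a binomial fraction k/n.

    k : number of successes (e.g. quasars that varied at a given velocity)
    n : number of trials (e.g. quasars with absorption at that velocity)
    z : number of standard deviations (z = 1 gives roughly 1-sigma bars)
    Returns (fraction, lower bound, upper bound).
    """
    if n == 0:
        return 0.0, 0.0, 1.0
    p = k / n
    denom = 1.0 + z ** 2 / n
    centre = (p + z ** 2 / (2.0 * n)) / denom
    half = (z / denom) * np.sqrt(p * (1.0 - p) / n + z ** 2 / (4.0 * n ** 2))
    return p, max(0.0, centre - half), min(1.0, centre + half)
```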
To further match our analysis from Paper 1, we looked at the relationship between the incidence of Si iv variability and the absorption strength, A, in Si iv. As described in Paper 1 (and Section 2 above), we divide each BAL into ∼1200 km s −1 bins, and we treat each bin as an individual occurrence of absorption. In Fig. 4, the top two panels show the number of these occurrences of Si iv absorption and the number of these occurrences that varied at each value of Si iv absorption strength, A. An individual quasar can contribute more than once to each point in the histogram. The third panel is the second panel divided by the top one, and the bottom panel shows the same curve from the third panel with error bars plotted, calculated in the same way as for Fig. 3. We find only a weak trend between Si iv variability and Si iv strength. The slope of the plot is −0.315 ± 0.090 fraction per unit absorption strength, which is non-zero at a 3.5σ significance. This is much weaker than the trend between the incidence of C iv variability and C iv strength found in Paper 1. We overplot this curve for C iv from fig. 5 of Paper 1 in the third panel of Fig. 4 (dashed curve). This indicates that the occurrence of variability in Si iv is less sensitive to the strength of the line than the occurrence of variability in C iv.
Next, we looked at the relationship between the incidence of C iv BAL variability and the strength of the Si iv absorption, A, at the same velocities. Si is known to be less abundant than C, when solar abundances are assumed, and the high ionization typical in BALs favors C iv (see Section 5). Therefore, the optical depth in C iv is higher than in Si iv, so the stronger the Si iv absorption is, the more likely C iv is to be saturated. In the top panel of Fig. 5, we plot the number of occurrences of C iv absorption at the same velocities in the same spectra as each Si iv A value, and in the second panel the number of these occurrences that varied. As in Fig. 4, an individual quasar can contribute more than once to each point in the histogram. The third panel is the second panel divided by the top one, showing the fraction of occurrences of C iv absorption that varied at each Si iv absorption strength value. As in Figs. 3 and 4, the bottom panel of Fig. 5 shows the 1σ error bars. The slope of these points is −0.643 ± 0.066 fraction per unit absorption strength, which is non-zero at a 10σ significance. Fig. 5 thus indicates that the incidence of C iv variability decreases with increasing Si iv absorption strength. Therefore, when the Si iv absorption is stronger and the C iv absorption is more likely to be saturated, the incidence of C iv variability decreases. In fact, whenever the Si iv absorption strength is greater than 0.5, the corresponding C iv absorption at the same velocity does not vary. As a further comparison to Paper 1, we overplot in the third panel of Fig. 5 the fraction of occurrences of C iv absorption that varied versus C iv absorption strength (dashed curve). In Paper 1, we concluded that, for C iv BALs, weaker lines are more likely to vary than stronger lines. Fig. 5 shows that C iv lines are even more likely to vary when the corresponding Si iv lines are also weak.

We also investigate the change in strength, |∆A|, in C iv compared to the corresponding changes in Si iv in the same velocity interval (Fig. 6). There are 6 quasars in which the velocity intervals of C iv and Si iv variability overlap, and we only compare the velocity intervals where both lines varied. Each point represents one bin in one of these quasars, as described above for Fig. 5. The diagonal line through this plot corresponds to equal strength changes in both lines. There is clearly no correlation between the strength changes in C iv versus Si iv evident in this figure. Despite Si iv varying more often than C iv, the strength changes in Si iv are not always greater than in C iv.
Finally, Fig. 7 shows the fractional change in strength, |∆A|/ A , in C iv compared to |∆A|/ A in Si iv, again for velocity intervals where both lines varied. As in Fig. 6, there are points lying above the line, representing greater strength changes in C iv than in Si iv. However, there is a weak trend towards greater fractional change in strength in Si iv, which is consistent with the correlation found between fractional change in EW in C iv and Si iv in Gibson et al. (2010).
As mentioned in Section 2.1, we include two quasars that have BI=0 because they do have broad absorption, but it falls outside the velocity range, −25 000 to −3 000 km s −1 , used in the strict definition of BI. We also include broad absorption in other quasars in our sample that falls outside of this velocity range. Their inclusion has minimal impact on our results because most of the Si iv broad absorption in our sample falls within this velocity range. For the quasars with BI=0, 0846+1540 does not contain any Si iv broad absorption at all, and 0302+1705 contains broad absorption at low velocity, which did not vary in either C iv or Si iv.
We can summarize our comparisons between the C iv and Si iv BAL variabilities as follows: 1) Si iv BALs are more likely to vary than C iv BALs. The fractions of quasars showing variability in our long-term 2-epoch data set are 31% in C iv and 47% in Si iv if we consider only the well-measured velocity range v > −20 000 km/s and include only quasars with both Si iv and C iv BALs detected. 2) The variabilities usually occur in just portions of the BAL troughs. 3) When changes occur in both Si iv and C iv, they always occur in the same sense (i.e., with both lines getting either weaker or stronger). They also occur in overlapping but not necessarily identical velocity ranges. 4) The trend for a higher incidence of C iv variability at higher velocities, which we reported in Paper 1, is not clearly evident in the Si iv data. Finally, 5) there is no correlation between absorption strength changes in C iv versus Si iv when they both vary; although, there is a weak trend towards greater fractional change in strength in Si iv.
Multi-Epoch Monitoring of BALQSOs
We now expand our analysis to the full dataset, which includes 2 to 10 epochs of data for each object (Table 1). Including all of these epochs and considering all measured velocities, the fraction of quasars that showed C iv BAL variability is 83 per cent. This is a significant increase from the 65 per cent we derived considering only two long-term epochs, or the 39 per cent derived from only two short-term epochs (Section 3.1 and Paper 1). Clearly, including more epochs of data increases the observed variability fractions. Moreover, these larger variability fractions apply to roughly the same time-frame as our 2-epoch long-term data set. Therefore, the multi-epoch data did not find new occurrences of variability at some other time; they identified variability missed by the 2-epoch measurements.
To investigate the multi-epoch behaviours of BAL variability, we compared all the spectra obtained for each quasar. From one object to another, there are large differences in the widths of the varying regions and the amplitudes of the changes (e.g., see Figs. 1 and 6). However, there are certain general trends that most, if not all, the quasars follow. In particular, the variability almost always occurred within just a portion of a BAL and not in the entire trough. In nearly all the quasars, the variability occurred over the same velocity interval(s) between each epoch. Finally, when there are multiple velocity intervals of variability within the same quasar, the changes in these separate intervals almost always occur in the same sense. Similarly, as described in Section 3.1, when there is variability in both C iv and Si iv, they also vary in the same sense.
We also find no clear evidence for velocity shifts that would be indicative of acceleration or deceleration in the flows. The constraints on velocity shifts are difficult to quantify in BALs because there can be complex profile variabilities, but we specifically searched for and did not find cases where a distinct absorption feature preserved its identity while shifting in velocity. Despite the large outflow velocities, there is no clear evidence to date for acceleration or deceleration in BALs, or in any other outflow lines (i.e. NALs and mini-BALs; e.g. Rodríguez).
We highlight below a few well-sampled cases to illustrate these general trends in the data. Fig. 8 shows specifically the quasars for which we have at least 7 measured epochs including one from the SDSS, which helps to span the time gap between the early Lick data and our recent MDM observations. For each object, the top panel shows the C iv BAL(s) and the bottom panel shows the Si iv BAL(s). The blue curves show the early Lick data, the red curves show the intermediate SDSS data, and the green curves show the MDM spectra. We note that in 1524+5147 there is an O i emission line centered at ∼−20 500 km s −1 in the Si iv panel.
The bold bars in Fig. 8 mark intervals of variability identified for C iv. The velocity ranges are guided by the intervals defined for C iv in Paper 1 and are adjusted to cover the core of the varying region and avoid the edges where the variability is less pronounced. These same velocity ranges are applied to all the epochs plotted here and to Si iv for comparison. The one exception is the thin bar marking a region of variability in Si iv in 0842+3431 with minimal corresponding variability in C iv. For all of these varying regions, we calculate the absorption strength, A, for the defined velocity intervals in each epoch and then plot these A values versus time in Fig. 9.
We plot A, instead of EW, in Fig. 9 in order to highlight the intervals that varied. Using EW would dilute these changes in strength. The different colors correspond to different velocity intervals. The dashed and dotted lines represent changes in C iv and Si iv, respectively. We note that these lines do not represent how the A values changed between epochs. They simply connect the measurements from different epochs in order to aid the eye.
In Fig. 8, the spectra of 0842+3431 (in the C iv BAL), 0903+1734, and 1524+5147 show clearly how portions of BALs can vary. In 0842+3431, there is significant variability in both the blue side and the red side of the C iv BAL, although in Si iv the entire BAL varies (see Section 3.3.3 below). In 0903+1734, there is significant variability at higher outflow velocities, but at lower velocities, there is no variability. This is consistent with the result from Paper 1 that there is a higher incidence of variability at higher velocities. The variable regions in 1524+5147 cover most, but not all, of the BAL. In contrast to the general trends in Paper 1, the blue-most portion of the BAL did not vary. In 0932+5006, however, the C iv BALs at the highest velocities vary in their entirety.
In terms of C iv to Si iv comparisons, 1524+5147 is a case where there is weak corresponding absorption, and variability, in Si iv but no Si iv BAL, while 0932+5006 clearly shows a case where a Si iv BAL varied but the C iv BAL did not.
The plots in Fig. 9 for 0842+3431, 0903+1734, and 0932+5006 show how the change in A is not always monotonic. The same BAL can grow deeper from one epoch to another, then become shallower again. This is consistent with the results of Gibson et al. (2010). Furthermore, the change in the A value from one epoch to another generally occurs in the same direction (either positive or negative) in both C iv and Si iv, which is consistent with the results of Section 3.1. In 0842+3431, 0903+1734, and 1524+5147, where there are two separate intervals of C iv variability, the change in strength occurs in the same direction for both intervals. The high-velocity BALs in 0932+5006 vary in concert starting with the 1989.84 epoch through 2008.03. At the earliest and latest epochs, they do not clearly vary in concert, but the changes in A are small and could be affected by changes in the underlying Si iv emission line. We comment further on these BALs in 0932+5006 in Section 3.3.4.
Notes on Individual Quasars
In this section, we comment on individual quasars that are cases of special scientific interest. We also comment on cases where there were specific issues in the analysis or measurements that result in larger uncertainties.
0119+0310
0119+0310 is the only quasar for which we record C iv BAL variations without corresponding changes in Si iv in our long-term sample (Fig. 1 and Section 3.1). However, these results are very tentative because the Si iv absorption is poorly measured across the velocities that varied in C iv. The two long-term spectra for this object, plotted in Fig. 1, have a lower signal-to-noise level than most of the other data in our sample. Furthermore, the Si iv absorption at the velocity of C iv variability (∼−7500 km s −1 ) is very weak, and if the continuum fit is off by even ∼5 per cent, this region in Si iv might not be considered part of the Si iv BAL. If this region is not part of the BAL, then we would not include it in the comparison of C iv to Si iv. Therefore, while we have several well-measured cases of Si iv variability without corresponding C iv variability, we only have this one poorly measured case of C iv variability with no corresponding Si iv variability.
This quasar also appears to differ from most of the other quasars in the sample in that the different regions of C iv variability do not vary in the same sense (Fig. 1). The two higher velocity variable regions both increase in strength between the Lick and MDM observations, while the lower velocity variable region decreases in strength. As mentioned above, this is one of our least well-measured quasars, so this is a tentative result. We find just two other cases where two regions of C iv variability vary in opposite directions (0146+0142 and 1423+5000).

Figure 8. Spectra of the C iv (top panel) and Si iv (bottom panel) BALs in four well-sampled quasars from our sample, after smoothing three times with a binomial function. The blue curves are the Lick spectra, red curves are SDSS spectra, and green curves are MDM spectra. The bold bars mark varying regions identified for C iv. The average error for each spectrum is shown in the top panel for each quasar, where the height of the error bar represents ±1σ.

Figure 9. The absorption strength in both C iv (dashed lines) and Si iv (dotted lines) as a function of time in the velocity intervals indicated by bars in Fig. 8.
0146+0142
We first note that this object has a high-velocity C iv BAL, so the BAL that appears just redward of the marked Si iv BAL in Fig. 1 is actually C iv. As mentioned in Paper 1, we can confirm that this is high-velocity C iv, and not Si iv, because if it were Si iv absorption, we should see corresponding low-velocity C iv absorption (see also fig. 1, Korista et al. 1993). We have further confirmation that this is high-velocity C iv absorption because we find evidence of corresponding high-velocity Si iv absorption on top of the Lyα emission line. Fig. 10 shows the two long-term spectra for 0146+0142 with the two right-most shaded regions marking the C iv variability and the two left-most shaded regions marking the corresponding velocities (but not necessarily the entire variable regions) in Si iv.
Another interesting note about 0146+0142 is that in our short-term data, there are two separate regions of C iv BAL variability, but they do not vary in the same sense. In Fig. 11, we plot the two short-term epochs with these two variable regions shaded. The redder varying interval at ∼−28 300 km s −1 increases in strength, while the bluer interval at ∼−36 600 km s −1 decreases in strength. As mentioned in Section 3.3.1, this is one of just three cases showing this behavior.

Figure 10. The two long-term epochs for 0146+0142, showing the full spectrum from the Lyα emission through the C iv emission. The shading here differs from Fig. 1, with the right-most shaded regions marking the C iv variability and the left-most shaded regions showing the corresponding velocities in Si iv (but not the exact regions of Si iv variability). This figure shows evidence of Si iv BAL variability on top of the Lyα emission line.

Figure 11. The two short-term epochs for 0146+0142, with the two shaded regions marking C iv variability. The absorption in these two regions varies in opposite directions, with one region becoming weaker while the other becomes stronger.
0842+3431
In 0842+3431, there is significant variability in two distinct regions of the C iv BAL trough (marked by bold bars in Fig. 8), while the entire Si iv trough varies. However, the bluer, and more variable, region in the C iv trough (the leftmost bold bar in Fig. 8) corresponds to a region of weak absorption and variability in Si iv. Some of the variability in the redmost portion of the Si iv trough (the right-most bold bar in Fig. 8) may be connected with the variability in the red side of the C iv BAL trough, but the variability in the core of the Si iv trough, marked by the thin bar in Fig. 8, cannot be explained by the wider Si iv doublet separation alone. Some of the Si iv variability is occurring at different velocities than the C iv variability. Nonetheless, as seen in Fig. 9, the changes in strength in all three marked regions of the Si iv trough occur in the same sense as the changes in strength in the two distinct varying regions in C iv.
Another interesting note about 0842+3431 is that we identified it as variable in the short-term, but not in the long-term, in Paper 1. For the long-term comparison in Paper 1, we used the 1990.90 and the 2008.35 observations. However, Figs. 8 and 9 clearly show that the C iv BAL varied. The BAL is weaker in 2007.04 than in 1990.90, but it becomes stronger again by 2008.35. Therefore, when looking at just the 1990.90 and 2008.35 observations, it appears as if the BAL did not vary at all. This shows how variable profiles can return to a previous state and that variable BALs can be missed in 2-epoch studies.
0932+5006
As in 0146+0142, we note that the absorption trough that appears just redward of the Si iv BAL in 0932+5006 in Fig. 1 is actually a high-velocity C iv BAL (see Fig. 8).
In 0932+5006, there is C iv absorption overlapping Si iv emission. The variability in these two detectable BAL troughs has a different behaviour than the variability in the other quasars in our sample. When looking at the spectra (Fig. 8), the two high-velocity BALs in the 2003.01 spectrum appear to be offset in velocity from the BALs in the other epochs. While this could be indicative of a shift in velocity of the BALs, this apparent offset could also be due to part of the trough weakening while the other part strengthens. This complicates the measurement of A for Fig. 9 because any measurement of A in a fixed velocity interval for each of these two high-velocity BALs does not accurately represent how the line changed. Furthermore, the Si iv emission line itself could be variable, which complicates any analysis of C iv BAL variability in this velocity range.
0946+3009
0946+3009 is the one object in our sample with a redshift too low for our MDM spectra to cover the entire C iv region out to the Si iv emission line. The spectra only go as blue as ∼−19000 km s −1 . The absorption and variability regions that we define for this quasar do not extend all the way to the edge of the MDM spectrum in order to avoid any uncertainties there. The detection of variability in this quasar is secure because we have an additional MDM spectrum of this quasar that matches the MDM spectrum shown in Fig. 1.
1011+0906
As mentioned in Section 2.2, some of the quasars in our sample have low-ionization BALs. We searched for Al iii lines in our sample and used the velocity of the Al iii line to determine the location of C ii. 1011+0906 has an Al iii BAL, but the velocity of the absorption puts C ii blueward of the Si iv absorption. Therefore, if there is any C ii absorption in this object, it does not affect our measurements of the Si iv BAL.
The Si iv broad emission line (BEL) in this quasar is mostly absorbed by the C iv BAL. We fit the Si iv BEL using the procedure defined in Paper 1 for cases like this. We take the C iv fit, increase the FWHM based on the greater doublet separation in Si iv, and place it at the wavelength where the Si iv emission should be. However, there is still some slight emission blueward of the Si iv BEL fit. This extra emission may be part of the wing of the Si iv BEL, or the underlying power-law continuum fit might be slightly too low. However, the Si iv BAL is located at a high enough velocity that any error in the Si iv BEL fit should have a negligible effect on our measurements of the BAL and its variability.
1232+1325
1232+1325 has an Al iii BAL which puts C ii within the Si iv region. The C ii BAL is in the velocity range −25 500 to −19 600 km s −1 (see also Fig. 1), and we omit these velocities from our analysis.
1303+3048
1303+3048 is a BAL quasar that also contained a C iv mini-BAL at ∼−18 500 km s −1 when first observed at Lick (Fig. 1). We only have one Lick observation of this object, but the MDM observations show that the mini-BAL widened and increased in strength to become a BAL. A BAL emerges at the same velocities in Si iv as well. The lower-velocity BAL in 1303+3048 is visible in C iv in the Lick data, but appears as only weak absorption in Si iv. However, between the Lick and MDM epochs, a Si iv BAL emerges and the variability in the Si iv absorption extends to lower velocities than the C iv variability. The variability at all velocities in this quasar in both C iv and Si iv occurs in the same sense; the absorption increases in strength.
1309−0536
As in 1011+0906, the Si iv BEL in 1309−0536 is heavily absorbed by a C iv BAL. We used the same procedure that we used for 1011+0906 to fit the Si iv BEL, and we found what appeared to be significant emission blueward of the Si iv BEL fit. In this case, the underlying power-law continuum fit did not have the correct slope, so we made a slight adjustment to the continuum fit. Adjusting the power-law continuum fit caused on average an increase in A of ∼6 per cent throughout most of the C iv trough, compared to the measurements in Paper 1. The new Si iv BEL fit increased A at the highest velocities in C iv by up to a factor of 2. Even with this adjustment, there still appears to be some slight emission blueward of the Si iv BEL fit, but, as in 1011+0906, the Si iv BAL in 1309−0536 is at a high enough velocity that errors in the emission fit should not have much effect on measurements of the BAL. This quasar also did not vary in either Si iv or C iv, so any measurement errors for this quasar will not affect any of our results comparing Si iv and C iv variability properties (e.g., Figs. 6 or 7).
1331−0108
As in 1232+1325, 1331−0108 has an Al iii BAL at a velocity that places the corresponding C ii absorption within the Si iv BAL. We therefore omit the velocity range −23 800 to −15 300 km s −1 in the Si iv region from our analysis.
While analyzing the Si iv region in 1331−0108, we noticed that the pseudo-continuum fit defined in Paper 1 needed to be adjusted. Like 1309−0536, 1331−0108 has very broad BALs, which makes fitting a continuum difficult. The slope of the fit for 1331−0108 is now slightly steeper than the fit used in Paper 1, increasing the measured A values for C iv on average by just ∼7 per cent throughout most of the trough and up to ∼30 per cent at the highest velocities, where the absorption is much weaker.
1423+5000
1423+5000 is another quasar where there were two C iv BALs that varied, but one BAL increased in strength, while the other weakened. This quasar varied between the Lick and SDSS observations, but we did not detect any variability in the long-term analysis in Paper 1. 0119+0310, 0146+0142, and 1423+5000 are the only quasars in our sample where we see two separate varying regions in C iv that did not vary in the same sense.
1435+5005
1435+5005 has Al iii absorption that is either a weak BAL or strong mini-BAL. However, the velocity of the absorption places C ii blueward of the Si iv absorption.
We also note that the signal-to-noise level in the data for 1435+5005 decreases rapidly at bluer wavelengths. We therefore do not include the spectral region blueward of −12 100 km s −1 in Si iv in our analysis in Section 3.1.
SUMMARY OF RESULTS
This is the second paper in a 3-part series to analyze the BAL variabilities in a sample of 24 BAL quasars measured originally by Barlow (1993) at the Lick Observatory in 1988-1992. We supplement those data with spectra from the SDSS archives (for 8 quasars) and our own measurements obtained at the MDM observatory (Table 1). In Paper 1 we discussed the variability properties of C iv λ1549 measured in just two epochs that span a "short-term" (0.35−0.75 yr) and a "long-term" (3.8−7.7 yrs) time interval. Here we build upon that work by including our full multi-epoch data set for these same quasars and making detailed comparisons between the Si iv and C iv BAL behaviors. Our main results are the following:
(1) BAL variability usually occurred in only portions of the BAL troughs (Paper 1; Section 3.3.2).
(2) In the long-term interval, 65 per cent of the BAL quasars in our sample showed C iv BAL variability while only 39 per cent varied in the short-term (Paper 1).
(3) C iv variability occurs more often at higher velocities and in shallower absorption troughs (or shallower portions of absorption troughs; Paper 1).
(4) In rare cases, BAL features appear, disappear, or change to or from narrower mini-BAL features (Paper 1; Section 3.3.8).
(5) C iv BALs in our data are as strong as or stronger than Si iv BALs at all velocities (in all well-measured cases; Fig. 2).
(6) Si iv BALs are more likely to vary than C iv BALs. For example, when looking at velocities >−20 000 km s −1 , 47 per cent of the quasars in our sample exhibited Si iv variability while 31 per cent exhibited C iv variability (Section 3.1). The greater variability in Si iv is likely due to a combination of items (3) and (5) above; weaker lines are more likely to vary, and Si iv tends to be weaker than C iv.
(7) Variability in Si iv can occur without corresponding changes in C iv at the same velocities. Approximately 50 per cent of the variable Si iv regions did not have corresponding C iv variability at the same velocities. However, in only one poorly measured case were changes in C iv not matched by Si iv (Sections 3.1 and 3.3.1).
(8) At BAL velocities where both C iv and Si iv varied, the changes always occurred in the same sense (Section 3.1).
(9) We do not find any correlation between the absolute change in strength in C iv and in Si iv (Fig. 6), but the fractional change in strength tends to be greater in Si iv than in C iv (Fig. 7).
(10) When additional observing epochs are included (e.g., our full data set; Section 3.2), the fraction of C iv BALs that varied at any velocity increases from 65 per cent to 83 per cent. This increase was caused by variations missed in the 2-epoch comparisons in Paper 1.
(11) BAL changes at different velocities in the same ion almost always occurred in the same sense (getting weaker or stronger) but not generally by the same amount (Section 3.2). We find just 3 cases that show evidence for one C iv BAL weakening while another strengthens within the same object (Sections 3.3.1, 3.3.2, and 3.3.11).
(12) The multi-epoch data also show that the BAL changes across 0.04−8.2 years in the rest frame were not generally monotonic (Section 3.2). Thus, the characteristic time-scale for significant line variations, and (perhaps) for structural changes in the outflows, is less than a few years.
(13) With more epochs added, we still do not find clear evidence for acceleration or deceleration in the BAL outflows (Section 3.2).
DISCUSSION
The BAL variability data provide important constraints on the outflow physical properties. However, the information we derive depends critically on what causes the BAL variations. In this section we discuss pros and cons of two competing scenarios, namely, 1) fluctuations in the far-UV continuum flux that cause global changes in the outflow ionization, and 2) outflow clouds moving across our lines-of-sight to the quasar continuum source.
An important part of this discussion is the BAL optical depths, which can be much larger than they appear in the spectrum if the absorbers cover only part of the background light source (Hamann 1998;Hamann et al. 2008). Comparisons between the C iv λ1549 and Si iv λ1400 BALs can help because these lines probe slightly different ionizations with potentially very different line optical depths. For example, in a simple situation with solar abundances and an ion ratio equal to the abundance ratio, i.e., Si iv/C iv = Si/C, the optical depth in Si iv λ1400 would be ∼3.4 times less than C iv λ1549 (Hamann 1997;Hamann & Ferland 1999;Asplund et al. 2009). In actual BAL flows, the relative Si iv optical depth should be even lower because BAL ionization tends to be high and thus favors C iv. We cannot make specific comparisons without specific knowledge of the absorber ionizations. However, if we reasonably assume that the ionization is at least as high as that needed for a maximum C iv/C ratio (e.g., in a gas that is photoionized by the quasar and optically thin in the Lyman continuum - fig. A1 in Hamann et al. 2011), then the Si iv optical depths should be >8 times smaller than C iv.
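As a rough check on the factor quoted above, the line-centre optical depth of each transition scales as τ ∝ f λ N, so for the blue doublet members (using standard oscillator strengths and the Asplund et al. 2009 solar Si/C ratio, input values that we supply here for illustration only):

\[
\frac{\tau_{\rm Si\,IV}}{\tau_{\rm C\,IV}} \;=\; \frac{(f\lambda)_{1394}}{(f\lambda)_{1548}}\,\frac{N_{\rm Si\,IV}}{N_{\rm C\,IV}} \;\approx\; \frac{0.513\times1394}{0.190\times1548}\times\frac{\rm Si}{\rm C} \;\approx\; 2.4\times0.12 \;\approx\; 0.3,
\]

which recovers the statement that the C iv optical depth is ∼3.4 times larger than that of Si iv when Si iv/C iv = Si/C.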
Changing Ionization
When there is variability in different velocity intervals within the same BAL or within multiple BALs in the same quasar, the changes almost always occur in the same sense (e.g., 0842+3431 and 0903+1734; Figs. 8 and 9). Studies of narrow absorption line (NAL) variability have observed multiple NALs in a given quasar varying in concert (Misawa et al. 2007). Hamann et al. (2011) found coordinated line variations in five NAL systems in a single quasar. They argue that the most likely explanation for this is a global change in ionization. If there are changes in the ionizing flux incident on the entire outflow, then global changes in ionization should occur. While the connection between NALs and BALs is unclear, this argument can be applied to BALs as well. Absorbing regions at different velocities have different radial distances from the central SMBH. They are therefore spatially distinct, even if they are part of the same larger outflow structure. A change in covering fraction due to moving clouds is unlikely in cases such as 0842+3431 and 0903+1734 because it would require coordinated movements among multiple absorbing structures at different outflow velocities and radii.
To further investigate this scenario, for simplicity, we consider a system with homogeneous clouds outflowing from the accretion disk, with no transverse motion across our line-of-sight to the quasar. A change in ionization will cause the optical depths in the lines to change. As mentioned above, the optical depths in C iv are higher than in Si iv, so Si iv would be more susceptible to changes in ionization. Therefore, it is more likely for Si iv to vary than C iv in this scenario because C iv is more likely to be saturated. This generally matches our results since we find that Si iv is more variable than C iv. We find only one case of C iv variability unaccompanied by Si iv variability, and it is a very tentative result (Section 3.3.1).
This scenario becomes a little more complicated when considering that typically variability only occurs in portions of BAL troughs, rather than entire BAL troughs varying (Gibson et al. 2008; Paper 1). As mentioned above, a change in ionization should cause more global changes, rather than changes in small, discrete velocity intervals. It is possible that the variable regions in the troughs have moderate or low optical depths, while the non-variable sections are too saturated to respond to modest changes in the ionization and line optical depths. We have evidence from Paper 1 and Fig. 5 that weaker lines, or weaker portions of lines, are more likely to vary. However, we also found in Paper 1 that variability is more common at higher velocities, where the absorption tends to be weaker, so it is difficult to say whether it is the higher velocity or the weaker absorption strength that is the root cause of the variability. If it is true that weaker portions of lines, which are least likely to be saturated, are more likely to vary, regardless of outflow velocity, then this would support the changing ionization scenario.
However, if it is true that weaker portions of lines are less saturated and thus more likely to vary, it is unclear why, for example, the weak blue wing of the BAL trough in 0842+3431 does not vary. If changing ionization is causing the variability in this quasar, then the wings of the line must be saturated while the portions of the line adjacent to the wings are not saturated. There are other examples of similar behavior. In 1524+5147, the strongest variability occurs in the deepest segment of the BAL, and there is weak or no variability in the weakest segments of the trough (Figs. 1 and 8). Similarly, the variability in 1011+0906 occurred near the core of the line, while the wings did not vary (Fig. 1).
In order for changing ionization to cause variability in just the deepest portions of BAL troughs, as in the examples given above, there must be velocity-dependent covering fractions with velocity-dependent optical depths. In this way, even the weak wings can be highly saturated. There is evidence in the literature that both optical depth and covering fractions can have complex velocity-dependent behaviors (Barlow & Sargent 1997; Hamann et al. 1997, 2001; Ganguly et al. 1999; de Kool et al. 2002; Gabel et al. 2005, 2006; Arav et al. 2008).
The true optical depths and covering fractions are difficult to measure for BALs. In our data, 1413+1143 provides direct evidence for velocity-dependent optical depths if the line variations are caused by ionization changes. At the core of the Si iv trough there are two dips at the Si iv doublet separation (Fig. 1). The doublet ratio is roughly one-to-one, indicating saturation at these velocities and therefore little or no sensitivity to changes in continuum flux. This part of the trough did not vary. However, this saturated doublet is surrounded by variability at higher and lower velocities in Si iv. In C iv, which should have generally larger optical depths, variability occurs only at higher velocities. This behavior is at least suggestive of lower optical depths (non-saturated absorption) at the variable velocities.
One further piece of evidence for the changing ionization scenario comes from the multi-epoch data in Section 3.2, which show that changes in BAL strength are not necessarily monotonic (see also, Gibson et al. 2010). In fact, in 0842+3431, the absorption trough varied, and then in our last MDM observation it returned to the same strength it had in one of the first Lick observations. In order for a change in covering fraction to have caused the variability in 0842+3431, the cloud movements would have to be repeatable, in addition to being coordinated at different velocities corresponding to different spatial locations. A change in ionization is a more likely explanation because continuum flux variations are not necessarily monotonic either (e.g. Barlow 1993).
If a change in ionization does indeed cause the variability we detect, then there should be a connection between changes in continuum flux and BAL variability. However, the results from previous studies have been mixed. Barlow (1993) found some evidence for a correlation between continuum variability and BAL variations, at least in certain individual quasars, while other studies have not found a strong correlation (Barlow et al. 1992; Lundgren et al. 2007; Gibson et al. 2008). However, all of these studies look at near-UV flux variations and little is known about the far-UV variability properties of luminous quasars. It is the far-UV flux that is the source of the ionizing radiation. Therefore, these results do not rule out ionization changes as a cause of BAL variability.
Changing Covering Fraction
While most of the evidence presented so far favors ionization changes, previous BAL variability studies, including Paper 1, have favored changing covering fractions over ionization changes (Lundgren et al. 2007;Gibson et al. 2008;Hamann et al. 2008;Krongold et al. 2010;Hall et al. 2011). To investigate this possibility, we consider a simple scenario with clouds that have constant ionization and column density, but are moving across our line-of-sight. If C iv and Si iv have the same covering fraction, then C iv should be just as likely to vary as Si iv, which is inconsistent with the results of Section 3.1. Further, the change in strength in the two lines should be the same, which is contradicted by our results in Fig. 6 (also, Gibson et al. 2010). Hence, this simple scenario clearly does not match the results of this and previous work.
A more realistic scenario involves clouds that can have different covering fractions in C iv and Si iv (Barlow & Sargent 1997; Hamann et al. 1997, 2001; Ganguly et al. 1999; Gabel et al. 2005, 2006; Arav et al. 2008). Hamann et al. (2001) and Hamann & Sabra (2004) discuss simple schematics of inhomogeneous clouds that could lead to different covering fractions in different ions (see fig. 6 in Hamann et al. 2001 and fig. 2 in Hamann & Sabra 2004). Stronger transitions in more abundant ions can have a larger optical depth over a larger area in these schematic models. Thus, Si iv may trace a different area of the outflowing gas clouds than C iv.
If the C iv and Si iv lines are saturated, e.g. like the BALs in Hamann et al. 2008, then the strengths of the lines would be governed by the covering fractions in those lines. In this case, a smaller covering fraction in Si iv would be consistent with the results of Fig. 2, which shows that Si iv lines are generally weaker than C iv. If Si iv is tracing a smaller area of the gas cloud than C iv and this cloud is moving across our line-of-sight, then Si iv absorption would generally be more variable. Furthermore, if the covering fractions are different for each ion, then the change in covering fraction, as well as the fractional change in strength of the absorption lines, for each ion can also differ. This is consistent with Figs. 6 and 7.
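For reference, the standard partial-covering relation underlying this argument (e.g. the covering-fraction studies cited above) writes the normalized residual intensity at velocity v as

\[
\frac{I_v}{I_{c,v}} \;=\; 1 - C_v + C_v\,e^{-\tau_v},
\]

so for strongly saturated lines (τ_v ≫ 1) the measured absorption strength approaches the covering fraction, A ≈ C_v, and the line depths track the (possibly ion-dependent) covering fractions rather than the column densities.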
In Sections 3.3.1, 3.3.2, and 3.3.11, we report on three cases that show evidence of one BAL, or one portion of a BAL, strengthening while another weakens within the same quasar. This can be readily explained in a moving cloud scenario, for it is possible for clouds at different velocities to enter/leave our line-of-sight at different times. If one cloud enters our line-of-sight, while another is exiting, we would see one BAL strengthening while another weakens.
Conclusions
The higher variability fractions in Si iv versus C iv BALs and coordinated variabilities between absorption regions at different velocities in individual quasars support the scenario of global changes in the ionization of the outflowing gas causing the observed BAL variability. Furthermore, velocity-dependent covering fractions and optical depths could explain why in many cases we see variability in just portions of BAL troughs, rather than entire troughs varying. On the other hand, variability in portions of BAL troughs fits naturally in a scenario where movements of individual clouds, or substructures in the flow, are causing changes in covering fractions in the absorption lines. This scenario is also consistent with the main results of Section 3.1, assuming that Si iv has a smaller covering fraction than C iv.
In reality, changes in ionization and covering fractions could both be contributing to BAL variability. In our sample, there are quasars in which we observed no variability; there are quasars that varied in only one or two narrow velocity intervals; and, there are yet other quasars with variations over a wide range in velocities. It is unlikely that one scenario is governing the changes in all of these quasars. Perhaps in quasars where the lines are more saturated, the lines are not susceptible to small changes in ionization, but can easily vary due to covering fraction changes. In other cases, where the lines have lower optical depth, a change in ionization can cause large changes over a wide range in velocities, possibly masking variations due to changes in covering fraction.
There are still some unanswered questions that our results from Paper 1, and the current work, raise. In particular, in Paper 1, we find correlations between incidence of C iv variability and both velocity and absorption strength. However, velocity and absorption strength are also correlated. If the trend is really with absorption strength, indicating that weaker lines, which are less likely to be saturated, are more variable, then this favors ionization changes. If the trend is with velocity, then the implications are more ambiguous, but it would be more consistent with the crossing cloud scenario. Clouds with higher outflow velocities are more likely to have greater transverse velocities as well.
There are also documented cases of BALs emerging where there had hitherto been no absorption (Hamann et al. 2008; Krongold et al. 2010), and in this work, we report on a quasar (1303+3048; Section 3.3.8) where a mini-BAL became a BAL. These scenarios speak to the general complexity of BAL variability. Previous studies have hypothesized that different outflow lines may indicate different inclinations of our lines-of-sight to the quasars and that at certain inclinations no absorption is seen (Elvis 2000; Ganguly et al. 2001). The connection between BALs, mini-BALs, and NALs is still unclear. If BAL and mini-BAL outflows occur at different inclinations, then perhaps our line-of-sight to 1303+3048 goes through a region of overlap between the mini-BAL and BAL inclinations. One might expect this putative border region between the BAL and mini-BAL parts of the flow to be the most turbulent or unstable, and thus the most prone to showing line variability caused by structural changes/motions in the flow.
This work is only the second paper in a 3-part series on BAL variability. The next paper will include 1) a more thorough exploration of the variability time-scales, with new data added to give extensive coverage across week to month intervals; and 2) a more complete discussion of the implications of variability, e.g., in terms of the size, location, and stability of outflow structures.
Figure 2. The average normalized absorption strength, A, in C iv versus the A in Si iv for each absorption bin in each quasar in the long-term subsample. The error bars represent the 1σ errors based on photon statistics.

Figure 3. The top two panels show the number of occurrences of Si iv (solid lines) and C iv (dashed lines) BAL absorption and variable absorption versus velocity. The third panel is the second panel divided by the first. The bottom panel shows the same curve from the third panel for Si iv with 1σ error bars.

Figure 4. The top two panels show the number of occurrences of Si iv BAL absorption and variability versus the average normalized absorption strength, A, in Si iv. The third panel is the second panel divided by the first. The bottom panel shows the same curve from the third panel with 1σ error bars overplotted.

Figure 5. The top two panels show the number of occurrences of C iv BAL absorption and variability versus the average normalized absorption strength, A, at the same velocities in Si iv. The third panel is the second panel divided by the first, and the bottom panel shows the same curve from the third panel with 1σ error bars. In the third panel, we overplot the fraction of occurrences of C iv absorption that varied versus A in C iv.

Figure 6. The change in strength of C iv BAL absorption versus the change in strength of Si iv BAL absorption in velocity intervals where both lines varied. The error bars are calculated as in Fig. 2.

Figure 7. The fractional change in strength of C iv BAL absorption versus the fractional change in strength of Si iv BAL absorption in velocity intervals where both lines varied. The error bars are calculated as in Fig. 2.
Table 1. Quasar Data

Name        zem    BI     Lick     SDSS     MDM      Total  Δt (yrs)
                          1988-92  2000-06  2007-09
0019+0107   2.130   2290  6        0        1        7      0.08-5.79
0043+0048   2.137   4330  2        2        1        5      0.35-6.13
0119+0310   2.090   6070  2        0        1        3      0.65-5.57
0146+0142   2.909   5780  2        0        2        4      0.52-5.15
0226−1024   2.256   7770  1        0        1        2      4.66
0302+1705   2.890      0  2        0        1        3      0.27-4.42
0842+3431   2.150   4430  6        1        3        10     0.06-5.87
0846+1540   2.928      0  5        0        3        8      0.04-4.93
0903+1734   2.771  10700  2        1        4        7      0.04-5.29
0932+5006   1.926   7920  4        1        4        9      0.05-6.98
0946+3009   1.221   5550  2        0        3        5      0.11-8.16
0957−0535   1.810   2670  2        0        2        4      0.11-6.21
1011+0906   2.268   6100  4        0        3        7      0.10-5.94
1232+1325   2.364  11000  1        0        2        3      0.35-5.93
1246−0542   2.236   4810  2        0        2        4      0.40-5.90
1303+3048   1.770   1390  1        0        4        5      0.05-6.10
1309−0536   2.224   4690  2        0        2        4      0.68-6.19
1331−0108   1.876  10400  2        1        2        5      0.42-5.97
1336+1335   2.445   7120  1        0        3        4      0.07-5.79
1413+1143   2.558   6810  2        1        2        5      0.26-5.61
1423+5000   2.252   3060  2        1        2        5      0.39-5.87
1435+5005   1.587  11500  2        0        2        4      0.34-7.72
1524+5147   2.883   1810  3        1        3        7      0.04-5.14
2225−0534   1.981   7920  3        0        0        3      0.27-0.73
Throughout this paper, all time intervals are measured in years in the rest frame of the quasar.
The values of zem were obtained from the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
ACKNOWLEDGMENTS

We thank an anonymous referee for helpful comments on the manuscript. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
REFERENCES

Adelman-McCarthy J. K., et al., 2008, ApJS, 175, 297
Agresti A., Coull B. A., 1998, The American Statistician, 52, 119
Arav N., Moe M., Costantini E., Korista K. T., Benn C., Ellison S., 2008, ApJ, 681, 954
Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
Barlow T. A., 1993, PhD thesis, California Univ.
Barlow T. A., Junkkarinen V. T., Burbidge E. M., Weymann R. J., Morris S. L., Korista K. T., 1992, ApJ, 397, 81
Barlow T. A., Sargent W. L. W., 1997, AJ, 113, 136
Capellupo D. M., Hamann F., Shields J. C., Rodríguez Hidalgo P., Barlow T. A., 2011, MNRAS, 413, 908 (Paper 1)
de Kool M., Korista K. T., Arav N., 2002, ApJ, 580, 54
Di Matteo T., Springel V., Hernquist L., 2005, Natur, 433, 604
Elvis M., 2000, ApJ, 545, 63
Everett J. E., 2005, ApJ, 631, 689
Gabel J. R., Arav N., Kaastra J. S., Kriss G. A., Behar E., Costantini E., Gaskell C. M., Korista K. T., Laor A., Paerels F., Proga D., Quijano J. K., Sako M., Scott J. E., Steenbrugge K. C., 2005, ApJ, 623, 85
Gabel J. R., Arav N., Kim T.-S., 2006, ApJ, 646, 742
Ganguly R., Bond N. A., Charlton J. C., Eracleous M., Brandt W. N., Churchill C. W., 2001, ApJ, 549, 133
Ganguly R., Eracleous M., Charlton J. C., Churchill C. W., 1999, AJ, 117, 2594
Gibson R. R., Brandt W. N., Gallagher S. C., Hewett P. C., Schneider D. P., 2010, ApJ, 713, 220
Gibson R. R., Brandt W. N., Schneider D. P., Gallagher S. C., 2008, ApJ, 675, 985
Gibson R. R., Jiang L., Brandt W. N., Hall P. B., Shen Y., Wu J., Anderson S. F., Schneider D. P., Vanden Berk D., Gallagher S. C., Fan X., York D. G., 2009, ApJ, 692, 758
Hall P. B., Anosov K., White R. L., Brandt W. N., Gregg M. D., Gibson R. R., Becker R. H., Schneider D. P., 2011, MNRAS, 411, 2653
Hamann F., 1997, ApJS, 109, 279
Hamann F., 1998, ApJ, 500, 798
Hamann F., Barlow T. A., Junkkarinen V., Burbidge E. M., 1997, ApJ, 478, 80
Hamann F., Ferland G., 1999, ARA&A, 37, 487
Hamann F., Kanekar N., Prochaska J. X., Murphy M. T., Ellison S., Malec A. L., Milutinovic N., Ubachs W., 2011, MNRAS, 410, 1957
Hamann F., Kaplan K. F., Rodríguez Hidalgo P., Prochaska J. X., Herbert-Fort S., 2008, MNRAS, 391, L39
Hamann F., Sabra B., 2004, in Richards G. T., Hall P. B., eds, AGN Physics with the Sloan Digital Sky Survey, Vol. 311 of Astronomical Society of the Pacific Conference Series, The Diverse Nature of Intrinsic Absorbers in AGNs, p. 203
Hamann F. W., Barlow T. A., Chaffee F. C., Foltz C. B., Weymann R. J., 2001, ApJ, 550, 142
Korista K. T., Voit G. M., Morris S. L., Weymann R. J., 1993, ApJS, 88, 357
Krongold Y., Binette L., Hernández-Ibarra F., 2010, ApJL, 724, L203
Leighly K. M., Hamann F., Casebeer D. A., Grupe D., 2009, ApJ, 701, 176
Lundgren B. F., Wilhite B. C., Brunner R. J., Hall P. B., Schneider D. P., York D. G., Vanden Berk D. E., Brinkmann J., 2007, ApJ, 656, 73
Misawa T., Eracleous M., Charlton J. C., Kashikawa N., 2007, ApJ, 660, 152
Misawa T., Eracleous M., Charlton J. C., Tajitsu A., 2005, ApJ, 629, 115
Moll R., Schindler S., Domainko W., Kapferer W., Mair M., van Kampen E., Kronberger T., Kimeswenger S., Ruffert M., 2007, A&A, 463, 513
Murray N., Chiang J., Grossman S. A., Voit G. M., 1995, ApJ, 451, 498
Proga D., 2007, ApJ, 661, 693
Proga D., Kallman T. R., 2004, ApJ, 616, 688
Reichard T. A., Richards G. T., Schneider D. P., Hall P. B., Tolea A., Krolik J. H., Tsvetanov Z., Vanden Berk D. E., York D. G., Knapp G. R., Gunn J. E., Brinkmann J., 2003, AJ, 125, 1711
Rodríguez Hidalgo P., Hamann F., Hall P., 2011, MNRAS, 411, 247
Trump J. R., Hall P. B., Reichard T. A., Richards G. T., Schneider D. P., Vanden Berk D. E., Knapp G. R., Anderson S. F., Fan X., Brinkman J., Kleinman S. J., Nitta A., 2006, ApJS, 165, 1
Weymann R. J., Morris S. L., Foltz C. B., Hewett P. C., 1991, ApJ, 373, 23
Wilson E. B., 1927, Journal of the American Statistical Association, 22, 209
Wise J. H., Eracleous M., Charlton J. C., Ganguly R., 2004, ApJ, 613, 129
Measuring the Impact of Adversarial Errors on Packet Scheduling Strategies

7 Jun 2013

Antonio Fernández Anta (Institute IMDEA Networks), Chryssis Georgiou (University of Cyprus), Dariusz R. Kowalski (University of Liverpool), Joerg Widmer (Institute IMDEA Networks), Elli Zavou (Institute IMDEA Networks and Universidad Carlos III de Madrid)
Abstract. In this paper we explore the problem of achieving efficient packet transmission over unreliable links with worst case occurrence of errors. In such a setup, even an omniscient offline scheduling strategy cannot achieve stability of the packet queue, nor is it able to use up all the available bandwidth. Hence, an important first step is to identify an appropriate metric for measuring the efficiency of scheduling strategies in such a setting. To this end, we propose a relative throughput metric which corresponds to the long term competitive ratio of the algorithm with respect to the optimal. We then explore the impact of the error detection mechanism and feedback delay on our measure. We compare instantaneous error feedback with deferred error feedback, which requires a faulty packet to be fully received in order to detect the error. We propose algorithms for worst-case adversarial and stochastic packet arrival models, and formally analyze their performance. The relative throughput achieved by these algorithms is shown to be close to optimal by deriving lower bounds on the relative throughput of the algorithms and almost matching upper bounds for any algorithm in the considered settings. Our collection of results demonstrates the potential of using instantaneous feedback to improve the performance of communication systems in adverse environments.

Contributions. Packet scheduling performance is often evaluated using throughput, measured in absolute terms (e.g., in bits per second) or normalized with respect to the bandwidth (maximum transmission capacity) of the link. This throughput metric makes sense for a link without errors or with random errors, where the full capacity of the link can be achieved under certain conditions. However, if adversarial bit errors can occur during the transmission of packets, the full capacity is usually not achievable by any protocol, unless restrictions are imposed on the adversary [2, 12]. Moreover, since a bit error renders a whole packet unusable (unless costly techniques like PPR [4] are used), a throughput equal to the capacity minus the bits with errors is not achievable either. As a consequence, in a link with adversarial bit errors, a fair comparison should compare the throughput of a specific algorithm to the maximum achievable amount of traffic that any protocol could send across the link. This introduces the challenge of identifying an appropriate metric to measure the throughput of a protocol over a link with adversarial errors.

Relative throughput: Our first contribution is the proposal of a relative throughput metric for packet scheduling algorithms under unreliable links (Section 2). This metric is a variation of the competitive ratio typically considered in online scheduling. Instead of considering the ratio of the performance of a given algorithm over that of the optimal offline algorithm, we consider the limit of this ratio as time goes to infinity. This corresponds to the long term competitive ratio of the algorithm with respect to the optimal.

Problem outline: We consider a sender that transmits packets to a receiver over an unreliable link, where the errors are controlled by an adversary. Regarding packet arrivals (at the sender), we consider two models: (a) the arrival times and the packet sizes follow a stochastic distribution, and (b) the arrival times and the packet sizes are also controlled by an adversary.

The general offline version of our scheduling problem, in which the scheduling algorithm knows a priori when errors will occur, is NP-hard. This further motivates the need for devising simple and efficient online algorithms for the problem we consider.
Introduction
Motivation. Packet scheduling [7] is one of the most fundamental problems in computer networks. As packets arrive, the sender (or scheduler) needs to continuously make scheduling decisions. Typically, the objective is to maximize the throughput of the link or to achieve stability. Furthermore, the sender needs to take decisions without knowledge of future packet arrivals. Therefore, many times this problem is treated as an online scheduling problem [3,10] and competitive analysis [1,13] is used to evaluate the performance of proposed solutions: the worst-case performance of an online algorithm is compared with the performance of an offline optimal algorithm that has a priori knowledge of the problem's input.
In this work we focus on online packet scheduling over unreliable links, where packets transmitted over the link might be corrupted by bit errors. Such errors may, for example, be caused by an increased noise level or transient interference on the link, that in the worst case could be caused by a malicious entity or an attacker. In the case of an error the affected packets must be retransmitted. To investigate the impact of such errors on the scheduling problem under study and provide provable guarantees, we consider the worst case occurrence of errors, that is, we consider errors caused by an omniscient and adaptive adversary [12]. The adversary has full knowledge of the protocol and its history, and it uses this knowledge to decide whether it will cause errors on the packets transmitted in the link at a certain time or not. Within this general framework, the packet arrival is continuous and can either be controlled by the adversary or be stochastic.

Arrivals      Feedback        Upper Bound                                            Lower Bound
Adversarial   Deferred        0                                                      0
              Instantaneous   T_Alg ≤ ⌊γ⌋/(γ + ⌊γ⌋); T_LL = 0, T_SL ≤ 1/(γ + 1)      T_SL-Pr ≥ ⌊γ⌋/(γ + ⌊γ⌋)
Stochastic    Deferred        0                                                      0
              Instantaneous   T_Alg ≤ ⌊γ⌋/γ;                                         T_CSL-Pr ≥ ⌊γ⌋/(γ + ⌊γ⌋), if λpℓ_min ≤ γ/(2γ);
                              T_Alg ≤ max{λpℓ_min, ⌊γ⌋/(γ + ⌊γ⌋)}, if p < q;         T_CSL-Pr ≥ min{λpℓ_min, ⌊γ⌋/γ}, otherwise
                              T_LL = 0, T_SL ≤ 1/(γ + 1)

Table 1: Summary of results presented. The results for deferred feedback are for one packet length, while the results for instantaneous feedback are for 2 packet lengths ℓ_min and ℓ_max. Note that γ = ℓ_max/ℓ_min, ⌊γ⌋ is its floor, λp is the arrival rate of ℓ_min packets, and p and q = 1 − p are the proportions of ℓ_min and ℓ_max packets, respectively.
Feedback mechanisms: Moving to the online problem requires detecting the packets received with errors, in order to retransmit them. The usual mechanism [6], which we call deferred feedback, detects and notifies the sender that a packet has suffered an error after the whole packet has been received by the receiver. It can be shown that, even when the packet arrivals are stochastic and packets have the same length, no online scheduling algorithm with deferred feedback can be competitive with respect to the offline one. Hence, we center our study on a second mechanism, which we call instantaneous feedback. It detects and notifies the sender of an error the moment this error occurs. This mechanism can be thought of as an abstraction of the emerging Continuous Error Detection (CED) framework [11] that uses arithmetic coding to provide continuous error detection. The difference between deferred and instantaneous feedback is drastic, since for the instantaneous feedback mechanism, and for packets of the same length, it is easy to obtain optimal relative throughput of 1, even in the case of adversarial arrivals. However, the problem becomes substantially more challenging in the case of non-uniform packet lengths. Hence, we analyze the problem for the case of packets with two different lengths, ℓ_min and ℓ_max, where ℓ_min < ℓ_max.
Bounds for adversarial arrivals: We show (Section 3), that an online algorithm with instantaneous feedback can achieve at most almost half the relative throughput with respect to the offline one. It can also be shown that two basic scheduling policies, giving priority either to short (SL -Shortest Length) or long (LL -Longest Length) packets, are not efficient under adversarial errors. Therefore, we devise a new algorithm, called SL-Preamble, and show that it achieves the optimal online relative throughput. Our algorithm, transmits a "sufficiently" large number of short packets while making sure that long packets are transmitted from time to time.
Bounds for stochastic arrivals: In the case of stochastic packet arrivals (Section 4), as one might expect, we obtain better relative throughput in some cases. The results are summarized in Table 1. We propose and analyze an algorithm, called CSL-Preamble, that achieves relative throughput that is optimal. This algorithm schedules packets according to SL-Preamble, giving preference to short packets depending on the parameters of the stochastic distribution of packet arrivals 1 . We show that the performance of algorithm CSL-Preamble is optimal for a wide range of parameters of stochastic distributions of packets arrivals, by proving the matching upper bound 2 for the relative throughput of any algorithm in this setting.
A note on randomization: All the proposed algorithms are deterministic. Interestingly, it can be shown that using randomization does not improve the results; the upper bounds already discussed hold also for the randomized case. For more details see Appendix D.
To the best of our knowledge, this is the first work that investigates in depth the impact of adversarial worstcase link errors on the throughput of the packet scheduling problem. Collectively, our results (see Table 1) show that instantaneous feedback can achieve a significant relative throughput under worst-case adversarial errors (almost half the relative throughput that the offline optimal algorithm can achieve). Furthermore, we observe that in some cases, stochastic arrivals allow for better performance.
Related work.
A vast amount of work exists for online (packet) scheduling. Here we focus only on the work that is most related to ours. For more information the reader can consult [9] and [10]. The work in [5] considers the packet scheduling problem in wireless networks. Like our work, it looks at both stochastic and adversarial arrivals. Unlike our work though, it considers only reliable links. Its main objective is to achieve maximal throughput guaranteeing stability, meaning bounded time from injection to delivery. The work in [2] considers online packet scheduling over a wireless channel, where both the channel conditions and the data arrivals are governed by an adversary. Its main objective is to design scheduling algorithms for the base-station to achieve stability in terms of the size of queues of each mobile user. Our work does not focus on stability, as we assume errors controlled by an unbounded adversary that can always prevent it. The work in [12] considers the problem of devising local access control protocols for wireless networks with a single channel, that are provably robust against adaptive adversarial jamming. At certain time steps, the adversary can jam the communication in the channel in such a way that the wireless nodes do not receive messages (unlike our work, where the receiver might receive a message, but it might contain bit errors). Although the model and the objectives of this line of work are different from ours, it shares the same concept of studying the impact of adversarial behavior on network communication.
Model
Network setting. We consider a sending station transmitting packets over a link. Packets arrive at the sending station continuously and may have different lengths. Each packet that arrives is associated with a length and its arrival time (based on the station's local clock). We denote by ℓ_min and ℓ_max the smallest and largest lengths, respectively, that a packet may have. We use the notation γ = ℓ_max/ℓ_min, write ⌊γ⌋ for its floor, and define γ̃ = ⌈γ⌉ − 1. The link is unreliable, that is, transmitted packets might be corrupted by bit errors. We assume that all packets are transmitted at the same bit rate, hence the transmission time is proportional to the packet's length.
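To keep the notation concrete, the small Python sketch below (an illustration only; the function name is ours, not the paper's) computes γ, ⌊γ⌋ and γ̃ for a pair of packet lengths:

```python
import math

def gamma_parameters(l_min: float, l_max: float):
    """Return (gamma, floor(gamma), ceil(gamma) - 1) for packet lengths l_min < l_max."""
    gamma = l_max / l_min
    return gamma, math.floor(gamma), math.ceil(gamma) - 1

# Example: l_min = 1, l_max = 2.5  ->  gamma = 2.5, floor = 2, gamma_tilde = 2
print(gamma_parameters(1.0, 2.5))
```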
Arrival models. We consider two models for packet arrivals.
• Adversarial: The packets' arrival time and length are governed by an adversary. We define an adversarial arrival pattern as a collection of packet arrivals caused by the adversary.
• Stochastic: We consider a probabilistic distribution D a , under which packets arrive at the sending station and a probabilistic distribution D s , for the length of the packets. In particular, we assume packets arriving according to a Poisson process with parameter λ > 0. When considering two packet lengths, ℓ min and ℓ max , each packet that arrives is assigned one of the two lengths independently, with probabilities p > 0 and q > 0 respectively, where p + q = 1.
Packet bit errors.
We consider an adversary that controls the bit errors of the packets transmitted over the link. An adversarial error pattern is defined as a collection of error events on the link caused by the adversary. More precisely, an error event at time t specifies that an instantaneous error occurs on the link at time t, so the packet that happens to be on the link at that time is corrupted with bit errors. A corrupted packet transmission is unsuccessful, therefore the packet needs to be retransmitted in full. As mentioned before, we consider an instantaneous feedback mechanism for the notification of the sender about the error. The instant the packet suffers a bit error the sending station is notified (and hence it can stop transmitting the remainder of the packet -if any).
The power of the adversary. Adversarial models are typically used to argue about the algorithm's behavior in worst-case scenarios. In this work we assume an adaptive adversary that knows the algorithm and the history of the execution up to the current point in time. In the case of stochastic arrivals, this includes all stochastic packet arrivals up to this point, and the length of the packets that have arrived. However it only knows the distribution but neither the exact timing nor the length of the packets arriving beyond the current time.
Note that in the case of deterministic algorithms, in the model of adversarial arrivals the adversary has full knowledge of the computation, as it controls both packet arrivals and errors, and can simulate the behavior of the algorithm in the future (there are no random bits involved in the computation). This is not the case in the model with stochastic arrivals, where the adversary does not control the timing of future packet arrivals, but knows only about the packet arrival and length distributions.
Efficiency metric: Relative throughput. Due to dynamic packet arrivals and adversarial errors, the real link capacity may vary throughout the execution. Therefore, we view the problem of packet scheduling in this setting as an online problem and we pursue long-term competitive analysis. Specifically, let A be an arrival pattern and E an error pattern. For a given deterministic algorithm Alg, let L_Alg(A, E, t) be the total length of all the successfully transferred (i.e., non-corrupted) packets by time t under patterns A and E. Let OPT be the offline optimal algorithm that knows the exact arrival and error patterns before the start of the execution. We assume that OPT devises an optimal schedule that maximizes at each time t the successfully transferred packets L_OPT(A, E, t). Observe that, in the case of stochastic arrivals, the worst-case adversarial error pattern may depend on stochastic injections. Therefore, we view E as a function of an arrival pattern A and time t. In particular, for an arrival pattern A we consider a function E(A, t) that defines errors at time t based on the behavior of a given algorithm Alg under the arrival pattern A up to time t and the values of function E(A, t′) for t′ < t.
Let A denote a considered arrival model, i.e., a set of arrival patterns in case of adversarial, or a distribution of packet injection patterns in case of stochastic, and let E denote the corresponding adversarial error model, i.e., a set of error patterns derived by the adversary, or a set of functions defining the error event times in response to the arrivals that already took place in case of stochastic arrivals. In case of adversarial arrivals, we require that any pair of patterns A ∈ A and E ∈ E occurring in an execution must allow non-trivial communication, i.e., the value of L OPT (A, E, t) in the execution is unbounded with t going to infinity. In case of stochastic arrivals, we require that any adversarial error function E ∈ E applied in an execution must allow non-trivial communication for any stochastic arrival pattern A ∈ A.
For arrival pattern A, adversarial error function E and time t, we define the relative throughput T Alg (A, E, t) of a deterministic algorithm Alg by time t as:
T_Alg(A, E, t) = L_Alg(A, E, t) / L_OPT(A, E, t).

For completeness, T_Alg(A, E, t) equals 1 if L_Alg(A, E, t) = L_OPT(A, E, t) = 0. We define the relative throughput of algorithm Alg in the adversarial arrival model as:

T_Alg = inf_{A∈A, E∈E} lim_{t→∞} T_Alg(A, E, t),

while in the stochastic arrival model it needs to take into account the random distribution of arrival patterns in A, and is defined as follows:

T_Alg = inf_{E∈E} lim_{t→∞} E_{A∈A}[T_Alg(A, E, t)].
To prove lower bounds on relative throughput, we compare the performance of a given algorithm with that of OPT. When deriving upper bounds, it is not necessary to compare the performance of a given algorithm with that of OPT, but instead, with the performance of some carefully chosen offline algorithm OFF. As we demonstrate later, this approach leads to accurate upper bound results.
Finally, we consider work conserving online scheduling algorithms, in the following sense: as long as there are pending packets, the sender does not cease to schedule packets. Note that it does not make any difference whether one assumes that offline algorithms are work-conserving or not, since their throughput is the same in both cases (a work conserving offline algorithm always transmits, but stops the ongoing transmission as soon as an error occurs and then continues with the next packet). Hence for simplicity we do not assume offline algorithms to be work conserving.
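As an illustration of how the metric can be evaluated empirically, the Python sketch below computes the ratio L_Alg/L_OPT from per-packet transmission logs; the log format and function names are hypothetical conveniences, not part of the paper:

```python
def useful_work(log, t):
    """Total length of packets successfully delivered by time `t`.

    `log` is a list of (completion_time, length, ok) tuples, one per
    transmission attempt; only error-free attempts (ok=True) count.
    """
    return sum(length for done, length, ok in log if ok and done <= t)

def relative_throughput(alg_log, opt_log, t):
    """Empirical T_Alg(A, E, t) = L_Alg(A, E, t) / L_OPT(A, E, t)."""
    l_opt = useful_work(opt_log, t)
    if l_opt == 0:
        return 1.0  # convention used in the paper when both totals are zero
    return useful_work(alg_log, t) / l_opt
```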
Adversarial Arrivals
This section focuses on adversarial packet arrivals. First, observe that it is relatively easy and efficient to handle packets of only one length. Proposition 1 Any work conserving online scheduling algorithm with instantaneous feedback has optimal relative throughput of 1 when all packets have the same length.
Proof: Consider an algorithm Alg. Since it is work conserving, as long as there are pending packets, it schedules them. If an error is reported by the feedback mechanism, the algorithm simply retransmits another (or the same) packet. Since the notification is instantaneous, it is not difficult to see that the a priori knowledge that the offline optimal algorithm has, does not help in transmitting more non-corrupted packets than Alg.
Upper Bound
Let Alg be any deterministic algorithm for the considered packet scheduling problem. In order to prove upper bounds, Alg will be competing with an offline algorithm OFF. The scenario is as follows. We consider an infinite supply of packets of length ℓ max and initially assume that there are no packets of length ℓ min . We define as a link error event, the point in time when the adversary corrupts (causes an error to) any packet that happens to be in the link at that specific time. We divide the execution in phases, defined as the periods between two consecutive link error events. We distinguish 2 types of phases as described below and give a description for the behavior of the adversarial models A and E. The adversary controls the arrivals of packets at the sending station and error events of the link, as well as the actions of algorithm OFF. The two types of phases are as follows:
1. a phase in which Alg starts by transmitting an ℓ_max packet (the first phase of the execution belongs to this class). Immediately after Alg starts transmitting the ℓ_max packet, a set of γ̃ ℓ_min-packets arrives, which are scheduled and transmitted by OFF. After OFF completes the transmission of these packets, a link error occurs, so Alg cannot complete the transmission of the ℓ_max packet (more precisely, the packet undergoes a bit error, so it needs to be retransmitted). Here we use the fact that γ̃ < γ.
2. a phase in which Alg starts by transmitting an ℓ min packet. In this case, OFF transmits an ℓ max packet. Immediately after this transmission is completed, a link error occurs. Observe that in this phase Alg has transmitted successfully several ℓ min packets (up to γ of them).
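The adversary's behaviour in this construction can be summarised by the Python-style sketch below; it is an illustrative restatement only, and the argument name `alg_next_packet` and the return convention are ours, not an interface defined in the paper:

```python
def adversary_phase(alg_next_packet, l_min, l_max, gamma_tilde):
    """Return (injected_short_packets, error_delay) for one phase.

    error_delay is measured from the start of the phase; the link error is
    placed so that the online algorithm never completes an l_max packet,
    while OFF completes either gamma_tilde short packets (type 1) or one
    long packet (type 2).
    """
    if alg_next_packet == l_max:
        # Type 1: inject gamma_tilde short packets for OFF, then cut the link
        # right after OFF has sent them (gamma_tilde * l_min < l_max).
        return gamma_tilde, gamma_tilde * l_min
    else:
        # Type 2: let OFF push one long packet, then cut the link; the online
        # algorithm fits at most floor(gamma) short packets in this window.
        return 0, l_max
```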
Let A and E be the specific adversarial arrival and error patterns in an execution of Alg. Let us consider any time t (at the end of a phase for simplicity) in the execution. Let p_1 be the number of phases of type 1 executed by time t. Similarly, let p_2(j) be the number of phases of type 2 executed by time t in which Alg transmits j ℓ_min packets, for j ∈ [1, ⌊γ⌋]. Then, the relative throughput can be computed as follows.

T_Alg(A, E, t) = [ℓ_min Σ_{j=1}^{⌊γ⌋} j·p_2(j)] / [ℓ_max Σ_{j=1}^{⌊γ⌋} p_2(j) + ℓ_min γ̃ p_1].    (1)

From the arrival pattern A, the number of ℓ_min packets injected by time t is exactly γ̃ p_1. Hence, Σ_{j=1}^{⌊γ⌋} j·p_2(j) ≤ γ̃ p_1. It can be easily observed from Eq. (1) that the relative throughput increases with the average number of ℓ_min packets transmitted in the phases of type 2. Hence, the throughput would be maximal if all the ℓ_min packets were used in phases of type 2 with ⌊γ⌋ packets each. With the above we obtain the following theorem.
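For concreteness, the helper below evaluates Eq. (1) for given (hypothetical) phase counts; it is only a numerical illustration of the accounting above:

```python
def relative_throughput_eq1(p1, p2, l_min, l_max, gamma_tilde):
    """Evaluate Eq. (1): p2 maps j -> number of type-2 phases with j short packets."""
    alg_work = l_min * sum(j * count for j, count in p2.items())
    opt_work = l_max * sum(p2.values()) + l_min * gamma_tilde * p1
    return alg_work / opt_work

# Example with l_min = 1, l_max = 2.5 (gamma_tilde = 2): every type-2 phase
# carries the maximum of floor(gamma) = 2 short packets.
print(relative_throughput_eq1(p1=10, p2={2: 10}, l_min=1.0, l_max=2.5, gamma_tilde=2))
# -> 20 / (25 + 20) = 0.444..., i.e. the bound of Theorem 1 for gamma = 2.5
```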
Theorem 1 The relative throughput of Alg under adversarial patterns A and E and up to time t is at most ⌊γ⌋/(γ + ⌊γ⌋) ≤ 1/2 (the equality holds iff γ is an integer).
Proof: Applying the bound Σ_{j=1}^{⌊γ⌋} p_2(j) ≥ (1/⌊γ⌋) Σ_{j=1}^{⌊γ⌋} j·p_2(j) in Eq. (1), we get

T_Alg(A, E, t) ≤ [ℓ_min Σ_{j=1}^{⌊γ⌋} j·p_2(j)] / [(ℓ_max/⌊γ⌋) Σ_{j=1}^{⌊γ⌋} j·p_2(j) + ℓ_min γ̃ p_1],

which is a function that increases with Σ_{j=1}^{⌊γ⌋} j·p_2(j). Since Σ_{j=1}^{⌊γ⌋} j·p_2(j) ≤ γ̃ p_1, the relative throughput can be bounded as

T_Alg(A, E, t) ≤ [ℓ_min γ̃ p_1] / [(ℓ_max/⌊γ⌋) γ̃ p_1 + ℓ_min γ̃ p_1] = ℓ_min ⌊γ⌋ / (ℓ_max + ℓ_min ⌊γ⌋) = ⌊γ⌋ / (γ + ⌊γ⌋).
Lower Bound and SL-Preamble Algorithm
Two natural scheduling policies one could consider are the Shortest Length (SL) and Longest Length (LL) algorithms; the first gives priority to ℓ_min packets, whereas the second gives priority to the ℓ_max packets. However, these two policies are not efficient in the considered setting; LL cannot achieve a relative throughput more than 0 while SL achieves at most T = 1/(γ + 1). Therefore, we present algorithm SL-Preamble that tries to combine, in a graceful and efficient manner, these two policies.
Algorithm description: At the beginning of the execution and whenever the sender is (immediately) notified by the instantaneous feedback mechanism that a link error occurred, it checks the queue of pending packets to see whether there are at least γ packets of length ℓ min available for transmission. If there are, then it schedules γ of them -this is called a preamble -and then the algorithm continues to schedule packets using the LL policy. Otherwise, if there are not enough ℓ min packets available, it simply schedules packets following the LL policy.
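The scheduling rule can be summarised by the following Python sketch; the queue interface (`pending_short`, `pending_long`) and helper name are illustrative assumptions, not part of the paper:

```python
import math

def sl_preamble_schedule(pending_short, pending_long, l_min, l_max):
    """Return the packet lengths SL-Preamble schedules after a (re)start.

    Called at the beginning of the execution and after every error
    notification: first a preamble of floor(gamma) short packets (if that
    many are pending), then Longest Length (LL) order for the rest.
    """
    gamma_floor = math.floor(l_max / l_min)
    schedule = []
    if len(pending_short) >= gamma_floor:
        schedule.extend([l_min] * gamma_floor)            # the preamble
        remaining_short = len(pending_short) - gamma_floor
    else:
        remaining_short = len(pending_short)
    # LL policy: long packets first, then the remaining short packets.
    schedule.extend([l_max] * len(pending_long))
    schedule.extend([l_min] * remaining_short)
    return schedule
```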
Algorithm analysis:
We show that algorithm SL-Preamble achieves a relative throughput that matches the upper bound shown in the previous subsection, and hence, it is optimal. Let us define two types of time periods for the link in the executions of algorithm SL-Preamble: the active and the inactive periods. An active period is one in which the link experiences no errors and SL-Preamble has pending packets waiting to be transferred, whereas an inactive one is such that either the link has an error point or the queue of pending packets is empty for SL-Preamble. In the case of inactive periods, note that, if the link has an error, neither SL-Preamble nor OPT can make any progress in transmitting an error-free packet. Similarly, if the queue of pending packets is empty for SL-Preamble, it must be empty for OPT as well (otherwise it would contradict the optimality of OPT). Hence, we look at the active periods, which we refer to as phases, and according to the above algorithm we observe that there are four types of phases that may occur.
1. Phase starting with an ℓ_min packet and length L < ⌊γ⌋ℓ_min.
2. Phase starting with an ℓ_min packet and length L ≥ ⌊γ⌋ℓ_min.
3. Phase starting with an ℓ_max packet and length L < ℓ_max.
4. Phase starting with an ℓ_max packet and length L ≥ ℓ_max.

We now introduce some notation that will be used throughout the analysis. For the execution of SL-Preamble and within the ith phase, let a_i be the number of successfully transmitted ℓ_min packets not in the preambles, b_i the number of successfully transmitted ℓ_max packets, and c_i the number of successfully transmitted ℓ_min packets in preambles. For the execution of OPT and within the ith phase, let a*_i be the total number of successfully transmitted ℓ_min packets and b*_i the total number of successfully transmitted ℓ_max packets. Let C_A^j(i) and C_O^j(i) denote the total amount successfully transmitted within a phase i of type j by SL-Preamble and OPT, respectively.
Analyzing the different types of phases we make some observations. First, for phases of type 1, SL-Preamble is not able to transmit successfully the ⌊γ⌋ ℓ_min packets of the preamble, but OPT is only able to complete at most as much work, so C_O^1 ≤ C_A^1. For phases of type 2, we observe that the amount of work completed by OPT minus the work completed by SL-Preamble is at most ℓ_max (i.e., C_O^2 − C_A^2 < ℓ_max). Therefore, C_A^2 ≥ [ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)]·C_O^2. (Observe that ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋) ≤ 1/2.) The same holds for phases of type 4 (C_O^4 − C_A^4 < ℓ_max), and hence in this case C_O^4 ≤ 2C_A^4. In the case of phases of type 3, SL-Preamble is not able to transmit successfully any packet, and therefore C_A^3 = 0, whereas OPT might transmit up to γ̃ ℓ_min packets. There are two cases of executions to be considered separately.

Case 1: The number of phases of type 3 is finite. In such a case, there is a phase i* such that ∀i > i* phase i is not of type 3. Then
R_1 = [Σ_{j≤i*} C_A(j) + Σ_{j>i*} C_A(j)] / [Σ_{j≤i*} C_O(j) + Σ_{j>i*} C_O(j)].    (2)

It is clear that the total progress completed by the end of phase i* by both algorithms is bounded. So, denoting by A and O the total amount transmitted by SL-Preamble and OPT, respectively, by the end of phase i*, we can write

R_1 = [A + Σ_{j>i*} C_A(j)] / [O + Σ_{j>i*} C_O(j)] ≥ [A + (ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) Σ_{j>i*} C_O(j)] / [O + Σ_{j>i*} C_O(j)].

Hence, the relative throughput of SL-Preamble at the end of each phase can be computed as T = lim_{t→∞} R_1, i.e.,

T = lim_{j→∞} [A + (ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) Σ_{j>i*} C_O(j)] / [O + Σ_{j>i*} C_O(j)]
  = lim_{j→∞} [(ℓ_max + ℓ_min ⌊γ⌋)A + ℓ_min ⌊γ⌋ Σ_{j>i*} C_O(j)] / [(ℓ_max + ℓ_min ⌊γ⌋)(O + Σ_{j>i*} C_O(j))]
  = lim_{j→∞} { ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋) + [(ℓ_max + ℓ_min ⌊γ⌋)A − ℓ_min ⌊γ⌋ O] / [(ℓ_max + ℓ_min ⌊γ⌋)(O + Σ_{j>i*} C_O(j))] }
  = ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋) = ⌊γ⌋/(γ + ⌊γ⌋).

Here it is important to note that the assumption lim_{t→∞} C_O(t) = ∞ is used, which corresponds to the expression lim_{j→∞} Σ_{j>i*} C_O(j) in the above equality.
So far, we have basically seen what the relative throughput of SL-Preamble is at the end of each phase. It is also important to guarantee the lower bound at all times within the phases. Consider any time-point t of phase i > i*. Then R_i(t) = [Σ_{j∈(i*,i−1]} C_A(j) + X_t] / [Σ_{j∈(i*,i−1]} C_O(j) + Y_t], where X_t and Y_t are the work completed by SL-Preamble and OPT within phase i up to time t. Using our proof above and the fact that for phases of type 1, 2 and 4 C_A ≥ (ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) C_O, we know that X_t ≥ (ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) Y_t as well. Therefore,

R_i(t) ≥ [(ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) Σ_{j∈(i*,i−1]} C_O(j) + (ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋)) Y_t] / [Σ_{j∈(i*,i−1]} C_O(j) + Y_t] = ℓ_min ⌊γ⌋/(ℓ_max + ℓ_min ⌊γ⌋).
This completes the lower bound of relative throughput for Case 1.
Case 2:
The number of phases of type 3 is infinite. In this case we must see how the number of ℓ min and ℓ max packets are bounded for both SL-Preamble and OPT.
Lemma 1 Consider the time point t at the beginning of a phase j of type 3. Then the number of ℓ_min tasks completed by t by OPT is no more than the amount of ℓ_min tasks completed by SL-Preamble plus γ − 1, i.e., Σ_{i<j} a*_i ≤ Σ_{i<j} (a_i + c_i) + (γ − 1).
Proof: Consider the beginning of phase j of type 3. At that point, we know that SL-Preamble has at most (γ − 1) ℓ min tasks in its queue of pending tasks by definition of phase type 3. Therefore, the amount of ℓ min tasks completed by OPT by the beginning of phase j is no more than the ones completed by SL-Preamble (including the ℓ min tasks in preambles) plus γ − 1.
Lemma 2
Considering all kinds of phases and the number of ℓ max tasks,
Σ_{i≤j} b*_i ≤ Σ_{i≤j} b_i + Σ_{i≤j} c_i/γ + 2, ∀j.
Proof: We prove this claim by induction on phase j. For the Base Case: j = 0 the claim is trivial. We consider the Induction Hypothesis stating that
Σ_{i≤j−1} b*_i ≤ Σ_{i≤j−1} b_i + Σ_{i≤j−1} c_i/γ + 2. For the Induction
Step we need to prove it up to the end of phase j. We first consider the case where during the phase j there is a time when SL-Preamble has no ℓ max tasks. Let t be the latest such time in the phase. Let us define b * (t) and b(t) being the number of ℓ max tasks completed up to time t by OPT and SL-Preamble respectively. We know that b * (t) ≤ b(t). Let also x * j (t) and x j (t) be the number of ℓ max tasks completed by OPT and SL-Preamble, respectively, after time point t until the end of the phase j. We claim that x * j (t) ≤ x j (t) + 2. From our definitions, at time t SL-Preamble is executing a ℓ min task. Since t is the last time that SL-Preamble has no ℓ max tasks, the worst case is being at the beginning of the preamble (by inspection of the 4 types of phases). Then, if the phase ends at time t ′ , we define period I = [t, t ′ ]:
|I| < γℓ min + (x j (t) + 1)ℓ max ≤ (x j (t) + 2)ℓ max
The +1 ℓ max task is because of the crash before completing the last ℓ max scheduled task of the phase. Observe that OPT could be executing a ℓ max task at time t, completed at some point in [t, t + ℓ max ] and accounted for in x * j (t). Therefore,
Σ_{i≤j} b*_i = b*(t) + x*_j(t) ≤ b(t) + x_j(t) + 2 = Σ_{i≤j} b_i + 2.
Now consider the case where at all times of a phase j there are ℓ max tasks in the queue of SL-Preamble. By inspection of the 4 types of phases, the worst case is when j is of type 2. Since there is always some ℓ max task pending in SL-Preamble, after completing the γℓ min tasks it will keep scheduling ℓ max tasks, until a crash stops the last one scheduled, or the queue becomes empty. On the same time OPT is able to complete at most ⌊ L j ℓmax ⌋ ≤ b j + 1 ℓ max -tasks, where L j is the length of the phase. Therefore, in all types of phases, b * j ≤ c j γ + b j . And hence by induction the claim follows;
Σ_{i≤j} b*_i ≤ Σ_{i≤j} c_i/γ + Σ_{i≤j} b_i + 2.
Combining the two lemmas above, Lemma 1 and 2:
R_2 = Σ_{i≤j} C_A(i) / Σ_{i≤j} C_O(i) = Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] / Σ_{i≤j} [a*_i ℓ_min + b*_i ℓ_max]
    ≥ Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] / { Σ_{i≤j} (a_i + c_i)ℓ_min + (γ − 1)ℓ_min + Σ_{i≤j} (b_i + c_i/γ)ℓ_max + 2ℓ_max }
    ≥ Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] / { Σ_{i≤j} [(a_i + 2c_i)ℓ_min + b_i ℓ_max] + 3ℓ_max }
    ≥ { Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] + (3/2)ℓ_max − (3/2)ℓ_max } / { 2 Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] + 3ℓ_max }
    ≥ 1/2 − (3/2)ℓ_max / ( 2 Σ_{i≤j} [(a_i + c_i)ℓ_min + b_i ℓ_max] + 3ℓ_max ).

Therefore, T = lim_{j→∞} R_2 ≥ 1/2.    (3)
Theorem 2
The relative throughput of Algorithm SL-Preamble is at least ⌊γ⌋/(γ + ⌊γ⌋).

Proof: From the analyses of Cases 1 and 2 and the fact that ⌊γ⌋/(γ + ⌊γ⌋) ≤ 1/2, it is easy to conclude that the relative throughput of Algorithm SL-Preamble is at least ⌊γ⌋/(γ + ⌊γ⌋) as claimed.
Stochastic Arrivals
We now turn our attention to stochastic packet arrivals.
Upper Bounds
In order to find the upper bound of the relative throughput, we consider again an arbitrary work conserving algorithm Alg. Recall that we assume that λp > 0 and λq > 0, which implies that there are in fact injections of packets of both lengths ℓ min and ℓ max (recall the definitions of λ, p and q from Section 2). We define the following adversarial error model E.
1. When Alg starts a phase by transmitting an ℓ_max packet then, (a) If OFF has ℓ_min packets pending, then the adversary extends the phase so that OFF can transmit successfully as many ℓ_min packets as possible, up to γ̃. Then, it ends the phase so that Alg does not complete the transmission of the ℓ_max packet (since γ̃ℓ_min < ℓ_max).
(b) If OFF does not have any ℓ min packets pending, then the adversary inserts a link error immediately (say after infinitesimally small time ǫ).
2. When Alg starts a phase by transmitting an ℓ_min packet then, (a) If OFF has a packet of length ℓ_max pending, then the adversary extends the phase so OFF can transmit an ℓ_max packet. By the time this packet is successfully transmitted, the adversary inserts an error and finishes the phase. Observe that in this case Alg was able to successfully transmit up to ⌊γ⌋ ℓ_min packets.
(b) If OFF has no ℓ max packets pending, then the adversary inserts an error immediately and ends the phase.
Observe that in phases of type 1b and 2b, neither OFF nor Alg are able to transmit any packet. These phases are just used by the adversary to wait for the conditions required by phases of type 1a and 2a to hold. In these latter types some packets are successfully transmitted (at least by OFF). Hence we call them productive phases. Analyzing a possible execution, in addition to the concept of phase that we have already used, we define rounds. There is a round associated with each productive phase. The round ends when its corresponding productive phase ends, and starts at the end of the prior round (or at the start of the execution if no prior round exists). Depending on the type of productive phase they contain, rounds can be classified as type 1a or 2a.
Let us fix some (large) time t. We denote by r_1a^(j) the number of rounds of type 1a completed by time t in which OFF transmits j ≤ γ̃ packets of length ℓ_min; r_2a^(j), with j ≤ ⌊γ⌋ packets of length ℓ_min sent by Alg, is defined similarly for rounds of type 2a. (Here rounding effects do not have any significant impact, since they will be compensated by the assumption that t is large.) We assume that t is a time when a round finishes. Let us denote by r the total number of rounds completed by time t, i.e., Σ_{j=1}^{⌊γ⌋} r_2a^(j) + Σ_{j=1}^{γ̃} r_1a^(j) = r. The relative throughput by time t can be computed as

T_Alg(A, E, t) = [ℓ_min Σ_{j=1}^{⌊γ⌋} j·r_2a^(j)] / [ℓ_max Σ_{j=1}^{⌊γ⌋} r_2a^(j) + ℓ_min Σ_{j=1}^{γ̃} j·r_1a^(j)].    (4)
From this expression, we can show the following result.
Theorem 3 No algorithm Alg has relative throughput larger than ⌊γ⌋/γ.
Proof: It can be observed in Eq. (4) that, for a fixed r, the relative throughput is largest when there are no rounds of type 1a and every round of type 2a carries j = ⌊γ⌋ packets of length ℓ_min, in which case it equals ℓ_min ⌊γ⌋/ℓ_max = ⌊γ⌋/γ.

To provide tighter bounds for some special cases, we prove the following lemma.
Lemma 3
Consider any two constants η, η ′ such that 0 < η < λ < η ′ . Then:
(a) there is a constant c > 0, dependent only on λ, p, η, such that for any time t ≥ ℓ min , the number of packets of length ℓ min (resp., ℓ max ) injected by time t is at least tηp (resp., tηq) with probability at least 1 − e −ct ;
(b) there is a constant c ′ > 0, dependent only on λ, p, η ′ , such that for any time t ≥ ℓ min , the number of packets of length ℓ min (resp., ℓ max ) injected by time t is at most tη ′ p (resp., tη ′ q) with probability at least 1 − e −c ′ t .
Proof: We first prove the statement 1(a). The Poisson process governing arrival times of packets of length ℓ min has parameter λp. By the definition of a Poisson process, the distribution of packets of length ℓ min arriving to the system in the period [0, t] is the Poisson distribution with parameter λpt. Consequently, by Chernoff bound for Poisson random variables (with parameter λpt), c.f., [8], the probability that at least ηpt packets arrive to the system in the period [0, t] is at least
1 − e^{−λpt} (eλpt)^{ηpt} / (ηpt)^{ηpt} = 1 − e^{−tp(λ − η ln(eλ/η))} ≥ 1 − e^{−ct},
for some constant c > 0 dependent on λ, η, p. In the above, the argument behind the last inequality is as follows. It is a well-known fact that x > 1 + ln x holds for any x > 1; in particular, for x = λ/η > 1. This implies that x − ln(ex) is a positive constant for x = λ/η > 1, and after multiplying it by η > 0 we obtain another positive constant equal to λ − η ln(eλ/η) that depends only on λ and η. Finally, we multiply this constant by p > 0 to obtain the final constant c > 0 dependent only on λ, η, p. The same result for packets of length ℓ max can be proved by replacing p by q = 1 − p in the above analysis.
Statement 1(b) is proved analogously to the first one, by replacing η by η ′ . This is possible because the Chernoff bound for Poisson process has the same form regardless whether the upper or the lower bound on the Poisson value is considered, c.f., [8].
Now we can show the following result.
Theorem 4 Let p < q.
Then, the relative throughput of any algorithm Alg is at most min{max{λpℓ_min, ⌊γ⌋/(γ + ⌊γ⌋)}, ⌊γ⌋/γ}.
Proof:
The claim has two cases. In the first case, λpℓ min ≥ γ γ . In this case, the upper bound of γ γ is provided by Theorem 3. In the second case λpℓ min < γ γ . For this case, define two constants η, η ′ such that 0 < η < λ < η ′ and η ′ p < ηq. Observe that these constants always exist. Then, we prove that the relative throughput of any algorithm Alg in this case is at most max η ′ pℓ min , γ γ+γ . Let us introduce some notation. We use a min t and a max t to denote the number of ℓ min and ℓ max packets, respectively, injected up to time t. Let r off t and s off t be the number of ℓ max and ℓ min packets respectively, successfully transmitted by OFF by time t. Similarly, let s alg t be the number of ℓ min packets transmitted by algorithm Alg by time t. Observe that s alg t ≥ r off t ≥ ⌊ s alg t γ ⌋. Let us consider a given execution and the time instants at which the queue of OFF is empty of ℓ min packets in the execution. We consider two cases. Case 1: For each time t, there is a time t ′ > t at which OFF has the queue empty of ℓ min packets. Let us fix a value δ > 0 and define time instants t 0 , t 1 , . . . as follows. t 0 is the first time instant no smaller than ℓ min at which OFF has no ℓ min packet and such that a min t 0 > ℓ max . Then, for i > 0, t i is the first time instant no smaller than t i−1 + δ at which OFF has no ℓ min packets. The relative throughput at time t i can be bounded as
T_Alg(A, E, t_i) ≤ s^alg_{t_i} ℓ_min / ( r^off_{t_i} ℓ_max + a^min_{t_i} ℓ_min ) ≤ s^alg_{t_i} ℓ_min / ( ⌊ s^alg_{t_i} / ⌊γ⌋ ⌋ ℓ_max + a^min_{t_i} ℓ_min ) ≤ s^alg_{t_i} ℓ_min / ( ( s^alg_{t_i} / ⌊γ⌋ − 1 ) ℓ_max + a^min_{t_i} ℓ_min ).
This bound grows with s^alg_{t_i} when a^min_{t_i} > ℓ_max; since s^alg_{t_i} ≤ a^min_{t_i}, substituting a^min_{t_i} for s^alg_{t_i} leads to a bound on the relative throughput as follows.
T_Alg(A, E, t_i) ≤ a^min_{t_i} ℓ_min / ( a^min_{t_i} (ℓ_max/⌊γ⌋ + ℓ_min) − ℓ_max ) = a^min_{t_i} ⌊γ⌋ / ( a^min_{t_i} (γ + ⌊γ⌋) − γ⌊γ⌋ ).
As i goes to infinity, this yields a bound of ⌊γ⌋/(γ + ⌊γ⌋). Case 2: There is a time t* after which OFF never has its queue empty of ℓ_min packets. Recall that for any t ≥ ℓ_min, from Lemma 3, the number of ℓ_min packets injected by time t satisfies a^min_t > η′pt with probability at most exp(−c′t), and the injected ℓ_max packets satisfy a^max_t < ηqt with probability at most exp(−ct). By the assumption of the theorem and the definition of η and η′, η′p < ηq. Let us define t** = 1/(ηq − η′p). Then, for all t ≥ t** it holds that a^max_t ≥ a^min_t + 1 with probability at least 1 − exp(−c′t) − exp(−ct). If this holds, it implies that OFF will always have ℓ_max packets in the queue.
Let us fix a value δ > 0 and define t_0 = max(t*, t**), and the sequence of instants t_i = t_0 + iδ, for i = 0, 1, 2, . . .. By the definition of t_0, at all times t > t_0 OFF is successfully transmitting packets. Using Lemma 3, we can also claim that in the interval (t_0, t_i] the probability that more than η′p·iδ packets of length ℓ_min are injected is no more than exp(−c″·iδ).
With the above, the relative throughput at any time t i for i ≥ 0 can be bounded as
T_Alg(A, E, t_i) ≤ ( a^min_{t_0} + η′p · iδ ) ℓ_min / ( r^off_{t_0} ℓ_max + s^off_{t_0} ℓ_min + iδ ), with probability at least 1 − exp(−ct_i) − exp(−c′t_i) − exp(−c″t_i).
Observe that as i goes to infinity the above bound converges to η ′ pℓ min , while the probability converges exponentially fast to 1.
Lower Bound and Algorithm CSL-Preamble
In this section we consider algorithm CSL-Preamble (stands for Conditional SL-Preamble), which builds on algorithm SL-Preamble presented in Section 3.2, in order to solve packet scheduling in the setting of stochastic packet arrivals. The algorithm, depending on the arrival distribution, either follows the SL policy (giving priority to ℓ min packets) or algorithm SL-Preamble. More precisely, algorithm CSL-Preamble acts as follows:
If λpℓ_min > ⌊γ⌋/(2γ) then algorithm SL is run, otherwise algorithm SL-Preamble is executed; a minimal sketch of this selection rule is given below.
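The following minimal sketch (not from the paper) implements only this mode selection. The threshold ⌊γ⌋/(2γ) follows the reconstruction used in this text, and sl_policy and sl_preamble_policy are placeholders for the two scheduling policies, which are specified elsewhere in the paper.

import math

def csl_preamble(lam, p, l_min, l_max, sl_policy, sl_preamble_policy):
    gamma = l_max / l_min
    threshold = math.floor(gamma) / (2.0 * gamma)
    if lam * p * l_min > threshold:
        return sl_policy          # short packets arrive fast enough: always favour l_min packets
    return sl_preamble_policy     # otherwise fall back to SL-Preamble

# Example: with l_min = 1 and l_max = 2.5 (gamma = 2.5) the threshold is 2/5 = 0.4,
# so arrival rates with lam*p > 0.4 make CSL-Preamble behave exactly like SL.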
We then show the following:

Theorem 5
The relative throughput of algorithm CSL-Preamble is not smaller than ⌊γ⌋/(⌊γ⌋ + γ) for λpℓ_min ≤ ⌊γ⌋/(2γ), and not smaller than min{ λpℓ_min , ⌊γ⌋/γ } otherwise.
Proof: We consider three complementary cases. Case λpℓ_min ≤ ⌊γ⌋/(2γ). In this case algorithm CSL-Preamble runs algorithm SL-Preamble, achieving, per Theorem 2, relative throughput of at least ⌊γ⌋/(⌊γ⌋ + γ) under any error pattern.
Case ⌊γ⌋/(2γ) ≤ λpℓ_min ≤ 1. Our goal is to prove that the relative throughput is not smaller than min{ ηpℓ_min , ⌊γ⌋/γ }, for any η satisfying λ/2 < η < λ. Considering such an η we can make use of Lemma 3 with respect to λ, η, p. The relative throughput compares the behavior of algorithm CSL-Preamble, which is simply SL in this case, with OPT for each execution. Hence, for the purpose of the analysis we introduce the following modification in every execution: we remove all periods in which OPT is not transmitting any packet. By "removing" we understand that we count time after removing the OPT-unproductive periods and "gluing" the remaining periods so that they form one time line. In the remainder of the analysis of this case we consider these modified executions with modified time lines, and whenever we need to refer to the "original" time line we use the notion of global time.
For any positive integer i, we define time points t i = i · ℓ max . Consider events S i , for positive integers i, defined as follows: the number of packets arrived by time t i (on the modified time line of the considered execution) is at least t i ηp. By Lemma 3 and the fact that time t on the modified time line cannot occur before the global time t, there is a constant c dependent only on λ, η, p such that for any i: the event S i holds with probability at least 1 − exp (−ct i ).
Consider an integer j > 1 being a square of another integer. We prove that by time t j , the relative throughput is at least
min{ ηpℓ_min − ⌊γ⌋ℓ_min/t_j , (1 − 1/√j) · ⌊γ⌋/γ }
with probability at least 1 − c ′ exp (−ct √ j ), for some constant c ′ > 1 dependent only on λ, η, p. To show this, consider two complementary scenarios that may happen at time t j : there are at least γ pending packets of length ℓ min , or otherwise. It is sufficient to show the sought property separately in each of these two scenarios.
Consider the first scenario, when there are at least ⌊γ⌋ pending packets of length ℓ_min at time t_j. With probability at least 1 − c′exp(−c t_{√j}), for every √j ≤ i ≤ j at least t_iηp packets arrive by time t_i. This is because of the union bound over the corresponding events S_i and the fact that Σ_{i ≥ √j} exp(−ct_i) ≤ c′ · exp(−c t_{√j}) for some constant c′ > 1 dependent on λ, η, p (note here that although c′ seems to depend also on c, c′ is still dependent only on λ, η, p because c is a function of these three parameters as well). Consider executions in ⋂_{i=√j}^{j} S_i. Using induction on i, it follows that for these executions, for every √j ≤ i ≤ j, the following invariant holds: at least t_iηp − ⌊γ⌋ packets of length ℓ_min have been successfully transmitted by time t_i, or in the time interval [t_i, t_{i+1}] at least ⌊γ⌋ packets of length ℓ_min are successfully transmitted (i.e., these successful transmissions end in the interval [t_i, t_{i+1}]). The inductive proof of this invariant follows directly from the specification of algorithm CSL-Preamble (recall that it simply runs algorithm SL in the currently considered case) and from the definition of the modified execution and time line. Let i* denote the largest i ∈ [√j, j] satisfying the following condition: there are fewer than ⌊γ⌋ packets of length ℓ_min pending at time t_i; if such an i does not exist, we set i* = −1. Consider two sub-cases. Sub-case i* ≥ √j. It follows from the invariant and the definition of i* that by time t_{i*} there are at least t_{i*}ηp − ⌊γ⌋ successfully transmitted packets of length ℓ_min, and in each interval [t_i, t_{i+1}], for i* ≤ i < j, at least ⌊γ⌋ packets of length ℓ_min finish their successful transmission. Therefore, by time t_j the total length of packets (of length ℓ_min) successfully transmitted by algorithm CSL-Preamble is at least
( t_{i*}ηp − ⌊γ⌋ ) ℓ_min + ( (t_j − t_{i*}) / ℓ_max ) · ⌊γ⌋ ℓ_min ,
while the total length of successfully transmitted packets by OPT by time t j is at most t j , by the definition of the modified execution and time line. Therefore the relative throughput is at least
[ ( t_{i*}ηp − ⌊γ⌋ ) ℓ_min + ( (t_j − t_{i*}) / ℓ_max ) · ⌊γ⌋ ℓ_min ] / t_j ≥ min{ ( t_jηp − ⌊γ⌋ ) ℓ_min / t_j , ( (t_j − t_{√j}) / ℓ_max ) · ⌊γ⌋ ℓ_min / t_j } = min{ ηpℓ_min − ⌊γ⌋ℓ_min / t_j , (1 − 1/√j) · ⌊γ⌋/γ }.
This converges to min{ ηpℓ_min , ⌊γ⌋/γ } as j goes to infinity. Sub-case i* < √j. In this sub-case we have, by the definition of i* < √j, that at every time t_i, where √j ≤ i ≤ j, there are at least ⌊γ⌋ pending packets of length ℓ_min. Consequently, by the specification of the algorithm, in each interval [t_i, t_{i+1}], for √j ≤ i < j, at least ⌊γ⌋ packets of length ℓ_min finish their successful transmission. Therefore, by time t_j the total length of packets (of length ℓ_min) successfully transmitted by algorithm CSL-Preamble is at least
( (t_j − t_{√j}) / ℓ_max ) · ⌊γ⌋ ℓ_min ,
while the total length of successfully transmitted packets by OPT by time t j is at most t j , by the definition of the modified execution and time line. Therefore the relative throughput is at least
( (t_j − t_{√j}) / ℓ_max ) · ⌊γ⌋ ℓ_min / t_j = (1 − 1/√j) · ⌊γ⌋/γ ,
and it converges to ⌊γ⌋/γ as j goes to infinity. This completes the analysis of the sub-cases. Finally, it is important to notice that the final convergence of the ratio, with j going to infinity, in both sub-cases gives a valid bound on the relative throughput, since the subsequent ratios hold with probabilities approaching 1 exponentially fast (in j), i.e., with probabilities at least 1 − c′exp(−c t_{√j}), where c and c′ are positive constants dependent only on λ, η, p. The minimum of the two relative throughputs, coming from the sub-cases, is min{ ηpℓ_min , ⌊γ⌋/γ }, as desired; since η can be chosen arbitrarily close to λ, the relative throughput is at least min{ λpℓ_min , ⌊γ⌋/γ } in this case. Case λpℓ_min > 1. In this case we simply observe that we get at least the same relative throughput as in case λpℓ_min = 1, because we are dealing with executions saturated with packets of length ℓ_min with probability converging to 1 exponentially fast. (Recall that we use the same algorithm SL in the specification of CSL-Preamble, both for λpℓ_min = 1 and for λpℓ_min > 1.) Consequently, the relative throughput in this case is at least min{ ηpℓ_min , ⌊γ⌋/γ }, for any λ/2 < η < λ, and therefore it is at least min{ λpℓ_min , ⌊γ⌋/γ } ≥ min{ 1 , ⌊γ⌋/γ } = ⌊γ⌋/γ.
Observe that if we compare the upper bounds on relative throughput shown in the previous subsection with the lower bounds of the above theorem, then we may conclude that in the case where γ is an integer, algorithm CSL-Preamble is optimal (wrt relative throughput). In the case where γ is not an integer, there is a small gap between the upper and lower bound results.
Conclusions
This work was motivated by the following observation regarding the system of dynamic packet arrivals with errors: scheduling packets of the same length is relatively easy and efficient in the case of instantaneous feedback, but extremely inefficient in the case of deferred feedback. We studied scenarios with two different packet lengths, developed efficient algorithms, and proved upper and lower bounds on the relative throughput for average-case (i.e., stochastic) and worst-case (i.e., adversarial) online packet arrivals. These results demonstrate that exploring instantaneous feedback mechanisms (and developing more effective implementations of them) has the potential to significantly increase the performance of communication systems.

Several future research directions emanate from this work. Some of them concern the exploration of variants of the model considered, for example, assuming that packets that suffer errors are not retransmitted (which applies when Forward Error Correction [11] is used), considering packets of more than two lengths, or assuming bounded buffers. Other lines of work deal with adding QoS requirements to the problem, such as requiring fairness in the transmission of the packets from different flows or imposing deadlines on the packets. In the considered adversarial setting, it is easy to see that even an omniscient offline solution cannot achieve stability: for example, the adversary could prevent any packet from being transmitted correctly. Therefore, an interesting extension of our work would be to study conditions (e.g., restrictions on the adversary) under which an online algorithm could maintain stability, and still be efficient with respect to relative throughput. Finally, we believe that the definition of relative throughput as proposed here can be adapted, possibly in a different context, to other metrics and problems.
APPENDIX A NP-hardness
We prove the NP-hardness of the following problem, defined for a single link.
INSTANCE (Throughput Problem): A set X of packets; for each packet x ∈ X a length l(x) ∈ N+ and an arrival time a(x) ∈ Z≥0; and a sequence of time instants 0 = T_0 < T_1 < T_2 < · · · < T_k, T_i ∈ N_0, so that the link suffers an instantaneous error at each time T_i, i ∈ [1, k] (in other words, at each time T_i, any packet being transmitted over the link is corrupted).
QUESTION: Is there a schedule of X so that error-free packets of total length T_k are transmitted by time T_k over the link?

Theorem 6 The Throughput Problem is NP-hard.

Proof: We use the 3-Partition problem, which is known to be NP-hard.
INSTANCE: Set A of 3m elements, a bound B ∈ N+ and, for each a ∈ A, a size s(a) ∈ N+ such that B/4 < s(a) < B/2 and Σ_{a∈A} s(a) = mB.
QUESTION: Can A be partitioned into m disjoint sets {A_1, A_2, . . . , A_m} such that, for each 1 ≤ i ≤ m, Σ_{a∈A_i} s(a) = B?
We reduce the 3-Partition problem to the Throughput Problem, defined for a single link. The reduction sets X = A, l() = s(), a() = 0, k = m, and T_i = iB for i ∈ [1, k]. If the answer to 3-Partition is affirmative, then for the Throughput Problem there is a way to schedule (and transmit) the packets in X in subsets {X_1, X_2, . . . , X_m} = {A_1, A_2, . . . , A_m}, so that all the packets in A_i can be transmitted over the link in the interval [T_{i−1}, T_i]. Furthermore, since Σ_{a∈A_i} s(a) = Σ_{x∈X_i} l(x) = B, and T_i − T_{i−1} = B, the total length of packets transmitted by time T_k is T_k.
The reverse argument is similar. If there is a way to schedule packets so that the total packet length transmitted by time T_k is T_k, then in each interval between two error events on the link there must be exactly B bytes of packets transmitted. Then, the packets can be partitioned into subsets of total length B each. This implies the partition of A.
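The mapping used in the reduction is a one-liner. The sketch below (not from the paper; variable names are illustrative) builds a Throughput Problem instance from a 3-Partition instance.

def three_partition_to_throughput(sizes, B):
    """sizes: list of 3m element sizes with sum m*B; returns a Throughput Problem instance."""
    m = len(sizes) // 3
    packets = [{"length": s, "arrival": 0} for s in sizes]   # X = A, l() = s(), a() = 0
    error_times = [i * B for i in range(1, m + 1)]           # T_i = i*B for i in [1, m]
    return packets, error_times

# A schedule transmits total length T_k = m*B by time m*B exactly when the packets can be
# packed into the m error-free intervals of length B, i.e. exactly when A has a 3-partition.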
B Deferred Feedback
In this section we study the relative throughput of any algorithm under the deferred feedback mechanism. As described in Section 1, with this mechanism the sending station is notified about a packet having been corrupted by an error only after the transmission of the packet is completed. Here we assume that all packets have the same length ℓ. We show that even in this case no algorithm can achieve positive throughput.
B.1 Adversarial Arrivals
To prove the upper bound on throughput, consider an arrival pattern in which packets arrive frequently enough that there is always a packet ready to be sent. The algorithm will then greedily send a train of packets. The adversary injects bit errors at a distance of exactly ℓ so that each error hits a different packet, and hence the algorithm cannot successfully complete any transmission (that is, it cannot transmit non-corrupted packets). At the same time, an offline algorithm OFF is able to send a packet in each interval of length ℓ without errors. This argument leads to the following theorem:
Theorem 7
No packet scheduling algorithm Alg can achieve a relative throughput larger than 0 under adversarial arrivals in the deferred feedback model, even with one packet length.
B.2 Stochastic Arrivals
Let us consider now stochastic arrivals. We show that also in this case the upper bound on the relative throughput is 0.
Theorem 8
No packet scheduling algorithm Alg can achieve a relative throughput larger than 0 under stochastic arrivals in the deferred feedback model, even with one packet length.
Proof: As described in Section 2, we assume that packets arrive at a rate λ. Here we assume that all packets have the same length ℓ. Observe that if λℓ < 1 there are many times when there is no packet ready to be sent and the link will be idle. In any case, the adversary can inject errors following the next rule: inject an error in the middle point of each packet sent by Alg. Applying this rule, no packet sent by Alg is received without errors. However, between two errors there is at least ℓ space (even if packets are contiguous) and the offline algorithm OFF can send a packet. The conclusion is that OFF is able to successfully send at least one packet between two attempts of Alg, while Alg cannot complete successfully any transmission. This completes the proof.
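The adversary used in the two theorems above is simple enough to simulate. The following sketch (not from the paper; the packet length and horizon are illustrative) places an error in the middle of every packet sent by Alg: under deferred feedback Alg only learns about the corruption after spending the full packet length, so its useful throughput stays at 0, while OFF fits one error-free packet of length ℓ between consecutive errors.

def deferred_feedback_run(l=4.0, horizon=1000.0):
    t, errors = 0.0, []
    while t + l <= horizon:
        errors.append(t + l / 2.0)   # adversary corrupts the packet Alg started at time t
        t += l                       # Alg learns of the error only at time t + l and retries
    alg_useful = 0                       # no packet of Alg ever completes without an error
    off_useful = (len(errors) - 1) * l   # consecutive errors are l apart, so OFF fits one packet between them
    return alg_useful, off_useful

if __name__ == "__main__":
    print(deferred_feedback_run())   # Alg's useful length stays 0; OFF's grows linearly with the horizon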
C Upper Bounds for Algorithms SL and LL
We prove upper bounds that suggest that algorithms SL (Shortest Length) and LL (Longest Length) are not efficient. First, we show that SL cannot have relative throughput larger than 1/(γ+1) under adversarial arrivals. We then show that algorithm LL is even worse, as its relative throughput cannot be more than 0 even with stochastic arrivals.
Theorem 9
Algorithm SL cannot achieve relative throughput larger than 1/(γ+1) under adversarial arrivals, even if there is a schedule that transmits all the packets.
Proof: The scenario works as follows. At time 0 two packets arrive, one of length ℓ_max and one of length ℓ_min. SL schedules first the packet of length ℓ_min, and when it is transmitted, it schedules the packet of length ℓ_max. Meanwhile, an offline algorithm OFF schedules first the packet of length ℓ_max. When it is transmitted, the adversary causes an error on the link, so SL does not transmit successfully the packet of length ℓ_max. Now, SL only has one packet of length ℓ_max in its queue (when this scenario is repeated it will have several, but no packets of length ℓ_min). Hence, SL schedules this packet, while OFF schedules the packet of length ℓ_min that it has in its queue. When OFF completes the transmission of the ℓ_min packet, the adversary causes an error on the link. This scenario can be repeated forever. In each instance, OFF transmits one packet of length ℓ_max and one of length ℓ_min, while SL only transmits one packet of length ℓ_min. Hence, the throughput achieved is ℓ_min/(ℓ_max + ℓ_min) = 1/(γ+1). Observe that at the end of each instance of the scenario the queue of OFF is empty.
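The repeating scenario can be tallied directly. In the sketch below (not from the paper; ℓ_min and ℓ_max are illustrative), every round credits OFF with one ℓ_max and one ℓ_min packet and SL with a single ℓ_min packet, so the ratio approaches 1/(γ+1).

def sl_adversarial_ratio(l_min=1.0, l_max=3.0, rounds=10_000):
    sl_useful = off_useful = 0.0
    for _ in range(rounds):
        sl_useful += l_min            # SL finishes the short packet; its long packet is then hit by an error
        off_useful += l_max + l_min   # OFF finishes the long packet, then the short one
    return sl_useful / off_useful

if __name__ == "__main__":
    gamma = 3.0
    print(sl_adversarial_ratio(), 1.0 / (gamma + 1.0))   # both print 0.25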
We now show that the above upper bound also holds with stochastic arrivals under specific packet arrival rates.
Theorem 10 ∀ε > 0, ∃λ, p, q such that algorithm SL cannot achieve a relative throughput larger than 1/((1−ε)γ + 1) + ε.
Proof: Consider an execution of the SL algorithm. We define intervals I 1 , I 2 , . . . , I i as follows. The first such interval, I 1 , starts with the arrival of the first ℓ min packet. Then, I i starts as soon as an ℓ min packet is in the queue of SL after the end of interval I i−1 . The length of each interval depends on whether OFF has an ℓ max packet in its queue at the start of the interval or not. If it has an ℓ max packet, the length of the interval is |I i | = ℓ min + ℓ max , and we say that we have a long interval. If it does not, the length is |I i | = ℓ min and the interval is called short.
Between intervals the adversary injects frequent errors, so SL cannot transmit any packet. In every interval I_i, SL starts by scheduling an ℓ_min packet. In a short interval, OFF sends an ℓ_min packet, followed by an error injected by the adversary. Hence, in a short interval both SL and OFF successfully transmit one ℓ_min packet. In a long interval, OFF sends an ℓ_max packet, after which the adversary injects an error. (Up to that point SL has been able to complete the transmission of one or more ℓ_min packets, but no ℓ_max packet.) After the error, OFF sends an ℓ_min packet (which is available since the beginning of the interval), after which continuous errors will be injected by the adversary until the next interval. Hence, in a long interval OFF successfully transmits one ℓ_min packet and one ℓ_max packet, while SL transmits only ℓ_min packets. This implies that in both types of intervals OFF is transmitting useful packets during the whole interval.
Let us denote by s_k the total length of the intervals I_1, I_2, . . . , I_k, i.e., s_k = Σ_{i=1}^{k} |I_i|. Observe that the total number of ℓ_min packets that arrive up to the end of interval I_k is bounded by k (which accounts for the ℓ_min packet in the queue of SL at the start of each interval) plus the packets that arrive in the intervals. From Lemma 3, we know that there is a constant η′ > λ and a constant c′ > 0, which depends only on η′, λ and p, such that the number of ℓ_min packets that arrive in the intervals is at most η′p s_k with probability at least 1 − e^{−c′s_k}.
Let T k be the throughput of SL at the end of interval I k . From the above, we have that T k is bounded as
T_k ≤ ℓ_min ( k + η′p s_k ) / s_k = ℓ_min k / s_k + ℓ_min η′p
with probability at least π 1 (k) = 1 − e −c ′ s k . Observe that in the above expression it is assumed that all ℓ min packets that arrive by the end of I k are successfully transmitted by SL. We provide now the following claim.
Claim: Let us consider the first x + 1 intervals I_i, for x > 1. The number of long intervals is at least (1 − δ)(1 − e^{−λqℓ_min})x with probability at least 1 − exp(−δ²(1 − e^{−λqℓ_min})x/2), for any δ ∈ (0, 1).
Proof of claim:
Observe that if an ℓ max packet arrives during interval I i then the next interval I i+1 is long.
We consider now the first x intervals. Since each of these intervals has length at least ℓ_min, some ℓ_max packet arrives in the interval with probability at least 1 − e^{−λqℓ_min} (independently of what happens in other intervals). Hence, using a Chernoff bound, the probability of having fewer than (1 − δ)(1 − e^{−λqℓ_min})x intervals, among the x first intervals, in which ℓ_max packets arrive is at most exp(−δ²(1 − e^{−λqℓ_min})x/2). ⊓⊔ From the claim, it follows that there are at least (1 − δ)(1 − e^{−λqℓ_min})(k − 1) long intervals among the first k intervals, with high probability. Hence, the value of s_k is bounded as s_k ≥ (1 − δ)(1 − e^{−λqℓ_min})(k − 1)(ℓ_max + ℓ_min) + ( k − (1 − δ)(1 − e^{−λqℓ_min})(k − 1) )ℓ_min = (1 − δ)(1 − e^{−λqℓ_min})(k − 1)ℓ_max + kℓ_min, with probability at least π_2(k) = 1 − exp(−δ²(1 − e^{−λqℓ_min})(k − 1)/2). Note that T_k cannot be larger than 1. Hence, the expected value of T_k can be bounded as follows.
E[T_k] ≤ π_1(k)π_2(k) [ ℓ_min k / ( (1 − δ)(1 − e^{−λqℓ_min})(k − 1)ℓ_max + kℓ_min ) + ℓ_min η′p ] + ( 1 − π_1(k)π_2(k) ).
Since π_1(k) and π_2(k) tend to one as k tends to infinity, we have that lim_{k→∞} E[T_k] ≤ ℓ_min / ( (1 − δ)(1 − e^{−λqℓ_min})ℓ_max + ℓ_min ) + ℓ_min η′p = 1 / ( (1 − δ)(1 − e^{−λqℓ_min})γ + 1 ) + ℓ_min η′p.
Hence, choosing η ′ , p, q, and δ appropriately, the claim of the theorem follows. (E.g., they must satisfy ℓ min η ′ p ≤ ε and (1 − δ)(1 − e −λqℓ min ) ≥ (1 − ε).)
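The parameter choice at the end of this proof can be checked numerically. The sketch below (not from the paper; all parameter values are illustrative) evaluates the limiting bound from the last display and compares it against the target 1/((1−ε)γ + 1) + ε for one admissible choice of parameters.

import math

def limit_bound(lam, p, q, delta, eta_prime, l_min, l_max):
    gamma = l_max / l_min
    return 1.0 / ((1 - delta) * (1 - math.exp(-lam * q * l_min)) * gamma + 1) + l_min * eta_prime * p

eps, l_min, l_max = 0.1, 1.0, 4.0
lam, p, q, delta, eta_prime = 60.0, 0.001, 0.999, 0.01, 70.0   # eta' > lam; p tiny so l_min*eta'*p <= eps
print(limit_bound(lam, p, q, delta, eta_prime, l_min, l_max))   # about 0.27
print(1.0 / ((1 - eps) * (l_max / l_min) + 1) + eps)            # the target bound, about 0.32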
Theorem 11
Algorithm LL cannot achieve relative throughput larger than 0, even under stochastic arrivals.
Proof:
The scenario is simple. The adversary blocks all successful transmissions (by placing errors at distance smaller than ℓ_min) until at least two packets have arrived, one of length ℓ_max and one of length ℓ_min. Algorithm LL schedules a packet of length ℓ_max, while an offline algorithm OFF schedules a packet of length ℓ_min. Once OFF completes the transmission of this packet, the adversary causes an error on the link, and hence LL does not complete the transmission of the ℓ_max packet. Then, again the adversary blocks successful transmissions until OFF has at least one ℓ_min packet pending. The scenario is repeated forever; while OFF will be transmitting successfully all ℓ_min packets, LL will be stuck on the unsuccessful transmissions of ℓ_max packets. Hence, the throughput will be 0.
D Randomized Algorithms
So far we have considered deterministic solutions. In many cases, randomized solutions can obtain better performance. As we argue in this section, this is not the case for the problem considered in this work. Let us first indicate how the model and the definition of relative throughput must be extended to the case of randomized algorithms. We assume that the adversary knows the algorithm and the history of the random choices made by the algorithm until the current point in time, but it does not know the future random choices made by the algorithm.
Regarding the relative throughput, and following the terminology of Section 2, in the case of randomized algorithms, an adversarial error-function E has three arguments: an arrival pattern A, a string of values of random bits R, and time t. The output of E(A, R, t) is a set of errors until time t based on the execution of a given randomized algorithm with the values of random bits taken from R under an adversarial pattern A by round t.
For arrival pattern A, adversarial error-function E, string of random bits R and time t, we define the relative throughput T_Alg(A, E, R, t) of a randomized algorithm Alg by time t as

T_Alg(A, E, R, t) = L_Alg(A, E, R, t) / L_OPT(A, E, R, t),

where T_Alg(A, E, R, t) is defined as 1 if L_Alg(A, E, R, t) = L_OPT(A, E, R, t) = 0. (Note that OPT is not randomized, but since the error-function E depends on the random choices of the algorithm, this has a direct effect on the performance of OPT.) We define the relative throughput of algorithm Alg in the adversarial arrival model as follows:
T Alg = inf A∈A,E∈E lim t→∞ E R∈R [T Alg (A, E, R, t)] ,
where A is understood as a function of R and t, and R is a distribution of all possible strings of random bits used by the algorithm. In the stochastic arrival model the relative throughput needs to take into account the random distribution of arrival patterns in A (they are not functions now, as they do not depend on the adversary), and it is defined as follows:
T Alg = inf E∈E lim t→∞ E A∈A,R∈R [T Alg (A, E, R, t)] .
Now, looking at the analyses of the upper bounds for deterministic algorithms with deferred feedback (Section B) and with instantaneous feedback (under adversarial arrivals, Section 3.1, and stochastic arrivals, Section 4.1), it is not difficult to see that the derived bounds hold also for randomized algorithms. The main observation that leads to this conclusion is the following: The adversarial error and arrival patterns defined in the analyses are reactive, in the sense that the adversary that controls them does not need to know the future (and in particular the future random bits of the algorithm ) and makes its decisions only by looking at the system's history. In other words, when a given algorithm decides in a given phase what packet length to transmit, the adversary reacts adaptively on the specific choice, regardless of whether this choice was done deterministically or by flipping a coin. This leads to the conclusion that randomized solutions cannot yield better results (wrt relative throughput) for the considered packet scheduling problem.
Let r^(j)_{1a} denote the number of rounds of type 1a in which j ≤ ⌊γ⌋ packets of length ℓ_min sent by OFF are completed by time t; the larger these values, the higher the relative throughput. Regarding the values r^(j)_{2a}, the throughput increases when there are more rounds with larger values of j. E.g., under the same conditions, the throughput is maximized when r^(⌊γ⌋)_{2a} = r and the rest of the values r^(j)_{1a} and r^(j)_{2a} are 0, which yields the bound.
Some of the results are omitted due to space limitation and can be found in the Appendix.
If the distribution is not known, then obviously one needs to use the algorithm developed for the case of adversarial arrivals that needs no knowledge a priori.
Analyzing algorithms yields lower bounds on the relative throughput, while analyzing adversarial strategies yields upper bounds on the relative throughput.
[1] Miklos Ajtai, James Aspnes, Cynthia Dwork, and Orli Waarts. A theory of competitive analysis for distributed algorithms. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (FOCS), pages 401-411. IEEE, 1994.
[2] Matthew Andrews and Lisa Zhang. Scheduling over a time-varying user-dependent channel with applications to high-speed wireless data. Journal of the ACM, 52(5):809-834, September 2005.
[3] Baruch Awerbuch, Shay Kutten, and David Peleg. Competitive distributed job scheduling. In Proceedings of the 24th Annual ACM Symposium on Theory of Computing (STOC), pages 571-580. ACM, 1992.
[4] Kyle Jamieson and Hari Balakrishnan. PPR: Partial packet recovery for wireless networks. In Proceedings of SIGCOMM 2007, pages 409-420. ACM, 2007.
[5] Thomas Kesselheim. Dynamic packet scheduling in wireless networks. In Proceedings of PODC 2012, pages 281-290, 2012.
[6] Shu Lin and Daniel J. Costello. Error Control Coding. Prentice-Hall, 2004.
[7] Chad Meiners and Eric Torng. Mixed criteria packet scheduling. In Algorithmic Aspects in Information and Management, pages 120-133, 2007.
[8] Michael Mitzenmacher and Eli Upfal. Probability and Computing. Cambridge University Press, 2005.
[9] Michael L. Pinedo. Scheduling: Theory, Algorithms, and Systems. Springer, 2012.
[10] Kirk Pruhs, Eric Torng, et al. Online scheduling. 2007.
[11] Anand Raghavan, Kannan Ramchandran, and Igor Kozintsev. Continuous error detection (CED) for reliable communication. IEEE Transactions on Communications, 49(9):1540-1549, 2001.
[12] Andrea Richa, Christian Scheideler, Stefan Schmid, and Jin Zhang. Competitive throughput in multi-hop wireless networks despite adaptive jamming. Distributed Computing, pages 1-13, 2012.
[13] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202-208, 1985.
The Effect of Biased Communications On Both Trusting and Suspicious Voters

William W. Cohen (Department of Machine Learning, Carnegie Mellon University)
David P. Redlawsk (Department of Political Science, Rutgers University)
Douglas Pierce (Department of Political Science, Rutgers University)

May 7, 2014 (arXiv:1306.2558)
In recent studies of political decision-making, apparently anomalous behavior has been observed on the part of voters, in which negative information about a candidate strengthens, rather than weakens, a prior positive opinion about the candidate. This behavior appears to run counter to rational models of decision making, and it is sometimes interpreted as evidence of non-rational "motivated reasoning". We consider scenarios in which this effect arises in a model of rational decision making which includes the possibility of deceptive information. In particular, we will consider a model in which there are two classes of voters, which we will call trusting voters and suspicious voters, and two types of information sources, which we will call unbiased sources and biased sources. In our model, new data about a candidate can be efficiently incorporated by a trusting voter, and anomalous updates are impossible; however, anomalous updates can be made by suspicious voters, if the information source mistakenly plans for an audience of trusting voters, and if the partisan goals of the information source are known by the suspicious voter to be "opposite" to his own. Our model is based on a formalism introduced by the artificial intelligence community called "multi-agent influence diagrams", which generalize Bayesian networks to settings involving multiple agents with distinct goals.
Introduction
Historically, political decision-making has been modeled in a number of ways. Models that propose rational decision making on the part of voters must account for the fact that voters frequently have difficulty in responding to factual surveys on political issues. One resolution to this difficulty is to model candidate evaluation as an online learning process, in which a tally representing candidate affect is incremented in response to external information [13], after which the information itself is discarded. However, in a number of recent studies of political decision-making, apparently anomalous behavior has been observed on the part of voters, in which negative information about a candidate k strengthens (rather than weakens) a prior positive opinion held about k [5,20].
This behavior appears to run counter to rational models of decision making, and it is sometimes interpreted as evidence of non-rational "motivated reasoning" [11]. In motivated reasoning models, a voter will (1) evaluate the affect of new informationi.e., its positive or negative emotional charge, then (2) compare this to the affect predicted by current beliefs, and finally (3) react, where congruent information (i.e., information consistent with predicted affect) is processed quickly and easily, and incongruent information is processed by a slower stop-and-think process. Stop-and-think processing may include steps such as counter-arguing, discounting the validity of the information, or bolstering existing affect by recalling previously-assimilated information [14,20].
Some evidence for the motivated reasoning hypothesis comes from human-subject experiments using a dynamic process tracing environment (DPTE), in which data relevant to a mock election is presented as a dynamic stream of possibly relevant news items. In DPTE experiments, detailed hypotheses about political reasoning can be tested, for instance by varying the frequency and amount of incongruent information presented to voters in the mock election. Experimental evidence shows, for instance, that both political sophisticates and novices spend more time processing negative information about a liked candidate, and novices also spend longer processing positive information about a disliked candidate [20]. Most intriguingly, small to moderate amounts of incongruent information, e.g., negative information about a liked candidate, actually reinforce the prior positive view of the candidate [21].
This apparently anomalous effect, whereby information has the inverse of the expected impact on a voter, appears to be inconsistent with rational decision-making. In this paper, we show analytically that this "anomalous" effect can occur in a model of rational decision making which includes the possibility of deceptive information. The model makes another interesting prediction: it justifies as computationally effective and efficient a heuristic of pretending to believe information from a possibly-deceptive source if that source's political preferences are the same as the voter's.
In particular, we will consider a model in which there are two classes of voters, which we will call trusting voters and suspicious voters, and two types of information sources, which we will call unbiased sources and biased sources. Information from an unbiased source is modeled simply as data D k that probabilistically inform a voter about a candidate k's positions. Trusting voters are voters that treat information about a candidate as coming from an unbiased source. We show that, in our model, new data about a candidate can be efficiently incorporated by a trusting voter, and anomalous updates (in which "negative" information increases support) are impossible.
Biased sources are information sources j who plan their communications with a goal in mind (namely, encouraging trusting voters to vote in a particular way). To do this, j will access some data C k which is communicated to them only (not to voters directly) and release some possibly-modified version B k of C k , concealing the original C k . B k is chosen based on the utility to j of the probable effect of B k on a trusting voter i.
We then introduce suspicious voters. Unlike trusting voters, who behave as if communications were from unbiased sources, suspicious voters explicitly model the goal-directed behavior of biased sources.
The behavior of rational suspicious voters depends on circumstances: depending on the assumptions made, different effects are possible. If the partisan goals of the biased source j and a suspicious voter i are aligned, then a suspicious voter can safely act as if the information is correct, i.e., perform the same updates as a naive voter. Intuitively, this is because j is choosing information B_k strategically to influence a naive voter to achieve j's partisan goals, and since i's goals are the same, it is strategically useful for i to "play along" with the deception; this intuition can be supported rigorously in our model. If the partisan goals of j are unknown, then a rational suspicious voter i may discount or ignore the information B_k; again, this intuition can be made rigorous, if appropriate assumptions are made. Finally, if the partisan goals of j are known to be "opposite" those of i, then a rational suspicious voter may display the "anomalous" behavior discussed above: information B_k that would cause decreased support from a trusting voter will cause increased support from i. Intuitively, this occurs because i recognizes that j may be attempting to decrease support for candidate k, and since i and j have "opposite" partisan alignments, it is rational for i to instead increase support.
In short, in this model, a negative communication about k can have the effect opposite to one's initial expectation; however, the apparent paradox is not due to motivated reasoning, but simply to imprecise planning on the part of j. In particular, j's communication was planned by a biased source with the aim of influencing trusting voters, while in fact, i is a suspicious voter.
Below, we will first summarize related work, and then flesh out these ideas more formally. Our model will be based on a formalism introduced by the artificial intelligence community called "multiagent influence diagrams", which generalize Bayesian networks to settings involving multiple agents with distinct goals.
Related work
This work is inspired by recent work on motivated reasoning and hot cognition in political contexts (for recent overviews, see [9,19]). There is strong experimental evidence that information processing of political information involves emotion, and recent research has sought both to collect empirical evidence for [20,5], and to build models that explicate [12], the mechanisms behind this phenomenon.
The models of this paper are not intended to dispute the role of emotion in political decision making. Indeed, our models reflect situations in which one party deliberately withholds or distorts information to manipulate a second party, and introspection clearly suggests that such situations will typically invoke an emotional response. However, work in social learning (e.g., [22]) and information cascades (e.g., [23]) shows that behaviors (such as "following the herd") which appear to be driven by non-rational emotions may in fact be strategies that lead to results that are evolutionarily desirable (if not always "rational" from the individual's perspective). Hence, the identification of emotional aspects of decision-making does not preclude rational-agent explanations; rather, it raises the question of why these mechanisms exist, what evolutionary pressures might cause them to arise, and whether or not those pressures are still relevant in modern settings. This paper makes a step toward these long-term goals by identifying cases in which behavior explainable by motivated reasoning models is also rational, for instance in the result of Theorem 2.
The explanation suggested here for anomalous, motivated-reasoning-like updating is based on a voter recognizing that a source may be biased, and correcting for that bias. While this explanation of anomalous updates is (to our knowledge) novel, it is certainly recognized that trust in the source of information is essential in political persuasion, and that a voter's social connections strongly influence political decision-making (e.g., [1]). More generally, empirical studies of persuasion substantiate a role of confirmatory bias and prior beliefs [7], and show that in non-political contexts (e.g., in investing), sophisticated consumers of information adjust for credibility, while inexperienced agents under-adjust.
The results of this paper are also related to models of media choice, for instance, research in which the implications of a presumed tendency of voters to seek confirmatory news are explored mathematically [4,2,8,25]. Other analyses show why preferences for unbiased news lead to economic incentives to distort the news [3]. This paper does not address these issues, but does contribute by providing a rational-agent model for why such a confirmatory bias exists: in particular, Proposition 3 describes a strategy for using information from biased information sources whose preferences are similar to the voter's. We note that this strategy is both simple and computationally inexpensive, and might be preferable on these grounds to more complex strategies to "de-noise" biased information from sources with unknown preferences.
A further connection is to formal work on "talk games", such as Crawford and Sobel's model of strategic communication [6]. In this model, a well-informed but possibly deceptive "sender" sends a message to a "receiver" who (like our voter) takes an action that affects both herself and the sender. Variants of this model have explored cases in which information can only be withheld or disclosed, and disclosed information may be verified by the receiver [15]; cases where the receiver uses approximate "coarse" reasoning [16]; and cases where there is a mixed population of strategic and naive receivers, all of whom obtain information from senders acting strategically [17].
The analysis goals in "talk games" are different from the goals of this paper: whereas we investigate whether specific counter-intuitive observed behavior can arise in a plausible (not necessarily temporally stable) situation, this prior work primarily analyzes the communication efficiency of a system in equilibrium. Some of the results obtained for talk games are reminiscent of results shown here: for instance, Crawford and Sobel show that in equilibrium, signaling is more informative when the sender and receiver's preferences are more similar. However, other results are less intuitive: for instance, in some models there is no deception at equilibrium [17]. We note that while equilibria are convenient to analyze, there is no particular reason to believe that natural political discourse reaches an equilibrium.
Game theory has a long history in analyses of politics; in particular, writing in 1973, Shubik discusses possible applications of game theory to analysis of misinformation [24]. The tools used in this paper arose from more recent work in artificial intelligence [10,18], specifically analysis of multi-agent problem solving tasks, in which one agent explicitly models the goals and knowledge of another in settings involving probabilistic knowledge. One small contribution of this paper is the introduction of a new set of mathematical techniques, which (to our knowledge) have not been previously used for analysis of political decision-making. We note however that while these tools are convenient, they are not absolutely necessary to obtain our results.

Table 1: Notation used in the paper.
i : a voter
j : a pundit
k : a candidate
T_i, T_j : "target positions" for voter i and source j
T_k : the position of a candidate k
dom(T) : the set of values taken on by random variable T
S_ik, S_jk : similarity of a target position and a candidate
D_k : data about candidate k
Y_ik : the vote of i for k
U_i : utility function for i
C_jk : data about k that is known only to j
B_jk : biased variant of the data C_jk about k that has been modified by j
R_jk : reputational cost to pundit j of modifying C_jk to B_jk
P^TrV(X|Y) : a conditional probability computed using the trusting voter model
P^BiP(X|Y) : a conditional probability computed using the biased pundit model
P^SuV(X|Y) : a conditional probability computed using the suspicious voter model
Modeling trusting voters
A model
Consider the influence diagram on the left-hand side of Figure 1. In this model i is a voter, and k is a candidate. A voter i has a preference T_i: for example, T_i might be a member of the set dom(T) = {goodLiberal, evilLiberal, goodConservative, evilConservative}. Likewise T_k is candidate k's actual position, which is also an element of dom(T). S_ik measures how similar two positions are. Y_ik is a measure of i's support for k, which is chosen by voter i to maximize i's utility. The utility U_i for voter i is a function of how appropriate Y_ik is given S_ik. Finally, D_k is some data about k, generated probabilistically according to the value of T_k. As an example, this might be a statement made by k. The notation used in this diagram (and elsewhere below) is summarized in Table 1.
The shapes of the nodes in the diagrams indicate the type of the variable: diamonds for utilities, circles for random variables, and squares for decision variables, which an agent (in this case, voter i) will choose in order to maximize utility. The dotted arrow lines leading into a decision variable indicate information available when the decision is made. The arrows leading into a random variable node indicate "parents", that is, variables on which the value of the variable is conditioned. This sort of diagram is called an influence diagram, and the general version we will use later (in which multiple agents may exist) is called a MAID (Multi-Agent Influence Diagram) [18,10].

Figure 1: Model of a trusting voter, in MAID and Bayes Net notation.

More precisely, the model defines a probability distribution generated by the following process:
• Pick ti ∼ P (Ti), where P (Ti) is a prior on voter preferences.
• Pick t k ∼ P (T k ), where P (T k ) is a prior on candidate positions.
• Pick d_k ∼ P(D_k | T_k = t_k). Equivalently, we could let d_k = f_D(t_k, ε_D), where f_D is a function and ε_D is a random variable chosen independently of all other random variables in the model.
• Pick s_ik ∼ P(S_ik | T_i = t_i, T_k = t_k). Equivalently, we could let s_ik = f_S(t_i, t_k, ε_S), where f_S is a function and ε_S is a random variable, again chosen independently of all other random variables in the model.
• Allow voter i to pick y_ik, based on a user-chosen probability distribution P_τ(Y_ik | S_ik = s_ik), or equivalently computed using f_Y(s_ik, ε_Y).
• Pick utility u_i from P(U | y_ik, s_ik), or equivalently, computed as u_i = f_U(y_ik, s_ik, ε_U).
The user can choose any distribution P_τ(Y|S), but we will henceforth assume that she will make the optimal choice; i.e., the probability distribution P_τ will be chosen by i to maximize the expected utility u_i, where u_i will be picked from Σ_{s′} P(U | y_ik, S_ik = s′) P(S_ik = s′), or equivalently computed as u_i = Σ_{s′} P(S_ik = s′) · f_U(y_ik, s′, ε_U). In this specific case, the conversion is based on the observation that
P^TrV(Y_ik = y | S_ik = s) = P( y = argmax_{y′} ∫_{ε_U} f_u(y′, s, ε_U) dε_U )    (1)
and yields the Bayes network on the right-hand side of Figure 1. Here Y_ik is simply a random variable conditioned on S_ik, where the form of the dependency depends on Equation 1. Notice that the link from S_ik to Y_ik is deterministic. We call this the trusting voter model, since voter i trusts the validity of the information D_k. MAIDs (and their single-agent variants, influence diagram networks) have a number of advantages as a formalism. They provide a compact and natural computational representation for situations which are otherwise complex to describe; in particular, situations in which agents have limited knowledge of the game structure, or mutually inconsistent beliefs, but act rationally in accordance with these beliefs. In particular, MAIDs relax the assumption usually made in Bayesian games that players' beliefs are consistent, and support an explicit process in which one player can model another player's strategy.
MAIDs also support an expressive structured representation for a player's beliefs. Together these features make them appropriate for modeling "bounded rationality" situations of the sort we consider here. Further discussion of MAIDs, and their formal relation to other formalisms for games and probability distributions, can be found elsewhere [10,18]. Next, we will explore some simplifications of Eq. 1. If f_u is deterministic, then Eq. 1 simplifies to the following choice of y_ik given s_ik:
y_ik = f_y^TrV(s_ik) = argmax_{y′} f_u(y′, s_ik)
If only a distribution P (S ik = s) is known, then voter i's optimal strategy is to let
y_ik = argmax_{y′} Σ_s f_u(y′, s) P(S_ik = s)
In any case, however, the model has similar properties: once we assume that i uses an optimal strategy, then the MAID becomes an ordinary Bayes net, defining a joint distribution over the variables T_i, T_k, S_ik, and Y_ik. We will henceforth use P^TrV to denote probabilities computed in this model, and reserve the non-superscripted P(A|B) and P(A) for conditional (respectively prior) probability distributions that are assumed to be available as background information.
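The decision rule above is straightforward to implement. The sketch below (not from the paper) computes the optimal vote for a voter who only has a distribution over S_ik; the support levels, belief, and utility function are illustrative assumptions.

def optimal_vote(support_levels, similarity_belief, utility):
    """similarity_belief: dict s -> P(S_ik = s); utility: f_u(y, s)."""
    def expected_utility(y):
        return sum(p * utility(y, s) for s, p in similarity_belief.items())
    return max(support_levels, key=expected_utility)

# Example: support in {-1, 0, +1}, similarity s in {-5, ..., +5}, and a utility that
# rewards voting in the direction of the (expected) similarity.
belief = {-5: 0.1, 0: 0.3, 5: 0.6}
print(optimal_vote([-1, 0, 1], belief, lambda y, s: y * s))   # -> 1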
Example 1 To take a concrete example, Figure 2 shows the conditional probability tables for a small example, where candidates and target positions have values like evilLiberal and goodConserv, and the value of a communication C_k is the name of something that a candidate might support (e.g., motherhood or guns). The recoverable tables are as follows (for pairs (t_i, t_k) not shown, s_ik|t_i, t_k = s_ik|t_k, t_i; the noise term is written ε_S):

P(T_i): goodLiberal 0.4, goodConserv 0.4, evilLiberal 0.1, evilConserv 0.1
P(T_k): goodLiberal 0.29, goodConserv 0.69, evilLiberal 0.01, evilConserv 0.01
s_ik | t_i, t_k: (goodLiberal, goodLiberal) 5 + ε_S; (goodLiberal, goodConserv) 1 + ε_S; (goodLiberal, evilLiberal) -2 + ε_S; (goodLiberal, evilConserv) -5 + ε_S; (goodConserv, goodConserv) 5 + ε_S; (goodConserv, evilLiberal) -5 + ε_S; (goodConserv, evilConserv) -2 + ε_S; (evilLiberal, evilLiberal) 5 + ε_S; (evilLiberal, evilConserv) -5 + ε_S; (evilConserv, evilConserv) 5 + ε_S
Implications of the model
New information about a candidate is easy to process
It is obvious how to use the trusting voter model to compute Y ik if T k and Ti are known. However, a more reasonable situation is that D k and Ti are known to i, but the true position of the candidate can only be inferred, indirectly, from D k . Fortunately, using standard Bayes network computations, we can also easily compute a distribution over Y ik given D k .
Figure 2: Model of a trusting voter with m multiple independent data items about the candidate, in Bayes Net notation.

First, note that we can marginalize over T_k and compute

P(D_k = d) = Σ_{t′} P(D_k = d | T_k = t′) P(T_k = t′),

and that, using Bayes' rule,

P(T_k = t | D_k = d) = P(D_k = d | T_k = t) P(T_k = t) / P(D_k = d) = [ P(D_k = d | T_k = t) / P(D_k = d) ] P(T_k = t).    (2)
This gives a simple rule for i to use in choosing her vote Y ik given D k :
P^TrV(Y = y | T_i = t_i, D_k = d_k)    (3)
   = Σ_s P^TrV(Y = y | S_ik = s) Σ_t P(S_ik = s | T_i = t_i, T_k = t) P(T_k = t | D_k = d_k)    (4)
   = P^TrV(Y = y | S_ik) P(S_ik | t_i, T_k) P(T_k | d_k)
The last line uses a simplified notation from the Bayes net community, where sums used to marginalize are omitted, and the event X = x is replaced with x when the variable X is clear from context. This procedure can be extended easily to the case of multiple data items D k,1 , . . . , D k,m about the candidate, each independently generated from P (D k |T k ), as shown in Figure 2. It can be shown that
P(T_k = t | d_{k,1}, . . . , d_{k,m}) = [ Π_ℓ P(d_{k,ℓ} | T_k = t) / P(d_{k,ℓ}) ] · P(T_k = t)
Put another way, we can define P (T k = t|d k,1 , . . . , d k,m ) recursively as follows:
P(T_k = t | d_{k,1}, . . . , d_{k,m}) = [ P(d_{k,m} | T_k = t) / P^TrV(d_{k,m}) ] · P(T_k = t | d_{k,1}, . . . , d_{k,m−1})
Hence, voter i can quickly update beliefs about T_k incrementally with each new piece of information d_{k,ℓ}, and then (as before) use Equation 3 to update her vote.
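The incremental update is a standard Bayesian filter over the finite set dom(T). The sketch below (not from the paper) folds each new data item into the posterior over T_k by multiplying in its likelihood and renormalizing; the prior matches P(T_k) from Example 1, while the likelihood values are illustrative assumptions.

def update_posterior(posterior, likelihood, d):
    """posterior: dict t -> P(T_k = t | data so far); likelihood: (d, t) -> P(d | T_k = t)."""
    new = {t: p * likelihood(d, t) for t, p in posterior.items()}
    z = sum(new.values())                      # plays the role of the normalizer P(d | data so far)
    return {t: p / z for t, p in new.items()}

prior = {"goodLiberal": 0.29, "goodConserv": 0.69, "evilLiberal": 0.01, "evilConserv": 0.01}
def lik(d, t):                                 # toy likelihood: "guns" is more likely from conservatives
    return {"guns": 0.7, "motherhood": 0.3}[d] if "Conserv" in t else {"guns": 0.2, "motherhood": 0.8}[d]

post = prior
for d in ["guns", "guns", "motherhood"]:
    post = update_posterior(post, lik, d)
print(post)   # posterior now leans further toward the conservative positions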
This incremental update property is worth emphasizing-while it may be complicated (if not computationally complex) to compute P TrV (Y |S), or it may be difficult for a voter to establish her preferences ti, absorbing new information in the trusting voter model is straightforward, and consists of two "natural" steps: estimating the candidate's position T k given the information d k, ; and then updating her support y ik for candidate k, given the updated estimate of the candidate's position.
Positive information increases support
In a number of recent studies of political decision-making, negative information about a candidate k has been observed to strengthen (rather than weaken) a voter's support for k. We will show that under fairly reasonable assumptions this effect can not occur with the trusting voter model. In particular, we will assume that support Y ik increases monotonically with the similarity S ik of the candidate's position T k and the voter's preference Ti.
Recall that in the trusting voter model Y ik is a deterministic function of S ik , defined as
f_Y^TrV(s_ik) = argmax_{y′} ∫_{ε_U} f_u(y′, s_ik, ε_U) dε_U    (5)
If fY TrV has the property that ∀s1 > s2, fY TrV (s1) ≥ fy TrV (s2) then we will say that i's support for k increases monotonically with s ik . This is one assumption needed for our result.
We also need to precisely define "negative" information. We say that d k is strictly negative about k to voter i if there is some partition of dom(S ik ) into triples (a1, b1, δ1), . . . ,(am, bm, δm) so that
• For every triple $(a_\ell, b_\ell, \delta_\ell)$: $a_\ell < b_\ell$, $\delta_\ell > 0$, $P(S_{ik} = a_\ell \mid d_k) = P(S_{ik} = a_\ell) + \delta_\ell$, and $P(S_{ik} = b_\ell \mid d_k) = P(S_{ik} = b_\ell) - \delta_\ell$.
In other words, learning $D_k = d_k$ shifts some positive probability mass $\delta_\ell$ from the larger similarity value $b_\ell$ to the smaller similarity value $a_\ell$.
• For all $s \in \mathrm{dom}(S_{ik})$ that are not in any triple, $P(S_{ik} = s \mid d_k) = P(S_{ik} = s)$. In other words, the probability mass of values s not in any triple is unchanged.
To illustrate this, consider Figure 3, which illustrates a plausible example of strictly negative information. Strictly positive information is defined analogously.
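As a purely illustrative aid, the hypothetical helper below (the function name and the greedy matching strategy are my own, not the paper's) tests a simplified, sufficient version of this condition: every value that gained probability mass must be matched to a strictly larger value that lost exactly the same mass.

```python
# Sketch: a sufficient (not exhaustive) test for "strictly negative" information,
# assuming each shifted mass delta can be matched greedily between gainers and losers.
def is_strictly_negative(prior, posterior, tol=1e-9):
    """prior/posterior: dicts mapping similarity values s -> P(S_ik = s)."""
    gainers = [(s, posterior[s] - prior[s]) for s in prior if posterior[s] - prior[s] > tol]
    losers  = [(s, prior[s] - posterior[s]) for s in prior if prior[s] - posterior[s] > tol]
    unmatched = sorted(losers)
    for a, delta in sorted(gainers):
        match = next((i for i, (b, d) in enumerate(unmatched)
                      if b > a and abs(d - delta) < tol), None)
        if match is None:
            return False          # no larger value lost exactly the mass that a gained
        unmatched.pop(match)
    return not unmatched          # every loser must be consumed by some gainer

prior     = {-1: 0.2, 0: 0.3, 1: 0.5}
posterior = {-1: 0.4, 0: 0.3, 1: 0.3}   # mass 0.2 moved from s = 1 down to s = -1
print(is_strictly_negative(prior, posterior))   # True
```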
We can now state formally the claim that negative information will reduce support. An analogous statement holds for strictly positive information.
Theorem 1 Let $E_P[X]$ denote the expected value of X in probability distribution P. In a trusting voter model $P^{TrV}$, if voter i's support for k increases monotonically with $s_{ik}$ and $d_k$ is strictly negative about k to voter i, then
$$E_{P^{TrV}}[Y_{ik} \mid D_k = d_k] < E_{P^{TrV}}[Y_{ik}].$$
Proof. Let S = dom(S ik ) − {a1, b1, . . . , am, bm}, where the a , b 's are the triples guaranteed by the strictly-negative property of d k .
$$E_{P^{TrV}}[Y_{ik} \mid D_k = d_k] = \sum_y y \cdot P^{TrV}(Y_{ik} = y \mid D_k = d_k) = \sum_s f_y^{TrV}(s)\, P^{TrV}(S_{ik} = s \mid D_k = d_k)$$
$$= \sum_{s' \in S'} f_y^{TrV}(s')\, P^{TrV}(S_{ik} = s' \mid d_k) + \sum_{\ell=1}^m f_y^{TrV}(a_\ell)\, P^{TrV}(S_{ik} = a_\ell \mid d_k) + \sum_{\ell=1}^m f_y^{TrV}(b_\ell)\, P^{TrV}(S_{ik} = b_\ell \mid d_k)$$
Looking at the three terms of the final sum in turn: since the probability of each $s' \in S'$ is unchanged by conditioning on $d_k$, the first term satisfies
$$\sum_{s' \in S'} f_y^{TrV}(s')\, P^{TrV}(S_{ik} = s' \mid d_k) = \sum_{s' \in S'} f_y^{TrV}(s')\, P^{TrV}(S_{ik} = s'),$$
and the last two terms can be written as
$$\sum_{\ell=1}^m \left[ f_y^{TrV}(a_\ell)\, P^{TrV}(S_{ik} = a_\ell \mid d_k) + f_y^{TrV}(b_\ell)\, P^{TrV}(S_{ik} = b_\ell \mid d_k) \right]$$
$$= \sum_{\ell=1}^m \left[ f_y^{TrV}(a_\ell)\big(P^{TrV}(S_{ik} = a_\ell) + \delta_\ell\big) + f_y^{TrV}(b_\ell)\big(P^{TrV}(S_{ik} = b_\ell) - \delta_\ell\big) \right]$$
$$= \sum_{\ell=1}^m \left[ f_y^{TrV}(a_\ell)\, P^{TrV}(S_{ik} = a_\ell) + f_y^{TrV}(b_\ell)\, P^{TrV}(S_{ik} = b_\ell) \right] + \sum_{\ell=1}^m \delta_\ell \big(f_y^{TrV}(a_\ell) - f_y^{TrV}(b_\ell)\big)$$
$$\le \sum_{\ell=1}^m \left[ f_y^{TrV}(a_\ell)\, P^{TrV}(S_{ik} = a_\ell) + f_y^{TrV}(b_\ell)\, P^{TrV}(S_{ik} = b_\ell) \right],$$
with the last step holding because (a) $\delta_\ell > 0$ and (b) $a_\ell < b_\ell$ (recall that by the monotonicity of $f_y$, if $a_\ell < b_\ell$ then $f_y^{TrV}(a_\ell) \le f_y^{TrV}(b_\ell)$). Combining these gives that
$$E_{P^{TrV}}[Y_{ik} \mid D_k = d_k] \le \sum_{s' \in S'} f_y^{TrV}(s')\, P^{TrV}(S_{ik} = s') + \sum_{\ell=1}^m \left[ f_y^{TrV}(a_\ell)\, P^{TrV}(S_{ik} = a_\ell) + f_y^{TrV}(b_\ell)\, P^{TrV}(S_{ik} = b_\ell) \right]$$
$$= \sum_{s \in \mathrm{dom}(S_{ik})} f_y^{TrV}(s)\, P^{TrV}(S_{ik} = s) = E_{P^{TrV}}[Y_{ik}]$$
This concludes the proof.
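As a quick numerical sanity check on Theorem 1 (all numbers are assumed for illustration), the following toy computation uses a monotone support function and the strictly negative shift from the earlier sketch, and confirms that expected support falls.

```python
# Numerical illustration of Theorem 1 (toy numbers, not from the paper):
# with a monotone support function, a strictly negative shift lowers expected support.
def f_y(s):                       # monotone in s: support 1 iff similarity is positive
    return 1 if s > 0 else 0

prior     = {-1: 0.2, 0: 0.3, 1: 0.5}
posterior = {-1: 0.4, 0: 0.3, 1: 0.3}   # strictly negative shift of mass 0.2

def expected_support(dist):
    return sum(f_y(s) * p for s, p in dist.items())

print(expected_support(prior), expected_support(posterior))   # 0.5 > 0.3
```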
Biased pundits
We now consider a new model, as shown in Fig 4. In this model there is a pundit j, who, like a voter, has a target candidate position Tj. Pundit j observes a private datapoint C k from candidate k-perhaps based on a private communication or research-and then publishes a "biased version" B k of C k . However, B k is chosen under the assumption that some trusting voter i will react to B k as if it were D k in the trusting voter model of Figure 1. Specifically, we assume that B k will provoke a vote Y ik according to the trusting voter model. The utility assigned by j to this outcome is a function of i's vote Y ik , the similarity of S jk of T k to j's target Tj, and a "reputational cost" R jk , which is a function of C k and B k . For instance, R jk might be zero if b k = c k , and otherwise some measure of how embarrassing it might be to j if his deception of replacing c k with b k were discovered. More precisely the model defines a probability distribution generated by this process:
• Pick $t_j \sim P(T_j)$, where $P(T_j)$ is a prior on pundit preferences.
• Pick $t_k \sim P(T_k)$, where $P(T_k)$ is a prior on candidate positions.
• Pick $c_k \sim P(D_k \mid T_k = t_k)$, or equivalently, $c_k = f_D(t_k, \epsilon_D)$. (Notice that we assume $c_k$ is chosen from the same conditional distribution $P(D_k \mid T_k)$ used in the trusting voter model to choose $D_k$; we are using a different variable here to emphasize the different role it will play.)
• Allow pundit j to pick $b_k$, based on some user-chosen distribution $P_\sigma(B_k \mid C_k = c_k)$.
• Pick $r_{jk} \sim P(R_{jk} \mid B_k = b_k, C_k = c_k)$, or equivalently, $r_{jk} = f_R(b_k, c_k, \epsilon_R)$.
• Show $b_k$ to a trusting voter i, presenting it as a sample from $P(D_k \mid T_k = t_k)$, and allow voter i to pick $y_{ik}$ according to the trusting voter model.
• Pick $u_j \sim P(U_j \mid R_{jk} = r_{jk}, S_{jk} = s_{jk}, Y_{ik} = y_{ik})$, or equivalently, let $u_j = f_U(r_{jk}, s_{jk}, y_{ik}, \epsilon_U)$.
To distinguish the two utility functions, we will henceforth use $f_U^{TrV}$ for the utility function $f_U(s, y)$ used in the trusting voter model, and use $f_U^{BiP}$ for the utility function $f_U(r, s, y)$ defined above. As before, we will assume pundit j picks $b_k$, based on available estimates of $S_{jk}$ and knowledge of $C_k$, to maximize the expected utility $u_j$. This can be computed as
$$u_j = \sum_{r,s,y} P^{TrV}(Y_{ik} = y \mid D_k = b_k)\, P(S_{jk} = s \mid D_k = c_k)\, P(R_{jk} = r \mid b_k, c_k)\, f_U^{BiP}(r, s, y, \epsilon_U)$$
where P TrV (Y ik |D k = b k ) is estimated using the trusting voter model; P (S jk |D k = c k ) is computed as in Section 3.2.1, using P (DC k |T k ) and P (S jk |Tj, T k ); and P (R jk = r|b k , c k ) is computed using the given probability function of reputational cost r, as a function of the unbiased data c k and the biased version b k that is released. Since pundit j picks B k to maximize utility, then as before, we can convert this MAID to a Bayes net. Specifically, B k depends on the parents C k and S jk as follows.
$$P^{BiP}(B_k = b \mid C_k = c, S_{jk} = s) = P\left[\, b = \arg\max_{b'} \sum_{r,y} P^{TrV}(Y_{ik} = y \mid D_k = b')\, P(R_{jk} = r \mid b', c)\, \int_{\epsilon_U} f_U^{BiP}(r, s, y, \epsilon_U)\, d\epsilon_U \right]$$
This can be simplified, if we assume that fU BiP and fR are deterministic:
$$P^{BiP}(B_k = b \mid C_k = c, S_{jk} = s) = P\left[\, b = \arg\max_{b'} \sum_{y} P^{TrV}(Y_{ik} = y \mid D_k = b')\, f_U^{BiP}(f_R(b', c), s, y) \right] \qquad (6)$$
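The following sketch (illustrative Python; the message set, the stand-in for $P^{TrV}(Y_{ik} \mid D_k)$, the reputational cost and the utility are all toy assumptions) shows the pundit's choice in Eq. 6 as a small argmax over candidate reports.

```python
# Sketch of the biased pundit's choice of B_k (Eq. 6), with toy components.
MESSAGES = ["motherhood", "guns"]

def p_trv_vote(y, b):
    """Stand-in for P^TrV(Y_ik = y | D_k = b): predicted reaction of the
    targeted trusting voter to message b (toy values)."""
    p_support = {"motherhood": 0.4, "guns": 0.8}[b]
    return p_support if y == 1 else 1.0 - p_support

def f_R(b, c):
    return 0.0 if b == c else 0.3            # reputational cost of deception

def f_U_bip(r, s, y):
    return y * s - r                          # pundit values support when s_jk is high, minus cost

def pundit_choice(c, s_jk):
    def expected_utility(b):
        return sum(p_trv_vote(y, b) * f_U_bip(f_R(b, c), s_jk, y) for y in (0, 1))
    return max(MESSAGES, key=expected_utility)

print(pundit_choice(c="motherhood", s_jk=1.0))   # here the deceptive report "guns" pays off
```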
Suspicious voters
Finally, we introduce a model for a "suspicious voter". Intuitively, this model is simple. As before, we assume that j chooses $b_k$ according to the biased pundit model of Eq. 6, i.e., that j believes i to be a trusting voter. The suspicious voter will then attempt to reason about this correctly, and find the vote $Y_i$ maximizing i's true utility, given the information revealed by j's choice of $b_k$. The MAID and Bayes net versions of this model are shown in Figure 5 and Figure 6 respectively, and probabilities computed in this model will be written as $P^{SuV}$. This suspicious voter model does not have a simple closed-form solution for $P^{SuV}(S)$, as in the trusting voter model. The generative process for the suspicious voter model is identical to the biased pundit model, except that after $b_k$ is chosen according to Eq. 6, i will pick a value of $Y_{ik}$ from a distribution $P^{SuV}(Y_{ik} \mid B_k = b_k)$, which is chosen to maximize the expected value of $u_i$. We will define
$$P^{SuV}(Y_{ik} = y \mid S_{ik} = s) = P\left[\, y = \arg\max_{y'} \int_{\epsilon_U} f_u(y', s, \epsilon_U)\, d\epsilon_U \right]$$
or, assuming determinacy,
$$y_{ik} = f_y^{SuV}(s_{ik}) = \arg\max_{y'} f_u(y', s_{ik}).$$
This leads to a more complex inference problem for voters. Although the process of computing $P(S_{ik} \mid T_i, T_k)$ is unchanged, relative to the trusting voter model, a suspicious voter cannot estimate a distribution over $T_k$ using $d_k$, as in Eq. 2, because $d_k$ is not known. Instead i only has indirect evidence about $d_k$ in the form of $b_k$. However, voter i can use this indirect evidence to compute
$$P^{SuV}(T_k \mid B_k = b_k) = \sum_c P^{TrV}(T_k \mid D_k = c) \cdot P^{BiP}(B_k = b \mid C_k = c) \cdot P^{TrV}(D_k = c) \qquad (7)$$
where $P^{BiP}(b \mid c) = \sum_{t_j} P^{BiP}(b \mid c, t_j)\, P(t_j)$ is the probability, in the biased pundit model, of pundit j publishing $B_k = b$ when $C_k = c$.
Figure 5: Model of a suspicious voter in MAID notation. Figure 6: Model of a suspicious voter in Bayes net notation.
We can now break the summation over c into two cases:
$$P^{SuV}(T_k \mid B_k = b_k) = P^{TrV}(T_k \mid D_k = b) \cdot P^{TrV}(D_k = b) \cdot P^{BiP}(B_k = b \mid C_k = b) \qquad (8)$$
$$\qquad\qquad + \sum_{c \ne b} P^{TrV}(T_k \mid D_k = c) \cdot P^{TrV}(D_k = c) \cdot P^{BiP}(B_k = b \mid C_k = c) \qquad (9)$$
In the term in line 8, j is not altering the original input $c_k$, so we say he is being accurate. In the terms in line 9, j has altered the original input, so we say that j is being deceptive. In order to exploit the information in $b_k$ using Eqs. 8-9 to optimize her utility, the suspicious voter must assess the probability of deception, and adjust inferences about candidate k accordingly, a potentially difficult inference problem.
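The sketch below (illustrative Python; the domains, likelihoods and the publication model $P^{BiP}(b \mid c)$ are toy assumptions, not the pundit's actual optimum) shows how the accurate and deceptive cases of Eqs. 7-9 are combined and renormalized.

```python
# Sketch of Eqs. 7-9: the suspicious voter's posterior over T_k given a published b,
# mixing the "accurate" case (c = b) with every possible "deceptive" case (c != b).
T_VALUES = ["goodLiberal", "goodConserv"]
MESSAGES = ["motherhood", "guns"]
PRIOR_T  = {"goodLiberal": 0.5, "goodConserv": 0.5}
LIK = {("motherhood", "goodLiberal"): 0.6, ("guns", "goodLiberal"): 0.4,
       ("motherhood", "goodConserv"): 0.3, ("guns", "goodConserv"): 0.7}

def p_trv_T_given_d(d):
    unnorm = {t: LIK[(d, t)] * PRIOR_T[t] for t in T_VALUES}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}, z      # posterior and P^TrV(D_k = d)

def p_bip_b_given_c(b, c, alpha=0.7):
    """Toy publication model: accurate with probability alpha, otherwise the other message."""
    return alpha if b == c else (1 - alpha) / (len(MESSAGES) - 1)

def p_suv_T_given_b(b):
    unnorm = {t: 0.0 for t in T_VALUES}
    for c in MESSAGES:
        post_c, p_c = p_trv_T_given_d(c)
        for t in T_VALUES:
            unnorm[t] += post_c[t] * p_c * p_bip_b_given_c(b, c)
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

print(p_suv_T_given_b("guns"))
```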
Implications of the suspicious voter model
To simplify further analysis, we will make some additional assumptions.
• We assume a deterministic reputational cost function fR in the biased pundit model.
• We assume the reputational cost of being accurate is zero, i.e., that $\forall c,\ f_R(c, c) = 0$, and that the reputational cost of being deceptive is greater than zero, i.e., $\forall b \ne c,\ f_R(b, c) > 0$.
• We assume deterministic utility functions fU TrV and fU BiP .
• We assume the utility for pundits is the same as the utility for voters, minus the reputational cost of altering c to b, i.e., that $f_U^{BiP}(r, s, y) \equiv f_U^{TrV}(s, y) - r$.
Given these assumptions, some observations can now be made.
Proposition 1 If the prior $P(T_j)$ is such that $P^{BiP}(b \mid c) = P^{BiP}(b' \mid c')$ for all communications $b, b', c$ and $c'$, then for all $b_k$, $P^{SuV}(T_k \mid b_k) = P^{SuV}(T_k) = P^{TrV}(T_k)$.
In other words, if $P^{BiP}(b \mid c)$ is constant, then a biased pundit's publications $b_k$ convey no information to i. This can be seen immediately by inspection of Eq. 7. Notice that requiring that $P^{BiP}(b \mid c)$ be constant does not imply that any individual pundit simply publishes information b uniformly at random, without regard to c; instead, it says that if one averages over all pundits and considers $\sum_{t_j} P^{BiP}(b \mid c, t_j)\, P(t_j)$, then the cumulative probability of seeing any particular b is constant, and independent of c.
More generally, one can make this observation.
Proposition 2 If the prior $P(T_j)$ is such that (1) $P^{BiP}(c \mid c) = \alpha$ for all c, and (2) $P^{BiP}(b \mid c) = P^{BiP}(b' \mid c)$ for all communications $b \ne c$ and $b' \ne c$, then for all $b_k$,
$$P^{SuV}(T_k \mid b_k) = \alpha\, P^{TrV}(T_k \mid b_k) + (1 - \alpha)\, P^{TrV}(T_k).$$
In other words, if all deceptions are equally likely, but publications are accurate with fixed probability $\alpha$, then a suspicious voter's update to $T_k$ is simply a mixture of her prior belief $P^{SuV}(T_k) = P^{TrV}(T_k)$ and the belief a trusting voter would have, $P^{TrV}(T_k \mid b_k)$, with mixing coefficient $\alpha$. Again, this proposition can be verified immediately by inspection of lines 8-9.
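Proposition 2's mixture form is simple to state in code; the sketch below (toy numbers, assumed trusting posterior) only illustrates the convex combination.

```python
# Sketch of Proposition 2: with accuracy probability alpha and uniform deceptions,
# the suspicious voter's posterior is a mixture of her prior and the trusting posterior.
def mixture_update(prior_T, trusting_posterior, alpha):
    return {t: alpha * trusting_posterior[t] + (1 - alpha) * prior_T[t] for t in prior_T}

prior_T  = {"goodLiberal": 0.5, "goodConserv": 0.5}
trusting = {"goodLiberal": 0.2, "goodConserv": 0.8}   # what a trusting voter would believe after b
print(mixture_update(prior_T, trusting, alpha=0.7))    # {'goodLiberal': 0.29, 'goodConserv': 0.71}
```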
A final observation is that when a biased pundit's preference tj is the same as a voter's preference ti, and this is known to both i and j, then even a suspicious voter will obtain high utility by simply believing j's publication b k . In particular i's utility from adopting the belief that D = b k is just as high as if i had observed c k itself.
Proposition 3 If tj = ti, and b k is a publication from j under the biased pundit model, then the expected utility to i of voting according to P TrV (T k |D k = b k ) is at least as large as the expected utility to i of voting according to P TrV (T k |D k = c k ).
This proposition seems plausible if we recall that $b_k$ was chosen to maximize the utility to j of i's belief in $b_k$ in the trusting voter model; thus, since the utility to i is the same as the utility to j, it seems reasonable that adopting this belief is also useful to i. To establish it more formally, let us define $EU_\ell(b \mid c, t_i)$ to be the expected utility to agent $\ell$ (either i or j), absent reputational costs, of having i adopt the belief in the trusting-voter model that $D_k = b$ when in fact $D_k = c$, if $T_i = t_i$. In other words, we define
$$EU_\ell(b \mid c, t_i) \equiv \sum_{s_k, t_k, y_{ik}} P^{TrV}(T_k = t_k \mid D_k = c)\, P(S_{\ell k} = s_k \mid t_\ell, t_k)\, P^{TrV}(Y_{ik} = y_{ik} \mid T_i = t_i, D_k = b)\, f_u^{TrV}(s_k, y_{ik})$$
Note that the weighting in the factors $P^{TrV}(T_k = t_k \mid D_k = c)\, P(S_{\ell k} = s_k \mid t_\ell, t_k)$ holds for both i and j, because here we care about the true distribution over $T_k$, as deduced from c. The weighting in the factor $P^{TrV}(Y_{ik} = y_{ik} \mid T_i = t_i, D_k = b)$ arises because for both i and j, utility is based on i's estimated support $y_{ik}$ for k given i's known preference $t_i$.
If we assume that ti = tj = t, then this simplifies to
$$EU_i(b \mid c, t_i) = EU_j(b \mid c, t_i) = \sum_{s, t_k, y_{ik}} P^{TrV}(T_k = t_k \mid D_k = c)\, P^{TrV}(S = s \mid t, t_k)\, P^{TrV}(Y_{ik} = y_{ik} \mid T_i = t, D_k = b)\, f_u(s, y_{ik})$$
and hence we see that in this case, the functions for i and j are indeed the same. Since j has chosen b to maximize $EU_j(b' \mid c, t_i) - f_R(b', c)$ over $b'$, and $f_R$ is never negative, clearly $EU_j(b \mid c, t_i) \ge EU_j(c \mid c, t_i)$, and so $EU_i(b \mid c, t_i) \ge EU_i(c \mid c, t_i)$ as well.
As noted in Section 2, there are a number of papers analyzing media bias in which voters are assumed to prefer "good news" (i.e., news biased towards their favored candidates), leading to fragmentation and specialization as media companies differentiate by providing news biased for their readers. The results above suggest a rational reason for picking a news source j with the same partisan preferences as one's self: in particular, this sort of news is computationally easier to process. Similarly, biased sources with unknown preferences are "less informative", in the sense that new information leads to smaller changes in support (relative to unbiased sources, or well-aligned partisan sources).
Suspicious voters and "irrationality"
Finally, we address the question of whether suspicious voters can behave in the counter-intuitive manner discussed in the introduction: whether information about the candidate k that is negative (as interpreted by a trusting voter) can increase support for a suspicious voter. We will show that this is possible.
Theorem 2 It can be the case that information b will decrease i's support for k in the trusting voter model, and increase i's support for k in the suspicious voter model: i.e., it may be that
$$E_{P^{TrV}}[Y_{ik} \mid D_k = b] < E_{P^{TrV}}[Y_{ik}] \quad\text{but}\quad E_{P^{SuV}}[Y_{ik} \mid D_k = b] > E_{P^{SuV}}[Y_{ik}].$$
Intuitively, this happens when i believes strongly that j is being deceptive, and i has different candidate preferences from j. The theorem asserts the existence of such behavior, so we are at liberty to make additional assumptions in the proof (preferably, ones that could be imagined to hold in reality).
In the proof, we assume that j's preferences Tj are known to i. This is plausible since context may indicate, for instance, that j is a strong conservative. This does not affect the basic model, since we allow the case of an arbitrary prior on Tj.
We also assume that all information about candidates is either strictly positive for i and strictly negative for j, or else strictly negative for i and strictly positive for j. To see how this is possible, first imagine a hypercube-like space of candidate positions, as in Figure 3, and assume that Ti and Tj are on opposite corners of the cube. The cube might indicate, for example, positions on the environment, abortion, and increased military spending, with i preferring the liberal position on all three and j preferring the conservative positions. Then assume that all information indicates the probability of the candidate's position along these three axis; in this case the assumption is satisfied.
Given these two assumptions, the statement of the theorem holds. First, we need a slightly stronger version of Theorem 1. Informally, this states that if i has partial knowledge of b, but does know that b is strictly negative for i, then i's support for k will be weakened.
Corollary 1 Suppose H is a probability distribution over items of information b, and b is strictly negative for i for every b with non-zero probability under H. Let $P^{TrV}(Y_{ik} \mid H)$ denote $\sum_b P^{TrV}(Y_{ik} \mid D_k = b)\, P_H(b)$. Then $E_{P^{TrV}}[Y_{ik} \mid H] < E_{P^{TrV}}[Y_{ik}]$.
Proof. It is clear that $P(Y \mid H) = \sum_b P(Y \mid b)\, P_H(b)$, and also, by marginalization over the (unrelated) variable H, we see that $P(Y) = \sum_b P(Y)\, P_H(b)$. Since for all b with non-zero probability under H we have that $E[Y \mid b] < E[Y]$, the result holds. This concludes the proof.
We can now prove Theorem 2. Proof. Suppose information b that is strictly negative for i is observed by i, and consider again the formula for $P^{SuV}(T_k \mid B_k = b_k)$ given on lines 8 and 9. This shows that the suspicious voter will reason by cases. In one case, j is being accurate, and the change in probability for $T_k$ is in the same direction as in the trusting voter model, and as noted above, this will lead to a belief update that decreases support for k. However, this change is down-weighted by the factor $P^{BiP}(B_k = b \mid C_k = b)\, P^{TrV}(D_k = b)$, which can be interpreted as the probability that b was really observed times the probability that j chooses to report accurately. We will assume that $P^{TrV}(D_k = b)$ is very small, so that
$$P^{SuV}(T_k \mid B_k = b_k) \approx \sum_{c \ne b} P^{TrV}(T_k \mid D_k = c) \cdot P^{TrV}(D_k = c) \cdot P^{BiP}(B_k = b \mid C_k = c)$$
In this case, j is being deceptive.
Reasoning in this case is complicated by the fact that i must consider all inputs c that could have been observed, and compute the product of $P^{TrV}(D_k = c)$, the prior probability of c, and $P^{BiP}(B_k = b \mid C_k = c)$, the probability of b being reported in place of c by j. However, since the preferences of i and j are opposite, the latter is quite informative: in particular, since b was deceptively chosen by j to be negative for i, then j must prefer that i give weaker support for k, implying that c is actually negative for j, and hence positive for i. This holds for every c that could have led to the deceptive report b. By Corollary 1 (or rather, its analogue for strictly positive information), the net change in support for i in the suspicious voter model is positive. This concludes the proof.
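A toy numerical instance of Theorem 2 is sketched below (all numbers are assumed, and the deception model is stipulated for illustration rather than derived from the pundit's optimization): the same report "bad" lowers a trusting voter's expected support but raises a suspicious voter's, because the suspicious voter infers that a deceptive "bad" most likely replaced strongly favorable news.

```python
# Toy illustration of Theorem 2 (assumed numbers, stipulated deception model).
T_VALUES = ["likesI", "likesJ"]
PRIOR_T  = {"likesI": 0.5, "likesJ": 0.5}
MESSAGES = ["strongGood", "mildGood", "bad"]
LIK = {"likesI": {"strongGood": 0.70, "mildGood": 0.28, "bad": 0.02},
       "likesJ": {"strongGood": 0.05, "mildGood": 0.55, "bad": 0.40}}
# Pundit j (opposed to i) reports "bad" accurately, converts the most damaging item
# "strongGood" into "bad" with high probability, and leaves "mildGood" alone.
P_BIP_BAD_GIVEN = {"bad": 1.0, "strongGood": 0.9, "mildGood": 0.0}

def support(belief):                       # expected support = believed P(T_k = likesI)
    return belief["likesI"]

def trusting(b):
    unnorm = {t: LIK[t][b] * PRIOR_T[t] for t in T_VALUES}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def suspicious(b="bad"):
    unnorm = {t: sum(LIK[t][c] * PRIOR_T[t] * P_BIP_BAD_GIVEN[c] for c in MESSAGES)
              for t in T_VALUES}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

print(support(PRIOR_T), support(trusting("bad")), support(suspicious("bad")))
# prior 0.50; trusting drops to ~0.05; suspicious rises to ~0.59
```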
Concluding Remarks
To summarize, we propose a model in which there are two classes of voters, trusting voters and suspicious voters, and two types of information sources, unbiased sources and biased sources. Information from an unbiased source is modeled simply as observations D k that probabilistically inform a voter about a candidate k's positions, and trusting voters are voters that treat information about a candidate as coming from an unbiased source. We show that reasoning about new information is computationally easy for trusting voters, and that trusting voters behave intuitively: in particular, negative information about candidate k (according to a particular definition) will decrease support for k, and positive information will increase support.
Biased sources are information sources j who plan their communications in order to encourage trusting voters to vote in a particular way. To do this, they report some possibly-modified version $B_k$ of a private observation $C_k$, concealing the original $C_k$. In the model, $B_k$ is chosen based on the utility to j of the probable effect of $B_k$ on a trusting voter i.
Finally, suspicious voters model the behavior of biased sources. In general this is complicated to do, however, some special cases lead to simple inference algorithms. For instance, under one set of assumptions, all information from biased sources can be ignored. In another set of assumptions, a suspicious voter will make the same sort of updates to her beliefs as a trusting voter, but simply make them less aggressively, discounting the information by a factor related to the probability of deception.
Another interesting tractable reasoning case for suspicious voters is when the biased information source j has the same latent candidate preferences as the voter i. In this case, suspicious voters can act the same way a trusting voter would; even if the information acted on is false, it is intended to achieve a result that is desirable to i (as well as j).
These results are of some interest in light of the frequently-observed preference of partisan voters to collect information from similarly partisan information channels. The results suggest a possible explanation for this, in terms of information content. The optimal way to process information from a possibly-deceptive unknown source, or a source with a known-to-be-different partisan alignment, is to either discount it, ignore it, or else employ complicated (and likely computationally complex) reasoning schemes. However, reports from a partisan source with the same preferences as a voter can be acted on as if they were trusted, even if the reports are actually deceptive.
Finally, we show rigorously that a suspicious voter can, in some circumstances, increase support for a candidate k after receiving negative information about k. Specifically, information that would decrease support for a trusting voter might increase support for a suspicious voter, if she believes the source has different candidate preferences, and if she believes the source is being deceptive. This behavior mimics behavior attributed elsewhere to "motivated reasoning", but does not arise from "irrationality"; instead it is a result of the voter's correct identification of, and compensation for, an ineffective attempt at manipulation on the part of the information source.
We should note that this hypothesis does not suggest that emotion is not present in such situations; in fact, it seems likely that reports viewed as deceptive would indeed provoke strong emotional responses. It does suggest that some of the emotion associated with these counterintuitive updates may be associated with mechanisms that have an evolutionary social purpose, rather than being a result of some imperfect adaptation of humans to modern life.
It seems plausible that additional effects can be predicted from the suspicious-voter model. For instance, although we have not made this conjecture rigorous, if a biased source j does not know i's political preferences, then deceptive messages b will likely tend to be messages that would be interpreted as negative by most voters. For instance, j might assert that the candidate violates some cultural norm, or holds extremely unpopular political views. (Or, on the other hand, j might assert that the candidate has a property that almost all voters agree is "good".) Arguably, most information from deceptive partisan sources would be of this sort, rather than discussion of stands on widely-disagreed-on issues (like gay marriage or abortion in the US).
As they stand, however, the results do suggest a number of specific predictions about how information might be processed in a social setting. First, information provided by persons believed to have political alignments similar to voter i will be more easily assimilated, and have more effect on the views of i, than information provided by persons believed to have different political alignments. Second, information provided by persons with political alignments similar to voter i will be assimilated at roughly the same speed (and with the same impact) as information from a believed-to-be-neutral source. Third, information provided by persons with political alignments different from voter i may lead to counterintuitive updates, while information from similarly-aligned sources or neutral sources will not. An important topic for future work would be testing these predictions, for instance using the DPTE methodology.
Figure 3: An illustration of strictly negative information for a voter i.
Figure 4: Model of a biased pundit, in MAID and Bayes net notation.
Table 2: Conditional probability tables (CPT) for a sample example of the trusting voter model. $P(Y_i = 1 \mid S_{ik} = s)$: see Eq. 1. In the table for $s_{ik} \mid t_i, t_k$, $s_{ik}$ is drawn uniformly from the set {−1, 0, +1} and for pairs ...

    candidate position t_k    communication    probability
    goodLiberal               safety-net       0.4
    goodLiberal               motherhood       0.6
    goodConserv               guns             0.3
    goodConserv               motherhood       0.7
    evilLiberal               safety-net       0.9
    evilLiberal               chthulu          0.1
    evilConserv               guns             0.8
    evilConserv               chthulu          0.2
P. A. Beck, R. J. Dalton, S. Greene, and R. Huckfeldt. The social calculus of voting: Interpersonal, media, and organizational influences on presidential choices. American Political Science Review, 96(01):57-73, 2002.
D. Bernhardt, S. Krasa, and M. Polborn. Political polarization and the electoral effects of media bias. Journal of Public Economics, 92(5-6):1092-1104, 2008.
Jeremy Burke. Unfairly balanced: Unbiased news coverage and information loss. In Annual Meeting of the American Political Science Association, Chicago, IL, 2007.
M. Burke, E. Joyce, T. Kim, V. Anand, and R. Kraut. Introductions and requests: Rhetorical strategies that elicit online community response. In Proceedings of the Third Communities and Technologies Conference, New York, NY, pages 21-40, 2007.
Andrew J. W. Civettini and David P. Redlawsk. Voters, emotions, and memory. Political Psychology, 30(1):125-151, 2009.
V. P. Crawford and J. Sobel. Strategic information transmission. Econometrica, pages 1431-1451, 1982.
S. DellaVigna and M. Gentzkow. Persuasion: Empirical evidence. Annual Review of Economics, 2:643-670, 2010.
J. Duggan and C. Martinelli. A spatial theory of media slant and voter choice. The Review of Economic Studies, 78(2):640, 2011.
D. A. Graber and J. M. Smith. Political communication faces the 21st century. Journal of Communication, 55(3):479, 2005.
D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. Games and Economic Behavior, 45(1):181-221, 2003.
Z. Kunda. The case for motivated reasoning. Psychological Bulletin, 108(3):480-498, 1990.
M. Lodge, C. Taber, and C. Weber. First steps toward a dual-process accessibility model of political beliefs, attitudes, and behavior. In Feeling Politics: Emotion in Political Information Processing, pages 11-30, 2006.
Milton Lodge, Kathleen M. McGraw, and Patrick Stroh. An impression-driven model of candidate evaluation. The American Political Science Review, 83(2):399-419, 1989.
Milton Lodge and Charles Taber. Three Steps toward a Theory of Motivated Political Reasoning, pages 183-213. Cambridge University Press, 2000.
P. Milgrom and J. Roberts. Relying on the information of interested parties. The RAND Journal of Economics, pages 18-32, 1986.
S. Mullainathan, J. Schwartzstein, and A. Shleifer. Coarse thinking and persuasion. The Quarterly Journal of Economics, 123(2):577, 2008.
M. Ottaviani and F. Squintani. Naive audience and communication bias. International Journal of Game Theory, 35(1):129-150, 2006.
A. Pfeffer. Networks of influence diagrams: A formalism for representing agents' beliefs and decision-making processes. Journal of Artificial Intelligence Research, 33:109-147, 2008.
D. P. Redlawsk. Feeling Politics: Emotion in Political Information Processing. Palgrave Macmillan, 2006.
David P. Redlawsk. Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision making. The Journal of Politics, 64(04):1021-1044, 2002.
David P. Redlawsk, Andrew J. W. Civettini, and Karen M. Emmerson. The affective tipping point: Do motivated reasoners ever "get it"? Political Psychology, 31(4):563-593, 2010.
L. Rendell, R. Boyd, D. Cownden, M. Enquist, K. Eriksson, M. W. Feldman, L. Fogarty, S. Ghirlanda, T. Lillicrap, and K. N. Laland. Why copy others? Insights from the social learning strategies tournament. Science, 328(5975):208, 2010.
M. J. Salganik, P. S. Dodds, and D. J. Watts. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854, 2006.
Martin Shubik. Game theory and political science. Paper No. 351, 1973.
D. F. Stone. Ideological media bias. Journal of Economic Behavior & Organization, 78:256-271, 2011.
| []
|
[
"Programmable Cellular Automata Based Efficient Parallel AES Encryption Algorithm",
"Programmable Cellular Automata Based Efficient Parallel AES Encryption Algorithm"
]
| [
"Debasis Das [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology\nPatna-800013Patna, BiharIndia\n",
"Rajiv Misra [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology\nPatna-800013Patna, BiharIndia\n"
]
| [
"Department of Computer Science and Engineering\nIndian Institute of Technology\nPatna-800013Patna, BiharIndia",
"Department of Computer Science and Engineering\nIndian Institute of Technology\nPatna-800013Patna, BiharIndia"
]
| [
"International Journal of Network Security & Its Applications (IJNSA)"
]
| Cellular Automata(CA) is a discrete computing model which provides simple, flexible and efficient platform for simulating complicated systems and performing complex computation based on the neighborhoods information. CA consists of two components 1) a set of cells and 2) a set of rules .Programmable Cellular Automata(PCA) employs some control signals on a Cellular Automata(CA) structure. Programmable Cellular Automata were successfully applied for simulation of biological systems, physical systems and recently to design parallel and distributed algorithms for solving task density and synchronization problems. In this paper PCA is applied to develop cryptography algorithms. This paper deals with the cryptography for a parallel AES encryption algorithm based on programmable cellular automata. This proposed algorithm based on symmetric key systems. | 10.5121/ijnsa.2011.3615 | [
"https://arxiv.org/pdf/1112.2021v1.pdf"
]
| 35,896,032 | 1112.2021 | b805e8bbf48122bf2d67ca444053e899663b7293 |
Programmable Cellular Automata Based Efficient Parallel AES Encryption Algorithm
November 2011
Debasis Das [email protected]
Department of Computer Science and Engineering
Indian Institute of Technology
Patna-800013Patna, BiharIndia
Rajiv Misra [email protected]
Department of Computer Science and Engineering
Indian Institute of Technology
Patna-800013Patna, BiharIndia
Programmable Cellular Automata Based Efficient Parallel AES Encryption Algorithm
International Journal of Network Security & Its Applications (IJNSA)
36November 201110.5121/ijnsa.2011.3615CAPCACryptographyAESSymmetric Key
Cellular Automata(CA) is a discrete computing model which provides simple, flexible and efficient platform for simulating complicated systems and performing complex computation based on the neighborhoods information. CA consists of two components 1) a set of cells and 2) a set of rules .Programmable Cellular Automata(PCA) employs some control signals on a Cellular Automata(CA) structure. Programmable Cellular Automata were successfully applied for simulation of biological systems, physical systems and recently to design parallel and distributed algorithms for solving task density and synchronization problems. In this paper PCA is applied to develop cryptography algorithms. This paper deals with the cryptography for a parallel AES encryption algorithm based on programmable cellular automata. This proposed algorithm based on symmetric key systems.
INTRODUCTION
A Cellular Automaton (CA) [1] is a computing model of complex systems using simple rules. Researchers, scientists and practitioners from different fields have exploited the CA paradigm of local information, decentralized control and universal computation for modeling different applications. Wolfram [1] has investigated cellular automata using empirical observations and simulations. For a 2-state, 3-neighborhood CA, the evolution of the ith cell can be represented as a function of the present states of the (i−1)th, ith, and (i+1)th cells (shown in Figure 1) as: x_i(t+1) = f(x_{i−1}(t), x_i(t), x_{i+1}(t)), where f represents the combinational logic. For a 2-state, 3-neighborhood cellular automaton there are 2^3 = 8 distinct neighborhood configurations and 2^8 = 256 distinct mappings from these neighborhood configurations to the next state, each mapping representing a CA rule. Cryptography and network security are major concerns due to the rapid development of information technology applications. Cryptographic techniques [2] fall into two categories: (1) symmetric key and (2) public key. A CA-based public cipher was proposed by Guan [3]. A stream CA-based encryption algorithm was first proposed by Wolfram [4]. Block encryption using hybrid additive cellular automata was proposed by Petre Anghelescu et al. [5]. Cellular automata computations and secret key cryptography were proposed by F. Seredynski et al. [6]. A block cipher based on reversible cellular automata was proposed by M. Seredynski and P. Bouvry [7].
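To make the rule-number encoding concrete, here is a minimal sketch (illustrative Python, assuming null boundaries on both ends, which the paper also uses for its null boundary CA) of one synchronous update step of a 2-state, 3-neighborhood CA under a Wolfram rule number.

```python
# Sketch: one synchronous update of an elementary (2-state, 3-neighbourhood) CA
# under a Wolfram rule number, with null (zero) boundaries.
def step(cells, rule):
    n = len(cells)
    out = []
    for i in range(n):
        left  = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right      # neighbourhood as a 3-bit number
        out.append((rule >> idx) & 1)                     # look up that bit of the rule
    return out

print(step([0, 1, 1, 0, 1], 90))    # rule 90: XOR of the two neighbours
```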
Concept of Cellular Automata
Cellular Automata(CA) [1] is a collection of cells and each cell change in states by following a local rule that depends on the environment of the cell. The environment of a cell is usually taken to be a small number of neighboring cells. Figure 2 shows two typical neighborhood options (a) Von Neumann Neighborhood (b) Moore Neighborhood.
Concept of Programmable Cellular Automata
In Programmable Cellular Automata (PCA) [1], the Combinational Logic (CL) of each cell is not fixed but controlled by a number of control signals. As the matter of fact, PCA are essentially a modified CA structure. It employs some control signals on a CA structure. By specifying certain values of control signals at run time, a PCA can implement various functions dynamically in terms of different rules. A huge flexibility into this programmable structure can be introduced via control signals in CL. For an n-cell CA structure can be used for implementing 2 n CA configurations. In Figure 3 shows a 3-cell programmable CA structure and a PCA cell.
Type of Cellular Automata
Different variation of CA have been proposed to ease the design and modeling of complex Systems.
Linear CA
The Linear Cellular Automata have been explored by S. Nandi, B.K. Kar, and P. Pal Chaudhuri et al. [10]. If the Rule of CA involves only XOR logic then it is called the linear rules .A CA with all the cells having linear rules is called linear CA. In linear CA, the next state function applied at each cell follows the operation of Galois field(GF()) [11]. The linear CA are also termed as GF(q) CA where q is a prime number.
Complement CA
The Complement Cellular Automata have been explored by S. Nandi, B.K. Kar, and P. Pal Chaudhuri et al [10]. If the Rule of CA involves only XNOR logic then it is called the Complement rules . A CA with all the cells having Complements rules is called Complement CA.
Additive CA
The Additive Cellular Automata have been explored by S.Nandi, B.K. Kar, and P. Pal Chaudhuri et al [10].A CA having a combination of XOR and XNOR rules is called Additive CA. They matrix algebraic tools that characterize Additive CA and help develop its applications in the field of VLSI testing. The Additive CA schemes based on easily testable FSM, bit-error correcting code, byte error correcting code, and characterization of 2D cellular automata. The Additive CA used in universal pattern generation, data encryption, and synthesis of easily testable combinational logic. The new characterizations of additive CA behavior , Additive CA-based tools for fault diagnosis, and a wide variety of applications to solve real-life problems.
Uniform CA
The Uniform Cellular Automata have been explored by S.Nandi, B.K. Kar, and P. Pal Chaudhuri et al [10]. If all the cells obey the same rule,then the CA said to be a Uniform CA.
Hybrid CA
The Hybrid Cellular Automata have been explored by P. Anghelescu,S. Ionita and E. Sofron et al [10].If all the cells obey the different rule, then the CA said to be a Hybrid CA. The hybrid CA has been especially applied in a linear/additive variant in which the rule set can be analyzed through matrix algebra [10]. In [11] Das has shown that a three neighborhood additive CA can be represented by a tri diagonal matrix a matrix which has the elements of its diagonal and two off-diagonals as non-zero. The properties of CA with varying (non-uniform) neighborhoods.
Null Boundary CA
The Null Boundary Cellular Automata have been explored by A. Kundu and A.R.Paul et al. [8].A CA said to be a null boundary CA if both the left and right neighbour of the leftmost and rightmost terminal cell is connected to logic 0. One-dimensional (1D) Cellular Automata (CA)over finite fields are studied in which each interior (local) cell is updated to contain the sum of the previous values of its two nearest (left & right) neighbors along with its own cell value. Boundary cells are updated according to Null Boundary conditions. For a given initial configuration, the CA evolves through state transitions to an attracting cycle which is defined as attractor / basin . The number of cycles can be determined from the minimal polynomial and characteristic polynomial of the updated matrix which is formed by the linear CA. For detailed theoretical study, follow [10]. But, in case of non-linear CA, matrix can not be formed since it does not follow any regular mathematics.
Periodic Boundary CA
The Periodic Boundary Cellular Automata have been explored by P. Anghelescu,S. Ionita and E. Sofron et al [8].In Periodic Boundary CA the rightmost cell as the left neighbour of leftmost cell. Similarly ,the leftmost cell is considered as the right neighbour of rightmost cell. So, it is like a circular linked list data structure.
Programmable CA
The Programmable Cellular Automata have been explored by P. Anghelescu,S. Ionita and E. Sofron et al [12].A CA is called Programmable CA if it employs some control signals. By specifying values of control signal at run time, programmable CA can implement various function dynamically.
Reversible CA
The Reversible Cellular Automata have been explored by M. Seredynski and P. Bouvry et al [7]. A CA is said to be reversible CA in the sense that the CA will always return to its initial state. The Interesting Property of Being the Reversible which Means that not only forward but also reverse iteration is possible. Using Reversible Rule it is always possible to return to an initial state of CA at any point. One Rule is used for forward iteration and Another Rule, reversible to the first one ,is used for backward iteration This type CA used in Cryptography.
Non-Linear CA
The Non-Linear Cellular Automata have been explored by S. Das et al [13]. In non linear CA we are used CA with all possible logic. This paper establishes the non-linear CA as a powerful pattern recognizer.
Generalized Multiple Attractor CA
The special class of CA, referred to as GMACA [15] (Generalized Multiple Attractor Cellular Automata), is employed for the design. The desired CA model, evolved through an efficient implementation of genetic algorithm, is found to be at the edge of chaos. Cellular automata are mathematical idealizations of complex systems in discrete space and time.
Fuzzy CA:
The Fuzzy Cellular Automata have been explored by P. Maji and P. Pal Chaudhuri et al [14]. Fuzzy CA means CA with fuzzy logic. Application of fuzzy CA in pattern recognition. A special class of CA referred to as Fuzzy CA (FCA) [14] is employed to design the pattern classifier. In simple CA can handle only the Binary Patterns. In Fuzzy Cellular Automata, Each cell assumes a state and a Rational Value in [0,1].If We develop Hybrid System using CA then it is the combination of CA, Neural Network and fuzzy set or the combination of CA, Fuzzy set and Rough set.
Advantages of CA in Various Research Fields
Sequential Fault Convergence
In Hardware Implementation [9] of CA, the experimental Result show that our cellular Automata produces better sequential fault convergence then the linear feedback shift register .Here we are applying the linear hybrid cellular automata rules [12].
Memorizing Capacity
The memorizing capacity of a highbred 3-neighborhood CA is better then that of Hopfield network. the Hopfield network is the model of neural network known for it association capacity.
Simulation Performance
A cellular Automata Machine can achieve simulation performance of at least several order of magnitude higher than that can be achieved with a conventional computer at compactable cost.
Theoretical Framework
A theoretical framework to study CA evolution based on graph theoretic formulation. A graph named as RVG ( Rule Vector Graph ) can be derived from the rule vector of a CA employing linear and non-linear rules. CA evolution can be characterized from the study of RVG properties.
Soft Computing
A soft computing tool for CA synthesis A methodology is under development for evolution of SOCA ( Self Organizing CA ) to realize a given global behavior.
Modeling Tools
Modeling Tools Based on the CA theory developed, a general methodology is under development to build a CA based model to simulate a system. The modeling tool enables design of a program to be executed on PCA ( Programmable CA) to simulate the given system environment.
Pattern recognition
Pattern recognition in the current Cyber Age, has got wide varieties of applications. CA based Pattern Classification / Clustering methodologies are under development based on the theoretical framework.
CA-Encompression
CA-Encompression (Encryption + Compression ) ,In the current cyber age, large volume of different classes of data -text, image, graphics, video, audio, voice, custom data files are stored and/or transferred over communication links. Compression and security of such data files are of major concern. Solutions to these problems lie in the development of high speed low cost software/hardware for data compression and data encryption. CA-Encompression technology is being developed as a single integrated operation for both compression and encryption of specific classes of data files such as medical image, voice data, video conference , DNA sequence, Protein sequence etc. Both lossy and lossless encompression are under development based on CA model.
CA Compression
Standalone CA Compression or CA-Encryption Technology Instead of a single integrated operation of compression and encryption, if a user demands only Compression or only Encryption, it can be supported using standalone packages (software / hardware version).
CA Based AES
CA based AES (Advanced Encryption System) ,As AES is the most popular security package, CA based implementation of AES algorithm in underway for development of low cost, high speed hardwired version of AES, is under development.
AES Encryption Algorithm
The Advanced Encryption Standard [2] is a block cipher that encrypts and decrypts a data block of 128 bits. It provides extra flexibility over that required of an AES candidate, in that both the key size and the block size may be chosen to be any of 128, 192, or 256 bits, but for the Advanced Encryption Standard (AES) the only block length allowed is 128 bits. It uses 10, 12 or 14 rounds [2]. The number of rounds depends on the key size, which can be 128, 192 or 256 bits [2].
General Design of AES Encryption
In Figure 4 [2] shows the general design for the encryption algorithm; the decryption algorithm [2] is similar, but round keys are applied in the reverse order. In this figure-4 Nr defines the number of rounds. There is a relationship between number of rounds and the key size, which means we can have different AES versions; they are AES-128, AES-192 and AES-256. The round keys, which are created by the key-expansion algorithm, are always 128 bits, the same size as the plaintext or cipher text block.
The above figure 4 shows the structure of each round. Each round takes a state and creates another state to be used for the next transformation or the next round. The pre-round section uses only one transformation (AddRoundKey); the last round uses only three transformation(MixColumns transformation is missing).
To provide security, AES uses four types of transformations: substitution, permutation, mixing and key adding.
Substitution
The first transformation, SubBytes, is used at the encryption site. In the SubByte transformation, the state is treated as a 4x4 matrix of bytes. Transformation is done one byte at a time. The SubByte operation involves 16 independent byte-to-byte transformation. This transformation is non-linear byte transformation.
InvSubByte is the inverse of SubBytes. The transformation is used at decryption site.
Permutation
Next transformation in round is shifting, which permutes the bytes. Shifting is done at the byte level. In the encryption the transformation is called ShiftRows and the shifting is to the left. The number of shifts depends on the row number(0,1,2 or 3) of the state matrix.
In the decryption, the shifting is called InvShiftRows and the shifting is to the right.
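As a concrete illustration of the row-wise permutation just described, the following sketch (illustrative Python; byte values are replaced by small integers) applies ShiftRows and its inverse to a 4x4 state, with row r rotated by r positions.

```python
# Sketch of the ShiftRows permutation on a 4x4 AES state (row r rotated left by r bytes);
# InvShiftRows rotates right by the same amounts.
def shift_rows(state):
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def inv_shift_rows(state):
    return [row[-r:] + row[:-r] if r else row[:] for r, row in enumerate(state)]

state = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]
assert inv_shift_rows(shift_rows(state)) == state
print(shift_rows(state))   # [[0,1,2,3], [5,6,7,4], [10,11,8,9], [15,12,13,14]]
```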
Mixing
The mixing transformation changes the contents of each byte by taking four bytes at a time and combining them to recreate four new bytes. The mixing can be provided by matrix multiplication. The MixColumn transformation operates at the column level; it transforms each column of the state to a new column. The transformation is actually a matrix multiplication of a state column by a constant square matrix.
The InvMixColumn transformation is basically the same as the MixColumns transformation and it is used at the decryption site.
Key Adding
AddRoundKey also proceeds one column at a time. AddRoundKey adds a round key word with each state column matrix.
Analysis of AES
a. AES is more secure than DES due to the larger key size. For DES we need 2^56 tests to find the key; for AES we need 2^128 tests to find the key.
b. The strong diffusion and confusion provided by the different transformations remove any frequency pattern in the plaintext.
c. The algorithms used in AES are so simple that they can be easily implemented using cheap processors and a minimum amount of memory.
PROPOSED AES ENCRYPTION ALGORITHM BASED ON PCA
Introduction
The Programmable Cellular Automata is based on the elementary CA. The proposed scheme is based on two CA: one is an elementary CA and the other is a PCA. This PCA is used to provide real-time keys for the block cipher. The block diagram of the programmable cellular automata encryption system is presented in Figure 5.
11: The cipher text is deciphered into plain text.
Rules for PCA
The rules specify the evolution of the PCA from the neighborhood configuration to the next state and these are presented in Table 1. The operation of the simple PCA can be represented by the state transition graph. Each node of the transition graph represents one of the possible states of the PCA. The directed edges of the graph correspond to a single time step transition of the automata.
Procedure to Construct Transition Diagram
Consider the rule vector <51, 51, 195, 153> of length 4, so the total number of states is 2^4 = 16 (0000 to 1111). Using this rule vector, if the start state is 0000 (with the leftmost and rightmost cells' missing neighbours connected to logic 0), the next state is 1111, as shown in Figure 6, and continuing the process the automaton finally returns to state 0000, completing a cycle. If the start state is 0001, the next state is 1110 (shown in Figure 7), and continuing the process it returns to 0001, completing a cycle. If the start state is 0100, the next state is 1001 (shown in Figure 8), and the process likewise returns to 0100. If the start state is 0101, the next state is 1000 (shown in Figure 9), returning to 0101 after a complete cycle. Figure 10 shows the resulting state transition diagram.
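A short sketch of this transition (illustrative Python, using the per-rule combinational logic quoted near Figure 5: rule 51 is NOT of the cell, rule 195 is the left neighbour XNOR the cell, rule 153 is the cell XNOR the right neighbour, with null boundaries) reproduces the 0-15-2-13-0 cycle.

```python
# Sketch: one step of the 4-cell PCA with null boundaries and rule vector <51, 51, 195, 153>.
def pca_step(cells):
    padded = [0] + list(cells) + [0]                 # null boundary on both ends
    rules = [51, 51, 195, 153]
    out = []
    for i, rule in enumerate(rules, start=1):
        left, me, right = padded[i - 1], padded[i], padded[i + 1]
        if rule == 51:
            out.append(1 - me)                       # NOT(self)
        elif rule == 195:
            out.append(1 - (left ^ me))              # left XNOR self
        elif rule == 153:
            out.append(1 - (me ^ right))             # self XNOR right
    return out

state = [0, 0, 0, 0]
for _ in range(4):
    state = pca_step(state)
    print(state)     # 1111 -> 0010 -> 1101 -> 0000, i.e. the 0-15-2-13-0 cycle of Figure 6
```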
PERFORMANCE ANALYSIS
The ICEBERG [9] scheme, proposed with the objective of efficient hardware implementation, was not efficient for software implementation. The execution speeds of the AES code and the proposed code were measured on an Intel Core 2 Duo 2.0 GHz, on the OpenMP platform. The results are tabulated in Table 3.
Table 3: Execution Time for AES and Proposed Scheme
The implementation speed of our scheme was found to be faster than AES for all key sizes. This could be possible due to the inherent parallelism of PCA. Performance results of AES and the proposed scheme are shown in Figure 11. The comparison of AES and the proposed scheme is based on execution time (in microseconds) for different key sizes (128-bit, 192-bit, 256-bit).
CONCLUSION
The proposed model in this paper presents a parallel AES encryption algorithm which is based on Programmable Cellular Automata(PCA). PCA provides higher parallelism and simplification of software implementation. The AES Encryption algorithm is being implemented on a parallel platform (OpenMP) which ensures high encryption/decryption speed. The proposed model of this paper can be implemented on other parallel platform (other than OpenMP) which ensure more security with minimum processing time. Further development of a parallel AES encryption algorithm using two CA concepts PCA and Reversible Cellular Automata (RCA). In the PCA based efficient parallel encryption algorithm , the same cipher text may be generated from different plain text which is based on the different PCA rule configuration.
Figure 1: One-dimensional Cellular Automata
Figure 2: (a) Von Neumann Neighborhood (b) Moore Neighborhood
Figure 3: (a) A 3-cell Programmable CA Structure (b) A PCA cell
Figure 4: AES Block Diagram
Figure 5: Block Diagram of AES Encryption System Based on PCA

2.2 Proposed Algorithm
Algorithm: AES Enciphering and Deciphering Process Based on PCA
Input: Given Plain Text / Cipher Text
Output: Cipher Text / Plain Text
1: Enter the initial state of the PCA, convert the decimal value to binary and store it in an array, and apply the corresponding rule on the ith cell, A[i].

The corresponding combinational logic of rule 51, rule 195 and rule 153 for the CA can be expressed as follows:
Rule 51: a_i(t+1) = NOT(a_i(t))
Rule 195: a_i(t+1) = a_{i-1}(t) XNOR a_i(t)
Rule 153: a_i(t+1) = a_i(t) XNOR a_{i+1}(t)
Figure 6: State changes 0-15-2-13-0 using rule vector <51, 51, 195, 153>
Figure 7: State changes 1-14-3-12-1 using rule vector <51, 51, 195, 153>
Figure 8: State changes 4-9-6-11-4 using rule vector <51, 51, 195, 153>
Figure 9: State changes 5-8-7-10-5 using rule vector <51, 51, 195, 153>
Figure 10: State Transition Diagram of PCA
Figure 11: Comparison result of AES and Proposed Scheme
Table 1: The rules that update the next state of the CA cells

    Neighborhood: 111 110 101 100 011 010 001 000
    Rule 153:       1   0   0   1   1   0   0   1
    Rule 195:       1   1   0   0   0   0   1   1
    Rule 51:        0   0   1   1   0   0   1   1
Table 2: Rule Selection Table

    C1   C2   Rule Applied
    0    0    51
    0    1    51
    1    0    195
    1    1    153
As shown in Figure 10, the state transition diagram of the PCA has four equal-length cycles, each of cycle length 4. The rule selection table is presented in Table 2. Considering this PCA as an enciphering function and defining a plain text as its original state, the plain text goes to its intermediate state after two cycles, which is the enciphering process. After running another four cycles, the intermediate state returns back to its original state, which deciphers the cipher text into plain text, ensuring the deciphering process.
Authors: Mr. Debasis Das is currently pursuing a Ph.D in Computer Science and Engineering at the Indian Institute of Technology Patna, India. He received an M.Tech degree in Computer Science and Engineering from KIIT University, Bhubaneswar, in 2010. His research interests include Computer Networks, Algorithms, Network Security and Cellular Automata.
Dr. Rajiv Misra is currently working as an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Patna, India. He received his Ph.D from IIT Kharagpur in the field of Mobile Computing in 2010. He holds an M.Tech degree in Computer Science and Engineering from the Indian Institute of Technology (IIT) Bombay, obtained in 1989, and a BE degree in Computer Science from MNIT Allahabad, obtained in 1987. His research interests include Mobile Computing, Ad hoc and Sensor Networks, Vehicular Networks and Intelligent Transportation Systems. He has published papers in IEEE Transactions on Mobile Computing and IEEE Transactions on Parallel and Distributed Systems. He is a member of the IEEE.
| []
|
[
"Fluid and gyrofluid modeling of low-β e plasmas: phenomenology of kinetic Alfvén wave turbulence",
"Fluid and gyrofluid modeling of low-β e plasmas: phenomenology of kinetic Alfvén wave turbulence"
]
| [
"T Passot \nUniversité Côte d'Azur\nCNRS\nObservatoire de la Côte d'Azur\nLaboratoire J.L. Lagrange\nBoulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance\n",
"P L Sulem \nUniversité Côte d'Azur\nCNRS\nObservatoire de la Côte d'Azur\nLaboratoire J.L. Lagrange\nBoulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance\n",
"E Tassi \nAix Marseille Univ\nUniv Toulon\nCNRS\nMarseilleCPTFrance\n"
]
| [
"Université Côte d'Azur\nCNRS\nObservatoire de la Côte d'Azur\nLaboratoire J.L. Lagrange\nBoulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance",
"Université Côte d'Azur\nCNRS\nObservatoire de la Côte d'Azur\nLaboratoire J.L. Lagrange\nBoulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance",
"Aix Marseille Univ\nUniv Toulon\nCNRS\nMarseilleCPTFrance"
]
| []
| Reduced fluid models including electron inertia and ion finite Larmor radius corrections are derived asymptotically, both from fluid basic equations and from a gyrofluid model. They apply to collisionless plasmas with small ion-to-electron equilibrium temperature ratio and low βe, where βe indicates the ratio between the equilibrium electron pressure and the magnetic pressure exerted by a strong, constant and uniform magnetic guide field. The consistency between the fluid and gyrofluid approaches is ensured when choosing ion closure relations prescribed by the underlying ordering. A two-field reduction of the gyrofluid model valid for arbitrary equilibrium temperature ratio is also introduced, and is shown to have a noncanonical Hamiltonian structure. This model provides a convenient framework for studying kinetic Alfvén wave turbulence, from MHD to sub-de scales (where de holds for the electron skin depth). Magnetic energy spectra are phenomenologically determined within energy and generalized helicity cascades in the perpendicular spectral plane. Arguments based on absolute statistical equilibria are used to predict the direction of the transfers, pointing out that, within the sub-ion range associated with a k −7/3 ⊥ transverse magnetic spectrum, the generalized helicity could display an inverse cascade if injected at small scales, for example by reconnection processes. | 10.1063/1.5022528 | [
"https://arxiv.org/pdf/1801.06120v1.pdf"
]
| 59,273,500 | 1801.06120 | 450fedab5012c92141ffea0df366810ff0ae9a4c |
Fluid and gyrofluid modeling of low-β e plasmas: phenomenology of kinetic Alfvén wave turbulence
18 Jan 2018
T Passot
Université Côte d'Azur
CNRS
Observatoire de la Côte d'Azur
Laboratoire J.L. Lagrange
Boulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance
P L Sulem
Université Côte d'Azur
CNRS
Observatoire de la Côte d'Azur
Laboratoire J.L. Lagrange
Boulevard de l'Observatoire34229, 06304, Cedex 4NiceCSFrance
E Tassi
Aix Marseille Univ
Univ Toulon
CNRS
MarseilleCPTFrance
Fluid and gyrofluid modeling of low-β e plasmas: phenomenology of kinetic Alfvén wave turbulence
18 Jan 2018arXiv:1801.06120v1 [physics.plasm-ph]
Reduced fluid models including electron inertia and ion finite Larmor radius corrections are derived asymptotically, both from fluid basic equations and from a gyrofluid model. They apply to collisionless plasmas with small ion-to-electron equilibrium temperature ratio and low βe, where βe indicates the ratio between the equilibrium electron pressure and the magnetic pressure exerted by a strong, constant and uniform magnetic guide field. The consistency between the fluid and gyrofluid approaches is ensured when choosing ion closure relations prescribed by the underlying ordering. A two-field reduction of the gyrofluid model valid for arbitrary equilibrium temperature ratio is also introduced, and is shown to have a noncanonical Hamiltonian structure. This model provides a convenient framework for studying kinetic Alfvén wave turbulence, from MHD to sub-de scales (where de holds for the electron skin depth). Magnetic energy spectra are phenomenologically determined within energy and generalized helicity cascades in the perpendicular spectral plane. Arguments based on absolute statistical equilibria are used to predict the direction of the transfers, pointing out that, within the sub-ion range associated with a k −7/3 ⊥ transverse magnetic spectrum, the generalized helicity could display an inverse cascade if injected at small scales, for example by reconnection processes.
I. INTRODUCTION
Reduced fluid models including electron inertia are classically used to study collisionless magnetic reconnection. These models, which are limited to scales large with respect to the electron Larmor radius ρ e , require a small value of the electron beta parameter β e defined as the ratio between the equilibrium electron pressure and the magnetic pressure exerted by a strong, constant and uniform magnetic guide field. Similarly, at the level of the ions, a fluid computation of ion finite Larmor radius (FLR) corrections restricts the considered scales to be either much larger than the ion Larmor radius ρ i (for which a perturbative approach is possible) or much smaller than ρ i , a case studied in Ref. [1], where the ion velocity is negligible. Denoting with τ a constant equilibrium ion-to-electron temperature ratio, when concentrating on scales of the order of the sonic Larmor radius ρ s , defined as ρ s = ρ i / √ 2τ , these regimes correspond to a value of τ much smaller or much larger than unity, respectively. In the case where magnetic fluctuations along the guide field are retained, the case of negligible τ was addressed in two dimensions in Refs. [2,3] and extended to three dimensions in Ref. [4]. The case where τ is small but not totally negligible (or finite, provided the considered scales are assumed larger than ρ i ) was addressed in Ref. [5], when electron inertia is neglected. One of the motivations of the present paper is to extend this fourfield model by retaining electron inertia, using a rigorous asymptotic ordering. Such a small-τ asymptotics, performed at scales of the order of the sonic Larmor radius ρ s , involves a second order computation of the ion FLR corrections in terms of k ⊥ ρ i , where k ⊥ refers to the transverse wavenumber of the fluctuations. As will be shown, the resulting reduced fluid model can also be obtained as an asymptotic limit of the gyrofluid model derived in Ref. [6]. The question then arises of the consistency of the two approaches, an issue which may be sensitive to the closure assumptions. The case of finite τ can be addressed using a gyrofluid approach which, retaining the parallel magnetic fluctuations B z , remains valid for somewhat larger values of β e , at least at large enough scales. When reduced to two fields by neglecting the coupling to the parallel ion velocity u i and thus to the slow magnetoacoustic modes, the resulting gyrofluid model isolates the dynamics of kinetic Alfvén waves (KAWs) which are supposed to play a main role in the solar wind.
Another aim of this paper is to use this two-field gyrofluid model to study phenomenologically critically-balanced KAW turbulence at scales ranging from MHD to sub-d e scales (where d e stands for the electron skin depth), paying special attention to the transverse magnetic energy spectra in the energy or the generalized helicity cascades, and to the direct or inverse character of these cascades. Such Kolmogorov-like phenomenology dismisses the possible effect of coherent structures such as current sheets which form as the result of the turbulent MHD cascade and which, in some instances, can be destabilized by magnetic reconnection. Recent two-dimensional hybrid-kinetic simulations [7] suggest that, in the non-collisional regime, this process is fast enough to compete with the wave mode interactions, in a way that could affect the cascade at scales comparable to the ion inertial length d i , typical of the current sheet width.
In a small β i plasma, where β i = τ β e , this scale is significantly larger than ρ s , and the spectral break can indeed take place at d i , as suggested by recent two-dimensional hybrid simulations [8]. The above gyrofluid model can provide an efficient tool to address this issue.
At this point, it is useful to order the various relevant scales estimated in a homogeneous equilibrium state characterized by a density n_0, isotropic ion and electron temperatures T_0i and T_0e, and subject to a strong ambient magnetic field of amplitude B_0 along the z-direction. In terms of the sonic Larmor radius ρ_s = c_s/Ω_i, where c_s = (T_0e/m_i)^{1/2} is the sound speed and Ω_i = eB_0/(m_i c) the ion gyrofrequency, one has

d_i = \sqrt{2/\beta_e}\,\rho_s, \qquad d_e = \sqrt{2/\beta_e}\,\delta\rho_s, \qquad \rho_i = \sqrt{2\tau}\,\rho_s, \qquad \rho_e = \sqrt{2}\,\delta\rho_s,    (1)

where β_e = 8πn_0T_0e/B_0^2, δ^2 = m_e/m_i is the electron-to-ion mass ratio and τ = T_0i/T_0e. We have here defined the particle Larmor radii (r = i for the ions, r = e for the electrons) by ρ_r = v_th r/Ω_r, where the particle thermal velocities are given by v_th r = (2T_r/m_r)^{1/2}, and the inertial lengths by d_r = v_A/Ω_r, where v_A = B_0/(4πn_0m_i)^{1/2} = c_s\sqrt{2/\beta_e} is the Alfvén velocity.
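As a quick numerical illustration of these orderings (added here; the parameter values are arbitrary and only meant to be illustrative, they are not taken from the paper), the scale ratios of Eq. (1) can be evaluated directly:

```python
from math import sqrt

# Illustrative (hypothetical) parameters: electron beta, temperature ratio,
# and mass ratio delta^2 = m_e/m_i for a hydrogen plasma.
beta_e, tau, delta = 0.01, 0.05, sqrt(1.0 / 1836.0)

rho_s = 1.0                                       # lengths in units of rho_s
scales = {
    "d_i":   sqrt(2.0 / beta_e) * rho_s,          # ion inertial length, Eq. (1)
    "rho_i": sqrt(2.0 * tau) * rho_s,             # ion Larmor radius
    "d_e":   sqrt(2.0 / beta_e) * delta * rho_s,  # electron inertial length
    "rho_e": sqrt(2.0) * delta * rho_s,           # electron Larmor radius
}
for name, val in scales.items():
    print(f"{name:6s} = {val:8.4f} rho_s")
```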
The models to be derived should cover a spectral range which includes both scales large compared to d i (typical of the width of the generated current sheets) and scales comparable to d e (typical of collisionless reconnection processes). The considered scales will also be assumed to remain large compared to ρ e , so that electron FLR corrections reduce to the contribution ensuring the gyroviscous cancellation. This in particular implies the condition that ρ e /d e = β 1/2 e be small enough. On the other side, the ion to electron temperature ratio τ determines the magnitude of ρ i relatively to the considered scales. If τ ≫ 1, they are much smaller than ρ i , which makes ion velocities negligible. This case can be addressed using a fluid model, as shown in Ref. [1]. For τ ≪ 1, they are much larger than ρ i , and the problem is also amenable to a fluid approach with ion FLR corrections estimated perturbatively. This regime is addressed in Section II. For intermediate values of τ , a gyrofluid approach is required. It is the object of Section III. In Section IV, a two-fluid restriction of this model is used for a phenomenological study of critically-balanced kinetic Alfvén wave (KAW) turbulence. Section V presents a short summary together with a few comments.
II. FLUID MODELING FOR SMALL τ Two regimes will be here considered with β e scaling either like δ (scaling I) or like δ 2 (scaling II). The value of τ must then be chosen so that ρ i be small enough compared to ρ s (taken as the characteristic scale), but also smaller than d e . Since ρ i /ρ s = (2τ ) 1/2 and ρ i /d e = (τ β e ) 1/2 /δ, one should take τ = O(δ 3/2 ) for scaling I and τ = O(δ) for scaling II. In the case of scaling I d i /ρ s ≃ ρ s /d e ≃ d e /ρ e ≃ 1/δ 1/2 , and d e /ρ i ≃ 1/δ 1/4 , whereas for scaling II, d e and ρ s are comparable and clearly separated from d i (by a factor 1/δ) and ρ i (by a factor 1/δ 1/2 ).
It is convenient to take the sonic Larmor radius ρ s , the sound speed c s and the inverse ion gyrofrequency Ω −1 i as length, velocity and time units. Using the same nondimensional units as in Ref. [9], the amplitude of the fluctuations of density n and of the electric potential ϕ are controlled by the parameter ε ≪ 1, as n ∼ ϕ = O(ε). We assume that at scale ρ s , ∂ t = O(ε) and ∇ ⊥ ∼ O(1). We denote by A the parallel component of the magnetic potential, by u i and u e the parallel ion and electron velocity respectively and by B z the longitudinal magnetic field fluctuations. In the case of scaling I, A ∼ u i ∼ ∂ z = O(εδ 1/2 ), u e = O(ε/δ 1/2 ), and B z = O(εδ) (thus to be retained). Differently, for scaling II, A ∼ u i ∼ ∂ z = O(εδ), u e = O(ε/δ), and B z = O(εδ 2 ) (thus negligible). We furthermore denote by P r the pressure tensor of the r particle species, given by the sum of a gyrotropic part involving the parallel and perpendicular pressure fluctuations p r and p ⊥r and of a non-gyrotropic contribution Π r . Such scalings lead to the derivation of reduced fluid models retaining corrections O(τ ) or O(δ), relatively to the leading order.
The Ampère equation reads
\Delta_\perp A_\parallel = \frac{\beta_e}{2}\,(u_e - u_i).    (2)
Summing the equations satisfied by the ion and electron velocities u i and u e leads to
(1 + n){∂ t u i + δ 2 u e + u i ·∇ ⊥ u i + δ 2 u e ·∇ ⊥ u e } +∇ ⊥ · τ P i + P e − 2 β e J ×B = 0,(3)
which, in the small-amplitude (weakly nonlinear) and quasi-transverse asymptotics, gives
d i dt u i + δ 2 d e dt u e + ∇ (τ p i + p e ) + τ b·∇ · Π i + b·∇ · Π e = 0
(4) for the parallel components. Here, we introduced the
parallel derivative ∇ f = b·∇f = −[A , f ] + ∂ z f , with [f, g] = z·(∇f × ∇g) = ∂ x f ∂ y g − ∂ y f ∂ x g,
where f and g refer to scalar functions, and z to the unit vector along the guide field. Noting that, to leading order,
b·∇ ⊥ ×(J ×B) = ∇ J = −∇ ∆ ⊥ A ,(5)
one also gets
d i dt ∆ ⊥ ϕ i + δ 2 d e dt ∆ ⊥ ϕ e + b · ∇ ⊥ × ∇ ⊥ · (τ P i + P e ) + 2 β e ∇ ∆ ⊥ A = 0(6)
for the sum of the ion and electron vorticities. Here b is the unit vector along the local magnetic field and the convective time derivative d r /dt stands for ∂ t + [ϕ r , ·], where the potentials ϕ r of the leading order transverse velocities of the r-particle species (u ⊥r = b × ∇ϕ r ), are given by ϕ i = ϕ + τ p ⊥i − (τ /2)∆ ⊥ ϕ i and ϕ e = ϕ − p ⊥e − (δ 2 /2)∆ ⊥ ϕ e . In these formulas, the first term is associated with the so-called E × B drift, the second one to the diamagnetic drift, while the last one originates from the leading order non-gyrotropic pressure contribution. Here and in the rest of the paper, electrons will be taken isothermal, leading to p e = p ⊥e = n.
The equations for the magnetic potential and for the parallel magnetic field component are easily obtained (see Ref. [1]) as
∂ t (A − δ 2 u e ) + ∇ (ϕ − n) − δ 2 [ϕ e , u e ] − b · ∇ · Π e = 0 (7) d e dt (B z − n − δ 2 ∆ ⊥ ϕ e ) − ∇ u e − b · ∇ ⊥ × ∇ ⊥ · ( Π e 1 + n ) = 0.(8)
The system of governing equations is supplemented by the perpendicular pressure balance, obtained by taking the transverse divergence of the transverse component of Eq.
(3) considered to leading order,
2 β e B z = τ 2 ∆ ⊥ ϕ i − τ p ⊥i − n.(9)
A. The case of negligible ion temperature
In the case of scaling I, the system made up of Eqs. (2), (4), (6)-(9) greatly simplifies as all the non-gyrotropic pressure components become sub-dominant, except the electronic ones associated with the gyroviscous cancellation. The τ contributions also drop out, and we obtain,
writing d/dt = ∂_t + [ϕ, ·],

\frac{d}{dt}\Big[\Big(1+\frac{2}{\beta_e}\Big)B_z\Big] - \nabla_\parallel u_e = 0    (10)

\frac{d}{dt}\big(A_\parallel - \delta^2 u_e\big) + \partial_z\varphi + \frac{2}{\beta_e}\nabla_\parallel B_z = 0    (11)

\frac{d}{dt}\big(u_i + \delta^2 u_e\big) - \frac{2}{\beta_e}\nabla_\parallel B_z = 0    (12)

\frac{d}{dt}\Delta_\perp\varphi + \frac{2}{\beta_e}\nabla_\parallel\Delta_\perp A_\parallel = 0    (13)

u_e - u_i = \frac{2}{\beta_e}\Delta_\perp A_\parallel,    (14)
which is a 3D extension of the model presented in Ref. [10], when taken in the cold-ion limit. Note that in this system, B z = −(β e /2)n. As mentioned in Ref. [10], it is easy to verify that, up to terms of order δ 2 , and after a simple rescaling, these equations also identify, in the 2D case, to those of Refs. [2,3]. A 3D extension of the latter model was given in Ref. [4]. Both systems possess a Hamiltonian formulation, with the same Poisson bracket structure. In particular, in the 2D limit, they both possess four infinite families of Casimir invariants, three of which associated with Lagrangian invariants. When writing the above system using the Alfvén velocity instead of the ion sound speed as velocity unit (i.e. substituting u i = 2/β e u ′ i , ϕ = 2/β e ϕ ′ , ∂ t = 2/β e ∂ ′ t ) and neglecting the electron inertia, we recover the reduced Hall-magnetohydrodynamics (RHMHD) equations (E19) and (E20) of Ref. [11]. Furthermore, as noted in Ref. [12], when concentrating on Alfvén waves and thus neglecting the coupling to u i , one easily checks that, in the present low β e limit where the coefficient 1 + 2/β e in Eq. (10) reduces to 2/β e , the β e parameter can be scaled out by writing B z = β e /2B ′ z , thus making ρ s the only characteristic scale of this system.
When neglecting electron inertia, Eqs. (10)- (14) can be considered for any value of β e . In the large β e limit and in 2D, by using the above rescalings for u i , ϕ and time, the resulting system identifies with Eqs. (20)-(23) of Ref. [13] for incompressible two-fluid MHD, when taking ε = 2/β e which measures d i in units of ρ s . Note that the system derived in Ref. [13] involves an equation for A z instead of A (quantities which identify at the considered order). In this case, the last term of Eqs. (11) originates from the Hall term, while it here results from the electron pressure in Ohm's law. Pressure balance ensures the equality of these two contributions.
B. The case of small but finite ion temperature
Derivation of the ion FLR contributions
Since with scaling II, ρ i /ρ s = O(δ 1/2 ), ion FLR corrections enter the dynamics as contributions of order δ. Using this scaling, we first derive the electron equations. At the required order in Eq. (7), we have ϕ e = ϕ − n and, from Ref. [1],
b·(∇ ⊥ ·Π e ) = δ 2 [n, u e ].(15)
This contribution cancels the diamagnetic drift δ 2 [n, u e ] that originates from the second term of ϕ e . Equation (7) thus rewrites
d dt (A − δ 2 u e ) + ∂ z ϕ − ∇ n = 0.(16)
In Eq. (8), B z is negligible as well as the nongyrotropic pressure contribution. This equation thus reduces to
d dt n + ∇ u e = 0.(17)
We now turn to the velocity equations (4) and (6). The ion non-gyrotropic pressure tensor can be estimated within a perturbative computation in terms of the parameters ε and τ from the coupled system provided by Eq.
(A6) of Ref. [14] and a drift expansion of the ion transverse velocity. Neglecting the heat flux contributions to Π i , we are led, in practice, to repeat the calculations made in Appendix A of Ref. [1], only replacing pressures and velocities of the electrons by those of the ions and dropping the factors −δ 2 and δ 4 , which corresponds to changing the charge and the mass when replacing electrons by ions. This results in expressing the parallel component of the nongyrotropic ion pressure force as
b·(∇ ⊥ ·Π i ) = −[p ⊥i − B z , u i ] − ∇ ∆ ⊥ ϕ i −[∇ ⊥ ϕ i ; ∇ ⊥ A ] − ∂ t ∆ ⊥ u i − [ϕ i , ∆ ⊥ u i ] + 1 2 [∆ ⊥ ϕ i , u i ] − [∇ ⊥ ϕ i ; ∇ ⊥ u i ],(18)
where we use the notation [∇f
; ∇g] = i [∂ i f, ∂ i g].
Equation (4) then rewrites
∂ t (u i − τ ∆ ⊥ u i + δ 2 u e ) + [ϕ i , u i − τ ∆ ⊥ u i ] + [ϕ, δ 2 u e ] −τ [p ⊥i − 1 2 ∆ ⊥ ϕ i , u i ] − τ [∇ ⊥ ϕ i ; ∇ ⊥ (A + u i )] +∇ (n + τ p i − τ ∆ ⊥ ϕ i ) = 0.(19)
For the vorticity equation, we need to express (20) where the last line of Eq. (20) is obtained by a computation to second order in terms of scale separation. The latter computation is rather cumbersome and was performed using MAPLE symbolic calculation software. In this expression, it is of interest to rewrite
b·∇ ⊥ × (∇ ⊥ ·Π i ) = −[p ⊥i , ∆ ⊥ ϕ i ] − [∇ ⊥ p ⊥i ; ∇ ⊥ ϕ i ] + 1 2 ∇ ∆ ⊥ u i + 1 2 [∆ ⊥ A , u i ] + 1 2 ∆ ⊥ (∇·u i ) − 1 4 ∂ t ∆ 2 ⊥ ϕ i + [ϕ i , ∆ 2 ⊥ ϕ i ] − [∇ ⊥ ϕ i ; ∇ ⊥ ∆ ⊥ ϕ i ],∆ ⊥ (∇·u i ) = −∆ ⊥ (∂ t n + [ϕ i , n]) = −∂ t ∆ ⊥ n −[ϕ i , ∆ ⊥ n] − [∆ ⊥ ϕ i , n] − 2[∇ ⊥ ϕ i ; ∇ ⊥ n],(21)
where one can make the replacement
∂ t ∆ ⊥ n = −∆ ⊥ [ϕ i + τ 2 ∆ ⊥ ϕ i , n] + ∇ u e ,(22)
the second term in the bracket becoming subdominant when substituted into the vorticity equation. At the considered order, noting that the contribution of the ion gyrotropic pressure is of lower order, the vorticity equation becomes, after writing p ⊥i = n + t ⊥i , where t ⊥i refers to the perpendicular ion temperature fluctuations (and t i to the parallel ones),
∂ t ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i + [ϕ i , ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i ] + 2 β e ∇ ∆ ⊥ A + τ 2 ∆ ⊥ ∇ u e + τ [∆ ⊥ ϕ i , n] +τ [∇ ⊥ ϕ i ; ∇ ⊥ (n − ∆ ⊥ ϕ i )] − τ ∇ ⊥ ·[t ⊥i , ∇ ⊥ ϕ i ] = 0.(23)
Determination of the temperature fluctuations: As discussed in Appendix A, the present scaling suggests considering an adiabatic regime for the ions, where gyrotropic heat fluxes are negligible. In this case, neglecting also the fourth-rank cumulant contributions (B z being small in the present ordering), one has
d i dt t i + 2∇ u i + τ [t i , p ⊥i ] = 0 (24) or d dt t i − τ 2 [∆ ⊥ ϕ, t i ] + 2∇ u i = 0.(25)
Similarly,
d i dt (t ⊥i − n) − ∇ u i + 2τ [t ⊥i , p ⊥i ] = 0,(26)
which rewrites
d dt (t ⊥i − n) − τ 2 [∆ ⊥ ϕ, t ⊥i − n] − ∇ u i = 0.(27)
The terms of the form ∇ u i are subdominant within scaling II. If one is not interested in the dynamics of the temperatures themselves, they only need to be determined at the dominant order, and it is possible to take
d dt t i = 0 (28) d dt t ⊥i = d dt n = −∇ u e .(29)
Since we also have
∆ ⊥ A = β e 2 u e ,(30)
we conclude that, within scaling II, the equation for u i is decoupled. The system of Eqs. (23), (16), (17), together with (30), (29) and the relation ϕ = ϕ i + τ 2 ∆ ⊥ ϕ i − τ n − τ t ⊥i , conserves the energy
E 1 = 1 2 |∇ ⊥ ϕ i | 2 + δ 2 u 2 e + τ 4 (∆ ⊥ ϕ i ) 2 + 2 β e |∇ ⊥ A | 2 + (1 + τ )n 2 + τ t 2 ⊥i d 3 x. (31)
A further simplification is possible (with a proper choice of initial conditions) where temperatures are determined algebraically. For this purpose, one can remark that the number density n is also given by the ion continuity equation in the form (after using the expression for ϕ i )
dn dt − τ 2 [∆ ⊥ ϕ, n] + τ [p ⊥i , n] + ∇ · u i = 0.(32)
In order to estimate ∇ ⊥ · u ⊥i , we consider the drift expansion of the transverse velocity.
u ⊥i = 1 B b × ∇ ⊥ ϕ + τ 1 + n ∇ ⊥ p ⊥i + τ 1 + n (∇ ⊥ ·Π i ) + ∂ t A ⊥ + d (i) dt u ⊥i .(33)
where B = |B| = 1+B z +O(ε 2 ) and A ⊥ is the transverse component of magnetic vector potential. As at scales comparable to ρ s , B z = O(εβ e ) and A ⊥ also scales as εβ e , it follows that
b × d (i) dt u ⊥i = −∂ t ∇ϕ − [ϕ, ∇ ⊥ ϕ] + O(ε 2 τ ).(34)
As one also has
∇ ⊥ ·( b × ∇ ⊥ ϕ) = O(ε 2 (ε + β e )), it follows that ∇ ⊥ ·u ⊥i = − d dt ∆ ⊥ ϕ + O(ε 2 (τ + ε)),(35)
and consequently
d dt (t ⊥i − ∆ ⊥ ϕ) = O(ε 2 (τ + ε)).(36)
For suitable initial conditions, one can thus write t i = 0 and t ⊥i = ∆ ⊥ ϕ, which reproduces the closure for the perpendicular ion temperature used in Ref. [5]. The system can then be reduced to a 3-field model made up of Eqs. (16), (17) and of the equation for the parallel vorticity
∂ t ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i + [ϕ i , ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i ] + 2 β e ∇ ∆ ⊥ A + τ 2 ∆ ⊥ ∇ u e + τ [∆ ⊥ ϕ i , n] +τ [∇ ⊥ ϕ i ; ∇ ⊥ n] = 0,(37)
together with Eq. (30) and the expression for ϕ in terms of ϕ i , which now rewrites
ϕ = ϕ i − τ 2 ∆ ⊥ ϕ i − τ n.(38)
This system does not conserve energy. In a way similar to what is done in Ref. [5], adding to Eq. (37) the equation
−τ (∂ t ∆ 2 ⊥ ϕ i +[ϕ i , ∆ 2 ⊥ ϕ i ]+∆ ⊥ ∇ u e +2[∇ϕ i ; ∇∆ ⊥ ϕ i ]) = 0,(39)
obtained after taking the Laplacian of the vorticity equation at dominant order, we obtain a new system, equivalent to the previous one at order O(τ ) in the form
∂ t ∆ ⊥ ϕ * − 5 4 τ ∆ 2 ⊥ ϕ * + [ϕ * , ∆ ⊥ ϕ * − 5 4 τ ∆ 2 ⊥ ϕ * ] + 2 β e ∇ ∆ ⊥ A − τ 2 ∆ ⊥ ∇ u e + τ [∆ ⊥ ϕ * , n] +τ [∇ ⊥ ϕ * ; ∇ ⊥ (n − 2∆ ⊥ ϕ * )] = 0 (40) d dt n + ∇ u e = 0 (41) d dt (A − δ 2 u e ) + ∂ z ϕ − ∇ n = 0,(42)
where we introduced a new potential
ϕ * = ϕ + τ n + (τ /2)∆ ⊥ ϕ * ,(43)
The above system conserves the energy
E 2 = 1 2 |∇ ⊥ ϕ * | 2 + δ 2 u 2 e + 5τ 4 (∆ ⊥ ϕ * ) 2 + 2 β e |∇ ⊥ A | 2 +(1 + τ )n 2 d 3 x.(44)
This model introduces ion FLR corrections but neglects the coupling with the ion parallel velocity. The ordering is indeed limited to scales where u e is much larger than u i , a condition which excludes scales of order d i or larger.
Extension of the model to larger scales
At larger scales, another scaling (scaling III) must be used where, keeping β e = O(δ 2 ) and τ = O(δ), one as-
sumes ∇ ⊥ ∼ δ, ϕ = O(ε), n ∼ u i ∼ u e ∼ A = O(δε), ∂ t ∼ δ 2 ε and ∂ z ∼ δ 3 ε.
In this regime, the system takes the form of the RHMHD equations (in the small β limit), where electron inertia and finite Larmor radius corrections are absent. It is then easy to build a uniform model that reduces to the latter large-scale model or to the former 3-field model when scalings III or II are applied respectively. It contains terms that are negligible in one or the other specific limits, and also sub-dominant additional terms, corresponding to the first two terms of the second line of Eq. (20), needed for the energy to be conserved.
Keeping the dynamical equations for the temperature fluctuations but neglecting the O(τ ) corrections which turn out to be irrelevant at the order of the asymptotics, we are led to write the reduced fluid model in the form
∂ t ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i + [ϕ i , ∆ ⊥ ϕ i − τ 4 ∆ 2 ⊥ ϕ i ] + 2 β e ∇ ∆ ⊥ A + τ 2 ∆ ⊥ ∇ u e + τ [∆ ⊥ ϕ i , n] +τ [∇ ⊥ ϕ i ; ∇ ⊥ (n − ∆ ⊥ ϕ i )] + τ 2 ∇ ∆ ⊥ u i + τ 2 [∆ ⊥ A , u i ] − τ ∇ ⊥ ·[t ⊥i , ∇ ⊥ ϕ i ] = 0 (45) ∂ t (u i − τ ∆ ⊥ u i + δ 2 u e ) + [ϕ i , u i − τ ∆ ⊥ u i ] + [ϕ, δ 2 u e ] −τ [p ⊥i − 1 2 ∆ ⊥ ϕ i , u i ] − τ [∇ ⊥ ϕ i ; ∇ ⊥ (A + u i )] +∇ (n + τ p i − τ ∆ ⊥ ϕ i ) = 0 (46) d dt n + ∇ u e = 0 (47) d dt (A − δ 2 u e ) + ∂ z ϕ − ∇ n = 0 (48) d dt t i + 2∇ u i = 0 (49) d dt (t ⊥i − n) − ∇ u i = 0(50)∆ ⊥ A = β e 2 (u e − u i )(51)ϕ = ϕ i + τ 2 ∆ ⊥ ϕ i − τ p ⊥i (52) p ⊥i = n + t ⊥i , p i = n + t i .(53)
The energy is given by
E 3 = 1 2 u 2 i + τ |∇ ⊥ u i | 2 + |∇ ⊥ ϕ i | 2 + δ 2 u 2 e + τ 4 (∆ ⊥ ϕ i ) 2 + 2 β e |∇ ⊥ A | 2 + (1 + τ )n 2 + τ t 2 ⊥i + τ 2 t i 2 d 3 x. (54)
Similarly to what was done at the level of the 3-field model, it is possible to simplify this system (assuming suitable initial conditions) by prescribing t i = 0 and t ⊥i = ∆ ⊥ ϕ (or equivalently, at the level of the present ordering, t ⊥i = ∆ ⊥ ϕ * ) and perform the same combination with the Laplacian of the vorticity equation in order to ensure energy conservation. In this case, we obtain
∂ t ∆ ⊥ ϕ * − 5τ 4 ∆ 2 ⊥ ϕ * + [ϕ * , ∆ ⊥ ϕ * − 5τ 4 ∆ 2 ⊥ ϕ * ] + 2 β e ∇ ∆ ⊥ A − τ 2 ∆ ⊥ ∇ u e + τ [∆ ⊥ ϕ * , n] +τ [∇ ⊥ ϕ * ; ∇ ⊥ (n − 2∆ ⊥ ϕ * )] + τ 2 ∇ ∆ ⊥ u i + τ 2 [∆ ⊥ A , u i ] = 0 (55) ∂ t (u i − τ ∆ ⊥ u i + δ 2 u e ) + [ϕ * , u i − τ ∆ ⊥ u i ] + [ϕ, δ 2 u e ] −τ [n + 1 2 ∆ ⊥ ϕ * , u i ] − τ [∇ ⊥ ϕ * ; ∇ ⊥ (A + u i )] +∇ ((1 + τ )n − τ ∆ ⊥ ϕ * ) = 0 (56) d dt n + ∇ u e = 0 (57) d dt (A − δ 2 u e ) + ∂ z ϕ − ∇ n = 0 (58) ∆ ⊥ A = β e 2 (u e − u i ) (59) ϕ = ϕ * − (τ /2)∆ ⊥ ϕ * − τ n,(60)
which provides a four-field model valid from the MHD to the sub-d e scales, in the regime where the parameters β e and τ are both small. For this system, the energy reads
E 4 = 1 2 u 2 i + τ |∇ ⊥ u i | 2 + |∇ ⊥ ϕ i | 2 + δ 2 u 2 e + 5τ 4 (∆ ⊥ ϕ i ) 2 + 2 β e |∇ ⊥ A | 2 + (1 + τ )n 2 d 3 x,(61)
When taking τ = 0 and recalling that n = −(2/β e )B z , this system reduces to Eqs. (10)- (14) where, in Eq. (10), the coefficient 1 is neglected compared to 2/β e .
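As an explicit check of this reduction (a short worked step added here, not part of the original text), consider for instance Eq. (57): using n = −(2/β_e)B_z it becomes

\frac{dn}{dt} + \nabla_\parallel u_e = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\Big(\frac{2}{\beta_e}B_z\Big) - \nabla_\parallel u_e = 0,

which is Eq. (10) once the coefficient 1 + 2/β_e is approximated by 2/β_e. Equations (55), (56) and (58) taken at τ = 0 reduce to Eqs. (13), (12) and (11) in the same way.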
III. GYROFLUID MODELING FOR ARBITRARY τ
In this Section, we consider as the starting point the gyrofluid system (B1)-(B11) which allows considering all the values of the ion-electron temperature ratio. As a first step, it is of interest to reproduce the reduced fluid models of Secs. II A, II B 1 and II B 2, using the corresponding scalings with regard to particle moments, electromagnetic fields, parameters, length and time scales. In addition, we specify orderings for the gyrofluid moments. This comparison is of interest in that it points out that consistency between the two approaches requires the prescription of closure relations that are consistent with the assumed scalings. In this context, we recall that previous analyses of relations between gyrofluid and FLR reduced fluid models were carried out in Refs. [6,15,16].
In all three cases, it is understood that the electron fluid is assumed to be isothermal and that contributions due to heat flux and energy-weighted pressure tensors in the ion fluid equations are negligible. Also, we assume negligible gyrofluid ion perpendicular temperature fluctuations, i.e. P ⊥i − N i = 0. Denoting by T ⊥α = P ⊥α − N α and T α = P α − N α the perpendicular and parallel gyrofluid temperature fluctuations related to the species α, we remark that the assumption T ⊥i = 0 is satisfied if the underlying perturbation of the ion gyrocenter distribution functionF i , in dimensional form, is given by
\tilde F_i = F_{\rm eq\,i}\left[\frac{\tilde N_i}{n_0} + 2\,\frac{\tilde v_\parallel}{v_{\rm th\,i}}\,\frac{\tilde U_{\parallel i}}{v_{\rm th\,i}} + \frac{1}{2}\left(2\,\frac{\tilde v_\parallel^2}{v_{\rm th\,i}^2} - 1\right)\frac{\tilde T_{\parallel i}}{T_{0i}}\right],    (62)

where the tilde denotes a dimensional quantity, v_{\rm th\,i} = \sqrt{2\tau}\, c_s is the thermal ion speed and

F_{\rm eq\,i}(v_\parallel, \mu) = n_0\left(\frac{m_i}{2\pi T_{0i}}\right)^{3/2}\exp\left(-\frac{m_i v_\parallel^2}{2T_{0i}} - \frac{\mu B_0}{T_{0i}}\right),    (63)
is an equilibrium Maxwellian distribution function with v and µ indicating the parallel velocity and the ion magnetic moment, respectively. We remark that this choice of F eq i yields Q ⊥i = Q i = R ⊥i = R ⊥⊥i = 0, which is consistent with the above assumption of neglecting heat flux and energy-weighted pressure tensor contributions.
Finally, Alfvén speed is assumed to be non-relativistic, i.e. v A ≪ c.
A. Small ion temperatures
Negligible ion temperature
In order to derive a cold-ion model, we assume
β e = O(δ), τ = O(δ 3/2 ), ∇ ⊥ = O(1),(64)U e ∼ u e = O ε δ 1/2 , B z = O(δε),(65)A ∼ ∂ z ∼ U i ∼ u i = O(δ 1/2 ε),(66)∂ t ∼ N e,i ∼ ϕ ∼ P e,i ∼ P ⊥e,i ∼ n e,i ∼ p e,i ∼ p ⊥e,i = O(ε).(67)
Ordering (64)-(67), devoid of gyrofluid variables, corresponds to scaling I of Sec. II A.
We apply ordering (64)-(67), together with the above assumptions on the closures and the non-relativistic character of the Alfvén speed, to Eqs. (B1), (B2), (B5), (B6), (B9), (B10), (B11). Retaining, in each dynamical equation, the leading order terms and the corrections of order δ, we obtain
∂N e ∂t + [ϕ, N e ] − [B z , P ⊥e ] + ∇ U e = 0,(68)∂ ∂t (δ 2 U e − A ) + [ϕ, δ 2 U e − A ] + ∇ (P e + B z ) − ∂ z ϕ = 0, (69) ∂N i ∂t + [ϕ, N i ] + ∇ U i = 0,(70)∂ ∂t (U i + A ) + [ϕ, U i + A ] + ∂ z ϕ = 0,(71)0 = N e − N i − ∆ ⊥ ϕ,(72)∆ ⊥ A = β e 2 (U e − U i ),(73)B z = − β e 2 (P ⊥e + 2B z ).(74)
The evolution equations for P e , P ⊥e and P ⊥i have not been considered because the closure relations will replace them. The evolution equation for P i is not necessary either, because the ordering made the contribution of P i in Eq. (71) negligible, thus decoupling the evolution of the ion gyrofluid parallel pressure. In order to express the system (68)-(74), closed with the electron isothermal relation p e = p ⊥e = n e , in terms of particle moments, it is necessary to resort to the transformation from gyrofluid to particle moments [6] which, for the scaling under consideration, accounting for corrections of order δ, reads
N e = n e − B z , U e = u e ,(75)P e = p e − B z , P ⊥e = p ⊥e − 2B z ,(76)N i = n i − ∆ ⊥ ϕ − B z , U i = u i ,(77)P i = p i − ∆ ⊥ ϕ − B z , P ⊥i = p ⊥i − 2∆ ⊥ ϕ − 2B z .(78)
Making use of the aforementioned electron isothermal closure, after inserting relations (75)-(78) into Eqs. (72)-(74), we get
n e = n i = n,(79)∆ ⊥ A = β e 2 (u e − u i ),(80)B z = − β e 2 p ⊥e = − β e 2 n.(81)
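As a short worked step (added here for clarity), the last of these relations follows exactly by combining Eq. (74) with the transformation (76) and the isothermal electron closure p_⊥e = n:

B_z = -\frac{\beta_e}{2}\left(P_{\perp e} + 2B_z\right)
    = -\frac{\beta_e}{2}\left(p_{\perp e} - 2B_z + 2B_z\right)
    = -\frac{\beta_e}{2}\,p_{\perp e}
    = -\frac{\beta_e}{2}\,n.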
Inserting the transformations (75)-(78) into Eqs. (68)-(71), retaining only first order corrections in δ, and making use of relations (79) and (81), we obtain the system
d dt (1 + 2 β e )B z − ∇ u e = 0 (82) d dt (A − δ 2 u e ) + ∂ z ϕ + 2 β e ∇ B z = 0 (83) d dt (u i + δ 2 u e ) − 2 β e ∇ B z = 0 (84) d dt ∆ ⊥ ϕ + 2 β e ∇ ∆ ⊥ A = 0,(85)
which, together with Eq. (80), coincides with the system (10)- (14) derived from a two-fluid description.
Derivation of the ion FLR contributions
We consider here the ordering
β e = O(δ 2 ), τ = O(δ), ∇ ⊥ = O(1),  (86)
U e ∼ u e = O(ε/δ), B z = O(δ 2 ε),  (87)
A ∼ ∂ z ∼ U i ∼ u i = O(δε),  (88)
∂ t ∼ N e,i ∼ ϕ ∼ P e,i ∼ P ⊥e,i ∼ n e,i ∼ p e,i ∼ p ⊥e,i = O(ε).  (89)
Applying this ordering to the parent gyrofluid model and retaining, in each dynamical equation, the leading-order terms together with the first-order corrections, one obtains
∂N e /∂t + [ϕ, N e ] + ∇ U e = 0,  (90)
∂/∂t (δ 2 U e − A ) + [ϕ, δ 2 U e − A ] + ∇ P e − ∂ z ϕ = 0,  (91)
∂N i /∂t + [ϕ, N i ] + (τ/2)[∆ ⊥ ϕ, N i ] = 0,  (92)
∂/∂t (U i + A + (τ/2)∆ ⊥ A ) + [ϕ, U i + A + (τ/2)∆ ⊥ A ] + (τ/2)[∆ ⊥ ϕ, U i ] + ∇ (τ P i + (τ/2)∆ ⊥ ϕ) + ∂ z ϕ = 0,  (93)
∂P i /∂t + [ϕ, P i ] + (τ/2)[∆ ⊥ ϕ, P i ] = 0,  (94)
0 = N e − N i − ∆ ⊥ ϕ − (τ/2)∆ ⊥ N i − (3/4)τ ∆ 2 ⊥ ϕ,  (95)
∆ ⊥ A = (β e /2) U e .  (96)
The transformation from gyrofluid to particle moments, accounting for first-order corrections in τ, reads
N e = n e , U e = u e ,  (97)
P e = p e , P ⊥e = p ⊥e ,  (98)
N i = n i − ∆ ⊥ ϕ − (τ/2)∆ ⊥ n i − (τ/4)∆ 2 ⊥ ϕ,  (99)
U i = u i − (τ/2)∆ ⊥ u i ,  (100)
P i = p i − ∆ ⊥ ϕ − (τ/2)∆ ⊥ p i − (τ/4)∆ 2 ⊥ ϕ,  (101)
P ⊥i = p ⊥i − 2∆ ⊥ ϕ − τ ∆ ⊥ p ⊥i − (9/4)τ ∆ 2 ⊥ ϕ.  (102)
Applying this transformation to Eqs. (90), (91), (92), (95) and (96), retaining first order corrections in τ and using the assumptions on the closures for the electron fluid, we obtain after some algebra
dn dt + ∇ u e = 0,(103)d dt (A − δ 2 u e ) + ∂ z ϕ − ∇ n = 0,(104)d dt ∆ ⊥ ϕ + τ 4 ∆ 2 ⊥ ϕ − τ [∆ ⊥ ϕ, n] − τ [∇ ⊥ ϕ; ∇ ⊥ n] + ∇ u e − τ 2 ∆ ⊥ ∇ u e = 0,(105)
n e = n i = n,
∆ ⊥ A = β e 2 u e .(106)
We now remark that the continuity equation (103) and the generalized Ohm's law (104) correspond to Eqs. (17) and (16), respectively. From Eqs. (92) and (94), after transforming into particle moments by means of Eqs. (99) and (101), we obtain, to leading order,
dt i dt = 0,(108)
which coincides with Eq. (28). Finally, the decoupled parallel ion velocity equation, obtained from Eq. (93) after transforming to particle moments, reads
d dt (u i − τ ∆ ⊥ u i + δ 2 u e ) + ∇ (n + τ p i − τ ∆ ⊥ ϕ) − τ [∇ ⊥ ϕ; ∇ ⊥ (u i + A )] = 0.(109)
Equation (109), after replacing ϕ in favor of ϕ i , coincides with Eq. (19), once the above mentioned closure condition t ⊥i = ∆ ⊥ ϕ has been inserted in the latter.
Extension to larger scales
Analogously to Sec. II B 2, we here consider a scaling valid for scales much larger than ρ s , which introduces a coupling with the parallel ion velocity. The scaling reads
β e = O(δ 2 ), τ ∼ ∇ ⊥ = O(δ), (110) N e,i ∼ U e,i ∼ P e,i ∼ P ⊥e,i ∼ A ∼ n e,i ∼ u e,i ∼ p e,i ∼ p ⊥e,i = O(δε),(111)ϕ = O(ε), ∂ t = O(δ 2 ε),(112)∂ z ∼ B z = O(δ 3 ε),(113)
and corresponds to scaling III. Proceeding similarly to Secs. III A 1 and III A 2, from the parent gyrofluid model (B1)-(B11), retaining first order corrections in δ, we obtain, from scaling (110)-(113), the following equations
∂N e ∂t + [ϕ, N e ] + ∇ U e = 0,(114)∂A ∂t + [ϕ, A ] − ∇ P e + ∂ z ϕ = 0,(115)∂N i ∂t + [ϕ, N i ] + ∇ U i = 0,(116)∂ ∂t U i + A + [ϕ, U i + A ] + ∂ z ϕ = 0,(117)∂P i ∂t + [ϕ, P i ] + 3∇ U i = 0,(118)0 = N e − N i − ∆ ⊥ ϕ (119) ∆ ⊥ A = β e 2 (U e − U i ).(120)
As in the case of ordering (86)-(89), parallel magnetic fluctuations become negligible. The transformation from gyrofluid to particle moments is in this case given by N e = n e , U e = u e , (121) P e = p e , P ⊥e = p ⊥e ,
N i = n i − ∆ ⊥ ϕ, U i = u i (123) P i = p i − ∆ ⊥ ϕ, P ⊥i = p ⊥i − 2∆ ⊥ ϕ.(122)
Applying this transformation to Eqs. (114)-(120) yields, upon retaining first order corrections in δ and carrying out a few algebraic manipulations, the following equations
dn dt + ∇ u e = 0,(125)dA dt + ∂ z ϕ − ∇ n = 0,(126)d∆ ⊥ ϕ dt + 2 β e ∇ ∆ ⊥ A = 0,(127)du i dt + ∇ n = 0,(128)dt i dt + 2∇ u i = 0,(129)d dt (t ⊥i − n) − ∇ u i = 0,(130)n e = n i = n,(131)∆ ⊥ A = β e 2 (u e − u i ).(132)
The system composed of Eqs. (125), (126), (127), (128) and (132) corresponds to the RHMHD system in the small-β e limit, which was the result of applying scaling III within the two-fluid approach, as mentioned in Sec. II B 2. We added to this system the resulting evolution equations for the ion temperatures, corresponding to Eqs. (129) and (130). Equation (129), expressed in terms of particle moments, descends from Eqs. (116) and (118). We thus derived, in Secs. III A 2 and III A 3, by means of a gyrofluid approach, the same models derived from the two-fluid description using scalings II and III and imposing t ⊥i = ∆ ⊥ ϕ at leading order. The uniform model (55)-(60) then directly follows by applying the procedure adopted in Sec. II B 2.
We remark that, although the model of Ref. [6] was taken as starting point for the gyrofluid derivation, for the model involving ion FLR corrections, other low-β e gyrofluid models, such as those of Refs. [17,18], could have been taken as parent models and would have led to the same result. The models of Refs. [17,18] adopt different closures for the gyroaveraging operators, compared to Ref. [6]. However, as far as the first order corrections in τ are concerned, which is sufficient for our derivations, the different gyroaveraging operators yield the same expansion. On the other hand, the gyrofluid model of Ref. [6] accounts for parallel magnetic perturbations, which allows for the derivation of the model of Sec. III A 1, which refers to a higher β e regime.
B. A two-field gyrofluid model for KAW dynamics
The gyrofluid model presented in Appendix B greatly simplifies when restricting to the evolution of the electron gyrocenter density and parallel velocity (assuming N i = T ⊥i = U i = 0, with furthermore an isothermal assumption for the electrons, i.e. T e = 0 and T ⊥e = −B z as deduced from Eqs. (3.68a)-(3.69b) of Ref. [9]). Such a reduced system allows one to focus on Alfvén wave dynamics, neglecting the coupling with slow magnetosonic waves. It retains corrections associated with electron inertia and with temperature ratios of order up to 1/β e ∼ 1/δ, which will in turn imply accounting also for an electron FLR contribution. In order to derive the simplified gyrofluid model, we introduce two further scalings, denoted as scaling IV and V, respectively. Scaling IV is given by
β e = O(δ), τ ∼ ∇ ⊥ = O(1),(133)U e = O ε δ 1/2 , B z = O(δε),(134)A ∼ ∂ z = O(δ 1/2 ε),(135)∂ t ∼ N e ∼ ϕ = O(ε),(136)
whereas scaling V corresponds to
β e = O(δ), τ = O(1/δ), ∇ ⊥ = O(1), (137) U e = O ε δ 1/2 , N e ∼ B z = O(δε),(138)A ∼ ∂ z = O(δ 1/2 ε),(139)∂ t ∼ ϕ = O(ε).(140)
Scaling V accounts for corrections relevant for large τ but is valid for smaller electron gyrocenter density fluctuations.
One then proceeds with applying the scalings IV and V to Eqs. (B1), (B2), (B9), (B10) and (B11), retaining the leading order terms, the corrections of order δ as well as one correction of order δ 2 in Eq. (B2) which, as will be seen a posteriori, allows the final system to be cast in Hamiltonian form. Taking into account the closure relations mentioned at the beginning of Sec. III B and neglecting heat fluxes, as mentioned at the beginning of Sec. III, one obtains two closed systems. Retaining all terms present in both models, similarly with what was done in the case of the uniform model of Sec. II B 2, one is led to the following two-field gyrofluid model
∂ t N e + [ϕ, N e ] − [B z , N e ] + 2 β e ∇ ∆ ⊥ A = 0 (141) ∂ t (1 − 2δ 2 β e ∆ ⊥ )A − [ϕ, 2δ 2 β e ∆ ⊥ A ] + [B z , 2δ 2 β e ∆ ⊥ A ] +∇ (ϕ − N e − B z ) = 0 (142) with 2 β e + (1 + 2τ )( Γ 0 − Γ 1 ) B z = 1 − ( Γ 0 − 1 τ ) − Γ 0 + Γ 1 ϕ (143) N e = ( Γ 0 − 1 τ ) + δ 2 ∆ ⊥ ϕ −(1 − Γ 0 + Γ 1 )B z .(144)
Here, Γ_n denotes the (non-local) operator Γ_n(−τ∆_⊥) associated with the Fourier multiplier Γ_n(τk_⊥^2), defined by Γ_n(x) = I_n(x)e^{−x}, where I_n is the modified Bessel function of the first kind of order n.
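For convenience, we recall the standard small-argument expansions of these functions (added here; they are used in the small-τ limits considered below and in Sec. IV A):

\Gamma_0(b) = 1 - b + \frac{3}{4}\,b^2 + O(b^3), \qquad
\Gamma_1(b) = \frac{b}{2} - \frac{b^2}{2} + O(b^3),

so that, with b = τk_⊥^2,

\frac{1 - \Gamma_0(\tau k_\perp^2)}{\tau} = k_\perp^2\left(1 - \frac{3}{4}\,\tau k_\perp^2\right) + O(\tau^2 k_\perp^6).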
In Eq. (142), the term [B z , 2δ 2 βe ∆ ⊥ A ] is sub-dominant in both scalings IV and V but, as mentioned above, it has been retained for it allows for a Hamiltonian formulation of the model in terms of a Lie-Poisson structure for the 2D limit, extended to 3D according to the procedure discussed in Ref. [4]. We remark that the model and its Hamiltonian structure could also be derived from a driftkinetic equation, by providing the relations (143)-(144) and applying the procedure described in Ref. [19].
We note also that the second term on the right-hand side of the relation (144), which is proportional to δ^2, corresponds to the above mentioned electron FLR correction, which is relevant when τβ_e ∼ 1. Remark: When neglecting the electron mass, i.e. the δ^2 contributions, expression (144) for N_e gives
n i = n e = N e + B z = Γ 0 − 1 τ ϕ + ( Γ 0 − Γ 1 )B z ,(145)
consistent with Eq. (B1) of Ref. [20], originating from the low-frequency linear kinetic theory taken in the regime of adiabatic ions (ζ i = ω/(k z v ti ) ≫ 1 and thus R(ζ i ) ≪ 1). Substituting the expressions for N e and B z in Eqs. (141)-(142), the resulting model only involves the electric and magnetic potentials ϕ and A . In the limit τ ≪ 1 where N e = ∆ ⊥ ϕ and B z = − βe/2 1+βe/2 ∆ ⊥ ϕ, and at large scales, where electron inertia can be neglected, one recovers Eqs. (3.2)-(3.3) and (3.10)-(3.12) of Ref. [9] (when taking the same assumptions mentioned at the beginning of the present Section). In this limit, it is possible to consider a finite value of β e . If, on the other hand, electron inertia is kept into account, this system identifies (neglecting the subdominant term mentioned above) with the reduction to two fields (neglecting the coupling to u i ) of Eqs. (10)- (14). When β e is taken small enough so as to neglect B z contributions, Eqs. (141)-(142) lead to the 2-field model of Refs. [21,22].
∂ t ∆ ⊥ ϕ + [ϕ, ∆ ⊥ ϕ] + 2 β e ∇ ∆ ⊥ A = 0 (146) ∂ t (1 − 2δ 2 β e ∆ ⊥ )A − [ϕ, 2δ 2 β e ∆ ⊥ A ] +∇ (ϕ − ∆ ⊥ ϕ) = 0.(147)
This model can also be derived from Eqs. (55)-(58) in the case τ = 0. It also corresponds the "low-β case" of the two-fluid model of Ref. [23] which restricts to 2D, when the electron pressure gradient in Ohm's law, usually referred as parallel electron compressibility (term ∇ ∆ϕ in Eq. (147)) is not retained.
When τ β e ∼ 1 one has (taking the limit τ ≫ 1), B z = βe 2 ϕ and N e = − βe 2 (1 + 2 βi − 2δ 2 βe ∆ ⊥ )ϕ, where β i denotes the ion beta parameter. After neglecting subdominant corrections proportional to β e , the system reduces to
∂ t (1 + 2 β i − 2δ 2 β e ∆ ⊥ )ϕ − [ϕ, 2δ 2 β e ∆ ⊥ ϕ] − 4 β 2 e ∇ ∆ ⊥ A = 0 (148) ∂ t (1 − 2δ 2 β e ∆ ⊥ )A − [ϕ, 2δ 2 β e ∆ ⊥ A ] +∇ ϕ = 0,(149)
which identifies with the isothermal system (5.9)-(5.10) of Ref. [1] taken for large values of τ when electron FLR corrections are neglected (see also Ref. [24]). This system also reproduces the "high-β case" of Ref. [23] when restricted to 2D. Similarly to many other reduced fluid and gyrofluid models (see Ref. [25] for a recent review), the system (141)-(142), as above mentioned, possesses a noncanonical Hamiltonian structure. In order to show this point, we first observe that the system (141)-(142) can be formulated as an infinite-dimensional dynamical system with the fields N e and A e ≡ (1 − 2δ 2 ∆ ⊥ /β e )A as dynamical variables. Indeed, upon introducing the following positive definite operators
L 1 = 2 β e + (1 + 2τ )( Γ 0 − Γ 1 )(150)L 2 = 1 + 1 − Γ 0 τ − Γ 0 + Γ 1 (151) L 3 = 1 − Γ 0 τ − δ 2 ∆ ⊥ (152) L 4 = 1 − Γ 0 + Γ 1 ,(153)one can write B z = M 1 ϕ, with M 1 = L −1 1 L 2 , and ϕ = −M −1 2 N e , where M 2 = (L 3 + L 4 L −1 1 L 2 )
is positive definite, as numerically seen on its Fourier transform. Also, A = (1 − 2δ 2 ∆ ⊥ /β e ) −1 A e . Thus, B z , ϕ and A can be expressed in terms of the dynamical variables N e and A e . Proving that the system possesses a Hamiltonian structure amounts to show that, given any observable F of the system, i.e. a functional of N e and A e , its evolution can be cast in the form [26]
∂F ∂t = {F, E},(154)
where E is an observable corresponding to the Hamiltonian functional and { , } is a Poisson bracket.
For the system (141)-(142), the Hamiltonian is given by the conserved functional
E = 1 2 2 β e |∇ ⊥ A | 2 + 4δ 2 β 2 e |∆ ⊥ A | 2 − N e (ϕ − N e − B z ) d 3 x,(155)
whereas the Poisson bracket reads
{F, G} = (N e ([F Ne , G Ne ] + δ 2 [F Ae , G Ae ]) +A e ([F Ne , G Ae ] + [F Ae , G Ne ]) +F Ne ∂ z G Ae + F Ae ∂ z G Ne ) d 3 x,(156)
for two observables F and G, and where subscripts on functionals denote functional derivatives. The Poisson bracket (156) corresponds, up to the normalization, to the Poisson bracket for the model of Ref. [21], when the latter is reduced to a two-field model by setting the ion density fluctuations proportional to the vorticity fluctuations. As is common with noncanonical Hamiltonian systems [26], the Poisson bracket (156) possesses Casimir invariants, corresponding to
C ± = G ± d 3 x,(157)
where G ± = A e ± δN e are referred to as normal fields [27]. In terms of the normal fields, the system (141)-(142) rewrites in the form
∂ t G ± + [ϕ ± , G ± ] + ∂ z ϕ ± ∓ 1 δ G ± = 0,(158)
where ϕ ± = ϕ − B z ± 1 δ A . In the 2D limit with translational symmetry along z, the Poisson bracket takes the form of a direct product and the system possesses two infinite families of Casimir invariants, given by
C ± = C ± (G ± )d 2 x,(159)
with C ± arbitrary functions. In particular, one has the quadratic invariants G 2 ± d 2 x, leading to the classical conservation of the magnetic potential in 2D MHD. In 2D, Eqs. (158) take the form of advection equations for the Lagrangian invariants G ± transported by incompressible velocity fields v ± =ẑ × ∇ϕ ± . Such Lagrangian invariants and velocity fields generalize those of the model of Ref. [28].
We observe that the system admits also a further conserved quantity (which is not a Casimir invariant) corresponding to the generalized helicity
H = 1 2 N e 1 − 2δ 2 β e ∆ ⊥ A d 3 x.(160)
This expression is similar (to dominant order) to the electron generalized helicity when making the assumptions u i = 0 and τ ≪ 1, where N e then identifies to the vorticity. The latter also rewrites
H = 1 8 (G 2 + − G 2 − )d 3 x.(161)
At large scales, where N e = ∆ ⊥ ϕ and A e = A , one has H = −(1/2) ∇A ·∇ϕd 3 x = (1/2) B ⊥ ·u ⊥ d 3 x which is the usual MHD cross-helicity.
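The last identification follows from an integration by parts together with the elementary identity (∇A_∥ × ẑ)·(ẑ × ∇ϕ) = −∇A_∥·∇ϕ (a short check added here for completeness):

H \simeq \frac{1}{2}\int \Delta_\perp\varphi\, A_\parallel\, d^3x
 = -\frac{1}{2}\int \nabla_\perp A_\parallel\cdot\nabla_\perp\varphi\, d^3x
 = \frac{1}{2}\int (\nabla A_\parallel\times\hat z)\cdot(\hat z\times\nabla\varphi)\, d^3x
 = \frac{1}{2}\int B_\perp\cdot u_\perp\, d^3x,

with u_⊥ = ẑ × ∇ϕ and B_⊥ = ∇A_∥ × ẑ.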
IV. PHENOMENOLOGY OF CRITICALLY-BALANCED KAW TURBULENCE
In this section, we use the two-field gyrofluid model to phenomenologically characterize the energy and/or helicity cascades which develop in strong KAW turbulence. The aim is to predict the transverse magnetic energy spectrum together with the direct or inverse character of the cascades in the different spectral ranges delimited by the plasma characteristic scales.
[Figure caption fragment: ... for the three values of τ, with the same color code as for v_ph. The transition between MHD and sub-ion scales occurs at the smaller of the two scales ρ_i and ρ_s (which corresponds to k_⊥ = 1). The orange straight line indicates the k_⊥^{−1} asymptotic behavior in the large-τ limit.]
A. Linear theory
At the linear level, using a hat to indicate Fourier transform of fields and Fourier symbols of operators, one has the phase velocity v ph given by the dispersion relation
v_{ph}^2 \equiv \left(\frac{\omega}{k_z}\right)^2 = \frac{2}{\beta_e}\,\frac{k_\perp^2}{1 + 2\delta^2 k_\perp^2/\beta_e}\,\frac{1 - \hat M_1 + \hat M_2}{\hat M_2},    (162)
where 1 − M 1 + M 2 is strictly positive for all k ⊥ . The associated eigenmodes obey
\hat A_\parallel = \frac{\beta_e}{2}\, v_{ph}\, \frac{\hat M_2}{k_\perp^2}\, \hat\varphi.    (163)
A graph of v ph (k ⊥ ) is displayed in Fig. 1 for the cases β e = 0.002, τ = 100 (red), β e = 0.01, τ = 0.5 (black) and β e = 0.05, τ = 0.001 (blue). An important difference that appears at large τ , in addition to the shift of the dispersive zone towards smaller k ⊥ (due to the fact that ρ i is larger than ρ s , here by a factor √ 200), is that at sub-d e scales, v ph does not stay constant but decreases as k ⊥ increases (asymptotically like k −1 ⊥ in the large τ limit), as in the full kinetic theory [1]. In the absence of the δ 2 term in L 3 , v ph would be constant at small scales.
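The dispersion relation (162) is straightforward to evaluate numerically from the Fourier symbols of the operators (150)-(153). The following minimal sketch (an illustration added here, not part of the original text; it assumes a hydrogen mass ratio and uses the parameters quoted for the black curve of Fig. 1) relies on scipy.special.ive, which returns Γ_n(x) = I_n(x)e^{−x}:

```python
import numpy as np
from scipy.special import ive   # ive(n, x) = I_n(x) * exp(-x) = Gamma_n(x)

def v_ph(kperp, beta_e, tau, delta2):
    """Phase velocity omega/k_z from Eq. (162), using the symbols of L1-L4."""
    b = tau * kperp**2
    G0, G1 = ive(0, b), ive(1, b)
    L1 = 2.0 / beta_e + (1.0 + 2.0 * tau) * (G0 - G1)   # Eq. (150)
    L2 = 1.0 + (1.0 - G0) / tau - G0 + G1               # Eq. (151)
    L3 = (1.0 - G0) / tau + delta2 * kperp**2           # Eq. (152)
    L4 = 1.0 - G0 + G1                                  # Eq. (153)
    M1 = L2 / L1
    M2 = L3 + L4 * M1
    vph2 = (2.0 / beta_e) * kperp**2 / (1.0 + 2.0 * delta2 * kperp**2 / beta_e) \
        * (1.0 - M1 + M2) / M2
    return np.sqrt(vph2)

beta_e, tau, delta2 = 0.01, 0.5, 1.0 / 1836.0   # black curve of Fig. 1; assumed m_e/m_i
for k in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"k_perp rho_s = {k:7.2f}   v_ph / c_s = {v_ph(k, beta_e, tau, delta2):9.3f}")
```

The transition of v_ph from its MHD value (2/β_e)^{1/2} to the dispersive KAW branch around k_⊥ρ_s ∼ 1 is recovered, as in Fig. 1.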
Interestingly, when assuming relation (163) in formula (155) for the energy E, the sum of the first two terms of the energy E equals that of the last three ones.
The magnetic compressibility χ = | B z | 2 /| B ⊥ | 2 associated with the Alfvén eigenmode is then given by
\chi = \frac{2}{\beta_e}\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right)\frac{\hat M_1^2}{(1 - \hat M_1 + \hat M_2)\,\hat M_2}.    (164)
Small τ limit (τ ∼ β_e^{1/2}): In this regime, \hat M_1 ∼ β_e k_⊥^2/2 and is thus negligible (and so is B_z). On the other hand, \hat M_2 = (1 − Γ_0)/τ + O(δ^2) ≈ k_⊥^2(1 − 3τk_⊥^2/4), leading to the dispersion relation

\left(\frac{\omega}{k_z}\right)^2 = \frac{2}{\beta_e}\,\frac{1 + k_\perp^2 + \frac{3}{4}\tau k_\perp^2}{1 + 2\delta^2 k_\perp^2/\beta_e},    (165)
consistent with the fluid formula given by Eq. (A5).
B. Absolute equilibria
The invariants can be rewritten
E = \frac{1}{2}\int\left[(1 - \hat M_1 + \hat M_2)\,\hat M_2\,|\hat\varphi|^2 + \frac{2k_\perp^2}{\beta_e}\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right)|\hat A_\parallel|^2\right]d^2k_\perp\,dk_z    (166)

H = -\frac{1}{2}\int \hat M_2\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right)\left(\hat\varphi_R\hat A_R + \hat\varphi_I\hat A_I\right)d^2k_\perp\,dk_z    (167)
with ϕ = ϕ R +i ϕ I and A = A R +i A I , when separating real and imaginary parts. Based on the existence of such quadratic invariants, a classical tool for predicting the direction of turbulent cascades is provided by the behavior of the spectral density of the corresponding invariants in the regime of absolute equilibrium. Albeit turbulence is intrinsically a nonequilibrium regime and a turbulent spectrum strongly differs from an equilibrium spectrum, the increasing or decreasing variation of the latter in the considered spectral range can be viewed as reflecting the direction of the turbulent transfer and thus the direct or inverse character of the cascade. An early application of this approach to incompressible MHD is found in Ref. [29].
In order to apply equilibrium statistical mechanics to the system consisting in a finite number of Fourier modes obtained by spectral truncation of the fields A and ϕ governed by Eqs. (141) and (144), one first easily checks that the solution satisfies the Liouville's theorem conditions in the form
k ∂ ∂ ϕ Rk ∂ ϕ Rk ∂t + ∂ ∂ ϕ Ik ∂ ϕ Ik ∂t = 0 (168) k ∂ ∂ A Rk ∂ A Rk ∂t + ∂ ∂ A Ik ∂ A k ∂t = 0.(169)
The density in phase space of the canonical equilibrium ensembles for the system (141)-(142), truncated in Fourier space, is given by
ρ = Z^{-1}\exp(-λE - µH) = Z^{-1}\exp(-M_{ij}x_ix_j/2), where Z is the partition function. The matrix M is defined as

M = \begin{pmatrix} f & 0 & h & 0 \\ 0 & f & 0 & h \\ h & 0 & g & 0 \\ 0 & h & 0 & g \end{pmatrix},

where f = λ(1 − \hat M_1 + \hat M_2)\hat M_2, g = λ\,\frac{2k_\perp^2}{\beta_e}\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right) and h = \frac{\mu}{2}\,\hat M_2\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right). Here, λ and µ denote numerical constants prescribed by the values of the total energy and helicity. The symbols x_i, i = 1, ..., 4, refer to \hat\varphi_R, \hat\varphi_I, \hat A_R and \hat A_I. The inverse matrix easily writes

M^{-1} = \frac{1}{\Delta}\begin{pmatrix} g & 0 & -h & 0 \\ 0 & g & 0 & -h \\ -h & 0 & f & 0 \\ 0 & -h & 0 & f \end{pmatrix},
with ∆ = f g − h 2 . Without dissipation, the statistical equilibrium has an energy spectral density
E_k \sim \frac{1}{\lambda}\,2\pi k_\perp\left(f\,E^{\varphi}_k + g\,E^{A}_k\right)    (170)
and a helicity spectral density
H_k \sim \frac{1}{\mu}\,4\pi k_\perp\, h\, E^{\varphi A}_k,    (171)
where E^{\varphi}_k = g/\Delta, E^{A}_k = f/\Delta and E^{\varphi A}_k = -h/\Delta. The cascade directions are forward or backward, depending on whether the absolute equilibrium spectra are respectively growing or decreasing in the wavenumber ranges of interest. The energy spectrum rewrites
E_k \sim \frac{4\pi}{\lambda}\,\frac{k_\perp}{1 - \dfrac{\mu^2}{4\lambda^2}\,\dfrac{1}{v_{ph}^2}}.    (172)

The positivity condition prescribes constraints on the wavenumber domain where this formula applies. The condition µ/λ ≤ 2 min(v_ph) (where 2 min(v_ph) = (8/β_e)^{1/2} for small τ, but is smaller for larger values of τ) ensures that the energy spectrum is defined for all wavenumbers.
For larger values of µ/λ, there is a lower bound in k ⊥ and possibly also an upper bound, for which E k > 0. As v ph is bounded from above, it might happen that the energy is never positive. A more detailed study would require to explicitly relate the constants µ and λ to the total energy and helicity. Nevertheless, in all the cases where it is defined, the energy is found to be a growing function of k ⊥ (except possibly near the lower k ⊥ bound where it has a singular behavior), whatever the values of β e and τ , indicating a forward cascade. The generalized helicity spectrum, on the other hand, rewrites
H_k \sim -\frac{4\pi}{\mu}\,\frac{k_\perp}{\dfrac{4\lambda^2}{\mu^2}\,v_{ph}^2 - 1},    (173)
which is negative. We thus have the relation H k = −µ/(4λ)E k /v 2 ph . Note however that there is no definite sign for this spectrum. In the same wavenumber ranges where the energy is positive, its absolute value is a growing quantity both at MHD and sub-d e scales. However, in the intermediate (sub-ρ s or sub-ρ i ) range, where ω/k z ∼ k ⊥ , it is a decreasing function of k ⊥ , indicating an inverse cascade. Note that when the −7/3 power law of the turbulent transverse magnetic energy spectrum is not well developed (see next Section), the range of generalized helicity inverse cascade is also very limited. Similar results showing an inverse (or direct) helicity cascade in the Hall (respectively sub-electronic) range are obtained in Ref. [30] based on absolute equilibrium arguments in extended MHD (XMHD).
C. Turbulent spectra
Energy cascade
We here discuss the turbulent state in the presence of a small amount of dissipation at small scales (leading to a finite flux of energy), focusing on the case of a critically balanced KAW cascade (with equal amount of positively and negatively propagating waves). Following the discussion of Section 7 in Ref. [1], the magnetic spectrum is easily obtained by imposing a constant energy flux, estimated by ratio of the spectral energy density at a given scale by the nonlinear transfer time at this scale. In the strong wave (critically-balanced) turbulence regime, this energy transfer time reduces to the nonlinear timescale. To estimate these quantities, it is first necessary to relate the Fourier components of the electric and magnetic potentials. This is achieved assuming the linear relationship provided by Eq. (163), characteristic of Alfvén modes. After inserting this relation into the energy E one finds that the total 3D spectral energy density writes
E 3D k = 2 β e k 2 ⊥ 1 + 2δ 2 k 2 ⊥ β e | A k | 2(174)
Due to the quasi-2D character of the dynamics, it is convenient to deal with the 2D energy spectrum
E 2D k = 2 β e k 2 ⊥ 1 + 2δ 2 k 2 ⊥ β e | A k ⊥ | 2(175)
where we used the notation
| A k ⊥ | 2 = | A k | 2 dk z ,(176)
and assume statistical isotropy in the transverse plane. Similar definitions are used for the other relevant fields, namely the electrostatic potential ϕ and the transverse magnetic field B ⊥ . The nonlinear timescale is estimated from Eq. (142) which, after discarding the B z terms (smaller by a factor β e ) and the ∂ z terms, can be rewritten
∂ t A e + [ϕ, A e ] − [A , M 2 ϕ] = 0.(177)
Assuming locality of the nonlinear interactions in Fourier space, the typical frequencies at wavenumber k ⊥ associated with the two nonlinear terms of the above equation take the form
τ_{NL1}^{-1}(k_⊥) ∼ k_⊥² |φ̂_{k_⊥}| and τ_{NL2}^{-1}(k_⊥) ∼ k_⊥² M_2 |φ̂_{k_⊥}|/(1 + 2δ²k_⊥²/β_e), respectively.
The global nonlinear frequency of the system can be estimated by a linear combination of these two frequencies. Taking equal weights leads to the estimate
\tau_{NL}^{-1}(k_\perp) \sim \frac{2}{\beta_e}\,k_\perp^4\left(1 + \frac{M_2}{1 + \dfrac{2\delta^2 k_\perp^2}{\beta_e}}\right)\frac{1}{M_2\,v_{ph}}\,|\widehat{A}_{k_\perp}|. \qquad (178)

In two dimensions, when assuming isotropy, the transverse magnetic energy spectral density |\widehat{B}_\perp(k_\perp)|^2 \sim k_\perp^2 |\widehat{A}_{k_\perp}|^2 is related to the transverse magnetic energy spectrum by E_{B_\perp}(k_\perp) \sim k_\perp^{-1}|\widehat{B}_\perp(k_\perp)|^2. The energy flux \varepsilon then writes

\varepsilon \sim \frac{4}{\beta_e^2}\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e} + M_2\right)\frac{1}{M_2\,v_{ph}}\,k_\perp^3\,|\widehat{B}_\perp(k_\perp)|^3, \qquad (179)
and thus, assuming a constant energy flux, one gets
E_{B_\perp}(k_\perp) \sim \varepsilon^{2/3}\,\beta_e^{4/3}\,k_\perp^{-3}\left(\frac{v_{ph}\,M_2}{1 + \dfrac{2\delta^2 k_\perp^2}{\beta_e} + M_2}\right)^{2/3}. \qquad (180)
All the regimes of the KAW energy cascade can be recovered from Eq. (180), as listed below; a numerical check of the corresponding slopes is sketched after the list.
• MHD range. At scales large compared to ρ_s and ρ_i, one has v_ph ∼ (2/β_e)^{1/2}, M_2 = k_⊥² and k_⊥ ≪ 1. One thus immediately finds
E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-5/3}.
• Sub-ρ_i range. When (β_e/2)^{1/2}/δ ≥ k_⊥ ≥ (2τ)^{-1/2} and τ ≥ 1 (i.e. for scales smaller than the ion gyroradius (assumed larger than ρ_s), for which Γ_0 ≈ 0 and Γ_1 ≈ 0, and large enough for electron inertia to be negligible), one has M_2 ∼ 1/τ + β_e(1+τ)/(2τ) ∼ constant and v_ph ∼ k_⊥, so that E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-7/3}.
• Sub-ρ_s range. When, on the other hand, τ ≤ 1, for scales intermediate between ρ_s and d_e, characterized by k_⊥ ≫ 1 and 2δ²k_⊥²/β_e ≪ 1, one finds M_2 ∼ k_⊥² and v_ph ∼ (2/β_e)^{1/2} k_⊥, so that again E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-7/3}. It is however to be noted that in this case the smallest nonlinear time scale is not the stretching time τ_{NL1} but rather τ_{NL2}, associated with the electron pressure term in Ohm's law or, equivalently, with the Hall term, as previously mentioned.
• Sub-d_e range. When β_e is small enough, it is possible to observe a third power law at scales smaller than the electron inertial length (but still larger than the electron Larmor radius).
− When τ ≪ 1, the −7/3 power-law zone is almost nonexistent. It is replaced by a smooth transition between the −5/3 power law and a steeper zone where v_ph ∼ constant, M_2 ∼ k_⊥², and thus where E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-3}.
− If τ is taken larger than unity, v_ph ∼ k_⊥^{-1} and M_2 ∼ k_⊥², leading to E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-11/3}.
− Note that for a small range of parameters where β_e ≪ 1 and τ = O(1), a regime where one can have v_ph ∼ constant and M_2 ∼ constant, one recovers a spectrum of the form E_{B_⊥}(k_⊥) ∼ ε^{2/3} k_⊥^{-13/3}, as mentioned in Ref. [1].
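The slopes listed above can be checked directly from the structure of Eq. (180). The short sketch below is illustrative only: in each range it takes the power-law scalings of v_ph, M_2 and of the dominant term of the bracketed denominator D = 1 + 2δ²k_⊥²/β_e + M_2 as inputs (these exponents are read off the text, not recomputed from the gyrofluid model) and evaluates the resulting exponent of E_{B_⊥} ∼ k_⊥^{-3}(v_ph M_2/D)^{2/3}.

```python
from fractions import Fraction

def slope(a_vph, b_M2, c_den):
    """Exponent of E_B ~ k^-3 (v_ph M_2 / D)^(2/3) when, in a given range,
    v_ph ~ k^a, M_2 ~ k^b and the dominant term of D scales like k^c."""
    return Fraction(-3) + Fraction(2, 3) * (a_vph + b_M2 - c_den)

# Scalings quoted in the text for each wavenumber range (assumed inputs).
regimes = {
    "MHD range            (v_ph~k^0,  M_2~k^2, D~k^0)": (0, 2, 0),
    "sub-rho_i, tau>=1    (v_ph~k^1,  M_2~k^0, D~k^0)": (1, 0, 0),
    "sub-rho_s, tau<=1    (v_ph~k^1,  M_2~k^2, D~k^2)": (1, 2, 2),
    "sub-d_e,   tau<<1    (v_ph~k^0,  M_2~k^2, D~k^2)": (0, 2, 2),
    "sub-d_e,   tau>1     (v_ph~k^-1, M_2~k^2, D~k^2)": (-1, 2, 2),
    "sub-d_e, beta_e<<1, tau~1 (v_ph~k^0, M_2~k^0, D~k^2)": (0, 0, 2),
}
for name, (a, b, c) in regimes.items():
    print(f"{name}:  E_B ~ k^({slope(a, b, c)})")
# Expected output: -5/3, -7/3, -7/3, -3, -11/3, -13/3, matching the text.
```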
Generalized helicity cascade
We here derive the expected transverse magnetic energy spectrum associated with a generalized helicity cascade. Proceeding as in the case of the energy cascade, we first write the 3D spectral density (taken positive)
H^{3D}_k = \frac{1}{\beta_e\,v_{ph}}\left(1 + \frac{2\delta^2 k_\perp^2}{\beta_e}\right)k_\perp^2\,|\widehat{A}_k|^2. \qquad (181)
Keeping the same estimate for the transfer time, and assuming a constant generalized helicity flux rate η, we obtain the magnetic spectrum in the helicity cascade
E_{B_\perp}(k_\perp) \sim \eta^{2/3}\,\beta_e^{4/3}\,k_\perp^{-3}\left(\frac{v_{ph}^2\,M_2}{1 + \dfrac{2\delta^2 k_\perp^2}{\beta_e} + M_2}\right)^{2/3}. \qquad (182)
Going through the same estimates in the various wavenumber domains as for the energy cascade, we now see that the magnetic spectrum in the helicity cascade obeys a −5/3 power law from the MHD range down to the electron scale. At scales smaller than d_e, we find instead that for τ ≪ 1 the spectrum is proportional to k_⊥^{-3}, while it is otherwise proportional to k_⊥^{-13/3}.
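As a check of the first statement, the MHD-range slope follows directly from Eq. (182): with v_ph ∼ (2/β_e)^{1/2} constant, M_2 ∼ k_⊥² and a denominator of order unity for k_⊥ ≪ 1,

E_{B_\perp}(k_\perp) \sim \eta^{2/3}\,\beta_e^{4/3}\,k_\perp^{-3}\left(\frac{2}{\beta_e}\,k_\perp^{2}\right)^{2/3}
\sim \eta^{2/3}\,\beta_e^{2/3}\,k_\perp^{-3+4/3}
\sim \eta^{2/3}\,k_\perp^{-5/3},

up to β_e-dependent prefactors.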
It is of interest to remark that this latter scaling is somewhat similar to the M_{H+} spectrum of [31], associated with the magnetic spectrum of the magnetosonic-cyclotron branch in the so-called H-generalized helicity cascade, computed on exact solutions of an extended MHD model (with the caveat that in [31] a singularity appears at the d_e scale).
Examples of transverse magnetic energy spectra are displayed for the parameters β_e = 0.002, τ = 100 (Fig. 2), β_e = 0.01, τ = 0.5 (Fig. 3) and β_e = 0.05, τ = 0.001 (Fig. 4), both for the absolute equilibria (long dashed lines) of the energy (black) and the generalized helicity (red), and for the turbulent magnetic spectra (solid lines) associated with the energy cascade (black) and the helicity cascade (red). The helicity inverse cascade, associated with the decreasing absolute equilibrium spectrum in the sub-ion range, is conspicuous in the case of large τ, but less pronounced for τ of order unity.
V. DISCUSSION AND CONCLUSION
In this paper, two new reduced models have been derived for low-β_e plasmas. One of them, given by Eqs. (55)-(60), concerns the small-τ regime and extends the four-field model of Ref. [5] by retaining electron inertia. Both a fluid derivation and a reduction of the gyrofluid model of Ref. [6] are presented. Interestingly, agreement between the two formulations requires closure assumptions consistent with the underlying scaling, such as adiabatic ions. The other model, given by Eqs. (148)-(149), is a two-field gyrofluid model, valid for any τ, which retains both electron inertia and B_z fluctuations, in addition to ion FLR contributions. It is used to present a comprehensive phenomenological description of the Alfvén wave magnetic energy spectrum from the MHD scales to scales smaller than d_e (while larger than ρ_e). Assuming the existence of energy or helicity cascades, this leads to the prediction of the magnetic energy spectrum when neglecting possible intermittency effects originating from the presence of coherent structures. The existence of these cascades needs to be confirmed by numerical simulations of the gyrofluid equations supplemented by dissipation and energy and/or helicity injection. In particular, the inverse helicity cascade is expected to occur only when the system is driven at a scale close to d_e, in a way that mostly injects helicity rather than energy. In fact, Eq. (161) shows that a non-zero helicity corresponds to an imbalanced regime where either G+ or G− dominates. It is interesting to note that the evidence of an inverse helicity cascade in numerical simulations of imbalanced EMHD turbulence was reported in Refs. [32,33]. Analytic considerations on the role of helicity in weak REMHD turbulence can also be found in Ref. [34]. An imbalanced energy injection could possibly originate from magnetic reconnection taking place at the electronic scales. This scenario was recently considered in Ref. [35] on the basis of 2D hybrid PIC and Vlasov simulations, where the development of a sub-ion magnetic energy spectrum occurs in relation with the reconnection instability, before the direct energy cascade reaches this scale.
In the framework of the two-field gyrofluid model, the transition scale between the k_⊥^{-5/3} and the k_⊥^{-7/3} ranges occurs at the largest of the two scales ρ_i and ρ_s. When τ is small, this will also be the case with Eqs. (55)-(60), which retain the coupling to n = −(2/β_e)B_z and u_i, as shown by using the same arguments as in Appendix E.4 of Ref. [11]. By contrast, for τ ∼ 1 and small β_e, a spectral transition is observed to take place at the scale d_i, both in the solar wind [36] and in hybrid-PIC simulations [8]. The question arises whether a similar transition could also be observed in numerical simulations of reduced models, induced by the presence of current sheets and the occurrence of reconnection processes, or if more physics has to be taken into account.
Note that while the magnetic energy spectrum displays a k −7/3 ⊥ range both below the ion Larmor radius and below ρ s when β e is small, the perpendicular electric field spectrum scales like k −1/3 ⊥ in the former regime and like k −13/3 ⊥ in the latter one.
The two-field gyrofluid model derived in this paper could be extended to account for electron Landau damping, a crucial ingredient at small β e , with either a Landau fluid formulation, as suggested in Ref. [1], or with the coupling with a drift-kinetic equation. In the latter case, it could provide an interesting generalization of the model presented in [37], by taking into account the parallel magnetic field fluctuations and thus permitting larger values of β e .
At sub-d_e scales, a new regime is uncovered in the case of cold ions (small τ), where the magnetic energy density scales like k_⊥^{-3}. Compressibility here plays a central role, which explains the difference with the cases τ ∼ 1, where the spectrum scales like k_⊥^{-13/3}, or τ ≫ 1 (a quasi-incompressible limit), where it scales like k_⊥^{-11/3}. Scales smaller than ρ_e are not considered in this paper, as they require a full description of the electron FLR effects. In this regime, the spectrum is observed to be even steeper [38], possibly associated with a phase-space entropy cascade [11].
We have here considered the regime of strong wave turbulence where critical balance holds. Due to this property, the estimates of the nonlinear times and the relation between the fields turn out to be identical to those of the purely non-linear regime that occurs for example in two dimensions.
which corresponds to the scaling II treated in Sec. II B 1. Applying ordering (86)-(89) to Eqs. (B1), (B2), (B5), (B6), (B7), (B9), (B10), imposing P_⊥i − N_i = 0, neglecting the term proportional to v_A²/c² in Eq. (B9) and retaining leading order terms as well as corrections of order τ (or, equivalently, of order δ), we obtain
, respectively. Combining Eq. (103) with Eq. (105) and introducing the potential ϕ i defined in Eq.(38), we obtain Eq. (37). Closing the system by means of relation (107), we then retrieve the 3-field model derived in Sec. II B 1. With regard to ion temperature fluctuations, due to the assumption T ⊥i = P ⊥i − N i = 0, from Eqs. (99) and (102), we obtain t ⊥i = ∆ ⊥ ϕ, up to corrections of order O(τ ε), which is namely the hypothesis underlying the closure of the 3-field model, as derived from the two-fluid description. With regard to the parallel temperature, from Eqs. (
(118) and (116), whereas Eq. (130) can be obtained from Eq. (116), when applying transformation (123) and imposing, as previously assumed, ∆ ⊥ ϕ = t ⊥i . Equations (129) and (130) coincide with Eqs. (49) and (50) respectively.
FIG. 1. Phase velocity of KAWs v_ph versus k_⊥ for β_e = 0.002, τ = 100 (red), β_e = 0.01, τ = 0.5 (black) and β_e = 0.05, τ = 0.001 (blue). The vertical dotted lines refer to the inverse ion Larmor radius ρ_i^{-1}.
FIG. 2. Turbulent magnetic spectra (solid lines) in energy (black) and generalized helicity (red) cascades, together with absolute equilibrium energy (black long dashed lines) and generalized helicity (red long dashed lines) spectra for β_e = 0.002, τ = 100. Straight orange lines refer to the slopes of the various power-law inertial ranges: −5/3 in the MHD range, −7/3 in the sub-ion Larmor radius range and −11/3 (for the energy cascade) or −13/3 (for the helicity cascade) in the sub-d_e range. The blue solid vertical line refers to ρ_s^{-1}, the brown and blue long-dashed (respectively dotted) vertical lines to the inverse ion and electron inertial lengths (respectively Larmor radii).
FIG. 3. Same as Fig. 2 for β_e = 0.01, τ = 0.5. No sub-ion power-law range is visible. Both for the energy and helicity cascades, the magnetic spectrum displays a −13/3 sub-d_e power-law range.

FIG. 4. Same as Fig. 2 for β_e = 0.05, τ = 0.001. A −7/3 power-law turbulent magnetic spectrum in the energy cascade is visible for k_⊥ > 1, while, both for the energy and helicity cascades, the magnetic spectrum displays a −3 sub-d_e power-law range.
Acknowledgments: We are thankful to W. Dorland for useful discussions.

Appendix A: Dispersion relation

The system (55)-(58), when linearized about a uniform state, leads to a linear system in which ω, k_z and k_⊥ are respectively the frequency and the parallel and perpendicular wavenumbers of harmonic perturbations, whose complex Fourier coefficients are denoted with a hat symbol. This system supports two kinds of waves, kinetic Alfvén waves (KAWs) and slow magnetosonic waves (SWs). Ion parallel velocity plays a minor role in the dispersion relation of KAWs, which can thus be approximated by Eq. (A5). It turns out that this approximation is excellent for a wide range of values of τ and β_e in the whole spectral domain. Another simplification, consisting in taking the cold ion limit and dropping some subdominant contributions proportional to δ², allows one to obtain the slow branch. The dispersion relation then reduces to Eq. (A6). It is easy to verify that the KAW dispersion relation given in Eq. (A5), taken for τ = 0, can be recovered from Eq. (A6) when ω/k_z ≫ 1. The slow magnetosonic branch is such that ω/k_z ∼ 1 at large scale, with a small dispersive component at small scale (a good approximation to the solution is given by ω/k_z = (1 + k_⊥²)^{-1/2}). From these results, one can estimate, for both kinds of waves and within scaling II, the values of ζ_r = ω/(k_z v_th,r) both for ions (for which v_th,i ∼ τ^{1/2} ∼ δ^{1/2}) and for electrons (for which v_th,e ∼ δ^{-1}). One has, for KAWs, ζ_i ∼ δ^{-3/2} ≫ 1 and ζ_e ∼ 1, while for SWs, ζ_i ∼ δ^{-1/2} ≫ 1 and ζ_e ∼ δ ≪ 1. It is thus a reasonable approximation to assume adiabatic ions and isothermal electrons. The good agreement between kinetic theory and an isothermal equation of state for the electrons, even when ζ_e ∼ 1, is shown in Ref. [1].

Appendix B: Parent gyrofluid model

We adopt the same definitions as in Ref. [9] and consider the following gyrofluid equations for the evolution of the gyrocenter moments N_{e,i}, U_{e,i}, P_{e,i}, P_{⊥e,i}, Q_{e,i}, Q_{⊥e,i}, R_{⊥e,i} and R_{⊥⊥e,i}, corresponding to the normalized fluctuations of gyrocenter density, parallel velocity, parallel and perpendicular pressure, parallel and perpendicular heat flux, and of the parallel/parallel and parallel/perpendicular components of the energy-weighted pressure tensor, respectively, with the subscripts e and i referring to electrons and ions:

∂P_⊥e/∂t + [(1 + δ²∆_s)e^{δ²∆_s}ϕ, P_⊥e] ...

together with Poisson's equation and the parallel and perpendicular Ampère's laws, which respectively read

(v_A²/c²)∆_⊥ϕ = e^{δ²∆_s}N_e + δ²∆_s e^{δ²∆_s}(P_⊥e − N_e) − (I_0(2δ²∆_s)e^{2δ²∆_s} − 1)ϕ

and

B_z = −(β_e/2)[ e^{δ²∆_s}P_⊥e + δ²∆_s e^{δ²∆_s}(P_⊥e − N_e) − (I_0(2δ²∆_s) − I_1(2δ²∆_s))e^{2δ²∆_s}ϕ + 2(I_0(2δ²∆_s) − I_1(2δ²∆_s))e^{2δ²∆_s}B_z + τ e^{τ∆_s}P_⊥i + τ²∆_s e^{τ∆_s}(P_⊥i − N_i) ].

The operators Γ_0, Γ_1 and ∆_s are defined as Γ_0(z, z′) = I_0(zz′) exp(z + z′), Γ_1(z, z′) = I_1(zz′) exp(z + z′) and ∆_s = (1/2)∆_⊥, with I_0 and I_1 indicating the modified Bessel functions of the first kind of order zero and one, respectively. The set of gyrofluid equations (B1)-(B11) was derived in Ref. [6], although with a different normalization and with the combination I_0 + I_1 instead of I_0 − I_1 in Eqs. (B9) and (B11). In Eqs. (B1)-(B11), we corrected a few typographical errors that were present in the corresponding equations of Ref. [9] (where they had no effect in the considered asymptotics).
[1] T. Passot, P. L. Sulem, and E. Tassi, J. Plasma Phys. 83, 715830402 (2017).
[2] R. Fitzpatrick and F. Porcelli, Phys. Plasmas 11, 4713 (2004).
[3] R. Fitzpatrick and F. Porcelli, Phys. Plasmas 14, 049902 (2007).
[4] E. Tassi, P. J. Morrison, D. Grasso, and F. Pegoraro, Nucl. Fusion 50, 034007 (2010).
[5] C. T. Hsu, R. D. Hazeltine, and P. J. Morrison, Phys. Fluids 29, 1480 (1986).
[6] A. Brizard, Phys. Fluids B 4, 1213 (1992).
[7] S. S. Cerri and F. Califano, New J. Phys. 19, 025007 (2017).
[8] L. Franci, S. Landi, L. Matteini, A. Verdini, and P. Hellinger, Astrophys. J. 833, 91 (2016), arXiv:1610.05158 [physics.space-ph].
[9] E. Tassi, P. L. Sulem, and T. Passot, J. Plasma Phys. 82, 705820601 (2016).
[10] L. Comisso, D. Grasso, E. Tassi, and F. L. Waelbroeck, Phys. Plasmas 19, 042103 (2012).
[11] A. A. Schekochihin, S. C. Cowley, W. Dorland, G. W. Hammett, G. G. Howes, E. Quataert, and T. Tatsuno, Astrophys. J. Suppl. 182, 310 (2009).
[12] S. Boldyrev, C. H. K. Chen, Q. Xia, and V. Zhdankin, Astrophys. J. 806, 238 (2015), arXiv:1507.00416 [physics.space-ph].
[13] N. Andrés, L. Martin, P. Dmitruk, and D. Gómez, Phys. Plasmas 21, 072904 (2014).
[14] A. A. Schekochihin, S. C. Cowley, F. Rincon, and M. S. Rosin, Mon. Not. R. Astron. Soc. 405, 291 (2010).
[15] B. D. Scott, Phys. Plasmas 14, 102318 (2007).
[16] E. V. Belova, Phys. Plasmas 8, 3936 (2001).
[17] B. Scott, Phys. Plasmas 17, 102306 (2010).
[18] P. B. Snyder and G. W. Hammett, Phys. Plasmas 8, 3199 (2001).
[19] E. Tassi, Annals of Physics 362, 239 (2015).
[20] T. Passot and P. L. Sulem, Phys. Plasmas 14, 082502 (2007).
[21] T. J. Schep, F. Pegoraro, and B. N. Kuvshinov, Phys. Plasmas 1, 2843 (1994).
[22] D. Borgogno, D. Grasso, F. Porcelli, F. Califano, F. Pegoraro, and D. Farina, Phys. Plasmas 12, 032309 (2005).
[23] D. Biskamp, E. Schwarz, and J. F. Drake, Phys. Plasmas 4, 1002 (1997).
[24] C. H. K. Chen and S. Boldyrev, Astrophys. J. 842, 122 (2017), arXiv:1705.08558 [physics.space-ph].
[25] E. Tassi, Eur. Phys. J. D 71, 269 (2017).
[26] P. J. Morrison, Rev. Mod. Phys. 70, 467 (1998).
[27] F. L. Waelbroeck, R. D. Hazeltine, and P. J. Morrison, Phys. Plasmas 16, 032109 (2009).
[28] E. Cafaro, D. Grasso, F. Pegoraro, F. Porcelli, and A. Saluzzi, Phys. Rev. Lett. 80, 4430 (1998).
[29] U. Frisch, A. Pouquet, J. Léorat, and A. Mazure, J. Fluid Mech. 68, 769 (1975).
[30] G. Miloshevich, M. Lingam, and P. J. Morrison, New Journal of Physics 19, 015007 (2017).
[31] H. M. Abdelhamid, M. Lingam, and S. M. Mahajan, Astrophys. J. 829, 87 (2016).
[32] J. Cho, J. Phys. Conf. Ser. 719, 012001 (2016).
[33] H. Kim and J. Cho, Astrophys. J. 801, 75 (2015).
[34] S. Galtier and R. Meyrand, J. Plasma Phys. 81, 325810106 (2015).
[35] L. Franci, S. S. Cerri, F. Califano, S. Landi, E. Papini, A. Verdini, L. Matteini, F. Jenko, and P. Hellinger, Astrophys. J. Lett. 850, L16 (2017), arXiv:1707.06548 [physics.space-ph].
[36] C. H. K. Chen, L. Leung, S. Boldyrev, B. A. Maruca, and S. D. Bale, Geophys. Res. Lett. 41, 8081 (2014).
[37] A. Zocco and A. Schekochihin, Phys. Plasmas 18, 102309 (2011).
[38] S. Y. Huang, F. Sahraoui, X. H. Deng, J. S. He, Z. G. Yuan, M. Zhou, Y. Pang, and H. S. Fu, Astrophys. J. Lett. 789, L28 (2014).
| []
|
[
"COSMOLOGICAL CLUSTER TENSION",
"COSMOLOGICAL CLUSTER TENSION"
]
| [
"A Blanchard \nIRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance\n",
"Z Sakr \nIRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance\n\nFaculty of Sciences\nUniversité St Joseph\nUR EGFEM\nBeirutLebanon\n",
"S Ilić \nIRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance\n\nInstitute of Physics\nCEICO\nCzech Academy of Sciences\nNa Slovance 2Praha\n",
"\nCzech Republic\n"
]
| [
"IRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance",
"IRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance",
"Faculty of Sciences\nUniversité St Joseph\nUR EGFEM\nBeirutLebanon",
"IRAP\nUniversité de Toulouse\nCNRS\nCNES\nUPS\nToulouseFrance",
"Institute of Physics\nCEICO\nCzech Academy of Sciences\nNa Slovance 2Praha",
"Czech Republic"
]
| []
| The abundance of clusters is a classical cosmological probe sensitive to both the geometrical aspects and the growth rate of structures. The abundance of clusters of galaxies measured by Planck has been found to be in tension with the prediction of the ΛCDM models normalized to Planck CMB fluctuations power spectra. The same tension appears with X-ray cluster local abundance. Massive neutrinos and modified gravity are two possible solutions to fix this tension. Alternatively, others options include a bias in the selection procedure or in the mass calibration of clusters. We present a study, based on our recent work 4 , updating the present situation on this topic and discuss the likelihood of the various options. | null | [
"https://arxiv.org/pdf/1805.06976v1.pdf"
]
| 119,409,837 | 1805.06976 | 4695ff73ca4e6adb9a1b96294d03c5976d72a285 |
COSMOLOGICAL CLUSTER TENSION
A Blanchard
IRAP
Université de Toulouse
CNRS
CNES
UPS
ToulouseFrance
Z Sakr
IRAP
Université de Toulouse
CNRS
CNES
UPS
ToulouseFrance
Faculty of Sciences
Université St Joseph
UR EGFEM
BeirutLebanon
S Ilić
IRAP
Université de Toulouse
CNRS
CNES
UPS
ToulouseFrance
Institute of Physics
CEICO
Czech Academy of Sciences
Na Slovance 2Praha
Czech Republic
COSMOLOGICAL CLUSTER TENSION
The abundance of clusters is a classical cosmological probe sensitive to both the geometrical aspects and the growth rate of structures. The abundance of clusters of galaxies measured by Planck has been found to be in tension with the prediction of the ΛCDM models normalized to Planck CMB fluctuations power spectra. The same tension appears with X-ray cluster local abundance. Massive neutrinos and modified gravity are two possible solutions to fix this tension. Alternatively, others options include a bias in the selection procedure or in the mass calibration of clusters. We present a study, based on our recent work 4 , updating the present situation on this topic and discuss the likelihood of the various options.
Introduction
The ΛCDM scenario has become the standard scenario of modern cosmology, thanks to its quantitative agreement with several major observational results 1 , foremost among which are its good agreement with the angular power spectrum of the CMB fluctuations as measured by Planck 2 and its ability to predict the large-scale distribution of matter as measured by the correlation function of galaxies 3 . Despite this success, some observables appear to be in tension with the predictions of ΛCDM when normalized to the Planck CMB data. This is probably to be expected given the high accuracy of modern observations relevant to cosmology, but it is important to figure out its possible origin. This may have to do with some unidentified residual systematics in at least one set of observations, or may be the signature of the need for a fundamental modification of the standard scenario, i.e. a hint for new physics. The CMB fluctuations provide a direct estimation of the amplitude of matter fluctuations at z ∼ 1100, which can be extrapolated down to redshift zero through the linear growth rate of the model. The tightest constraint obtained from Planck (including CMB lensing) is σ_8 = 0.8150 ± 0.0087 (68%). Several observations from the low-redshift universe are however indicative of a lower amplitude of matter fluctuations.
The abundance of clusters provides one way to constrain the amplitude of matter fluctuations. The number counts of clusters detected through their Sunyaev-Zel'dovich imprint as found by Planck lead to a lower amplitude ∼ 0.75 for the same Ω_m, with a specific value of the mass calibration. Although the abundance of local x-ray clusters leads to a less stringent tension, it yields an amplitude σ_8 ∼ 0.75 similar to the one derived from SZ counts, indicating that the selection procedure is not an issue. This tension is illustrated in Fig. 1. In the following, according to our recent work 4 , we provide details on the methodology of how clusters are modelled to establish this result.

2 How to use clusters for cosmology
The mass function
Within a given cosmological model from a specific framework, it is currently possible to compute the linear matter power spectrum at any epoch. This allows one to compute the angular power spectrum of the CMB fluctuations as well as the large-scale properties of the galaxy distribution (provided one assumes a bias model). Knowing the power spectrum P(k), one can compute the amplitude of matter fluctuations after a smoothing of the field by means of a window function of scale R. The mass of structures on scale R corresponds to the mass enclosed by the window function, which for a spherical top-hat window is just (4/3)πρ_m R³. Going from the linear amplitude of matter fluctuations σ(M, z) to the abundance of collapsed objects is possible thanks to the magic of the (extended) Press and Schechter approach. From general arguments, the non-linear mass function of objects resulting from gravitational collapse can be written in the form:
n(M, z) = -\frac{\rho}{M^{2}\,\sigma(M)}\;\delta_{NL}(z)\;\frac{d\ln\sigma}{d\ln M}\;\mathcal{F}(\nu_{NL}), \qquad (1)
i.e. a scaling law with mass and redshift 5 . In the case of standard Gaussian fluctuations, the mass function has been the subject of numerous numerical studies, and analytical expressions for the function F have been proposed, providing an accurate fit to the mass function inferred from CDM simulations. The situation has been slightly obscured by the fact that different definitions were used for an "object", leading to claims of departures from scaling of the mass function. However, these differences are very minor in light of the above tension. The Tinker et al. 6 fit has been widely used. Despali et al. 7 provided a new analysis showing that standard scaling is preserved when the virial radius is used, while departures appear when different mass definitions are used (like the commonly used M_500).
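A minimal numerical sketch of the chain described above, from P(k) to σ(M) with a spherical top-hat window and then to a scaling mass function, is given below. The power spectrum shape, the units, and the multiplicity function F used here are illustrative placeholders (a simple Press-Schechter-like Gaussian form), not the fits adopted in the paper.

```python
import numpy as np

def sigma_M(M, k, Pk, rho_m):
    """r.m.s. linear fluctuation sigma(M) using a spherical top-hat window
    of radius R such that M = (4/3) pi rho_m R^3."""
    R = (3.0 * M / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)
    x = np.outer(R, k)                                # shape (nM, nk)
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3      # top-hat window in k-space
    integrand = Pk * k**2 * W**2 / (2.0 * np.pi**2)
    dk = np.gradient(k)
    return np.sqrt(np.sum(integrand * dk, axis=1))    # crude quadrature of the variance

def mass_function(M, sigma, delta_c=1.686, rho_m=1.0):
    """Scaling mass function (rho/M^2) nu F(nu) |dln sigma / dln M|, with a
    Gaussian multiplicity F(nu) used purely for illustration."""
    nu = delta_c / sigma
    F = np.sqrt(2.0 / np.pi) * np.exp(-0.5 * nu**2)   # placeholder multiplicity function
    dlns_dlnM = np.gradient(np.log(sigma), np.log(M))
    return (rho_m / M**2) * nu * F * np.abs(dlns_dlnM)

# Illustrative (not fitted) power spectrum and arbitrary mass units:
k = np.logspace(-3, 2, 400)
Pk = k / (1.0 + (k / 0.02) ** 2) ** 1.4               # toy P(k) shape
M = np.logspace(13, 16, 50)
sig = sigma_M(M, k, Pk, rho_m=1.0)
print(mass_function(M, sig)[:3])
```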
From mass to observable
In order to compare predictions from a specific model to observations, one has to specify the relation between the mass and the observable. Such a relation can be deduced from scaling arguments 8 . When applied to the gas temperature of x-ray clusters, this reads:

T = A_{T-M}\,(h\,M_{\Delta})^{2/3}\left(\frac{\Omega_m\,\Delta(z)}{178}\right)^{1/3}(1+z), \qquad (2)

A_{T-M} being a calibration. The amplitude of the theoretical temperature distribution function is strongly sensitive to Ω_m and σ_8, strictly independent of h, and weakly dependent on the shape of the power spectrum P(k) (it also depends on the Gaussian or non-Gaussian nature of the fluctuations).
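The scaling relation of Eq. (2) can be transcribed directly; in the sketch below the numerical values (mass, cosmology, overdensity, and the two calibration values quoted later in the text) are placeholders for illustration only, and no units are attached.

```python
def cluster_temperature(M_delta, z, A_TM=7.3, h=0.7, Omega_m=0.3, Delta_z=500.0):
    """Gas temperature from Eq. (2):
    T = A_TM (h M_Delta)^(2/3) [Omega_m Delta(z) / 178]^(1/3) (1 + z)."""
    return (A_TM * (h * M_delta) ** (2.0 / 3.0)
            * (Omega_m * Delta_z / 178.0) ** (1.0 / 3.0)
            * (1.0 + z))

# Relative temperature shift when the calibration is changed from the standard
# Planck-like value to the lower one discussed in the text:
T_high = cluster_temperature(M_delta=1.0, z=0.0, A_TM=8.7)
T_low = cluster_temperature(M_delta=1.0, z=0.0, A_TM=7.3)
print(f"T ratio for A_TM = 7.3 vs 8.7: {T_low / T_high:.2f}")
```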
The abundance of local x-ray clusters can then be used as a powerful constraint on the parameters (Ω_m, σ_8), but the relation is degenerate with the calibration. Once normalized to present-day data, the redshift evolution essentially relies on the (linear) growth rate of fluctuations, making cluster abundance a powerful, non-geometrical cosmological test. This test can be implemented from clusters detected by various techniques. Applications to local x-ray clusters already showed some puzzling features 9 . However, the most famous recent example is certainly the cluster number counts obtained by Planck through the SZ effect. Indeed, taken at face value, the observed counts are lower by a factor 3-4 than the expectations from the ΛCDM model best fitting the CMB. However, this tension relies entirely on the assumption, or prior, on the calibration: if the calibration is left free, both SZ counts and the local x-ray abundance can be fitted with the same calibration, A_{T-M} ∼ 7.3 ± 0.3 (at the virial radius), corresponding to 1 − b ∼ 0.6 in the Planck convention 10 , while the standard Planck calibration is around 8.7, corresponding to 1 − b = 0.8. A possible way to solve this tension is to advocate a massive neutrino contribution that would alter the matter power spectrum, leading to a lower σ_8. We have examined this possibility in detail by running MCMC chains on CMB + local abundance of x-ray clusters with a free calibration. Our results showed that the likelihood on the neutrino mass m_ν is unchanged and that no correlation between A_{TM} and m_ν shows up, as illustrated in Fig. 2. Similarly, the likelihood on A_{TM} is unchanged.
As an alternative, we examine whether a modified gravity model, represented by a simple γ growth rate, could solve the issue. Not surprisingly, we found that this possibility can indeed restore consistency between the Planck calibration and cluster counts in a CMB-normalized cosmology, but at the expense of a large value of γ (of the order of 0.9 ± 0.1), with a tight correlation between A_{TM} and γ, independent of the details of the models or additional constraints used, see Fig. 3.

Figure 3 - The calibration A_{TM} is tightly correlated with the parameter γ in a simple representation of a modified theory of gravity. The correlation with massive neutrinos (grey contour) is essentially the same as in the massless case (green). The addition of the BAO and Lyα constraints, respectively red and blue, leads to very similar contours.
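For reference, the "γ growth rate" introduced above commonly denotes the parametrization f(a) ≡ d ln D/d ln a = Ω_m(a)^γ of the linear growth rate. The sketch below integrates this relation in an assumed flat ΛCDM background (the background choice and the parameter values are assumptions for illustration, not those of the analysis).

```python
import numpy as np

def growth_factor(gamma, Omega_m0=0.3, n=5000):
    """Linear growth factor D(a) from d ln D / d ln a = Omega_m(a)^gamma,
    integrated in a flat LCDM background and normalized so that D(a=1)=1."""
    a = np.linspace(1e-3, 1.0, n)
    E2 = Omega_m0 * a**-3 + (1.0 - Omega_m0)          # H^2 / H0^2
    f = (Omega_m0 * a**-3 / E2) ** gamma              # growth rate f(a)
    lnD = np.cumsum(f * np.gradient(np.log(a)))       # crude integral of f dln a
    return a, np.exp(lnD - lnD[-1])

for g in (0.55, 0.9):                                 # GR-like value vs the large value quoted above
    a, D = growth_factor(g)
    print(f"gamma = {g}: D(z=1)/D(z=0) ~ {D[np.argmin(abs(a - 0.5))]:.3f}")
```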
Conclusion
The CMB-cluster tension, consistently appearing in SZ and x-ray samples, relies uniquely on the cluster mass calibration used in the scaling laws. We found that massive neutrinos do not alleviate the tension, while a modified gravity model represented by a γ parametrization of the growth rate can accommodate both data sets provided γ ∼ 0.9 ± 0.1. We conclude that if the standard Planck calibration 1 − b ∼ 0.8 is reliably confirmed, it would provide a strong indication of some form of exotic physics in the dark sector.
Figure 1 - This figure illustrates the cluster tension: in the Ω_m − σ_8 plane the contours obtained from the CMB alone (or with a free A_TM calibration) do not overlap. The choice of the fitting function for the mass function (T08 or D16) has a negligible effect.
Figure 2 - The calibration A_TM does not appear to be correlated with a possible non-zero neutrino mass when the x-ray cluster constraint is combined with the CMB. Different prescriptions for the mass function in the presence of massive neutrinos do not lead to appreciable differences. Both the likelihoods on A_TM and on the neutrino mass are essentially unchanged compared to the massless case (and when using different prescriptions).
Acknowledgments
We acknowledge C. Yèche and collaborators for providing us their code to implement Lyman-α constraints.
1. Weinberg, D. H., Mortonson, M. J., Eisenstein, D. J., et al. 2013, Phys. Rep., 530, 87
2. Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13
3. Blanchard, A., Douspis, M., Rowan-Robinson, M., & Sarkar, S. 2006, A&A, 449, 925
4. Sakr, Z., Ilić, S., & Blanchard, A. 2018, arXiv:1803.11170
5. Blanchard, A., Valls-Gabaud, D., & Mamon, G. A. 1992, A&A, 264, 365
6. Tinker, J., Kravtsov, A. V., Klypin, A., et al. 2008, ApJ, 688, 709-728
7. Despali, G., Giocoli, C., Angulo, R. E., et al. 2016, MNRAS, 456, 2486
8. Kaiser, N. 1986, MNRAS, 222, 323
9. Blanchard, A., & Douspis, M. 2005, A&A, 436, 411
10. Ilić, S., Blanchard, A., & Douspis, M. 2015, A&A, 582, A79
| []
|
[
"Virtualizing Mixed-Criticality Systems: A Survey on Industrial Trends and Issues",
"Virtualizing Mixed-Criticality Systems: A Survey on Industrial Trends and Issues"
]
| [
"Marcello Cinque \nDIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly\n",
"Domenico Cotroneo \nDIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly\n",
"Luigi De Simone \nDIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly\n",
"Stefano Rosiello \nDIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly\n"
]
| [
"DIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly",
"DIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly",
"DIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly",
"DIETI -Università degli Studi di Napoli Federico II\nVia Claudio 2180125NapoliItaly"
]
| []
| Virtualization is gaining attraction in the industry as it promises a flexible way to integrate, manage, and re-use heterogeneous software components with mixed-criticality levels, on a shared hardware platform, while obtaining isolation guarantees. This work surveys the state-of-the-practice of real-time virtualization technologies by discussing common issues in the industry. In particular, we analyze how different virtualization approaches and solutions can impact isolation guarantees and testing/certification activities, and how they deal with dependability challenges. The aim is to highlight current industry trends and support industrial practitioners to choose the most suitable solution according to their application domains. | 10.1016/j.future.2021.12.002 | [
"https://arxiv.org/pdf/2112.06875v1.pdf"
]
| 244,925,207 | 2112.06875 | f22c8d26ee391b62b5d4e21a5677ef6ddbe7f5d8 |
Virtualizing Mixed-Criticality Systems: A Survey on Industrial Trends and Issues
Marcello Cinque
DIETI -Università degli Studi di Napoli Federico II
Via Claudio 2180125NapoliItaly
Domenico Cotroneo
DIETI -Università degli Studi di Napoli Federico II
Via Claudio 2180125NapoliItaly
Luigi De Simone
DIETI -Università degli Studi di Napoli Federico II
Via Claudio 2180125NapoliItaly
Stefano Rosiello
DIETI -Università degli Studi di Napoli Federico II
Via Claudio 2180125NapoliItaly
Virtualizing Mixed-Criticality Systems: A Survey on Industrial Trends and Issues
VirtualizationReal-time applicationsMixed-criticality systemsResource IsolationSafety CertificationDependability
Virtualization is gaining attraction in the industry as it promises a flexible way to integrate, manage, and re-use heterogeneous software components with mixed-criticality levels, on a shared hardware platform, while obtaining isolation guarantees. This work surveys the state-of-the-practice of real-time virtualization technologies by discussing common issues in the industry. In particular, we analyze how different virtualization approaches and solutions can impact isolation guarantees and testing/certification activities, and how they deal with dependability challenges. The aim is to highlight current industry trends and support industrial practitioners to choose the most suitable solution according to their application domains.
Introduction
In recent years, we have been witnessing the increasing adoption of virtualization technologies in industrial domains, such as railways, avionics, automotive, and the Industrial Internet of Things (IIoT), but also in telco systems with the recent development of 5G [1,2,3,4,5,6,7]. In such industrial domains, it is quite common to deal with so-called mixed-criticality systems, which integrate functionalities with different safety and/or timing criticality into a common platform to reduce the size, weight, power, and cost of hardware. The integration of functionalities with different safety and time requirements leads to numerous challenges, especially when adopting virtualization technologies.
Even though virtualization easily supports mixed-criticality compositions since it implicitly provides software support for partitioning and running tasks on heterogeneous OS (real-time and general-purpose) environments [8], it poses serious challenges, as described in the following.
The development of mixed-criticality systems in industry has to satisfy stringent requirements provided by safety-critical standards, such as those for the avionics [9], automotive [10], and railway [11] domains. These standards refer to temporal and spatial isolation among software components, which are the most critical properties that these systems have to verify. Temporal isolation is about limiting the impact of resource consumption (e.g., by tasks running on a virtual machine) on the performance of other software components (e.g., tasks running on the other virtual machines). Spatial isolation includes the capability of isolating code and data between virtual machines, preventing tasks from altering private data belonging to other tasks, including the allocated (memory-mapped) devices. Usually, the above-mentioned standards recommend providing documentation with evidence of a fail-safe and/or fail-stop behavior for such systems, which ultimately prevents failures leading to human and cost losses.
To face these issues, many solutions and initiatives have been developed over the years, both from industry and academia. This resulted in a variety of, often partial, solutions that make it very hard for industry practitioners to choose a proper virtualization platform, given the domain constraints. The main factors that come into play when choosing the proper virtualization solution are the following: the hypervisor footprint, which is crucial especially for embedded applications; the compliance with industry safety-related standards; the license of the solution (e.g., proprietary or open-source); the explicit support for high availability, fault tolerance, and security; and the supported hardware platforms.
In the light of the above factors of virtualization for industrial needs, this work surveys the state-of-the-practice of the most representative virtualization approaches adopted or promising for industrial mixed-criticality systems. We group solutions into four main categories:
• Solutions based on separation kernel and microkernel, specifically designed for industrial and embedded domains;
• Solutions that try to enhance general-purpose hypervisors (e.g., Xen and KVM) to support real-time properties, to foster the adoption of mainstream cloud virtualization solutions in the industry.
• Solutions that take advantage of the isolation support provided by CPU security hardware extensions (e.g., ARM TrustZone, Intel SGX), to achieve stricter isolation guarantees thanks to the latest hardware support;
• Solutions based on lightweight virtualization, such as containers or unikernels, which try to achieve a compromise between isolation and the small footprint required in some industrial domains.
Although other surveys exist in the current literature, covering the most common virtualization technologies in the embedded real-time domain [12,13,14,15,16,17], we aim at analyzing existing virtualization platforms and approaches in a different light, considering the common issues that arise in industry. As already said, examples are isolation properties, real-time performance, testing, and certification issues.
The paper is structured as follows. Section 2 presents the concepts and terms related to virtualization and the technical issues for industry application, and discusses related surveys. Section 3 delineates industrial dimensions with respect to the virtualization paradigm. Section 4 surveys and compares the state-of-the-practice solutions among real-time hypervisors in the light of these industrial dimensions. A discussion of the analyzed solutions, highlighting the current industrial and scientific trends in virtualization, is provided in Section 5. Section 6 concludes the paper.
Virtualization in Critical Systems
Virtualization is among the most promising architectural approaches to implement mixed-criticality systems, i.e., to integrate software components with different levels of criticality on a shared hardware platform [14]. This objective can be achieved using different approaches, such as hypervisors or OS-level virtualization. A hypervisor (or Virtual Machine Monitor -VMM) is a software layer that abstracts the hardware resources with the aim to run different and isolated application environments, called Virtual Machines (VMs) or guests, on the same physical machine. A Virtual Machine is an execution environment typically containing an Operating System (OS), called guest OS, and the application software.
Taxonomy and applications. Virtualization approaches can be classified along several directions. A first distinction is based on the presence of a host OS between the hypervisor and the hardware. A type-1 hypervisor, often referred to as a "bare metal" hypervisor, runs directly on the hardware, acting as a classic OS and controlling the hardware resources directly. A type-2 hypervisor is executed instead on top of an existing "host" OS, which is used to manage hardware resources.
A second distinction is between full virtualization and paravirtualization. A fully virtualized hypervisor abstracts the hardware resources (e.g., CPU, memory, etc.) completely from the guests, emulating privileged instructions and I/O operations. Examples are VMware ESXi [18], KVM [19], and Microsoft Hyper-V [20]. This type of hypervisor has the advantage of letting guest OSes or applications run unmodified, as if they were running on the physical machine. With paravirtualization, the guest is instead aware of the hypervisor. In this case, a guest OS has to be modified to communicate directly with the underlying hypervisor, through so-called hypercalls. Such an approach is adopted, for instance, by the Xen hypervisor [21]. Similarly to operating systems, hypervisors can be classified as embedded, if targeted for a specific application, system, or mission; otherwise they are general-purpose [22].
Compared with traditional, non-real-time hypervisors, real-time hypervisors add explicit support (e.g., specific scheduling algorithms) for the management of the time budget allotted to VMs, in order to assure that individual VMs can comply with stringent and explicit timing constraints. Real-time hypervisors can be further classified as dynamic or static. Dynamic hypervisors map VM-related resources, like virtual CPUs and virtual memory areas, at run-time as needed. On the contrary, static hypervisors can be seen as configuration layers that partition hardware resources, with a one-to-one mapping between virtual CPUs and physical CPUs, and devices mapped directly into the guest memory areas. Static solutions are often employed for embedded safety-critical or mixed-criticality applications, as they are more robust to failures due to misconfigurations and usually introduce less virtualization overhead. Moreover, they are usually smaller in terms of lines of code, and thus they are easier to test and certify according to industrial standards.
Different types of guests can be run on top of a hypervisor. Typically, either the guest VM contains a full-fledged OS, or it contains the application software directly, without any need for a guest OS, the latter being a more convenient choice in embedded environments. The OS can be either a full-fledged GPOS (General-Purpose OS), such as Linux or Windows, or an RTOS (Real-Time OS). This opens up different combinations, depending on applications' requirements and constraints (see Figure 1): RTOS and/or GPOS on top of a real-time hypervisor, and RTOS and/or GPOS on top of a non-real-time hypervisor.
Concerning Figure 1, the focus of this survey is on solutions for the upper-right quadrant, including mixed-criticality systems, as they are of major interest for industrial systems.
The lower-left quadrant includes traditional hypervisor application domains, such as server consolidation in cloud environments. Using instead a GPOS on a real-time hypervisor (lower-right quadrant) might be needed to isolate RTOSes running on the same platform while providing non-critical services in the GPOS, and it might be beneficial if the guest OS has stringent QoS and performance requirements that can be guaranteed with the time budget allocated by the real-time hypervisor. Finally, using an RTOS on a non-real-time hypervisor (upper-left quadrant) is not a common industrial practice, but it might be useful for functional testing, debugging, and prototyping purposes.
A different option that is again gaining popularity lately is to run guests within unikernels, which are a promising lightweight solution for embedded domains. This model does not preclude the selection of a hypervisor, since it implies that an application is linked directly with the OS, treated as a library containing basic functions such as memory management, scheduling, the networking stack, and basic I/O drivers. The guest binary will thus embed both the application and the OS code, and can be run directly on top of the hypervisor.
A container is not a virtual machine in the traditional sense, since there is no emulation of the physical hardware. For this reason, compared to full-and paravirtualization, this type of virtualization is lighter. For this reason, containers are more and more used in cloud environments to further improve application consolidation on the same hardware, avoiding replicating the OS stack. For the same reason, container-based virtualization is gaining momentum also in real-time systems [17,23], especially when stringent scalability and size constraints must be met, providing additional isolation level, while leveraging container orchestration capabilities (e.g., Kubernetes [24]).
We remark that both container-and unikernel-based virtualization do not include virtualization in the strict sense. However, both containers and unikernels are very spread concepts in the context of virtualization systems literature, and are starting to gain attention in industrial and real-time systems as well, thus we discuss these kinds of approaches in this survey. Figure 2 wraps up the different virtualization approaches discussed above, which are only a partial view of the entire virtualization spectrum.
Isolation properties. As mentioned previously, virtualization is one of the enablers for mixed-criticality systems, where in general there is the need to create strongly isolated partitions that run applications a different level of criticality.
In this respect, virtualization must ensure isolation between virtual instances [25,26,27]. In simple terms, this means that applications running on a virtual domain must have the illusion of being the only ones running on the physical machine. In the context of virtualization, we mainly consider three isolation properties.
Temporal isolation, or temporal segregation, is the ability to isolate or limit the impact of resource consumption (e.g. CPU, network, disk) of a virtual domain on the performance degradation of other virtual domains. This means that a critical task running on a virtual domain (for example a task on a VM or inside a container) must not cause serious delays to other critical and non-critical tasks running in a different virtual domain, avoiding phenomena such as starvation, reduced throughput and increased latency. Temporal isolation is crucial in mixed-criticality systems, where tasks run in a critical domain must guarantee specific performance Service Level Agreements (SLAs) and must not interfere with each other. In the context of safety-critical applications, some standards (e.g., IEC 61508-3 annex F [28], ISO 26262-6 annex-D [10], ARINC-653 [29], DO 178 6.3.3f [9], CAST-32A [30]) suggest adopting cyclic scheduling between virtual domains, to assure static and predetermined time slots to each domain.
The other crucial property is spatial isolation (also known as memory isolation or spatial segregation). This property describes the ability to isolate code and data between virtual domains and between virtual domains and hosts. This means that a task should not be able to alter private data belonging to other tasks, including devices assigned to a specific task. Spatial isolation is usually implemented using hardware memory protection mechanisms, such as the Memory Management Unit (MMU). Considering the case of shared physical devices, also I/O isolation becomes important. Often, the IOMMU is used to properly resolve the isolation of memory-mapped devices. In some cases, access to hardware devices from the different virtual domains is serialized.
Finally, fault isolation, or fault/error containment, prevents that failures, occurring in a virtual domain, are propagated to the hypervisor and/or to other virtual domains, causing blockages or even stopping the whole system. Related surveys. In years, several efforts on real-time vir-tualization were done, both from the commercial and academic sides. These studies tried to reuse IT virtualization technologies, well tested in cloud computing, for real-time purposes [12]. Gu and Zhao [13] survey virtualization technologies for real-time embedded systems including those for safety-critical applications, and discusses technical problems such as taskgrain scheduling and lock-holder preemption. Burns and Davis [14] surveys the state of the art in the field of mixed-criticality systems, with a focus on scheduling problems and solutions for both single-and multiprocessor systems. Taccari et al. [15] discuss embedded real-time virtualization solutions with a focus on ARM hardware-based virtualization support but limit the analysis to open-source projects. Reghenzani et al. [16] present a comprehensive survey on the real-time Linux kernel research (i.e., PREEMPT RT). Struhàr et al. [17] present a technology survey on real-time Linux container technologies, showing the gaps that should be filled to be a viable solution for industrial applications.
In this paper, we analyze the state of the practice of realtime virtualization approaches in the light of industrial needs that include safety/security certification and testing, reuse of legacy systems, and dependability support. To the best of our knowledge, this is often a neglected point of view on the existing solutions' portfolio, which could aid industrial practitioners to choose the most suitable virtualization approach according to their domain requirements and constraints.
Virtualization Dimensions for Industry Needs
The main question we address in this paper is: what should be the primary focus of industry when creating a new product or migrating seamlessly legacy systems exploiting virtualization technologies? In this section, we delineate three main dimensions that industry and researchers should focus on to properly adopt virtualization technology in mixed-criticality realtime domains.
Certification & Testing. The development of safety-critical systems raises various challenges from the certification point of view. In order to provide a specific safety integrity level (SIL), almost all standards recommend cumbersome V&V activities. Specifically, several studies in the literature [31,32,33,34,35,36,37] and various international safety-related standards [10,38,9,39,11] provide guidelines for testing activities, which encompass fault injection testing, robustness testing, and performance testing, among other activities, such as error impact analysis, coding standards, code review, etc. In a virtualized scenario, such tasks have to be performed at the different levels of the architecture, considering also the use of Commercial-Off-The-Shelf (COTS) components. Concerning cybersecurity, some standards require that the final system must satisfy different security requirements in terms of the provided partitioning level, the degree of resource isolation, complete control over communication channels, and the development of auditing mechanisms. This is the case of the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408) standard [40], which added a profile named "Separation Kernels in Environments Requiring High Robustness" Protection Profile (SKPP) [41,42] (that currently was superseded by the NIST document "Security and Privacy Controls for Federal Information Systems and Organizations" [43]).
Testing activities like functional and non-functional testing, robustness testing, performance testing, as well as static and dynamic analysis, run-time verification, fault injection, fuzzing, etc., are fundamental during the development of safety-critical systems. Often, these activities are strictly linked to the certification process, but they are usually performed regardless of the objective of making software certifiable. In particular, in real-time virtualization solutions, great relevance assumes the measure of the overhead introduced by the hypervisor, and how it impacts task execution. For example, the worst-case execution time analysis (WCET) could be invalidated due to the newly added software layers, with the consequent need to repeat the analysis. Furthermore, low-level synchronization primitives like spinlocks, and mutexes, could be redefined to prevent problems like priority inversion [44] and lock holder preemption [45] problems. These changes might require the original test suite to be reviewed.
In summary, migrating towards a virtualized environment requires redesigning the approaches used traditionally in the industry. The use of certified hypervisors or the availability of test suites may help to reduce the related burden.
Reuse of legacy systems. Legacy systems migration to a virtualization paradigm brings several benefits, such as, avoiding "divorce" of application and legacy OS, allowing the transparent execution of single-core software stacks on multicore hosts, and emulating discontinued hardware. However, the migration requires addressing a twofold issue. First, the pre-existent kernel (GPOS or RTOS) needs to be ported to be properly run on the specific hypervisor. Second, the chosen hypervisor has to support (or needs to be adapted to) the target (or emulated) hardware platform. These porting issues strictly depend on the type of chosen hypervisor. Indeed, in the case of full-virtualization with complete device emulation, software emulation is needed for each device within the target board; otherwise, the guest OS could not have all the functionalities properly set up. The good point, in this case, is that the pre-existent kernel does not need to be modified. Paravirtualized solutions are even more affected by this issue since guest OS kernels must be modified according to the interfaces provided with hypercalls to handle privileged instructions and virtualize I/O devices. Porting issues is also a problem if we consider the unikernel model. Indeed, unikernel image embeds the application and its dependencies. From one side, this reduces traditional compatibility issues between in-VM components, but potentially introduces new issues between unikernel VMs and the target hypervisor and their execution environment (i.e. computing resources, storage and networking prerequisites). Anyway, several hypervisors can be supported if the unikernel images are configured accordingly.
Dependability support. Virtualization solutions used in cloud computing environments are usually exploited to provide dependable services, such as tenant isolation, high-availability, fault-tolerance, migration and recovery techniques, and, in the worst-case (with severe faulty conditions), graceful degradation of the provided service. Naturally, these features are desirable also for safety-critical real-time systems. Thus, the chosen virtualization solution should support, for instance, easyto-use fault-tolerance tools and redundancy schemes like Triple Modular Redundancy (TMR) or 2-out-of-2, which are classical schemas used in industry to improve fault-tolerance. Concerning security, the hypervisor (and guest OSes) are likely to be the subject of attacks exploiting their vulnerabilities. In general, the addition of new layers may increase the attack surface. This, in turn, can severely harm the safety of the system. Special care must be taken to mitigate attacks, by accompanying the virtualization solution with mechanisms to keep communication channels secure, to cryptographically sign the running code, and provide auditing services.
Virtualization Approaches and Solutions
In this section, we survey representative examples of virtualization solutions, and related approaches, in light of the current industry trends and dimensions mentioned above. This overview is not intended to provide an exhaustive list of current solutions adopted in industry. Instead, it guides the reader through examples of impactful solutions, in terms of approach and disruption potential, provided not only by software manufacturers through commercial products, but also by research initiatives and open-source experiments. Our goal is to highlight the details of each solution to shed light on what is important to know in order to choose the right approach for given innovation needs.
The examples have been selected among four categories, which resemble four trends in the current or prospective adoption of virtualization in industry domains:
1. Solutions specifically designed for industrial and embedded domains, which happen to be based on separation kernel and microkernel approaches;
2. Solutions that try to exploit the best of existing general-purpose hypervisors, adapting them to industry needs;
3. Solutions that take advantage of the latest isolation features at the hardware level (e.g., ARM TrustZone) to achieve the strict isolation guarantees needed by industry standards;
4. Solutions that try to reduce the footprint and preserve flexibility, with respect to classical virtualization methods, by employing lightweight virtualization like containers or unikernels.
Separation Kernels and Microkernels
Several solutions have been developed that try to keep the complexity of the hypervisor down while providing a strong level of isolation by using virtualization concepts. Developers also provide ad-hoc solutions that apply hybrid virtualization approaches. Such solutions fit the embedded domain well from the dependability, certification, and testing points of view, and they support several board platforms in order to provide host-level virtualization capabilities by running several guest OSes. The aim of reduced complexity is usually achieved through the adoption of separation kernel or microkernel approaches.
A separation kernel is a special type of very small bare-metal hypervisor that utilizes hardware virtualization features to (i) define fixed virtual machines and (ii) control information flows. Separation kernels contain no device drivers, no user model, no shell access, and no dynamic memory; these tasks are all pushed up into the guest software running in the VMs. This simple architecture results in a minimal implementation that, while less convenient for desktop or server use, is an excellent fit for embedded real-time and safety-critical systems. Separation kernels were the first technologies used over the years for implementing so-called partitioned systems, and various efforts were made, especially in the avionic domain, to support flight safety in Integrated Modular Avionics (IMA) according to the ARINC 653 standard [29]. Furthermore, the Multiple Independent Levels of Security (MILS) [46] provided a high-assurance security architecture intended to allow mixed-security applications to be hosted on common hardware. Recent hypervisors bring together separation kernel and virtualization concepts to isolate virtual machines (also named partitions) at different criticality levels on the same hardware platform.
A microkernel is a minimal kernel that implements only the few abstractions and operations that need to be executed in supervisor mode (e.g., memory management, process/thread management, IPC), while handling the other kernel functionalities (device drivers, filesystem, networking, paging, etc.) in user mode. In recent years, several microkernels have been developed for heterogeneous domains, and lately they have been used as hypervisors in order to provide simple partitioning functionalities with increased reliability and security, using no third-party code and running drivers within guests.
Representative examples of both models are presented in the following.
VxWorks MILS. VxWorks MILS [47] is a commercial separation kernel provided by Wind River. The Wind River VxWorks MILS complies with security requirements derived from the SKPP [41,42]; after the SKPP was sunset, it conforms to a subset of SKPP assurance requirements that apply directly to the product provided by Wind River to its customers. VxWorks MILS supports information flow control, resource isolation, trusted initialization, trusted delivery, trusted recovery, and audit capabilities. The information flow policies are set by the customer-defined configuration vector, which includes virtual boards and images, communication channels between virtual boards (direction, mode, etc.), schedules for the virtual boards, authorized system calls for every virtual board, and so on. The MILS establishes two primary security domains: the MILS kernel (supervisor) space and the virtual board (user) space. The MILS enforces a Least Privilege Abstraction Partitioned Information Flow Policy (PIFP) to ensure that security domains access only the resources that are required for their assigned functionality. Allowed information flows between specific virtual boards and resources are specified by the configuration vector, and these flows are static. The MILS assumes protection from interference and tampering. Indeed, the MILS bootloader (MILS Payload BootLoader) is the root of trust of the entire system; if the validation is successful, the MILS kernel (MK) initialization is called. After initialization, the scheduler removes the MK init code and the related part of the configuration vector and schedules the first virtual board for execution. At run time, the VxWorks MILS reference monitor and self-test subsystem verify that the MILS remains in a secure state. If a failure causing the state to become insecure is detected, the MK recovery is invoked to take action as specified by the configuration vector, which can result in rebooting or halting the system. In [48], Cotroneo et al. targeted VxWorks MILS to perform an experimental analysis of potential timing covert channels, in order to assess the robustness of configurations provided by system designers. In [49], Aroca et al. assess the real-time behavior of VxWorks MILS by overloading the MILS kernel with ∼ 400 tasks alongside a ping flood, using a testbed with a signal generator and analyzing the signal response with an oscilloscope. The results show that the testbed could reliably handle and measure a 260 kHz input frequency, with a worst-case response time of 3.85 µs.
PikeOS. PikeOS [50] is a commercial hypervisor from SYSGO, used in the avionic domain. The PikeOS architecture is based on the L4 microkernel and can run on Intel x86, ARM, PowerPC, SPARC v8/LEON, or MIPS processors. PikeOS supports multi-core platforms natively. The solution adopts three different kinds of scheduling algorithms, namely priority-based, time-driven, and proportional share. For each real-time VM (critical VM), PikeOS statically assigns a time slice; whenever a critical VM does not have any task to execute, it donates CPU time to non-critical VMs. The PikeOS architecture is ARINC 653-compliant, in the sense that the PikeOS microkernel is the only privileged software and it is in full control of the virtual partitions. PikeOS supports several guest OSes, such as Android, RT-POSIX, ARINC 653-based systems, Java, and RTEMS. PikeOS has been the target and the basis of several academic and industrial evaluations. For example, August [51] analyzes the effect of a cache-timing side-channel attack on AES [52], focusing on a virtualization scenario based on PikeOS as an example of a real-time system dedicated to security; the study also evaluates methods to counteract that threat by using the system's scheduler. Regarding certification and formal verification, in [53] the authors formalized the hardware-independent security-relevant part of PikeOS in order to prove intransitive non-interference properties [54]; moreover, in [55] the authors presented first results in the verification of the PikeOS microkernel system calls. In [56], Muttillo et al. leverage the Dhrystone benchmark to compare Xtratum and PikeOS by varying the compilation optimization flags (e.g., O0, O1, etc.). We refer the reader to [56] for the extensive results provided by the study.
Xtratum. Xtratum [57] is a para-virtualized type-1 partitioning hypervisor, very popular for avionic embedded safety-critical systems, consisting of ∼ 9K LOC. Xtratum is based on the APEX model defined within the ARINC 653 standard [29]. Furthermore, Xtratum supports several CPUs, such as the Intel x86 family, the SPARCv8 family, ARMv7, and ultimately RISC-V, which is under development. Xtratum provides temporal isolation between virtual partitions by leveraging a fixed cyclic scheduler. Spatial separation is provided by forcing partitions to execute in user mode without any shared memory area. Xtratum data structures are all pre-defined at build time through a configuration file, in order to know exactly what resources the hypervisor will use. Xtratum defines a minimum set of hypercalls, each of which has a known execution time. Finally, Xtratum enables interrupts only for the partition currently running, in order to minimize temporal interference. Regarding software certification, the Xtratum hypervisor was used as a fundamental component for developing ARINC 653-compliant RTOSes [58,59] and for porting an OSEK-VDX-based RTOS to run on top of Xtratum [60]. Furthermore, the research community leverages Xtratum as a basis for fault-tolerant platforms in the context of embedded systems for space applications [61]. A preliminary analysis of the real-time behavior of Xtratum is conducted in [62]; however, it is not very indicative. In contrast, Carrascosa et al. [63] provide experimentation comparing native and partitioned applications, with the aim of evaluating the performance loss due to the presence of the Xtratum hypervisor. The authors compare the execution time of the Dhrystone and CoreMark benchmarks on bare metal and in a partition under the Xtratum cyclic scheduling. The results show an execution time of ∼ 1-10 seconds, with a performance loss of 0.008% and 1.087% for Dhrystone and CoreMark, respectively. Further, the authors evaluated the partition context switch (PCS) impact, which is estimated to be in the range of 149 to 151 µs.
Jailhouse. Jailhouse [64] is a Linux-based partitioning hypervisor developed by Siemens within a research project and made publicly available in 2013. In particular, Jailhouse enables asymmetric multiprocessing (AMP), cooperating with the Linux kernel in order to run bare-metal applications or properly configured guest OSes. Since the Jailhouse objective is more related to isolation than to virtualization, the hypervisor splits physical resources (CPUs, memory, I/O ports, PCI devices, etc.) into strongly isolated compartments called cells. Each cell is exclusively assigned to one guest OS and its applications, called inmates. Jailhouse includes a cell, called the root cell, that runs the Linux kernel and is used to launch the Jailhouse hypervisor itself and the other cells. Although the main objective is partitioning resources, Jailhouse allows inter-cell communication through the ivshmem device model [65] from the QEMU project [66], which is based on an abstraction of PCI devices.
Jailhouse consists of a few lines of code (around 30K C and 1K assembly lines of code); thus, it should ease both the certification process and the application of formal methods for verification. Further, Jailhouse is released applying continuous integration and static code analysis tools. Jailhouse supports various OSes besides Linux, like L4 Fiasco.OC on Intel x86, FreeRTOS on ARM, and Erika Enterprise RTOS v3 on ARM64, and several ARM-based boards (e.g., NVIDIA Jetson TK1, Xilinx ZCU102, etc.). Concerning fault tolerance, Jailhouse provides a simple mechanism that allows restarting non-root cells as soon as they enter deadlock states detected using timeouts. Recently, Jailhouse was chosen as the building block for proposing a new family of safety-critical computing platforms designed to be compliant with the IEC 61508 standard [67].
In [27], Jailhouse is the target of a real-time assessment. The authors define an isolation coefficient, which represents the resulting slow-down due to the execution of tasks in the presence of other running tasks. For CPU isolation tests, isolation coefficients of 0.40 and 0.0086 are obtained for Linux and Jailhouse, respectively. The authors also perform L2-cache contention tests, with basically no difference in execution time or cache misses. Finally, the authors provide memory bus isolation tests, and the results show that Jailhouse does not provide any bus isolation mechanism, although it does not introduce any overhead penalty.
L4-based. NOVA [68] is a type-1 hypervisor written in C++, developed to enhance security more than safety. The NOVA design is very similar to the L4 microkernel but, in contrast, it provides a full-virtualization solution. NOVA splits the hypervisor into a fully privileged critical component named micro-VMM, while the rest of the components are not privileged. The micro-VMM includes only the scheduler (NOVA uses a preemptive priority-driven round-robin scheduler with one runqueue per CPU), the MMU, and a limited set of hypercalls, and it implements the communication mechanisms between itself and the other non-privileged components. In total, the size of the NOVA hypervisor settles at 36 KLOC, including the microhypervisor (9 KLOC), a thin user-level environment (7 KLOC), and the VMM (20 KLOC). In general, the NOVA authors discuss in depth how the design principles can prevent several virtualization attacks, like VMM attacks, guest attacks, and so on. They analyzed the performance overhead introduced by NOVA and demonstrated that it can be lower than 1% for memory-bound workloads. Further, they evaluated NOVA against IPC and virtual TLB miss microbenchmarks, according to different CPU architectures, obtaining latencies in the order of 100 ns. Finally, they compared memory virtualization using hardware-based nested paging to a shadow page table approach and observed that nested paging reduces the virtualization overhead from more than 20% to 1-3%. NOVA supports many ARMv8-based boards (e.g., NXP i.MX 8MQuad, Renesas R-Car M3, Raspberry Pi 4 Model B, Avnet Xilinx Ultra96, as well as the QEMU virtual platform) and x86/x86-64 CPU families. NOVA was also analyzed by Tews et al. in the context of the European project named Robin [69] for formal verification purposes. The objective was to develop a semantic compiler in order to provide denotational semantics for C++, covering all the C++ primitive data types of the NOVA hypervisor.
The seL4 microkernel [70,71] is formally verified and designed for use in security- and safety-critical systems. In particular, seL4 is functionally correct against a formal model enforcing both integrity and confidentiality; timing channel proofs are still under assessment [72]. seL4 uses a priority-based scheduling policy and implements scheduling-context capabilities for assigning CPU time in the context of mixed-criticality systems [73]. In particular, a component can only obtain CPU time if it holds a scheduling-context capability, which also specifies the amount of CPU time that can be used. A scheduling context consists of a time budget (i.e., a time slice) and a time period that determines how often the budget can be used. A thread will not get more time than one budget per period. Further, seL4 leverages both ARM and x86 virtualization extensions to provide interfaces that support running virtual domains, which are implemented in user space. These interfaces compose the VMM, which initializes memory and provides exception handlers for emulated device drivers. The VMM was recently redesigned from a simple seL4 application to a set of CAmkES (Component Architecture for Micro-Kernel-based Embedded Systems) components [74]. seL4 is provided with a WCET analysis that results in deterministic upper bounds for system calls and interrupt latencies [75,76]. In particular, Blackham et al. [75] evaluated seL4 and obtained a guaranteed interrupt response time of around 500 µs on a BeagleBoard-xM platform with an ARM Cortex-A8 core. In open systems (where arbitrary code can execute on the system), the interrupt response time is about 2 ms. Finally, different efforts have been made to enhance seL4 with fault-tolerance capabilities. In particular, researchers proposed a mechanism to provide both task backup and recovery, as well as two checkpoint-based optimization strategies [77,78].
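To make the budget/period semantics of scheduling contexts concrete, the following C sketch models a simplified accounting rule: a thread may consume at most one budget of CPU time per period, and the budget is replenished when a new period starts. This is an illustrative model only; it does not use the actual seL4 API, and the replenishment policy and names are simplified assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model (NOT the seL4 API): a scheduling context grants at most
 * `budget_us` of CPU time in every window of `period_us` microseconds. */
typedef struct {
    uint64_t budget_us;     /* maximum CPU time per period */
    uint64_t period_us;     /* length of the replenishment period */
    uint64_t window_start;  /* start time of the current period (us) */
    uint64_t consumed_us;   /* CPU time already consumed in this period */
} sched_context_t;

/* CPU time the context may still grant at time `now_us`. */
static uint64_t sc_available(sched_context_t *sc, uint64_t now_us)
{
    if (now_us - sc->window_start >= sc->period_us) {
        sc->window_start = now_us;  /* a new period started: replenish */
        sc->consumed_us = 0;
    }
    return sc->budget_us - sc->consumed_us;
}

/* Charge `used_us` of execution time against the context. */
static void sc_consume(sched_context_t *sc, uint64_t used_us)
{
    sc->consumed_us += used_us;
}

int main(void)
{
    /* 2 ms of budget every 10 ms, i.e., a 20% CPU reservation. */
    sched_context_t sc = { .budget_us = 2000, .period_us = 10000,
                           .window_start = 0, .consumed_us = 0 };

    printf("t=0ms  available: %llu us\n", (unsigned long long)sc_available(&sc, 0));
    sc_consume(&sc, 2000);  /* the thread used its whole budget */
    printf("t=5ms  available: %llu us\n", (unsigned long long)sc_available(&sc, 5000));
    printf("t=12ms available: %llu us\n", (unsigned long long)sc_available(&sc, 12000));
    return 0;
}
```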
Separation Kernel
Separation kernels are designed to provide high levels of isolation coupled with dependability support. However, these solutions are not meant to be deployed on cloud platforms.
The development of separation kernels is often strictly related to the certification process. Concerning testing, some studies provide an evaluation of the overhead due to virtualization and an assessment of the isolation and recovery mechanisms.
Microkernel
Microkernels used as hypervisors are mainly designed to minimize the trusted computing base compared to classical full-virtualization solutions, providing high security.
Since microkernels are lightweight solutions, they are well suited for formal verification and testing activities, which eases the certification process.
Microkernels (like seL4) provide real-time capabilities with memory protection for security, as well as support for mixed-criticality systems.
General-purpose Hypervisors
In the last decades, the Xen and KVM hypervisors have been among the most used solutions in server virtualization. Xen is a type-1 hypervisor that provides paravirtualization technologies. It was the first attempt to overcome the performance penalty due to dynamic binary translation [21]. On the other hand, KVM is one of the most used hardware-assisted virtualization solutions, which exploits hardware extensions provided by modern CPUs. For example, Intel VT-x enables the CPU to execute in two modes, i.e., the non-root mode used to run guest OS code, and the root mode used to run the hypervisor. As soon as a VM attempts to execute privileged instructions (prohibited in non-root mode), the CPU switches to root mode in a trap-like way to properly handle the instruction [79]. KVM is by definition a type-2 hypervisor, since it requires the Linux kernel, but in practice it acts as a type-1 hypervisor, since it takes full control of the underlying hardware. It uses QEMU to provide I/O device emulation.
Although both Xen and KVM are general-purpose hypervisors, they are currently used as tailored and working solutions for embedded systems and real-time clouds, if properly tuned [80,81,82,83,84], as described in the following.
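The trap-and-exit interaction between the guest, the hardware, and the hypervisor can be illustrated with the Linux KVM userspace API: a VMM creates a VM and a vCPU through ioctl() calls on /dev/kvm, runs the vCPU in guest (non-root) mode with KVM_RUN, and regains control whenever a privileged operation causes a VM exit that the kernel forwards to userspace. The minimal x86 sketch below (error handling omitted) runs a tiny real-mode guest that writes one byte to an I/O port and halts; the guest code bytes and the chosen port are illustrative, while the ioctl names belong to the standard KVM API.

```c
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Guest code (16-bit real mode): mov al,'A'; out 0xf8,al; hlt */
    const uint8_t code[] = { 0xb0, 'A', 0xe6, 0xf8, 0xf4 };

    int kvm  = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

    /* Back 4 KiB of guest-physical memory at 0x1000 with host memory. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uintptr_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
    long mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    /* Point the vCPU at the guest code (real mode, cs=0, ip=0x1000). */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* Run loop: each KVM_RUN returns on a VM exit that needs userspace help. */
    for (;;) {
        ioctl(vcpufd, KVM_RUN, NULL);
        if (run->exit_reason == KVM_EXIT_IO &&
            run->io.direction == KVM_EXIT_IO_OUT) {
            /* Emulate the I/O port: print the byte the guest wrote. */
            putchar(*((char *)run + run->io.data_offset));
        } else if (run->exit_reason == KVM_EXIT_HLT) {
            break;  /* the guest executed HLT */
        }
    }
    return 0;
}
```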
Xen. In Xen, the main approach has been to optimize the scheduling algorithms of the virtual CPUs and to improve the interrupt handling [85,86,87,88,89]. By default, Xen adopts the Credit scheduler, which is a (weighted) proportional fair-share virtual CPU scheduler. The user can tune the CPU share for each domain. Furthermore, the scheduler load-balances the workload among vCPUs. RT-Xen [85,86] is one of the most important examples of using Xen for real-time purposes, by providing a hierarchical real-time scheduling framework for Xen. In [85], the authors provided an empirical study of fixed-priority hierarchical scheduling in Xen, focusing on four real-time schedulers: Deferrable Server, Periodic Server, Polling Server, and Sporadic Server. They demonstrate that the Deferrable Server is more suitable for soft real-time applications, while the Periodic Server is the worst under the overloaded scenario. RT-Xen is at version 2.2 (last updated in 2015), supporting both the RM and EDF scheduling policies. The developers re-implemented the RM scheduling policy inside the RTDS scheduler in Xen 4.6 (RTDS is still an in-development feature). This effort aims to improve the efficiency of the implementation of the RM scheduling policy and to synchronize RT-Xen with the latest Xen version. Further, the developers also implemented the null scheduler, which makes Xen a partitioning hypervisor, by statically assigning a single vCPU to a specific pCPU, removing any scheduling decision. Recently, Xen was used as a building block for Xilinx embedded systems [90]. Xilinx chose Xen due to several motivations: (i) it is a robust and reliable solution;
(ii) recent developments of Xen take full advantage of ARMv8 and its virtualization extensions (around 30 KLOC for a specific hardware configuration), as well as of the support for the ARM System Memory Management Unit (SMMU); (iii) it is provided with a free-of-use license and has an active user and developer community. Over the years, Xen developed the Xen Test Framework (XTF) [91], a framework for creating microkernel-based tests together with a suite of tests built using the framework itself: prebuilt tests include assessments of specific security vulnerabilities, sanity checks, and functional tests. Further, the Xen project also developed a CI platform called OSSTest [92] to automatically run test cases and leverage CI tools. Finally, Xen brings various efforts on safety certification aspects, such as the DornerWorks Xen-based hypervisor named ARLX, which is ARINC 653 compliant [93]. Recently, the Xen FuSa Special Interest Group (FuSa SIG), which includes the Xen Project community together with industry vendors and safety assessors, provided objectives and high-level agreements to build and certify safety-critical systems (mainly in the automotive domain) based on the mainline Xen hypervisor codebase [94]. In [81], Abeni et al. run cyclictest under stress load in scenarios with non-real-time and real-time kernels used at the guest and Dom0 level. In particular, they used the default Xen scheduler and assigned dedicated pCPUs to the DomUs. The results show that using Xen's HVM virtualization mechanism can result in very high latencies in the presence of some load in Dom0 (in the order of seconds), making Xen unusable in the real-time domain. However, this issue can be avoided by using the PV or PVH modes. Indeed, Xen allows reaching latencies in the order of 100 and 200 µs for the PV and PVH modes, respectively.
KVM. KVM-based solutions are mainly based on patching the host Linux kernel or improving KVM itself in order to comply with real-time constraints. PREEMPT RT [95] is a set of patches for the Linux kernel which provide real-time guarantees (e.g., predictability, low latencies) while still using a single-kernel approach, as opposed to the co-kernel model [16]. The main idea behind the co-kernel approach is to have another OS working as a layer between the hardware and the GPOS kernel, which intercepts interrupts and routes them to real-time tasks or to GPOS tasks; the scheduler must then guarantee that real-time tasks do not miss their deadlines. Instead, the PREEMPT RT patch provides several mechanisms, like high-resolution timers, threaded interrupt handlers, a priority inheritance implementation, Preemptible Read-Copy-Update (RCU), real-time schedulers, and a memory allocator.
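As an illustration of how an application typically exploits these mechanisms on a PREEMPT RT kernel, the sketch below configures a periodic real-time thread using only standard POSIX/Linux calls: it locks memory to avoid page faults, requests the SCHED_FIFO fixed-priority policy, and releases jobs at absolute times with clock_nanosleep() to bound jitter. The priority and period values are arbitrary examples.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define PERIOD_NS 1000000L   /* 1 ms period (illustrative) */

static void do_real_time_work(void) { /* application-specific job */ }

int main(void)
{
    /* Avoid page faults at run time by locking all current and future memory. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Request the fixed-priority real-time class (needs root/CAP_SYS_NICE). */
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Periodic loop released on absolute deadlines to avoid drift. */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 1000; i++) {
        do_real_time_work();
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {   /* normalize the timespec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```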
Kiszka et al. [96] developed a para-virtualized scheduler at the task level, which allows the scheduler to cooperate with KVM via two new hypercalls, in order to manage threads at different priorities. They use KVM as a real-time hypervisor by assigning higher priorities to real-time threads within a VM, and lower priorities to threads running at the host layer. Cucinotta et al. [97] developed a scheduling algorithm by extending the Linux cgroups interface [98]. The authors proposed a variant of the CBS (Constant Bandwidth Server)/EDF scheduler to be used for inter-VM scheduling (at the hypervisor level), and a fixed-priority scheduler within each VM. In [99], the same authors focused on I/O issues; the idea was to group in the same reservation the VM threads together with the KVM threads and the kernel threads needed for I/O virtualization (e.g., network or disk). Zhang et al. [100] applied various real-time tunings to the Linux host by using the PREEMPT RT patch. They focused on a dual-guest scenario, in which they consolidated an RTOS and a GPOS on a single KVM instance. Recently, KVM was supported in automotive industrial scenarios by Automotive Grade Linux (AGL), which is a collaborative open-source project to accelerate the development of the connected car [84]. Beyond the solutions based upon PREEMPT RT, this patch is currently accompanied by several test cases provided by the Linux Test Project (LTP) [101], and by benchmarks covering, among others, worst-case latency scenarios, latency debugging with tracing, approximating application performance, scheduling attribute tests, and tests against the classic three-way priority inversion deadlock. In [81], the authors analyze the ability of KVM to serve real-time workloads. The results show that KVM causes worst-case latencies smaller than 100 µs. In general, the authors suggest using real-time kernels both at the guest and at the host level.
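Part of the CBS/EDF line of work mentioned above later landed in mainline Linux as the SCHED_DEADLINE policy, which implements a Constant Bandwidth Server on top of EDF and can be used to give a KVM vCPU thread (or any thread) a runtime/period reservation. The sketch below sets such a reservation via the sched_setattr() system call; the structure layout follows the kernel UAPI, while the runtime/deadline/period values are arbitrary examples.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Local copy of the kernel's sched_attr layout (no glibc wrapper is assumed). */
struct sched_attr_local {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;       /* used by SCHED_OTHER */
    uint32_t sched_priority;   /* used by SCHED_FIFO/RR */
    uint64_t sched_runtime;    /* CBS budget, in ns */
    uint64_t sched_deadline;   /* relative deadline, in ns */
    uint64_t sched_period;     /* reservation period, in ns */
};

static int sched_setattr_local(pid_t pid, struct sched_attr_local *attr)
{
    return syscall(SYS_sched_setattr, pid, attr, 0 /* flags */);
}

int main(void)
{
    /* Reserve 2 ms of CPU time every 10 ms for the calling thread. */
    struct sched_attr_local attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  = 2 * 1000 * 1000,
        .sched_deadline = 10 * 1000 * 1000,
        .sched_period   = 10 * 1000 * 1000,
    };
    if (sched_setattr_local(0, &attr) != 0) {  /* needs root/CAP_SYS_NICE */
        perror("sched_setattr");
        return 1;
    }
    /* ... periodic real-time work would run here under the reservation ... */
    return 0;
}
```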
General-purpose Hypervisors
Could be adapted for real-time purposes through patches and re-design of specific critical components like CPU emulation.
They are a good choice when there are requirements related to cloud computing, like VM migration, orchestration, and high-availability mechanisms.
KVM- and Xen-based hypervisor solutions should be carefully tuned to prevent the higher latencies introduced by the scheduling approach and by the emulation (CPU and I/O) mechanisms.
Explicit support for testing is available, and certification aspects have started to become a primary focus, especially in Xen.
ARM TrustZone-assisted Virtualization
In order to increase the isolation of virtual domains, the research community has explored the possibility of leveraging hardware-assisted solutions for security purposes in the safety-critical domain. ARM with TrustZone [102] and Intel with SGX [103] provide the most used architectures. Normally, these solutions enable a so-called Trusted Execution Environment (TEE) and provide confidentiality and integrity.
In the literature, there are very few studies that leverage the Intel SGX extensions to design real-time mixed-criticality systems. One study worth mentioning is the position paper by De Simone et al. [104], which explored the possibility of using SGX to enforce isolation among critical tasks running on top of a unikernel-based hypervisor [105,106]. The most explored approach has been to use the security features of ARM TrustZone. This technology supports two virtual execution states (i.e., "secure" and "non-secure") and provides temporal and spatial isolation between the two environments [107,108,109,110,111]. In particular, for virtualization purposes, the non-secure world and the secure world are used for running different VMs, which are managed by the hypervisor software running in monitor mode. Mostly, researchers have used ARM TrustZone with a dual-guest OS configuration, running side by side a general-purpose OS (GPOS) within the non-secure world and a real-time OS (RTOS) in the secure world, which has higher privileges. This way, critical tasks running on top of the RTOS are isolated from non-critical tasks.
LTZVisor/RTZVisor. One of the most representative TrustZone-assisted virtualization solutions is LTZVisor, which is designed mainly for mixed-criticality systems [109]. LTZVisor implements the dual-guest OS scenario, in which the RTOS and the GPOS share the same physical processor, but the GPOS is scheduled only when the RTOS is idle. An improved version of LTZVisor [112] supports asymmetric multi-processing execution, in which the RTOS and the hypervisor execute on one core within the secure world, while another core runs the GPOS within the non-secure world; in that case, the authors avoid starvation of GPOS tasks. The first version of LTZVisor has a memory footprint of less than 3 KB and introduces a GPOS performance degradation of around 2% for a 1-millisecond guest-switching rate. In [109], the authors evaluate several latency-sensitive operations. In particular, i) partition-switch operations take ∼ 20 µs, assuming no real-time tasks are ready to run once the RTOS is rescheduled; ii) the process of checking that no real-time tasks are ready to run and then triggering the switch to the non-secure world takes ∼ 12 µs; iii) switching from the RTOS to the GPOS takes ∼ 3 µs; iv) the hypervisor guarantees ∼ 2 µs of interrupt latency in the case of serving FIQs (Fast Interrupts, which are serviced first when multiple interrupts occur) while the GPOS is running, and a total of ∼ 5 µs to restore RTOS execution.
The same authors proposed RTZVisor [113] and its successor µRTZVisor [114] as solutions for the multi-guest OS scenario. In that case, the hypervisor software still runs in monitor mode, while each of the guest OSes can run switching between the non-secure and secure worlds. Specifically, the active guest OS runs in the normal world, while the context of inactive guests is preserved in the secure world. µRTZVisor supports both coarse-grained partitions, which run guest OSes in the non-secure world, and user-level finer-grained partitions on the secure side, which are used for executing secure tasks implementing kernel extensions. The adopted scheduler is based on time domains, which are execution windows with a constant and guaranteed bandwidth. Each time domain is assigned an execution budget, and domains are scheduled according to a round-robin policy. Further, the scheduler allows assigning partitions to the domain-0 time window, in which partitions are scheduled in a priority-based, time-sliced manner; domain-0 can preempt partitions running in different domains. In [114], Martins et al. evaluate the switching time between guest partitions and secure tasks, or between secure tasks; the results show that the switching process takes about 19.4 and 10.4 µs, respectively. µRTZVisor provides, in the worst case, about 180 µs of interrupt latency. Finally, concerning IPC, the authors evaluate both asynchronous and synchronous communication; in particular, they analyzed the time the running partition needs to perform the Send, Receive and SendReceive hypercalls from a guest partition. Considering a 64-byte message size, the hypercall execution time is in the order of 5 µs for each operation.

VOSYSmonitor. VOSYSmonitor [115,116] is a low-level closed-source software layer that executes in the monitor mode of the ARM TrustZone architecture. It was conceived for the automotive industry and it is compliant with the ASIL-C requirements of the ISO 26262 standard [10]. VOSYSmonitor enforces the RTOS, or safety-critical OS, to run in the secure world, while multiple non-critical guests can run in the normal world, managed by a non-real-time hypervisor (e.g., Xen or KVM). Non-critical guests can run only when the critical OS releases the permission to run on the assigned core in normal mode. Context switches are efficiently managed, through interrupt handling, in the monitor mode. To achieve the required level of certification, VOSYSmonitor implements several safety features. Among them, we mention mechanisms for safe core synchronization, runtime self-tests (e.g., to check memory and I/O isolation properties, code integrity, and performance monitoring), and the introduction of a safe state, which is used to preserve the proper execution of the critical OS in the secure world in case a fault is detected by the runtime self-tests. Among the possible measures, the safe state includes switching off appliances in the normal world and migrating the secure world from one core to another. In [116], the authors evaluate VOSYSmonitor on the ARM Juno R1 and the Renesas R-Car H3 platforms, by analyzing the context switch latency using the ARMv8 Performance Monitoring Unit (PMU). The results for the Juno board show that VOSYSmonitor is ∼ 100% and ∼ 200% faster than the ARM Trusted Firmware (ATF) [117], respectively when running on a Cortex-A57 core with an interrupt handler and when running on a Cortex-A53 core without an interrupt handler, with overall latencies in the order of 0.5-1 µs. Considering also the context switch including FIQs, VOSYSmonitor settles around 200 ns for interrupt latencies.
ARM TrustZone-assisted Virtualization
ARM TrustZone enables virtualization thanks to its dual-world execution model.
TrustZone-based solutions are strictly linked to the specific ARM CPU architecture, thus they are not suitable for supporting other platforms (e.g., PowerPC, Intel).
These solutions are provided with well-defined test suites and performance analyses, as well as approaches for failure recovery.
Lightweight Virtualization
In some cases, the stringent footprint requirements of embedded mixed-criticality systems call for a lightweight virtualization approach. For this reason, lightweight solutions based on OS-level virtualization with containers and unikernels are starting to be explored in industrial domains.
Adopting OS-level or container-based virtualization in the real-time domain is a recent trend. The goal is to leverage containers in lieu of VMs to achieve isolation with a small footprint in mixed-criticality systems [118]. In many cases, it is indeed not necessary to replicate an entire OS within a VM, especially if specific OS functionalities are not needed. The key idea is to enhance the abstraction of OS processes (called containers) by extending the (host) OS kernel. For example, Linux leverages the namespace process isolation mechanisms [119] and cgroups, which provide resource management capabilities [98]. A container will have its own virtual CPU and virtual memory (as in traditional OS processes), but also a virtual filesystem (i.e., the container perceives a filesystem structure that is different from the host's), a virtual network (i.e., the container sees a different set of networking interfaces), IPC, PIDs, and user management. These virtual resources are distinct for each container in the system. The approach is gaining popularity also in the context of consolidated real-time platforms, such as VxWorks by Wind River, which now features a container engine compliant with OCI (Open Container Initiative - opencontainers.org).
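A minimal illustration of the namespace mechanism referred to above is given in the following C sketch: clone() starts a child in new UTS, PID, and mount namespaces, so the child sees its own hostname and runs as PID 1 in its namespace, independently of the host. This is only a sketch of the kernel primitives container engines build upon (it requires root or CAP_SYS_ADMIN), not a full container runtime; cgroup-based resource limits would be configured separately through the cgroup filesystem.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Entry point of the "containerized" process. */
static int child_main(void *arg)
{
    (void)arg;
    /* Changing the hostname only affects the new UTS namespace. */
    sethostname("container", strlen("container"));
    /* In the new PID namespace this process is PID 1, like an init process. */
    printf("inside:  pid=%ld\n", (long)getpid());
    execlp("sh", "sh", "-c", "hostname; sleep 1", (char *)NULL);
    return 1;  /* only reached if exec fails */
}

int main(void)
{
    static char stack[STACK_SIZE];   /* stack for the cloned child */

    /* New UTS + PID + mount namespaces; requires root or CAP_SYS_ADMIN. */
    int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
    pid_t child = clone(child_main, stack + STACK_SIZE, flags, NULL);
    if (child < 0) {
        perror("clone");
        return 1;
    }
    printf("outside: child pid=%ld\n", (long)child);
    waitpid(child, NULL, 0);
    return 0;
}
```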
In the literature, Linux-based real-time container solutions mainly adopt two approaches: (i) the use of co-kernels, and (ii) the modification of the Linux scheduler.
RT-CASE. RT-CASE [120] is built using the co-kernel approach: the real-time tasks run within real-time containers (named rt-cases) and are scheduled by the co-kernel. This approach exploits co-kernels, which are known to provide better real-time performance and functionalities, while keeping all the mechanisms and tools provided by a container engine. Each rt-case is assigned a criticality level, and tasks with a lower criticality level must not interfere with tasks with a higher criticality level. The RT-CASE architecture includes container management tools and libraries, and a feasibility checker that is responsible for admitting a new container on a compute node according to the real-time containers already running. At the kernel level, RT-CASE leverages the dual-kernel approach, by using a co-kernel like RTAI or Xenomai. The co-kernel makes the host kernel fully preemptable, thus both general-purpose containers and host tasks can be preempted by real-time tasks and containers. The rt-lib is a key component of RT-CASE: it provides the mapping of real-time tasks onto real-time CPUs according to the container criticality level, and it provides standard primitives to run non-modified tasks within real-time containers. Finally, RT-CASE is designed to migrate real-time containers on demand across nodes within a large-scale cloud platform.
Hierarchical scheduling of containers. Abeni et al. [121] proposed the use of real-time containers by modifying the Linux scheduling mechanism to provide two levels of hierarchical scheduling. A first-level Earliest Deadline First (EDF) scheduler selects the container to be scheduled on each CPU; subsequently, a second-level Fixed Priority scheduler selects a task within the container. A CPU reservation (runtime quota and period) is assigned to each container. In [121], Abeni et al. provide a real-time schedulability analysis proving that, using the proposed hierarchical scheduler, all the tasks running at the guest level can consume the whole runtime assigned to the vCPUs of the VM. Further, they perform an experiment to show the advantages of using the proposed scheduler for the management of a real-time JACK audio processing workflow. Cucinotta et al. [122,123] leveraged this hierarchical real-time scheduler to provide preliminary results of an ongoing project on using a container-based solution in Network Function Virtualization infrastructures [124]. The authors proposed a mechanism to reduce temporal interference among concurrent real-time services deployed on containers, and evaluated the proposed approach using LXC containers [125]. The results show stable performance of the deployed services, enabling the possibility to apply sound performance modeling, analysis, and control techniques. Also in that case, the authors provide a schedulability analysis using a hierarchical real-time scheduler, which provides predictable QoS and can be used for real-time workloads.
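To give a flavor of how such per-container reservations can be checked before admitting a new real-time container on a CPU, the sketch below applies a simple utilization-based test: the sum of the runtime/period ratios of the containers assigned to a CPU must not exceed a configurable bound. This is a deliberately simplified test for illustration, not the exact schedulability analysis of [121].

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* CPU reservation of one container: runtime (budget) per period, both in us. */
typedef struct {
    double runtime_us;
    double period_us;
} reservation_t;

/* Simplified admission test: admit the new container only if the total
 * utilization on the CPU stays below `bound` (e.g., 0.95 to leave some
 * slack for the host). NOT the exact analysis from the literature. */
static bool admit(const reservation_t *running, size_t n,
                  reservation_t candidate, double bound)
{
    double u = candidate.runtime_us / candidate.period_us;
    for (size_t i = 0; i < n; i++)
        u += running[i].runtime_us / running[i].period_us;
    return u <= bound;
}

int main(void)
{
    reservation_t running[] = { { 2000, 10000 }, { 5000, 20000 } }; /* 45% */
    reservation_t new_container = { 4000, 10000 };                  /* +40% */

    printf("admitted: %s\n",
           admit(running, 2, new_container, 0.95) ? "yes" : "no");
    return 0;
}
```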
OS-level Virtualization
Gained popularity in recent years due to the provision of lightweight isolation solutions in embedded systems, by leveraging the host OS as a hypervisor.
Solutions exploit built-in dependability mechanisms like container migration and load balancing, as well as container recovery (e.g., restart), at the expense of lower security.
Despite the use of CI tools and well-defined test suites, these solutions require more analysis and studies in the industrial context, especially in view of certification and isolation testing tasks.
In order to increase isolation, performance, and security, it is possible to run a single application in its own virtual domain. Such a model is known as unikernel or library OS, in which the full software stack of a system, including OS components, libraries, language runtime, and applications, is compiled into a single VM that runs directly on a general-purpose hypervisor (e.g., Xen). This approach introduces benefits such as high performance, a small code base, and a reduced certification effort, due to the low amount of software to be verified. However, stronger isolation proofs to be reported to certifiers are still lacking. Further, the attack surface of unikernel instances is small, as they lack the variety of functions provided by standard OSes, as well as the tools used to exploit them (no shells, utilities, etc.).
A fundamental drawback of unikernels is that developers must manually port target applications to the underlying minimal OS. This brings significant engineering effort, since it takes a considerable amount of time and needs experts with deep knowledge of the underlying OS details. HermiTux [126,127] is a solution that tries to mitigate porting issues in unikernel-based systems. In particular, HermiTux emulates OS interfaces at runtime according to the Linux ABI, and runs a customized hypervisor-based ELF loader to run a Linux binary side by side with a minimal kernel in a single-address-space VM. All the system calls made by a program are redirected to the implementations the unikernel provides. HermiTux supports multithreading and SMP, as well as checkpoint/restart and migration, which are crucial for orchestration purposes. Another relevant example of circumventing the issues mentioned before is Unikraft [128], which provides a highly configurable unikernel code base for speeding up development.
Although several unikernel solutions exist that are most suitable for cloud computing scenarios [129,130,131,132], representative examples include ClickOS and HermitCore.

ClickOS. ClickOS [133] is an example of using the unikernel model in industry. NEC Ltd. proposed this solution for consolidating several high-performance virtualized network middleboxes on top of Xen [133]. In particular, ClickOS is based on the MiniOS unikernel [134] and brings a number of optimizations to Xen's network I/O sub-system in order to perform fast networking for traditional VMs; in particular, ClickOS includes (i) replacing the Open vSwitch back-end switch with a high-speed ClickOS switch, (ii) removing the netback driver [135] from the pipe, while still using it as a control-plane driver to perform actions such as communicating ring buffer addresses (grants) to the netfront driver, and (iii) changing the VM netfront driver to map the ring buffers into its memory space.
HermitCore. HermitCore [136] is a unikernel solution designed for High-Performance Computing (HPC) scenarios and particularly for NUMA architectures. This solution leverages a library OS alongside Linux to run NUMA nodes within HermitCore instances, which manage all the resources. Further, the developers implemented a fast message-passing interface realizing inter-kernel communication between the HermitCore instances. Recently, the authors enabled HermitCore to run both as a unikernel within a VM and as a bare-metal application [137]. In this case, HermitCore could be exploited to run real-time and cloud workloads, since the reduced memory footprint and the reduced pressure on the cache system can provide more predictable behavior. Further, the authors extended HermitCore to also support many-core architectures. In [137], the authors evaluate HermitCore to reveal the overhead induced on the target system. They leveraged the Hourglass benchmark [138,139] to determine the gaps in the execution time caused by Linux and by HermitCore. The results show that HermitCore introduces the smallest noise and, consequently, could be used for real-time scenarios.
Unikernels
Fast boot and migration time, low memory footprint, high density, high performance, and an effortless (theoretically) certification process.
Leverage the underlying host hypervisor to provide strong security.
Applications need to be manually ported to the underlying unikernel.
More analysis and studies are needed to assess the feasibility of adopting these solutions in mixed-criticality real-time systems, especially concerning dependability support and certification.

Table 1 shows a summary of the main features of each solution. We considered three classes for the hypervisor size according to Lines of Code (LOC), namely Small (less than 10 kLOC), Medium (less than 100 kLOC), and Large (greater than 100 kLOC). Further, in order to aid industry practitioners in choosing an appropriate solution to migrate existing legacy systems to a virtualization paradigm, Table 1 reports, for each solution, the kind of license, the supported hardware architectures, and the explicit support for guest OSes, easing the porting of existing legacy systems to the virtualization world. The table then summarizes the dependability features, the availability of test suites, and the compliance of the product with industry safety and security standards, to help industry practitioners choose the most appropriate virtualization technology according to their domain needs.
Discussion
Among the dependability and certification aspects summarized in Table 1, seL4 provides proofs of security enforcement [140], time and space partitioning [73], and task backup and recovery mechanisms with checkpoint-based optimization strategies [77,78]; its implementation correctness is formally verified, a WCET analysis is provided in [75,76], and it is certifiable in theory. Xtratum provides temporal and spatial isolation, and it was also used as a basis for a hypervisor-based fault-tolerant architecture for space applications, providing an error detection mechanism via task-level duplication [61]; a solution that includes Xtratum and the RTOS ORK+ has been certified to be compliant with the ARINC 653 standard. Jailhouse is released applying CI and static code analysis tools; no certification has been done, although within the SELENE project [67] researchers are working on IEC 61508 compliance. For ClickOS, strong isolation and security are claimed due to the unikernel-based design and the certification process is mentioned, but no evidence is provided in [133]. HermitCore also claims strong isolation and security due to its unikernel-based design, with some empirical results provided in [136,126], and it is released applying CI/CD tools.
Indeed, hypervisor selection is a crucial task in the industrial domain, and it should take into account the dimensions we provided in Table 1 in order to properly migrate to virtualization-based systems. As relevant examples, the HERCULES [3], SELENE [67], and HERMES [142] H2020 European projects involve several industry partners (e.g., Airbus, Thales Alenia Space, STMicroelectronics, etc.) which cooperate with academia to leverage virtualization technologies in different domains, ranging from railway to aerospace. In those cases, the hypervisor selection follows specific requirements that are easily mapped (in some cases directly) to Table 1.
In the following, we provide some points of discussion for the state-of-the-practice solution categories reviewed in Section 4, highlighting the current industrial and scientific trends in virtualization.
Separation kernels and microkernels: the current trend. The majority of industry solutions for virtualization fall into the separation kernel and microkernel classes. Also, static approaches to virtualization (aka partitioning) are preferred over dynamic solutions. This is a sensible choice, since industry scenarios must ensure the highest level of isolation between virtualized domains due to the strict requirements imposed by safety standard clauses. Proprietary solutions (i.e., VxWorks, PikeOS, VOSYSmonitor) support the majority of features required by a safe and secure environment (e.g., run-time secure state verification, health monitoring, trust recovery, etc.). PikeOS also supports the SAFe-VX architecture for voting, which eases the development of reliable applications in safety-critical domains. While open ARM TrustZone-based solutions inherit isolation and security from the underlying hardware, general-purpose and OS-level solutions can take advantage of existing tools developed for supporting high-availability mechanisms in cloud applications (i.e., Citrix HA for Xen, Red Hat oVirt for KVM, and Docker tools for Linux containers).
As one would expect, the most advanced solutions, in terms of certification, are proprietary. VxWorks, PikeOS, and VOSYSmonitor are examples of certified solutions, i.e., compliant with industry safety and security standards such as ARINC-653, DO-178C, Common Criteria, ISO 26262, etc. However, recent initiatives in open-source projects are trying to reduce the gap. For instance, Xtratum, with the RTOS ORK+ as a guest OS, has been certified to be compliant with the ARINC-653 standard.

General-purpose real-time hypervisors: a new opportunity. Also for general-purpose open-source solutions (e.g., KVM, Xen), widely adopted in cloud computing scenarios, we are witnessing a proliferation of projects that are trying to delineate guidelines, with tools and methodologies supporting the safety certification process also for these open-source platforms.
For example, the FuSa Special Interest Group (SIG) is analyzing the possibility of using Xen as a basis for safety-critical virtualized systems. Indeed, Xen currently provides real-time support for scheduling (ARINC, RTDS, and Null schedulers), a minimal size (less than 30 KSLOC) for ARM-based hardware environments, paravirtual and GPU mediation for rich I/O, and TEE virtualization support [94]. Xen developers have provided the Dom0less feature since Xen v4.12 [94,143,144]. This crucial feature enables Xen to create a set of unprivileged domains at boot time, passing information about these VMs to the hypervisor via the Device Tree (a tree data structure with nodes that describe the physical devices in Linux-based systems). Indeed, Xen developers extended the older Device Tree to allow multiple domains to be passed to Xen. Actually, Dom0 is still required to manage the DomUs, but the hypervisor can create additional VMs in parallel without any interaction with the control domain. Practitioners can also omit the definition of Dom0 in the Device Tree, without specifying the Dom0 kernel, obtaining a "true Dom0-less" system; however, having a Dom0 environment can still be convenient for monitoring and management purposes. "True Dom0-less" configurations fit well in scenarios requiring higher security (reduced attack surface) or improved resource utilization (shorter boot times). Further, there are several efforts to break Dom0 into privileged service domains (aka Dom0 disaggregation) to improve the security, reliability, and isolation of Xen [145,146,147]. Open-source virtualization solutions are also gaining popularity in the automotive domain, thanks to vertical initiatives such as the Automotive Grade Linux (AGL) project. AGL is considering hypervisors (including Xen, but also OS-level virtualization like Docker) to create a safety-critical execution environment for workloads in software-defined vehicle architectures according to ISO 26262 [84]. Further, the recent ELISA project [148] promises to implement a certifiable Linux kernel, which (indirectly) impacts KVM applicability for safety use cases.
As mentioned at the beginning, general-purpose solutions fully support cloud computing infrastructures, with several frameworks for the management and orchestration of VMs, which include migration, balancing, and high-availability mechanisms. By leveraging this kind of hypervisor in embedded systems, we can easily support the implementation of solutions for orchestrating tasks at different criticality levels running on different RTOSes and GPOSes within different hardware boards. In the context of the LF Edge foundation [149], whose goal is to aid the development of industrial IoT and edge devices, Xilinx is currently developing a lightweight solution called RunX, which exploits Xen to run containers as VMs, either with the provided custom-built Linux-based kernel and Busybox-based ramdisk, or with a container-specific kernel/ramdisk.

TEE-based virtualization: exploiting hardware-driven innovations. ARM TrustZone-based solutions are gaining significant momentum today because several embedded system providers build their products on top of ARM CPUs (e.g., Xilinx). However, this kind of virtualization reduces the reuse of legacy software for platforms powered by other CPU vendors like Intel, LEON, and others. In this regard, Intel is supporting the ACRN project, which is an open-source hypervisor with a focus on industrial IoT scenarios and edge device use cases [150].
Lightweight virtualization: a promising new trend. Lightweight virtualization solutions are gaining traction for mixed-criticality systems. Especially in the telco industry, we are witnessing the trend to softwarize hardware-based network elements towards so-called virtual network functions [124], for which real-time and mixed-criticality are stringent requirements. Since this kind of virtualization allows delivering low-latency, bandwidth-efficient, and resilient services, it fits well use cases like autonomous vehicles, smart cities, and augmented reality, which are common scenarios in industrial IoT [151]. However, technological questions remain about ensuring reliability and security, but also the timeliness required both for telecommunication networks and for mixed-criticality systems. By using container-based virtualization, the main advantages come with the easy use of built-in orchestration mechanisms (e.g., Docker Swarm) and platforms (e.g., Kubernetes). Containers reduce the overhead affecting VMs and scale better when a larger number of applications of different criticalities are in place, thanks to built-in orchestration capabilities. However, containers reduce isolation, threatening the practicability of OS-level virtualization under strict real-time and safety requirements. For example, in [152] the authors presented an architecture for a multipurpose industrial controller deployed via containers. In [153], the authors provide a performance evaluation that aims to show the strengths and weaknesses of different low-power devices when handling container-virtualized instances.
Instead, since unikernel-based solutions do not share the underlying host kernel (each unikernel has its own kernel), they are mainly used to enhance security; furthermore, since unikernels are minimalistic OSes, with an image size of less than 5 MB and a memory footprint of 8 MB on average [151], this kind of virtualization is a good candidate to ease the certification process for safety-critical mixed-criticality systems. Currently, researchers are exploring unikernel-based solutions in the context of industrial IoT (IIoT) scenarios, which impose critical requirements like determinism, safety certification, isolation, and flexibility. In [151], the authors try to understand whether unikernels can be exploited for deploying IoT edge architectures and environments, like vehicular cloud computing, edge computing for smart cities, and augmented reality. In [154], the authors discuss how to leverage unikernel-based virtualization in the context of NFV IoT gateways. They highlight how the use of containers for NFV could negatively impact security and isolation due to the shared host kernel.
Hypervisor certification directions. Generally, certifying a hypervisor includes several burdensome tasks (e.g., rigorous documentation, test suites, verification tools, and so on) that lead to an increased overall cost of developing safety-critical systems. However, safety standards like EN 50128 and ISO 26262 consider the possibility of integrating pre-existing software into systems being certified. Thus, an interesting research direction is considering a hypervisor as a library to be integrated into a system already certified at some SIL level.
Despite the great maturity of safety-related standards, today there is still a need to face security certification in the context of new industry movements like IIoT and Industry 4.0. This brings into mixed-criticality systems development cybersecurity aspects that were not considered in the past. When certifying security together with safety, there is a need to properly identify the overlap between the standards' processes and to ensure that all security and safety requirements are included, while still keeping the overall cost of certification low. These issues are today exacerbated if we consider virtualized mixed-criticality systems.
Further, the use of Machine Learning (ML)/Artificial Intelligence (AI) for bringing autonomy into mixed-criticality applications, where software development shifts from traditional coding to example-based training, introduces new issues. Indeed, several industry domains, especially automotive and healthcare, currently leverage or plan to use ML/AI techniques for critical decision-making components. Clearly, this requires the developed systems to be ready to co-locate, on the same hardware platform, non-critical applications (e.g., dashboards, monitoring functions) with highly complex AI components. The SELENE project [67] is a real example of how research and industry envision applying virtualization technologies (in this specific case, they chose the Jailhouse partitioning hypervisor) to quantify and assess the reliability level that can be reached by placing AI components in a safety-critical system.
Curating the training process, operating and integrating ML models, and achieving confidence in the ML models through new forms of verification and validation and through "explainable AI" (XAI) techniques are some of the tasks to be performed. In that case, it is crucial to understand how to certify these systems, since safety-related standards commonly and explicitly do not recommend using artificial intelligence for almost all safety integrity levels. However, there are improvements in that direction, since ISO/IEC provides standards like ISO/IEC DTR 24029-1 [155], which focuses on the robustness of neural networks, and ISO/IEC WD TS 4213 [156] (under development), which focuses on the assessment of machine learning classification performance.
Hybrid virtualization solutions. Finally, beyond the virtualization approaches and solutions analyzed in this paper, we are witnessing a trend towards adopting hybrid solutions that try to satisfy both real-time and general-purpose needs simultaneously. This is the case, for instance, of the IoT domain, in which there is a need for high portability and adaptability, with a rich set of I/O virtualization capabilities. Virtualization will also be extensively used in high-performance computing (HPC) platforms, which offer the power needed by modern industrial systems and edge computing architectures by using devices like GPUs, FPGAs, and other kinds of accelerators. No solutions are yet available that are capable of facing such heterogeneous environments while guaranteeing easy porting, isolation, and real-time properties. Containers are a promising solution for these contexts, combining flexibility and scalability, but they are not yet mature for full adoption in industrial domains.
Conclusion
This survey analyzed the most important virtualization approaches and related solutions proposed in the last years targeting the real-time and/or safety-critical domains. In particular, we analyzed existing solutions along three fundamental dimensions, which reflect the most common requirements in mixedcriticality domains: Certification & Testing, Reuse of legacy systems, and Dependability support.
We observed that separation kernel solutions are designed to comply with safety certification, and with a high level of isolation, whereas microkernels approach provides strong security and effective verification by reducing at the minimum the Trusted Computing Base (TCB). Although the previous considerations, general-purpose solutions are still a good choice for real-time purposes, and recent initiatives are emerging, to foster their adoption in safety-critical scenarios. Hardware-assisted solutions, based on ARM TrustZone, leverages security features from the hardware and provide well-defined test suites, performance analysis tools, and failure recovery mechanisms. These solutions raise the portability problem on other platforms, and migrating legacy applications may require non-negligible costs. Finally, lightweight solutions are a recent trend, particularly promising to overcome footprint issues while assuring the isolation required for mixed-criticality. However, their adoption in industrial domains, apart from telecommunication networks, is still far to be established.
More research effort is needed in several directions. Regarding testing and certification, many safety and security standards provide guidelines for testing activities, which encompass fault injection testing, robustness testing, and performance testing, along with the classical testing activities. However, we still witness a lack of shared benchmarks and effective test suites that could help produce evidence to support the certification process, especially concerning novel trends, such as the use of lightweight virtualization or the certification of systems based on machine learning and artificial intelligence. Finally, given the evolution of existing security standards, such as IEC 62443 and its railway derivative EN 50701, a good portion of mixed-criticality systems will also require security and privacy certification, which is neglected by most of the current non-commercial solutions.
Figure 1: Hypervisor and OS combinations with related applications run directly on the physical hardware or on top of a hypervisor.
Figure 2: Examples of virtualization approaches.
time domains, which are execution windows with a constant and guaranteed bandwidth. Each time domain is assigned an execution budget, and domains are scheduled according to a round-robin policy. Further, the scheduler allows assigning partitions to the domain-0 time window, in which partitions are scheduled in a priority-based, time-sliced manner. Domain-0 can preempt partitions running in other domains. In [114], Martins et al. evaluate the switching time between guest partitions and secure tasks, or between secure tasks; the results show that the switching process takes about 19.4 and 10.4 µs, respectively. In the worst case, µRTZVisor exhibits interrupt latencies of about 180 µs. Finally, regarding IPC, the authors evaluate both asynchronous and synchronous communication; in particular, they analyzed the time a running guest partition needs to perform the Send, Receive, and SendReceive hypercalls. Considering a 64-byte message size, the hypercall execution time is on the order of 5 µs for each operation.
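To make the scheduling policy described above more concrete, the following C sketch mimics a round-robin dispatcher over budgeted time domains with a preempting domain-0. It is a minimal illustration written for this survey under stated assumptions (one partition per domain, a fixed tick, budgets replenished once all are exhausted); the structures and names (struct time_domain, pick_next, etc.) are invented for the example and are not taken from the µRTZVisor code base.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only: a round-robin dispatcher over budgeted time
 * domains, with a domain-0 that can preempt the others.  One partition per
 * domain keeps the example short; in the design described above, domain-0
 * would itself host several partitions ordered by priority and time-sliced.
 * All names are invented for this sketch. */

#define MAX_DOMAINS 8

struct partition {
    int id;
    int priority;            /* would order partitions inside domain-0 */
};

struct time_domain {
    uint32_t budget_us;      /* guaranteed execution window per round */
    uint32_t consumed_us;    /* budget already used in the current round */
    struct partition *part;
};

struct scheduler {
    struct time_domain dom[MAX_DOMAINS];  /* index 0 is domain-0 */
    size_t n_domains;
    size_t cursor;           /* round-robin cursor over domains 1..n-1 */
    bool dom0_pending;       /* set on a domain-0 event, cleared elsewhere when served */
};

/* Select the partition to run for the next tick of tick_us microseconds. */
struct partition *pick_next(struct scheduler *s, uint32_t tick_us)
{
    /* Domain-0 preempts any other domain while it has pending work and budget. */
    if (s->dom0_pending && s->dom[0].consumed_us < s->dom[0].budget_us) {
        s->dom[0].consumed_us += tick_us;
        return s->dom[0].part;
    }

    /* Otherwise serve the remaining time domains in round-robin order. */
    for (size_t tried = 0; tried + 1 < s->n_domains; tried++) {
        size_t i = 1 + (s->cursor + tried) % (s->n_domains - 1);
        struct time_domain *d = &s->dom[i];
        if (d->consumed_us < d->budget_us) {
            d->consumed_us += tick_us;
            if (d->consumed_us >= d->budget_us)
                s->cursor = i;       /* move on once this budget is exhausted */
            return d->part;
        }
    }

    /* Every budget is spent: replenish all domains and idle for this tick. */
    for (size_t i = 0; i < s->n_domains; i++)
        s->dom[i].consumed_us = 0;
    return NULL;
}
```

A real hypervisor would drive pick_next from a timer interrupt and perform the actual context switch; those mechanics, and the measured switch and hypercall latencies quoted above, are outside the scope of this sketch.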
one would expect, the most advanced solutions, in terms of certification, are proprietary. VxWorks, PikeOS, and VOSYSmonitor are examples of certified solutions, i.e., compliant with industry safety and security standards such as ARINC-653, DO-178C, Common Criteria, ISO 26262, etc. However, recent initiatives in open-source projects are trying to reduce the gap. For instance, Xtratum, with the RTOS ORK+ as a guest OS, has been certified to be compliant with the ARINC-653 standard.
Table 1: Comparison between virtualization solution features through industry dimensions (Reuse of legacy, Dependability, Certification & Testing).

VxWorks MILS
Hypervisor type/support: Type-1, Static
Size: Small
Latest release: N/A (superseded by VxWorks 653)
License: Closed
Supported hardware architectures: ARMv7, ARMv8, MIPS, PowerPC, SH, Hitachi H8
Supported Guest OS/Application: VxWorks Guest OS, WindRiver Linux Guest
Security, reliability and fault-tolerance features: Protection from interference and tampering; enforces a Least Privilege Abstraction Partitioned Information Flow Policy (PIFP); supports runtime secure state verification, resource isolation, trusted initialization, trusted delivery, trusted recovery, and audit capabilities
Test suites, test reports and compliance to standards: ARINC 653-compliant, MILS support, proven in DO-178C, EUROCAE ED-12, and IEC 61508; Common Criteria with SKPP profile

PikeOS
Hypervisor type/support: Type-1, Dynamic
Size: Small
Latest release: v5.0, Feb 2020
License: Closed
Supported hardware architectures: Intel x86, ARMv8, PowerPC, SPARC V8/LEON, MIPS
Supported Guest OS/Application: Linux, Android, RT-POSIX, ARINC 653, RTEMS
Security, reliability and fault-tolerance features: Resource isolation; built-in Health Monitoring Function, which implements all features described in the ARINC-653 standard; hardware support for voting (SAFe-VX Architecture)
Test suites, test reports and compliance to standards: Compliant to ARINC 653, RTCA DO-178B/C, ISO 26262, IEC 62304, EN 50128, IEC 61508, Common Criteria, SAR, MILS (three security levels); verification of microkernel system calls [55]; intransitive noninterference properties [53]

NOVA
Hypervisor type/support: Type-1, Dynamic
Size: Medium
Latest release: Latest commit Jul 2021
License: GPL
Supported hardware architectures: Intel x86, x86-64, ARMv8-based boards
Supported Guest OS/Application: Linux
Security, reliability and fault-tolerance features: Protection against VMM attacks and guest attacks
Test suites, test reports and compliance to standards: Formal verification analysis [69]; performance testing and overhead analysis [68]

seL4
Hypervisor type/support: Type-1, Dynamic
Size: Small
Latest release: Latest commit Oct 2021
License: GPL
Supported hardware architectures: Intel x86, x86-64, ARMv7/v8-based boards, RISC-V
Supported Guest OS/Application: Linux
Shift2Rail, Home page of Shift2Rail projects, https://projects.shift2rail.org/s2r_projects.aspx.
WindRiver Systems Inc., Virtualization and the Internet of Things, WindRiver White Paper (2016). URL https://www.windriver.com/whitepapers/iot-virtualization/1436-IoT-Virtualization-White-Paper.pdf
Hercules 2020, Home page of Hercules 2020 project, http://hercules2020.eu/.
5GCity, 5GCity Project Home Page, https://www.5gcity.eu/.
Design principles for industrie 4.0 scenarios. M Hermann, T Pentek, B Otto, Proc. HICSS. HICSSIEEEM. Hermann, T. Pentek, B. Otto, Design principles for industrie 4.0 sce- narios, in: Proc. HICSS, IEEE, 2016, pp. 3928-3937.
Using virtualized task isolation to improve responsiveness in mobile and iot software. N Klingensmith, S Banerjee, Proc. IoTDI, ACM/IEEE, 2019. IoTDI, ACM/IEEE, 2019N. Klingensmith, S. Banerjee, Using virtualized task isolation to im- prove responsiveness in mobile and iot software, in: Proc. IoTDI, ACM/IEEE, 2019, pp. 160-171.
Hermes: A real time hypervisor for mobile and iot systems. N Klingensmith, S Banerjee, Proc. HotMobile, ACM. HotMobile, ACMN. Klingensmith, S. Banerjee, Hermes: A real time hypervisor for mo- bile and iot systems, in: Proc. HotMobile, ACM, 2018, pp. 101-106.
Virtualizing embedded systems-why bother?. G Heiser, 48th ACM/EDAC/IEEE Design Automation Conference (DAC). IEEEG. Heiser, Virtualizing embedded systems-why bother?, in: 2011 48th ACM/EDAC/IEEE Design Automation Conference (DAC), IEEE, 2011, pp. 901-905.
DO-178B Software Considerations in Airborne Systems and Equipment Certification, Requirements and Technical Concepts for Aviation. RTCARTCA, DO-178B Software Considerations in Airborne Systems and Equipment Certification, Requirements and Technical Concepts for Avi- ation.
Product Iso, Development, Software Level, ISO 26262: Road vehicles -Functional safety 6. ISO, Product Development: Software Level, ISO 26262: Road vehicles -Functional safety 6.
Railway applications-Communication, Signaling and Processing Systems-Software for Railway Control and Protection Systems. Cenelec, CENELEC, EN 50128, Railway applications-Communication, Signal- ing and Processing Systems-Software for Railway Control and Protec- tion Systems.
M García-Valls, T Cucinotta, C Lu, Challenges in Real-Time Virtualization and Predictable Cloud Computing. Elsevier60M. García-Valls, T. Cucinotta, C. Lu, Challenges in Real-Time Virtual- ization and Predictable Cloud Computing, Elsevier JSA 60 (9) (2014) 726-740.
A state-of-the-art survey on real-time issues in embedded systems virtualization. Z Gu, Q Zhao, Scientific Research Publishing Journal of Software Engineering and Applications. 54Z. Gu, Q. Zhao, A state-of-the-art survey on real-time issues in em- bedded systems virtualization, Scientific Research Publishing Journal of Software Engineering and Applications 5 (4) (2012) 277-290.
A Survey of Research into Mixed Criticality Systems. A Burns, R I Davis, ACM CSUR. 50682A. Burns, R. I. Davis, A Survey of Research into Mixed Criticality Sys- tems, ACM CSUR 50 (6) (2018) 82.
Embedded real-time virtualization: State of the art and research challenges. G Taccari, L Taccari, A Fioravanti, L Spalazzi, A Claudi, A B Sa, Proc. RTLWS. RTLWSG. Taccari, L. Taccari, A. Fioravanti, L. Spalazzi, A. Claudi, A. B. SA, Embedded real-time virtualization: State of the art and research chal- lenges, in: Proc. RTLWS, 2014, pp. 1-7.
The real-time linux kernel: A survey on preempt rt. F Reghenzani, G Massari, W Fornaciari, ACM CSUR. 521F. Reghenzani, G. Massari, W. Fornaciari, The real-time linux kernel: A survey on preempt rt, ACM CSUR 52 (1) (2019) 1-36.
Real-time containers: A survey. V Struhar, M Behnam, M Ashjaei, A Papadopoulos, Proc. Fog-IoT Workshop. Fog-IoT WorkshopV. Struhar, M. Behnam, M. Ashjaei, A. Papadopoulos, Real-time con- tainers: A survey, in: Proc. Fog-IoT Workshop, 2020.
VMware Inc., VMware ESXi Overview. URL http://www.vmware.com/it/products/esxi-and-esx/overview.html
kvm: the linux virtual machine monitor. A Kivity, Y Kamay, D Laor, U Lublin, A Liguori, Proc. Linux Symp. Linux Symp1A. Kivity, Y. Kamay, D. Laor, U. Lublin, A. Liguori, kvm: the linux virtual machine monitor, in: Proc. Linux Symp., Vol. 1, 2007, pp. 225- 230.
Microsoft Corporation, Hyper-V. URL http://technet.microsoft.com/en-us/windowsserver/dd448604.aspx
Xen and the art of virtualization. P Barham, B Dragovic, K Fraser, S Hand, T Harris, A Ho, R Neugebauer, I Pratt, A Warfield, Proc. SOSP. SOSPP. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neuge- bauer, I. Pratt, A. Warfield, Xen and the art of virtualization, in: Proc. SOSP, 2003, pp. 164-177.
The role of virtualization in embedded systems. G Heiser, Proc. IIES. IIESG. Heiser, The role of virtualization in embedded systems, in: Proc. IIES, 2008, pp. 11-16.
Xilinx, RunX, https://github.com/Xilinx/runx.
Frakti, Frakti GitHub page, https://github.com/kubernetes/frakti.
Bringing Virtualization to the x86 Architecture with the Original VMware Workstation. E Bugnion, S Devine, M Rosenblum, J Sugerman, E Y Wang, ACM TOCS. 304E. Bugnion, S. Devine, M. Rosenblum, J. Sugerman, E. Y. Wang, Bring- ing Virtualization to the x86 Architecture with the Original VMware Workstation, ACM TOCS 30 (4).
On the Injection of Hardware Faults in Virtualized Multicore Systems. M Cinque, A Pecchia, Journal of Parallel and Distributed Computing. 106ElsevierM. Cinque, A. Pecchia, On the Injection of Hardware Faults in Virtu- alized Multicore Systems, Elsevier Journal of Parallel and Distributed Computing 106 (2017) 50-61.
Testing Performance-Isolation in Multi-core Systems. J Danielsson, T Seceleanu, M Jägemar, M Behnam, M Sjödin, Proc. COMPSAC. COMPSACJ. Danielsson, T. Seceleanu, M. Jägemar, M. Behnam, M. Sjödin, Test- ing Performance-Isolation in Multi-core Systems, in: Proc. COMPSAC, 2019, pp. 604-609.
. International Electrotechnical Commission, Software Requirements. International Electrotechnical Commission, Software Requirements, IEC 61508-3.
ARINC-653: Avionics application Software standard interface part. 1Aeronautical Radio Inc., ARINC-653: Avionics application Software standard interface part 1 (2010).
Certification Authorities Software Team (CAST), Multi-core Processors, https://www.faa.gov/aircraft/air_cert/design_approvals/air_software/cast/cast_papers/media/cast-32a.pdf.
Fault Injection for Software Certification. D Cotroneo, R Natella, IEEE Security & Privacy. 114D. Cotroneo, R. Natella, Fault Injection for Software Certification, IEEE Security & Privacy 11 (4) (2013) 38-45.
Experimental Analysis of Binary-level Software Fault Injection in Complex Software. D Cotroneo, A Lanzaro, R Natella, R Barbosa, Proc. EDCC. EDCCIEEED. Cotroneo, A. Lanzaro, R. Natella, R. Barbosa, Experimental Analysis of Binary-level Software Fault Injection in Complex Software, in: Proc. EDCC, IEEE, 2012, pp. 162-172.
Run-Time Detection of Protocol Bugs in Storage I/O Device Drivers. D Cotroneo, L Simone, R Natella, IEEE TR. 673D. Cotroneo, L. De Simone, R. Natella, Run-Time Detection of Protocol Bugs in Storage I/O Device Drivers, IEEE TR 67 (3) (2018) 847-869.
Faultprog: Testing the Accuracy of Binary-level Software Fault Injection. D Cotroneo, A Lanzaro, R Natella, IEEE TDSC. 151D. Cotroneo, A. Lanzaro, R. Natella, Faultprog: Testing the Accuracy of Binary-level Software Fault Injection, IEEE TDSC 15 (1) (2016) 40-53.
Dependability Certification Guidelines for NFVIs through Fault Injection. D Cotroneo, L Simone, R Natella, Proc. ISSREW. ISSREWIEEED. Cotroneo, L. De Simone, R. Natella, Dependability Certification Guidelines for NFVIs through Fault Injection, in: Proc. ISSREW, IEEE, 2018, pp. 321-328.
S Winter, O Schwahn, R Natella, N Suri, D Cotroneo, Pain No, the utility of PArallel fault INjections. IEEE PressProc. ICSES. Winter, O. Schwahn, R. Natella, N. Suri, D. Cotroneo, No PAIN, no gain?: the utility of PArallel fault INjections, in: Proc. ICSE, IEEE Press, 2015, pp. 494-505.
Sil2 assessment of an active/standby cots-based safety-related system. G Mazzeo, L Coppolino, S Antonio, C Mazzariello, L Romano, Elsevier Reliability Engineering & System Safety. 176G. Mazzeo, L. Coppolino, S. D'Antonio, C. Mazzariello, L. Romano, Sil2 assessment of an active/standby cots-based safety-related system, Elsevier Reliability Engineering & System Safety 176 (2018) 125-134.
NASA-GB-8719.13NASA, Software Safety Guidebook. NASA, Software Safety Guidebook, NASA-GB-8719.13.
ISO/IEC 25045, Systems and Software Engineering -Systems and Software Quality Requirements and Evaluation (SQuaRE) -Evaluation module for recoverability. Rtca Rtca, Do, RTcA, RTCA DO, ISO/IEC 25045, Systems and Software Engineering - Systems and Software Quality Requirements and Evaluation (SQuaRE) -Evaluation module for recoverability.
Common Criteria for Information Technology Security Evaluation (Version 3.1, Revision 4) Part 1-3 (ISO/IEC 15408. Iso/Iec, ISO/IEC, Common Criteria for Information Technology Security Eval- uation (Version 3.1, Revision 4) Part 1-3 (ISO/IEC 15408) (2012).
Information Assurance Directorate, US Government Protection Profile for Separation Kernels in Environments Requiring High Robustness. National Security AgencyTech. rep.Information Assurance Directorate, US Government Protection Profile for Separation Kernels in Environments Requiring High Robustness, Tech. rep., National Security Agency (2007).
A survey on formal specification and verification of separation kernels. Y Zhao, Z Yang, D Ma, Springer Frontiers of Computer Science. 114Y. Zhao, Z. Yang, D. Ma, A survey on formal specification and veri- fication of separation kernels, Springer Frontiers of Computer Science 11 (4) (2017) 585-607.
NIST, Security and Privacy Controls for Federal Information Systems and Organizations (NIST SP 800-53 R4), http://dx.doi.org/10.6028/NIST.SP.800-53r4 (2013).
The Linux Foundation, Priority inversion - priority inheritance, https://wiki.linuxfoundation.org/realtime/documentation/technical_basics/pi.
The Lock Holder and the Lock Waiter Pre-Emption Problems: Nip Them in the Bud Using Informed Spinlocks (I-Spinlock). B Teabe, V Nitu, A Tchana, D Hagimont, Proc. EuroSys, ACM. EuroSys, ACMB. Teabe, V. Nitu, A. Tchana, D. Hagimont, The Lock Holder and the Lock Waiter Pre-Emption Problems: Nip Them in the Bud Using Informed Spinlocks (I-Spinlock), in: Proc. EuroSys, ACM, 2017, p. 286-297.
The MILS architecture for high-assurance embedded systems. J Alves-Foss, P W Oman, C Taylor, W S Harrison, Inderscience International Journal of Embedded Systems. 23-4J. Alves-Foss, P. W. Oman, C. Taylor, W. S. Harrison, The MILS ar- chitecture for high-assurance embedded systems, Inderscience Interna- tional Journal of Embedded Systems 2 (3-4) (2006) 239-247.
Wind River Systems, Inc., Wind River VxWorks MILS Platform 3.0, multi-core edition, https://www.windriver.com/products/product-notes/vxworks-mils-multi-core-platform-product-note/vxworks-mils-multi-core-platform-product-note.pdf.
Timing covert channel analysis of the vxworks mils embedded hypervisor under the common criteria security certification. D Cotroneo, L Simone, R Natella, Computers & Security. 106102307D. Cotroneo, L. De Simone, R. Natella, Timing covert channel analysis of the vxworks mils embedded hypervisor under the common criteria security certification, Computers & Security 106 (2021) 102307.
R V Aroca, G Caurin, S Carlos-Sp-Brasil, A real time operating systems (rtos) comparison, in: WSO-Workshop de Sistemas Operacionais. Citeseer12R. V. Aroca, G. Caurin, S. Carlos-SP-Brasil, A real time operating sys- tems (rtos) comparison, in: WSO-Workshop de Sistemas Operacionais, Vol. 12, Citeseer, 2009.
PikeOS, PikeOS product overview. URL https://www.sysgo.com/fileadmin/user_upload/www.sysgo.com/redaktion/downloads/pdf/data-sheets/SYSGO-Product-Overview-PikeOS.pdf
IDP: An Analysis of a Cache-Based Timing Side Channel Attack and a Countermeasure on. M August, PikeOS. M. August, IDP: An Analysis of a Cache-Based Timing Side Channel Attack and a Countermeasure on PikeOS (2014).
Advanced encryption standard (AES). S Heron, Network Se- curity 2009ElsevierS. Heron, Advanced encryption standard (AES), Elsevier Network Se- curity 2009 (12) (2009) 8-12.
Formal api specification of the pikeos separation kernel. F Verbeek, O Havle, J Schmaltz, S Tverdyshev, H Blasum, B Langenstein, W Stephan, B Wolff, Y Nemouchi, NASA Formal Methods Symposium. SpringerF. Verbeek, O. Havle, J. Schmaltz, S. Tverdyshev, H. Blasum, B. Lan- genstein, W. Stephan, B. Wolff, Y. Nemouchi, Formal api specification of the pikeos separation kernel, in: NASA Formal Methods Symposium, Springer, 2015, pp. 375-389.
What is intransitive noninterference?. A W Roscoe, M H Goldsmith, Proc. CSF Workshop. CSF WorkshopIEEEA. W. Roscoe, M. H. Goldsmith, What is intransitive noninterference?, in: Proc. CSF Workshop, IEEE, 1999, pp. 228-238.
Verifying the pikeos microkernel: first results in the verisoft xt avionics project. C Baumann, T Bormer, Proc. SSV. SSV20C. Baumann, T. Bormer, Verifying the pikeos microkernel: first results in the verisoft xt avionics project, in: Proc. SSV, 2009, p. 20.
Benchmarking analysis and characterization of hypervisors for space multicore systems. V Muttillo, L Tiberi, L Pomante, P Serri, Journal of Aerospace Information Systems. 1611V. Muttillo, L. Tiberi, L. Pomante, P. Serri, Benchmarking analysis and characterization of hypervisors for space multicore systems, Journal of Aerospace Information Systems 16 (11) (2019) 500-511.
Xtratum: a hypervisor for safety critical embedded systems. M Masmano, I Ripoll, A Crespo, J Metge, Proc. RTLWS, Citeseer. RTLWS, CiteseerM. Masmano, I. Ripoll, A. Crespo, J. Metge, Xtratum: a hypervisor for safety critical embedded systems, in: Proc. RTLWS, Citeseer, 2009, pp. 263-272.
Open source implementation of hierarchical scheduling for integrated modular avionics. J Zamorano, J De La Puente, Proc. RTLWS. RTLWSJ. Zamorano, J. de la Puente, Open source implementation of hierarchi- cal scheduling for integrated modular avionics, in: Proc. RTLWS, 2010.
ORK+/XtratuM: An open partitioning platform for Ada. Á Esquinas, J Zamorano, A Juan, M Masmano, I Ripoll, A Crespo, Proc. Ada-Europe. Ada-EuropeSpringerÁ. Esquinas, J. Zamorano, A. Juan, M. Masmano, I. Ripoll, A. Crespo, ORK+/XtratuM: An open partitioning platform for Ada, in: Proc. Ada- Europe, Springer, 2011, pp. 160-173.
Oversee-a generic floss communication and application platform for vehicles. N Mcguire, A Platschek, G Schiesser, Proc. RTLWS. RTLWSN. McGuire, A. Platschek, G. Schiesser, Oversee-a generic floss com- munication and application platform for vehicles, in: Proc. RTLWS, 2010.
Hypervisor-based virtual hardware for fault tolerance in cots processors targeting space applications. S Campagna, M Hussain, M Violante, Proc. DFT. DFTIEEES. Campagna, M. Hussain, M. Violante, Hypervisor-based virtual hard- ware for fault tolerance in cots processors targeting space applications, in: Proc. DFT, IEEE, 2010, pp. 44-51.
Initial performance study of xtratum/ppc. R Zhou, S Bai, B Wang, N Mcguire, Q Zhou, L Li, R. Zhou, S. Bai, B. Wang, N. McGuire, Q. Zhou, L. Li, Initial perfor- mance study of xtratum/ppc.
Xtratum hypervisor redesign for leon4 multicore processor. E Carrascosa, J Coronel, M Masmano, P Balbastre, A Crespo, ACM SIGBED Review. 112E. Carrascosa, J. Coronel, M. Masmano, P. Balbastre, A. Crespo, Xtra- tum hypervisor redesign for leon4 multicore processor, ACM SIGBED Review 11 (2) (2014) 27-31.
Jailhouse hypervisor source code. A G Siemens, Siemens AG, Jailhouse hypervisor source code. URL https://github.com/siemens/jailhouse
QEMU, Homepage of QEMU. URL https://www.qemu.org/
CORDIS, SELENE: Self-monitored Dependable platform for High-Performance Safety-Critical Systems, https://cordis.europa.eu/project/id/871467.
NOVA: a microhypervisor-based secure virtualization architecture. U Steinberg, B Kauer, Proc. EuroSys, ACM. EuroSys, ACMU. Steinberg, B. Kauer, NOVA: a microhypervisor-based secure virtual- ization architecture, in: Proc. EuroSys, ACM, 2010, pp. 209-222.
. H Tews, T Weber, M Völp, E Poll, M Van Eekelen, P Van Rossum, NovaH. Tews, T. Weber, M. Völp, E. Poll, M. van Eekelen, P. van Rossum, Nova micro-hypervisor verification (2008).
Formal verification of an os kernel. G Klein, K Elphinstone, G Heiser, J Andronick, D Cock, P Derrin, D Elkaduwe, K Engelhardt, R Kolanski, M Norrish, Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles. the ACM SIGOPS 22nd symposium on Operating systems principles4G. Klein, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, R. Kolanski, M. Norrish, et al., sel4: For- mal verification of an os kernel, in: Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles, 2009, pp. 207-220.
From l3 to sel4 what have we learnt in 20 years of l4 microkernels?. K Elphinstone, G Heiser, Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. the Twenty-Fourth ACM Symposium on Operating Systems PrinciplesK. Elphinstone, G. Heiser, From l3 to sel4 what have we learnt in 20 years of l4 microkernels?, in: Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, 2013, pp. 133-150.
Can we prove time protection?. G Heiser, G Klein, T Murray, Proceedings of the Workshop on Hot Topics in Operating Systems. the Workshop on Hot Topics in Operating SystemsG. Heiser, G. Klein, T. Murray, Can we prove time protection?, in: Pro- ceedings of the Workshop on Hot Topics in Operating Systems, 2019, pp. 23-29.
Scheduling-context capabilities: A principled, light-weight operating-system mechanism for managing time. A Lyons, K Mcleod, H Almatary, G Heiser, Proc. EuroSys. EuroSysACMA. Lyons, K. McLeod, H. Almatary, G. Heiser, Scheduling-context ca- pabilities: A principled, light-weight operating-system mechanism for managing time, in: Proc. EuroSys, ACM, 2018, pp. 1-16.
The Linux Foundation, CAmkES documentation. The Linux Foundation, CAmkES documentation. URL https://docs.sel4.systems/projects/camkes/
Timing analysis of a protected operating system kernel. B Blackham, Y Shi, S Chattopadhyay, A Roychoudhury, G Heiser, 2011 IEEE 32nd Real-Time Systems Symposium. IEEEB. Blackham, Y. Shi, S. Chattopadhyay, A. Roychoudhury, G. Heiser, Timing analysis of a protected operating system kernel, in: 2011 IEEE 32nd Real-Time Systems Symposium, IEEE, 2011, pp. 339-348.
High-assurance timing analysis for a highassurance real-time operating system. T Sewell, F Kam, G Heiser, Real-Time Systems. 535T. Sewell, F. Kam, G. Heiser, High-assurance timing analysis for a high- assurance real-time operating system, Real-Time Systems 53 (5) (2017) 812-853.
Towards fault-tolerant task backup and recovery in the sel4 microkernel. G Luan, Y Bai, L Xu, C Yu, C Wang, J Zeng, Q Chen, W Wang, Proc. COMPSAC. COMPSACIEEE1G. Luan, Y. Bai, L. Xu, C. Yu, C. Wang, J. Zeng, Q. Chen, W. Wang, Towards fault-tolerant task backup and recovery in the sel4 microkernel, in: Proc. COMPSAC, Vol. 1, IEEE, 2018, pp. 721-726.
Towards faulttolerant real-time scheduling in the sel4 microkernel. L Xu, Y Bai, K Cheng, L Ge, D Nie, L Zhang, W Liu, Proc. HPCC/S-martCity/DSS. HPCC/S-martCity/DSSIEEEL. Xu, Y. Bai, K. Cheng, L. Ge, D. Nie, L. Zhang, W. Liu, Towards fault- tolerant real-time scheduling in the sel4 microkernel, in: Proc. HPCC/S- martCity/DSS, IEEE, 2016, pp. 711-718.
Intel virtualization technology: Hardware support for efficient processor virtualization. G Neiger, A Santoni, F Leung, D Rodgers, R Uhlig, Intel Technology Journal. 103G. Neiger, A. Santoni, F. Leung, D. Rodgers, R. Uhlig, Intel virtualiza- tion technology: Hardware support for efficient processor virtualization., Intel Technology Journal 10 (3).
Using Xen and KVM as Real-Time Hypervisors. L Abeni, D Faggioli, Elsevier JSA101709L. Abeni, D. Faggioli, Using Xen and KVM as Real-Time Hypervisors, Elsevier JSA (2020) 101709.
An experimental analysis of the xen and kvm latencies. L Abeni, D Faggioli, Proc. ISORC. ISORCIEEEL. Abeni, D. Faggioli, An experimental analysis of the xen and kvm latencies, in: Proc. ISORC, IEEE, 2019, pp. 18-26.
. P Bonzini, Kvm Realtime, P. Bonzini, Realtime KVM, https://lwn.net/Articles/656807/.
Rtopen stack: Cpu resource management for real-time cloud computing. S Xi, C Li, C Lu, C D Gill, M Xu, L T Phan, I Lee, O Sokolsky, Proc. CLOUD. CLOUDIEEES. Xi, C. Li, C. Lu, C. D. Gill, M. Xu, L. T. Phan, I. Lee, O. Sokolsky, Rt- open stack: Cpu resource management for real-time cloud computing, in: Proc. CLOUD, IEEE, 2015, pp. 179-186.
The Linux Foundation, The Automotive Grade Linux Software Defined Connected Car Architecture, White Paper. URL https://www.automotivelinux.org/wp-content/uploads/sites/4/2018/06/agl_software_defined_car_jun18.pdf
Rt-xen: Towards real-time hypervisor scheduling in xen. S Xi, J Wilson, C Lu, C Gill, Proc. EMSOFT, ACM. EMSOFT, ACMS. Xi, J. Wilson, C. Lu, C. Gill, Rt-xen: Towards real-time hypervisor scheduling in xen, in: Proc. EMSOFT, ACM, 2011, pp. 39-48.
Real-time multi-core virtual machine scheduling in xen. S Xi, M Xu, C Lu, L T Phan, C Gill, O Sokolsky, I Lee, Proc. EMSOFT. EMSOFTIEEES. Xi, M. Xu, C. Lu, L. T. Phan, C. Gill, O. Sokolsky, I. Lee, Real-time multi-core virtual machine scheduling in xen, in: Proc. EMSOFT, IEEE, 2014, pp. 1-10.
Parfait: A new scheduler framework supporting heterogeneous xen-arm schedulers. J.-W Jeong, S Yoo, C Yoo, Proc. CCNC. CCNCIEEEJ.-W. Jeong, S. Yoo, C. Yoo, Parfait: A new scheduler framework supporting heterogeneous xen-arm schedulers, in: Proc. CCNC, IEEE, 2011, pp. 1192-1196.
Enforcing performance isolation across virtual machines in xen. D Gupta, L Cherkasova, R Gardner, A Vahdat, Proc. Middleware. MiddlewareSpringerD. Gupta, L. Cherkasova, R. Gardner, A. Vahdat, Enforcing performance isolation across virtual machines in xen, in: Proc. Middleware, Springer, 2006, pp. 342-362.
Communication-aware cpu management in consolidated xen-based hosting platforms. S Govindan, J Choi, A R Nath, A Das, B Urgaonkar, A Sivasubramaniam, Xen , IEEE TOC. 8S. Govindan, J. Choi, A. R. Nath, A. Das, B. Urgaonkar, A. Sivasub- ramaniam, Xen and co.: Communication-aware cpu management in consolidated xen-based hosting platforms, IEEE TOC (8) (2009) 1111- 1125.
Enabling Virtualization with Xen Hypervisor on Zynq Ul-traScale+ MPSoCs (White Paper). Inc Xilinx, XilinxXilinx, Inc., Enabling Virtualization with Xen Hypervisor on Zynq Ul- traScale+ MPSoCs (White Paper), Xilinx.
Citrix Systems, Xen Test Framework Home Page, http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.
Citrix Systems, OSStest Xen Project README, http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.
A safe & secure arinc 653 hypervisor. S H Vanderleest, D Greve, P Skentzos, Proc. DASC. DASCIEEES. H. VanderLeest, D. Greve, P. Skentzos, A safe & secure arinc 653 hypervisor, in: Proc. DASC, IEEE, 2013, pp. 7B4-1.
FuSa SIG, FuSa SIG Charter, https://wiki.xen.org/wiki/FuSa_SIG/Charter.
P. McKenney, A realtime preemption overview, https://lwn.net/Articles/146861/.
Towards linux as a real-time hypervisor. J Kiszka, Proc. RTLWS. RTLWSCiteseerJ. Kiszka, Towards linux as a real-time hypervisor, in: Proc. RTLWS, Citeseer, 2009, pp. 215-224.
Respecting temporal constraints in virtualised services. T Cucinotta, G Anastasi, L Abeni, Proc. COMPSAC. COMPSACIEEE2T. Cucinotta, G. Anastasi, L. Abeni, Respecting temporal constraints in virtualised services, in: Proc. COMPSAC, Vol. 2, IEEE, 2009, pp. 73- 78.
P. Menage, cgroups documentation, https://www.kernel.org/doc/Documentation/cgroup-v2.txt.
Providing performance guarantees to virtual machines using real-time scheduling. T Cucinotta, D Giani, D Faggioli, F Checconi, Proc. Euro-Par. Euro-ParSpringerT. Cucinotta, D. Giani, D. Faggioli, F. Checconi, Providing performance guarantees to virtual machines using real-time scheduling, in: Proc. Euro-Par, Springer, 2010, pp. 657-664.
Performance analysis towards a KVM-based embedded real-time virtualization architecture. J Zhang, K Chen, B Zuo, R Ma, Y Dong, H Guan, Proc. ICCIT. ICCITIEEEJ. Zhang, K. Chen, B. Zuo, R. Ma, Y. Dong, H. Guan, Performance analysis towards a KVM-based embedded real-time virtualization archi- tecture, in: Proc. ICCIT, IEEE, 2010, pp. 421-426.
LTP developers, Description of LTP real-time test cases, https://github.com/linux-test-project/ltp/blob/master/testcases/realtime/00_Descriptions.txt.
ARM, TrustZone Technology for Microcontrollers, https://www.arm.com/why-arm/technologies/trustzone-for-cortex-m.
Intel sgx explained. V Costan, S Devadas, 2016/086Cryptology ePrint Archive. ReportV. Costan, S. Devadas, Intel sgx explained, Cryptology ePrint Archive, Report 2016/086, http://eprint.iacr.org/2016/086 (2016).
Isolating real-time safety-critical embedded systems via sgx-based lightweight virtualization. L. De Simone, G Mazzeo, Proc. ISSREW. ISSREWIEEEL. De Simone, G. Mazzeo, Isolating real-time safety-critical embed- ded systems via sgx-based lightweight virtualization, in: Proc. ISSREW, IEEE, 2019, pp. 308-313.
Unikernels: Rise of the Virtual Library Operating System. A Madhavapeddy, D J Scott, ACM Queue. 111130A. Madhavapeddy, D. J. Scott, Unikernels: Rise of the Virtual Library Operating System, ACM Queue 11 (11) (2013) 30.
Unikernels: Library operating systems for the cloud. A Madhavapeddy, R Mortier, C Rotsos, D Scott, B Singh, T Gazagnaire, S Smith, S Hand, J Crowcroft, ACM SIGARCH Computer Architecture News. 411A. Madhavapeddy, R. Mortier, C. Rotsos, D. Scott, B. Singh, T. Gaza- gnaire, S. Smith, S. Hand, J. Crowcroft, Unikernels: Library operating systems for the cloud, ACM SIGARCH Computer Architecture News 41 (1) (2013) 461-472.
Thin Hypervisor-based Security Architectures for Embedded Platforms. H Douglas, Royal Institute of TechnologyPh.D. thesisH. Douglas, Thin Hypervisor-based Security Architectures for Embed- ded Platforms, Ph.D. thesis, Royal Institute of Technology (2010).
ARM TrustZone as a Virtualization Technique in Embedded Systems. T Frenzel, A Lackorzynski, A Warg, H Härtig, Proc. RTLWS. RTLWST. Frenzel, A. Lackorzynski, A. Warg, H. Härtig, ARM TrustZone as a Virtualization Technique in Embedded Systems, in: Proc. RTLWS, 2010, pp. 29-42.
Ltzvisor: Trustzone is the key. S Pinto, J Pereira, T Gomes, A Tavares, J Cabral, Proc. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer InformatikS. Pinto, J. Pereira, T. Gomes, A. Tavares, J. Cabral, Ltzvisor: Trustzone is the key, in: Proc. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
Acceleration of dual OS virtualization in embedded systems. S.-C Oh, K Koh, C.-Y Kim, K Kim, S Kim, Proc. ICCCT. ICCCTIEEES.-C. Oh, K. Koh, C.-Y. Kim, K. Kim, S. Kim, Acceleration of dual OS virtualization in embedded systems, in: Proc. ICCCT, IEEE, 2012, pp. 1098-1101.
Affordable separation on embedded platforms. O Schwarz, C Gehrmann, V Do, Proc. TRUST. TRUSTSpringerO. Schwarz, C. Gehrmann, V. Do, Affordable separation on embedded platforms, in: Proc. TRUST, Springer, 2014, pp. 37-54.
Lightweight multicore virtualization architecture exploiting arm trustzone. S Pinto, A Oliveira, J Pereira, J Cabral, J Monteiro, A Tavares, Proc. IECON. IECONIEEES. Pinto, A. Oliveira, J. Pereira, J. Cabral, J. Monteiro, A. Tavares, Light- weight multicore virtualization architecture exploiting arm trustzone, in: Proc. IECON, IEEE, 2017, pp. 3562-3567.
Towards a trustzone-assisted hypervisor for real-time embedded systems. S Pinto, J Pereira, T Gomes, M Ekpanyapong, A Tavares, IEEE Computer Architecture Letters. 162S. Pinto, J. Pereira, T. Gomes, M. Ekpanyapong, A. Tavares, Towards a trustzone-assisted hypervisor for real-time embedded systems, IEEE Computer Architecture Letters 16 (2) (2016) 158-161.
µRTZVisor: A secure and safe real-time hypervisor. J Martins, J Alves, J Cabral, A Tavares, S Pinto, MDPI Electronics. 6493J. Martins, J. Alves, J. Cabral, A. Tavares, S. Pinto, µRTZVisor: A secure and safe real-time hypervisor, MDPI Electronics 6 (4) (2017) 93.
VOSYSmonitor, a TrustZone-based Hypervisor for ISO 26262 Mixed-critical System. P Lucas, K Chappuis, B Boutin, J Vetter, D Raho, Proc. FRUCT. FRUCTIEEEP. Lucas, K. Chappuis, B. Boutin, J. Vetter, D. Raho, VOSYSmonitor, a TrustZone-based Hypervisor for ISO 26262 Mixed-critical System, in: Proc. FRUCT, IEEE, 2018, pp. 231-238.
VOSYSmonitor, a low latency monitor layer for mixed-criticality systems on ARMv8-A. P Lucas, K Chappuis, M Paolino, N Dagieu, D Raho, Proc ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer InformatikP. Lucas, K. Chappuis, M. Paolino, N. Dagieu, D. Raho, VOSYS- monitor, a low latency monitor layer for mixed-criticality systems on ARMv8-A, in: Proc ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
ARM Holdings, ARM Trusted Firmware repository, https://github.com/ARM-software/arm-trusted-firmware.
V Struhár, M Behnam, M Ashjaei, A V Papadopoulos, Real-Time Containers: A Survey, in: 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020. 2020OpenAccess Series in Informatics (OASIcs)V. Struhár, M. Behnam, M. Ashjaei, A. V. Papadopoulos, Real-Time Containers: A Survey, in: 2nd Workshop on Fog Computing and the IoT (Fog-IoT 2020), OpenAccess Series in Informatics (OASIcs), 2020.
Linux Programmer's Manual, namespaces(7). URL http://man7.org/linux/man-pages/man7/namespaces
Rt-cases: Containerbased virtualization for temporally separated mixed-criticality task sets. M Cinque, R Della Corte, A Eliso, A Pecchia, Proc. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer InformatikM. Cinque, R. Della Corte, A. Eliso, A. Pecchia, Rt-cases: Container- based virtualization for temporally separated mixed-criticality task sets, in: Proc. ECRTS, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2019.
Container-based real-time scheduling in the linux kernel. L Abeni, A Balsini, T Cucinotta, ACM SIGBED Review. 163L. Abeni, A. Balsini, T. Cucinotta, Container-based real-time scheduling in the linux kernel, ACM SIGBED Review 16 (3) (2019) 33-38.
Virtual network functions as real-time containers in private clouds. T Cucinotta, L Abeni, M Marinoni, A Balsini, C Vitucci, IEEE CLOUDT. Cucinotta, L. Abeni, M. Marinoni, A. Balsini, C. Vitucci, Virtual network functions as real-time containers in private clouds., in: IEEE CLOUD, 2018, pp. 916-919.
Reducing temporal interference in private clouds through real-time containers. T Cucinotta, L Abeni, M Marinoni, A Balsini, C Vitucci, Proc. EDGE. EDGEIEEET. Cucinotta, L. Abeni, M. Marinoni, A. Balsini, C. Vitucci, Reducing temporal interference in private clouds through real-time containers, in: Proc. EDGE, IEEE, 2019, pp. 124-131.
NFV-Bench: A Dependability Benchmark for Network Function Virtualization Systems. D Cotroneo, L Simone, R Natella, IEEE TNSM. 144D. Cotroneo, L. De Simone, R. Natella, NFV-Bench: A Dependability Benchmark for Network Function Virtualization Systems, IEEE TNSM 14 (4) (2017) 934-948.
LXC, LXC - Linux Containers, https://linuxcontainers.org/.
P Olivier, D Chiba, S Lankes, C Min, B Ravindran, Proceedings of the 15th ACM SIG-PLAN/SIGOPS International Conference on Virtual Execution Environments. the 15th ACM SIG-PLAN/SIGOPS International Conference on Virtual Execution EnvironmentsA binarycompatible unikernelP. Olivier, D. Chiba, S. Lankes, C. Min, B. Ravindran, A binary- compatible unikernel, in: Proceedings of the 15th ACM SIG- PLAN/SIGOPS International Conference on Virtual Execution Environ- ments, 2019, pp. 59-73.
A syscall-level binary-compatible unikernel. P Olivier, H Lefeuvre, D Chiba, S Lankes, C Min, B Ravindran, IEEE Transactions on Computers. P. Olivier, H. Lefeuvre, D. Chiba, S. Lankes, C. Min, B. Ravindran, A syscall-level binary-compatible unikernel, IEEE Transactions on Com- puters.
Unikraft: fast, specialized unikernels the easy way. S Kuenzer, V.-A Bȃdoiu, H Lefeuvre, S Santhanam, A Jung, G Gain, C Soldani, C Lupu, Ş Teodorescu, C Rȃducanu, Proc. EuroSys. EuroSysACMS. Kuenzer, V.-A. Bȃdoiu, H. Lefeuvre, S. Santhanam, A. Jung, G. Gain, C. Soldani, C. Lupu, Ş . Teodorescu, C. Rȃducanu, et al., Unikraft: fast, specialized unikernels the easy way, in: Proc. EuroSys, ACM, 2021, pp. 376-394.
The Linux Foundation, MirageOS Homepage. The Linux Foundation, MirageOS Homepage, https://mirage.io/.
OSv-optimizing the operating system for virtual machines. A Kivity, D Laor, G Costa, P Enberg, N Har'el, D Marti, V Zolotarov, Proc. USENIX ATC. USENIX ATCA. Kivity, D. Laor, G. Costa, P. Enberg, N. Har'El, D. Marti, V. Zolotarov, OSv-optimizing the operating system for virtual ma- chines, in: Proc. USENIX ATC, 2014, pp. 61-72.
Rumprun developers, Rumprun GitHub Homepage. Rumprun developers, Rumprun GitHub Homepage, https://github. com/rumpkernel/rumprun.
The halvm: A simple platform for simple platforms, Xen Summit. A Wick, A. Wick, The halvm: A simple platform for simple platforms, Xen Sum- mit.
Clickos and the art of network function virtualization. J Martins, M Ahmed, C Raiciu, V Olteanu, M Honda, R Bifulco, F Huici, Proc. NSDI. NSDIJ. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici, Clickos and the art of network function virtualization, in: Proc. NSDI, 2014, pp. 459-473.
S Popuri, A tour of the mini-os kernel. S. Popuri, A tour of the mini-os kernel, https://www.cs.uic.edu/ spopuri/minios.html.
The Linux Foundation, Xen Networking, https://wiki.xenproject.org/wiki/Xen_Networking.
Hermitcore: a unikernel for extreme scale computing. S Lankes, S Pickartz, J Breitbart, Proceedings of the 6th International Workshop on Runtime and Operating Systems for Supercomputers. the 6th International Workshop on Runtime and Operating Systems for SupercomputersS. Lankes, S. Pickartz, J. Breitbart, Hermitcore: a unikernel for extreme scale computing, in: Proceedings of the 6th International Workshop on Runtime and Operating Systems for Supercomputers, 2016, pp. 1-8.
A low noise unikernel for extremscale systems. S Lankes, S Pickartz, J Breitbart, International Conference on Architecture of Computing Systems. SpringerS. Lankes, S. Pickartz, J. Breitbart, A low noise unikernel for extrem- scale systems, in: International Conference on Architecture of Comput- ing Systems, Springer, 2017, pp. 73-84.
G. Wassen, Hourglass benchmark github repository, https://github.com/georgwassen/hourglass.
Inferring scheduling behavior with hourglass. J Regehr, USENIX Annual Technical Conference, FREENIX Track. J. Regehr, Inferring scheduling behavior with hourglass., in: USENIX Annual Technical Conference, FREENIX Track, 2002, pp. 143-156.
Comprehensive formal verification of an os microkernel. G Klein, J Andronick, K Elphinstone, T Murray, T Sewell, R Kolanski, G Heiser, ACM TOCS. 321G. Klein, J. Andronick, K. Elphinstone, T. Murray, T. Sewell, R. Kolan- ski, G. Heiser, Comprehensive formal verification of an os microkernel, ACM TOCS 32 (1) (2014) 1-70.
The click modular router. E Kohler, R Morris, B Chen, J Jannotti, M F Kaashoek, ACM TOCS. 183E. Kohler, R. Morris, B. Chen, J. Jannotti, M. F. Kaashoek, The click modular router, ACM TOCS 18 (3) (2000) 263-297.
HERMES2020, qualification of High pErformance pRogrammable Microprocessor and dEvelopment of Software ecosystem, https://cordis.europa.eu/project/id/101004203.
The Linux Foundation, True Static Partitioning with Xen Dom0-less, https://xenproject.org/2019/12/16/true-static-partitioning-with-xen-dom0-less/.
The Linux Foundation, Dom0less, https://xenbits.xen.org/docs/4.15-testing/features/dom0less.html.
Breaking up is hard to do: security and functionality in a commodity hypervisor. P Colp, M Nanavati, J Zhu, W Aiello, G Coker, T Deegan, P Loscocco, A Warfield, Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles. the Twenty-Third ACM Symposium on Operating Systems PrinciplesP. Colp, M. Nanavati, J. Zhu, W. Aiello, G. Coker, T. Deegan, P. Loscocco, A. Warfield, Breaking up is hard to do: security and func- tionality in a commodity hypervisor, in: Proceedings of the Twenty- Third ACM Symposium on Operating Systems Principles, 2011, pp. 189-202.
The Linux Foundation, Dom0 Disaggregation, https://wiki.xenproject.org/wiki/Dom0_Disaggregation.
Finegrained fault tolerance for resilient pvm-based virtual machine monitors. D Mvondo, A Tchana, R Lachaize, D Hagimont, N De Palma, Proc. DSN. DSNIEEED. Mvondo, A. Tchana, R. Lachaize, D. Hagimont, N. De Palma, Fine- grained fault tolerance for resilient pvm-based virtual machine monitors, in: Proc. DSN, IEEE, 2020, pp. 197-208.
The Linux Foundation, Homepage of Enabling Linux In Safety Applications (ELISA) project. The Linux Foundation, Homepage of Enabling Linux In Safety Appli- cations (ELISA) project, https://elisa.tech/.
The Linux Foundation, Homepage of LF Edge Foundation, https://elisa.tech/.
Acrn: a big little hypervisor for iot development. H Li, X Xu, J Ren, Y Dong, Proc. VEE, ACM. VEE, ACMH. Li, X. Xu, J. Ren, Y. Dong, Acrn: a big little hypervisor for iot devel- opment, in: Proc. VEE, ACM, 2019, pp. 31-44.
Consolidate iot edge computing with lightweight virtualization. R Morabito, V Cozzolino, A Y Ding, N Beijar, J Ott, IEEE Network. 321R. Morabito, V. Cozzolino, A. Y. Ding, N. Beijar, J. Ott, Consolidate iot edge computing with lightweight virtualization, IEEE Network 32 (1) (2018) 102-111.
Container-based architecture for flexible industrial control applications. T Goldschmidt, S Hauck-Stattelmann, S Malakuti, S Grüner, JSA. 84ElsevierT. Goldschmidt, S. Hauck-Stattelmann, S. Malakuti, S. Grüner, Container-based architecture for flexible industrial control applications, Elsevier JSA 84 (2018) 28-36.
Evaluating performance of containerized iot services for clustered devices at the network edge. R Morabito, I Farris, A Iera, T Taleb, IEEE Internet of Things Journal. 44R. Morabito, I. Farris, A. Iera, T. Taleb, Evaluating performance of con- tainerized iot services for clustered devices at the network edge, IEEE Internet of Things Journal 4 (4) (2017) 1019-1030.
Unikernel network functions: A journey beyond the containers. T Kurek, IEEE Communications Magazine. 5712T. Kurek, Unikernel network functions: A journey beyond the contain- ers, IEEE Communications Magazine 57 (12) (2019) 15-19.
ISO, Assessment of the robustness of neural networks -Part 1: Overview, ISO/IEC DTR 24029-1: Artificial Intelligence (AI). ISO, Assessment of the robustness of neural networks -Part 1: Overview, ISO/IEC DTR 24029-1: Artificial Intelligence (AI).
Assessment of machine learning classification performance, ISO/IEC WD TS 4213 Information technology -Artificial Intelligence. ISOISO, Assessment of machine learning classification performance, ISO/IEC WD TS 4213 Information technology -Artificial Intelligence.
Challenges of Lattice Calculation of Scalar Mesons
Keh-Fei Liu
Dept. of Physics and Astronomy, University of Kentucky, Lexington, KY 40506, USA
26 May 2008
Keywords: Scalar Mesons, Lattice QCD, Tetraquark Mesoniums
PACS: 14.40.Cs, 14.40.Ev, 12.38.Gc
I review a proposed pattern of the light scalar mesons with qq mesons and glueball above 1 GeV and tetraquark mesoniums below 1 GeV. Several challenges and caveats of calculating these light scalar mesons with dynamical fermions are discussed.
INTRODUCTION
The pseudoscalar, vector, axial, and tensor mesons with light quarks (i.e., u, d and s) are reasonably well known in terms of their SU(3) classification and quark content. The scalar meson sector, on the other hand, is much less understood in this regard. There are 19 experimental states below 1.8 GeV, which is more than twice the usual q̄q nonet in the other sectors. We show in Fig. 1 the experimentally known scalars, including σ(600), κ(800), and f0(1710), which are better established experimentally nowadays [1,2]. The recent theoretical advance [3] in identifying σ(600) as a ππ resonance by solving the Roy equation has settled the question about the existence of σ(600). Nevertheless, there are still a number of puzzling features regarding the ordering of a0(1450) and K0*(1430) with respect to their counterparts in the axial-vector and tensor sectors, the narrowness of a0(980) and f0(980) in contrast to the broadness of σ(600) and κ(800), etc. [4]. We shall first review an emerging pattern of the scalar mesons below 1.8 GeV based on quenched lattice calculations and phenomenology, and then discuss the challenges and caveats of full QCD calculations of these scalar mesons on the lattice.
PATTERN OF LIGHT SCALAR MESONS
The unsettling features regarding the nature of a0(1450) and K0*(1430) are tentatively resolved in a recent quenched lattice calculation [5] with overlap fermions for a range of pion masses, with the lowest one at 180 MeV. When the quenched ghost states, which correspond to πη and πη′ scattering states in the dynamical fermion case, are removed, it is found that a0 is fairly independent of the quark mass. In other words, below the strange quark mass, a0 is very flat and approaches a0(1450) in the chiral limit. This suggests that SU(3) is a much better symmetry in the scalar meson sector than in the other meson sectors and that both a0(1450) and K0*(1430) are q̄q states. Furthermore, f0(1500), by virtue of the fact that it is close by, should be a fairly pure SU(3) octet state, i.e., f_octet = (uū + dd̄ − 2ss̄)/√6. Based on the lattice findings, a mixing scheme for the isoscalars f0(1370), f0(1500) and f0(1710) (the latter a glueball candidate), with slight SU(3) breaking, was developed and successfully fit to the decays into pseudoscalar meson pairs as well as various decays from J/Ψ [6]. Some of the robust and conspicuous features of this mixing scheme are the following:
• f0(1500) is indeed a fairly pure octet with very little mixing with the flavor singlet and the glueball. f0(1710) and f0(1370) are dominated by the glueball and the q̄q singlet, respectively, with ∼10% mixing between the two. This is consistent with the experimental result Γ(J/Ψ → γ f0(1710)) ∼ 5 Γ(J/Ψ → γ f0(1500)) [2], which favors f0(1710) to have a larger glueball content.
• The ratio Γ(f0(1500) → KK̄)/Γ(f0(1500) → ππ) = 0.246 ± 0.026 is one of the best experimentally determined decay ratios for these mesons [1]. If f0(1500) is a glueball (i.e., a flavor singlet) or ss̄, the ratio will be 0.84 or larger than unity, respectively. Either one is much larger than the experimental result. On the other hand, if f0(1500) is f_octet, then the ratio is 0.21, which is very close to the experimental value. This further demonstrates that f0(1500) is mainly an octet and that its experimental decay ratio can be well described with a small SU(3) breaking [3]. (A rough SU(3) counting that illustrates this pattern is sketched after this list.)
• Because the nn̄ content is more copious than the ss̄ in f0(1710) in this mixing scheme, the prediction of Γ(J/ψ → ω f0(1710))/Γ(J/ψ → φ f0(1710)) = 4.1 is naturally large and consistent with the observed value of 6.6 ± 2.7. This ratio is not easy to accommodate in a picture where the f0(1710) is dominated by ss̄. One may have to rely on a doubly OZI-suppressed process to dominate over the singly OZI-suppressed process to explain it [7].
The mesons below 1 GeV were suggested to be tetraquark mesoniums¹ from the MIT bag model [8] and potential model [9,10] studies. A recent lattice calculation [5] with the overlap fermion on 12³ × 28 and 16³ × 28 quenched lattices, with the two-quark-two-antiquark interpolation field Ψ̄γ5ΨΨ̄γ5Ψ, has confirmed the existence of such a low-lying scalar tetraquark mesonium at ∼550 MeV. This strongly suggests that it is the σ(600).
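A rough way to see why the octet assignment suppresses KK̄ relative to ππ is to count leading-order SU(3) couplings. The sketch below assumes flavor-symmetric (d-type) couplings of the scalar to two pseudoscalar octets, an S-wave rate proportional to the decay momentum, and no form factors or SU(3) breaking, with isospin-averaged masses mπ ≈ 138 MeV, mK ≈ 496 MeV and mf0 ≈ 1505 MeV; it only illustrates the pattern and is not the model of Ref. [6], whose fitted values are the 0.21 and 0.84 quoted above.

\[
\frac{\Gamma(f_{\rm octet}\to K\bar K)}{\Gamma(f_{\rm octet}\to \pi\pi)}
\approx \frac{1}{3}\,\frac{k_K}{k_\pi}
\approx \frac{1}{3}\times\frac{566\ \mathrm{MeV}}{740\ \mathrm{MeV}}\approx 0.26,
\qquad
\frac{\Gamma(f_{\rm singlet}\to K\bar K)}{\Gamma(f_{\rm singlet}\to \pi\pi)}
\approx \frac{4}{3}\,\frac{k_K}{k_\pi}\approx 1.0.
\]

Here the factors 1/3 and 4/3 collect the squared SU(3) couplings summed over the three ππ and four KK̄ charge states, and k_π, k_K are the decay momenta; for a pure ss̄ state the ππ coupling vanishes at this order, which is why that assignment predicts a ratio larger than unity.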
Combining the lattice calculations of a0(1450), K0*(1430) and σ(600) and the mixing study of f0(1370), f0(1500) and f0(1710), a classification of the scalar mesons below 1.8 GeV was proposed [4]. Those below 1 GeV, i.e., σ(600), a0(980), f0(980) and κ(800), form a nonet of tetraquark mesoniums; those above 1 GeV, i.e., a0(1450), K0*(1430) and f0(1500), form a fairly pure SU(3) octet; and f0(1370) and f0(1710) are a good SU(3) singlet and glueball, respectively, with ∼10% mixture between the two.
We should stress that this is not finalized. It should be scrutinized in future experiments, such as high-statistics J/Ψ, D, and B decays and p̄p annihilations. Furthermore, most of the lattice results which led to the above proposed pattern were based on quenched calculations. There are loose ends that need to be tightened once dynamical fermion calculations become available. In the following, we shall enumerate a number of challenges and the associated caveats in calculations with dynamical fermion configurations.
CHALLENGES AND CAVEATS OF FUTURE LATTICE CALCULATIONS WITH DYNAMICAL FERMIONS
In the quenched lattice calculation of a0 with light quarks corresponding to mπ < 500 MeV, the quenched πη ghost states are lower than the a0(1450) and thus dominate the long-time behavior of the scalar correlator with a non-unitary negative tail. This has to be removed [5,11,12] before the physical a0(1450) is revealed. These ghost states turn into physical two-meson scattering states in a full QCD calculation with the same valence and sea quark masses². This causes some difficulty in the fitting of scalar meson correlators and has been mentioned by S. Prelovsek [13] in this workshop. In the following, we shall point out several caveats and challenges facing the scalar meson calculation with light dynamical fermions.
a0(1450) and K0*(1430)
There are several Nf = 2 dynamical fermion calculations of a0 with the Ψ̄Ψ interpolation field [12,14,15,16,17]. Save for Ref. [12] which, upon removing the partially quenched ghost πη2 state, found a0 to be at 1.51(19) GeV, the others [15,16,17] found the lowest states in the chiral limit to be ∼1 GeV, suggesting that a0(980) is the q̄q state. As pointed out in Ref. [4], this is most likely an untenable interpretation. If a0(980) is indeed a q̄q state, or has a sizable coupling to the Ψ̄Ψ interpolation field, then replacing the u/d quark in the a0 interpolation field with s will place the corresponding sū state at ∼1100 MeV. This is far (i.e., ∼300 MeV) away from each of the two experimental states K0*(1430) and κ(800) (see Fig. 1). The likely resolution, we think, is that the state found at ∼1 GeV is the πη2 scattering state. Since η2, the η(η′) of the two-flavor case, is predicted to be ∼ √(2/3) mη′ = 782 MeV in the large-Nc analysis with the U(1) anomaly, the weakly interacting πη2 will be near the state seen at ∼1 GeV. Parallel to the lesson learned in lattice calculations of pentaquark baryons [18], one has to include the multi-hadron states in addition to the physical resonances when fitting the two-point correlators for the excited spectrum. In the case of a0 in the realistic Nf = 2+1 case, one needs to include πη and πη′, in addition to the physical a0(980) and a0(1450). This can be achieved with the sequential empirical Bayes method for curve-fitting [22] or the variational approach. Furthermore, one needs to distinguish the two-particle scattering states from the one-particle resonances. One way to distinguish a two-particle scattering state from a one-particle state is to examine the 3-volume dependence of the fitted spectral weight [19,20,5]. Another way is to impose a 'hybrid boundary condition' on the quark propagators [21]. No attempt has been made to identify the scattering πη(η′) states so far. This has to be carried out before one can reasonably reveal the quark content of a0(980) and a0(1450).
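As a concrete form of the volume test just mentioned, one compares the fitted spectral weights of a state at two spatial volumes. The relations below are the standard leading-order expectations for local (point-like) interpolation fields, written here only as an illustration of the criterion; the actual analyses are those of Refs. [19,20,5]:

\[
C(t)=\sum_i W_i\, e^{-E_i t},\qquad
\frac{W_{L_1}}{W_{L_2}}\;\simeq\;1\ \ \text{(one-particle state)},\qquad
\frac{W_{L_1}}{W_{L_2}}\;\simeq\;\frac{L_2^{3}}{L_1^{3}}\ \ \text{(two-particle state)},
\]

so the weight of a genuine one-particle state is essentially insensitive to the spatial volume, while that of a two-particle scattering state falls off as 1/L³.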
f0(980), f0(1370), f0(1500) and f0(1710)

In addition to the complication of two-meson scattering states (in this case ππ, KK̄, ηη, ηη′), one needs to calculate the correlators with disconnected insertions (D.I.) in addition to the connected insertions (C.I.) as in the a0 case. This reflects the fact that these isoscalar mesons have annihilation channels. The usual approach of adopting noise estimators [23] for the quark loops in the disconnected insertion makes the calculation much more expensive than the connected-insertion one. One caveat with the noise estimator is that the signal falls exponentially in the meson correlator, whereas the variance of the noise estimator approaches a constant beyond a certain time separation [23]. If one were to fit the time window where the variance of the noise levels off, the shoulder effect of the correlator could result in an unphysically light effective mass. In view of this, the very light mass from the D.I. part of the correlator in the f0 calculations [14,24] should be examined to determine whether it is the ππ scattering state or an artifact of the shoulder effect.
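A minimal way to picture the 'shoulder effect' is to model the measured D.I. correlator as the true exponential plus an effectively constant contamination c of the size of the noise floor; A, m and c below are generic placeholders rather than fitted numbers:

\[
C_{\rm D.I.}(t)\;\approx\;A\,e^{-m t}+c
\qquad\Longrightarrow\qquad
m_{\rm eff}(t)\;=\;\ln\frac{C_{\rm D.I.}(t)}{C_{\rm D.I.}(t+1)}\;\longrightarrow\;0
\quad\text{once}\ A\,e^{-m t}\lesssim c,
\]

so a fit restricted to the window where the noise has leveled off is pulled toward a spuriously light mass.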
Glueball
The continuum and large-volume limits of the quenched calculation place the scalar glueball at 1710(50)(80) MeV [25]. This is very close to the viable experimental glueball candidate f0(1710). To verify this in a full QCD calculation is, however, non-trivial. Whatever interpolation field one adopts, one has to disentangle the glueball from all the lower-lying f0 states and the ππ, KK̄, ηη and ηη′ two-meson states.
Tetraquark Mesoniums
If the nonet below 1 GeV in Fig. 1 is indeed dominated by the q²q̄² tetraquark mesoniums, one can access them through the Ψ̄γ5ΨΨ̄γ5Ψ operator or other four-quark operators with the same quantum numbers. In the case of a0(980) and f0(980), the two-meson threshold, i.e., KK̄, is close by. One may need a good variational method in order to disentangle them. By virtue of the fact that a0(980) and f0(980) are nearly degenerate, the D.I. should be small compared to the C.I. This should be confirmed in a full QCD calculation.

q̄q meson vs. q²q̄² tetraquark mesonium

The notion of a q̄q or q²q̄² meson is primarily a quark model concept of the valence quark content. How does one distinguish them in lattice QCD with interpolation fields? So far, neither a0(980) nor σ(600) couples to the Ψ̄Ψ interpolation field in the quenched approximation with a discernible signal [5]. If this is not true in full QCD calculations with light dynamical fermions, it will complicate matters substantially. One will need both q̄q and q²q̄² types of operators with a large basis in the variational calculation to identify the states and, moreover, to distinguish the one-particle states from the multi-meson scattering states.
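A standard way to carry out such a variational analysis over a combined basis of q̄q and q²q̄² operators O_i is the generalized eigenvalue problem; the equations below state the textbook method in generic form and are not specific to any of the calculations cited here:

\[
C_{ij}(t)=\langle O_i(t)\,O_j^{\dagger}(0)\rangle,\qquad
C(t)\,v_n(t,t_0)=\lambda_n(t,t_0)\,C(t_0)\,v_n(t,t_0),\qquad
\lambda_n(t,t_0)\;\propto\;e^{-E_n (t-t_0)}.
\]

The eigenvalues isolate the individual energies E_n, while the eigenvectors indicate how strongly each state overlaps with the q̄q and q²q̄² operators in the basis, which is the information needed to separate one-particle resonances from nearby multi-meson scattering states.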
FIGURE 1. Spectrum of scalar mesons together with π, ρ, a 1 and a 2 .
FIGURE 2. Pattern of light scalar mesons - a tetraquark mesonium nonet below 1 GeV, an almost pure SU(3) qq nonet and a nearly pure glueball above 1 GeV.
These are two-quark and two-antiquark mesons which have been referred to as four-quark states, meson molecules, mesoniums, and tetraquark states. We shall call them tetraquark mesoniums so as to avoid any implication about the nature of possible spatial and color clustering.
Otherwise, it is considered to be a partially quenched calculation.
CONCLUSION
We summarized the pattern of light scalar mesons that emerged from quenched lattice calculations and from the study of the mixing and decays of f 0 (1370), f 0 (1500) and f 0 (1710). We have discussed the subtleties and challenges of calculating them in full QCD with light dynamical fermions. In particular, if they couple strongly to both qq and q 2q2 types of interpolation operators, the interpretation of the underlying pattern will be considerably more complex. We hope that Nature is only subtle but not malicious.
ACKNOWLEDGMENTS
It is a pleasure for the author to thank G. Rupp for his invitation to attend the Scadron70 workshop on scalar mesons and his hospitality. He also acknowledges inspiring discussions with D.V. Bugg, H. Leutwyler, S. Prelovsek, J. Rosner, and M.D. Scadron.
. W.-M Yao, J. of Phys. 331Particle Data GroupW.-M. Yao et al., (Particle Data Group), J. of Phys. 33G, 1 (2006).
. E M Aitala, Phys. Rev. Lett. 86770E.M. Aitala et al., Phys. Rev. Lett. 86, 770 (2001);
. M Ablikim, Phys. Lett. 598149M. Ablikim et al., Phys. Lett. B598, 149 (2004);
. M Ablikim, Phys. Rev. 7292002M. Ablikim et al., Phys. Rev. D72, 092002 (2005);
. M Ablikim, Phys. Lett. 633681M. Ablikim et al., Phys. Lett. B633, 681 (2006).
. I Caprin, G Colangelo, H Leutwyler, hep-lat/0512346Phys. Rev. Lett. 96132001I. Caprin, G. Colangelo and H. Leutwyler, Phys. Rev. Lett. 96, 132001 (2006), [hep-lat/0512346].
. K F Liu, Prog. Theor. Phys. Suppl. 168K.F. Liu, Prog. Theor. Phys. Suppl. 168, 160-167 (2007).
. N Mathur, A Alexandru, Y Chen, S J Dong, T Draper, F X Horváth, K F Lee, S Liu, J B Tamhankar, Zhang, hep-ph/0607110Phys. Rev. 76114505N. Mathur, A. Alexandru, Y. Chen, S.J. Dong, T. Draper, Horváth, F.X. Lee, K.F. Liu, S. Tamhankar, and J.B. Zhang, Phys. Rev. D76, 114505 (2007), [hep-ph/0607110]
. H Y Cheng, C K Chua, K F Liu, Phys. Rev. 7494005H.Y. Cheng, C.K. Chua, and K.F. Liu, Phys. Rev. D74, 094005 (2006).
. C Amsler, F E Close, Phys. Lett. 353385C. Amsler and F.E. Close, Phys. Lett. B353, 385 (1995);
. Phys. Rev. 53295Phys. Rev. D53, 295 (1996);
. F E Close, A Kirk, Phys. Lett. 483345F.E. Close and A. Kirk, Phys. Lett. B483, 345 (2000);
. F E Close, Q Zhao, Phys. Rev. 7194022F.E. Close and Q. Zhao, Phys. Rev. D71, 094022 (2005).
. R L Jaffe, Phys. Rev. 15267R.L. Jaffe, Phys. Rev. D15, 267 (1977).
. K F Liu, C W Wong, Phys. Lett. 107391K.F. Liu and C.W. Wong, Phys. Lett. B107, 391 (1981).
. J Weinstein, N Isgur, Phys. Rev. Lett. 48659J. Weinstein and N. Isgur, Phys. Rev. Lett. 48, 659 (1982).
. W Bardeen, A Duncan, E Eichten, N Isgur, H Thacker, Phys. Rev. 6514509W. Bardeen, A. Duncan, E. Eichten, N. Isgur, and H. Thacker, Phys. Rev. D65, 014509 (2002).
. S Prelovsek, C Dawson, T Izubuchi, K Orginos, A Soni, Phys. Rev. 7094503S. Prelovsek, C. Dawson, T. Izubuchi, K. Orginos, and A. Soni, Phys. Rev. D70, 094503 (2004).
. S Prelovsek, these proceedingsS. Prelovsek, these proceedings.
. T Kunihiro, SCALAR Coll.hep-ph/0310312Phys. Rev. 7034504T. Kunihiro et al., SCALAR Coll., Phys. Rev. D70, 034504 (2004), [hep-ph/0310312].
. C Mcneile, C Michael, hep-lat/0604009Phys. Rev. 7414508C. McNeile and C. Michael, Phys. Rev. D74, 014508 (2006), [hep-lat/0604009].
. R Frigori, 114R. Frigori et al., PoSLAT2007, 114 (2007).
. K Hashimoto, T Izubuchi, arXiv:0803.0186K. Hashimoto and T. Izubuchi, [arXiv: 0803.0186].
. K F Liu, hep-lat/0610036int. J. Mod. Phys. 21851K.F. Liu, int. J. Mod. Phys. A21, 851 (2006), [hep-lat/0610036].
. N Mathur, hep-ph/0306199Phys. Lett. 605137N. Mathur et al., Phys. Lett. B605, 137 (2005), [hep-ph/0306199].
. N Mathur, hep-lat/0406196Phys. ReV. 7074508N. Mathur et al., Phys. ReV. D70, 074508 (2004), [hep-lat/0406196].
. N Ishii, hep-lat/0408030Phys. ReV. 7134001N. Ishii et al., Phys. ReV. D71, 034001 (2005), [hep-lat/0408030].
. Y Chen, hep-lat/0405001Y. Chen et al., hep-lat/0405001.
. S J Dong, K F Liu, hep-lat/9308015Phys. Lett. 328130S.J. Dong and K.F. Liu, Phys. Lett. B328, 130 (1994), [hep-lat/9308015].
. A Hart, C Mcneile, C Michael, J Pickavance, hep-lat/0608026Phys. ReV. 74114504A. Hart, C. McNeile,C. Michael, and J. Pickavance, Phys. ReV. D74, 114504 (2006), [hep-lat/0608026].
. Y Chen, hep-lat/051007471Phys. ReV. 7314516Y. Chen et al., Phys. ReV. D73, 014516 (2006), [hep-lat/051007471].
| []
|
[
"Solar polarimetry in the K i D 2 line: A novel possibility for a stratospheric balloon",
"Solar polarimetry in the K i D 2 line: A novel possibility for a stratospheric balloon"
]
| [
"C Quintero Noda \nInstitute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan\n",
"G L Villanueva \nNASA Goddard Space Flight Center, Planetary Systems Laboratory (Code 693)\nGreenbeltMDUSA\n",
"Y Katsukawa \nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n",
"S K Solanki \nMax Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 3D-37077GöttingenGermany\n\nSchool of Space Research\nKyung Hee University\nYongin, Gyeonggi, 446-701Korea\n",
"D Orozco Suárez \nInstituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía\n18008GranadaSpain\n",
"B Ruiz Cobo \nInstituto de Astrofísica de Canarias\nE-38200, La LagunaTenerifeSpain\n\nDepartamento de Astrofísica\nUniv. de La Laguna, La Laguna38205Tenerife, ESpain\n",
"T Shimizu \nInstitute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan\n",
"T Oba \nInstitute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan\n\nSOKENDAI (The Graduate University for Advanced Studies)\n252-5210SagamiharaKanagawaJapan\n",
"M Kubo \nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n",
"T Anan \nKwasan and Hida Observatories\nKyoto University\nKurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan\n",
"K Ichimoto \nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n\nKwasan and Hida Observatories\nKyoto University\nKurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan\n",
"Y Suematsu \nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n"
]
| [
"Institute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan",
"NASA Goddard Space Flight Center, Planetary Systems Laboratory (Code 693)\nGreenbeltMDUSA",
"National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan",
"Max Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 3D-37077GöttingenGermany",
"School of Space Research\nKyung Hee University\nYongin, Gyeonggi, 446-701Korea",
"Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía\n18008GranadaSpain",
"Instituto de Astrofísica de Canarias\nE-38200, La LagunaTenerifeSpain",
"Departamento de Astrofísica\nUniv. de La Laguna, La Laguna38205Tenerife, ESpain",
"Institute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan",
"Institute of Space and Astronautical Science\nJapan Aerospace Exploration Agency\n252-5210SagamiharaKanagawaJapan",
"SOKENDAI (The Graduate University for Advanced Studies)\n252-5210SagamiharaKanagawaJapan",
"National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan",
"Kwasan and Hida Observatories\nKyoto University\nKurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan",
"National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan",
"Kwasan and Hida Observatories\nKyoto University\nKurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan",
"National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan"
]
| []
| Of the two solar lines, K i D 1 and D 2 , almost all attention so far has been devoted to the D 1 line, as D 2 is severely affected by an O 2 atmospheric band. This, however, makes the latter appealing for balloon and space observations from above (most of) the Earth's atmosphere. We estimate the residual effect of the O 2 band on the K i D 2 line at altitudes typical for stratospheric balloons. Our aim is to study the feasibility of observing the 770 nm window. Specifically, this paper serves as a preparation for the third flight of the Sunrise balloon-borne observatory. The results indicate that the absorption by O 2 is still present, albeit much weaker, at the expected balloon altitude. We applied the obtained O 2 transmittance to K i D 2 synthetic polarimetric spectra and found that in the absence of line-of-sight motions, the residual O 2 has a negligible effect on the K i D 2 line. On the other hand, for Doppler-shifted K i D 2 data, the residual O 2 might alter the shape of the Stokes profiles. However, the residual O 2 absorption is sufficiently weak at stratospheric levels that it can be divided out if appropriate measurements are made, something that is impossible at ground level. Therefore, for the first time with Sunrise iii, we will be able to perform polarimetric observations of the K i D 2 line and, consequently, we will have improved access to the thermodynamics and magnetic properties of the upper photosphere from observations of the K i lines. | 10.1051/0004-6361/201732111 | [
"https://arxiv.org/pdf/1801.01655v1.pdf"
]
| 119,187,283 | 1801.01655 | 1f2857252b10c6b7389ee4952a36c65ed23d6249 |
Solar polarimetry in the K i D 2 line: A novel possibility for a stratospheric balloon
5 Jan 2018 2018 January 8, 2018
C Quintero Noda
Institute of Space and Astronautical Science
Japan Aerospace Exploration Agency
252-5210SagamiharaKanagawaJapan
G L Villanueva
NASA Goddard Space Flight Center, Planetary Systems Laboratory (Code 693)
GreenbeltMDUSA
Y Katsukawa
National Astronomical Observatory of Japan
2-21-1 Osawa181-8588MitakaTokyoJapan
S K Solanki
Max Planck Institute for Solar System Research
Justus-von-Liebig-Weg 3D-37077GöttingenGermany
School of Space Research
Kyung Hee University
Yongin, Gyeonggi, 446-701Korea
D Orozco Suárez
Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía
18008GranadaSpain
B Ruiz Cobo
Instituto de Astrofísica de Canarias
E-38200, La LagunaTenerifeSpain
Departamento de Astrofísica
Univ. de La Laguna, La Laguna38205Tenerife, ESpain
T Shimizu
Institute of Space and Astronautical Science
Japan Aerospace Exploration Agency
252-5210SagamiharaKanagawaJapan
T Oba
Institute of Space and Astronautical Science
Japan Aerospace Exploration Agency
252-5210SagamiharaKanagawaJapan
SOKENDAI (The Graduate University for Advanced Studies)
252-5210SagamiharaKanagawaJapan
M Kubo
National Astronomical Observatory of Japan
2-21-1 Osawa181-8588MitakaTokyoJapan
T Anan
Kwasan and Hida Observatories
Kyoto University
Kurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan
K Ichimoto
National Astronomical Observatory of Japan
2-21-1 Osawa181-8588MitakaTokyoJapan
Kwasan and Hida Observatories
Kyoto University
Kurabashira Kamitakara-cho, Takayama-city506-1314GifuJapan
Y Suematsu
National Astronomical Observatory of Japan
2-21-1 Osawa181-8588MitakaTokyoJapan
Solar polarimetry in the K i D 2 line: A novel possibility for a stratospheric balloon
5 Jan 2018; manuscript dated January 8, 2018. Received October 2017; accepted December 2017. Astronomy & Astrophysics manuscript no. oxygen2. Key words: Sun: magnetic fields - Techniques: polarimetric - Atmospheric effects - Balloons
Of the two solar lines, K i D 1 and D 2 , almost all attention so far has been devoted to the D 1 line, as D 2 is severely affected by an O 2 atmospheric band. This, however, makes the latter appealing for balloon and space observations from above (most of) the Earth's atmosphere. We estimate the residual effect of the O 2 band on the K i D 2 line at altitudes typical for stratospheric balloons. Our aim is to study the feasibility of observing the 770 nm window. Specifically, this paper serves as a preparation for the third flight of the Sunrise balloon-borne observatory. The results indicate that the absorption by O 2 is still present, albeit much weaker, at the expected balloon altitude. We applied the obtained O 2 transmittance to K i D 2 synthetic polarimetric spectra and found that in the absence of line-of-sight motions, the residual O 2 has a negligible effect on the K i D 2 line. On the other hand, for Doppler-shifted K i D 2 data, the residual O 2 might alter the shape of the Stokes profiles. However, the residual O 2 absorption is sufficiently weak at stratospheric levels that it can be divided out if appropriate measurements are made, something that is impossible at ground level. Therefore, for the first time with Sunrise iii, we will be able to perform polarimetric observations of the K i D 2 line and, consequently, we will have improved access to the thermodynamics and magnetic properties of the upper photosphere from observations of the K i lines.
Introduction
Quintero Noda et al. (2017) studied the spectral region at 770 nm containing, among other spectral lines, the K i D 1 and D 2 lines. The latter is blended with an atmospheric molecular oxygen line belonging to the O 2 A band. The authors mentioned that, by observing K i D 2 , we can have access to upper photospheric layers (slightly higher than those covered by K i D 1 ), which are located in between the heights of formation of traditional photospheric and chromospheric lines such as the infrared Ca ii lines (see Figure 9 of Quintero Noda et al. 2017). In this regard, the combination of spectral lines with various heights of formation brings the possibility, for instance, of continuously tracing the vertical stratification of the magnetic field or of studying the impact of photospheric events on upper atmospheric layers (see, for example, the review of Borrero et al. 2015). In addition, the K i D 2 is the most capable line of the doublet for performing quiet Sun polarimetric observations. This is because the K i D 2 line, similar to the Na i D 2 (see, for instance, Figure 1 in Stenflo et al. 2000), produces larger scattering polarization signals, by more than 5 times in the case of the Na i doublet, when observed at various heliocentric angles. Moreover, if we observe the D 2 line in combination with the D 1 transition, we also increase the signal-to-noise ratio for the polarization signals at the upper photosphere. This is crucial when performing inversions of the Stokes profiles, as the amount of information present in an observed data set is a monotonically increasing function of the number of available spectral lines (Asensio Ramos et al. 2007). Finally, we also emphasized in Quintero Noda et al. (2017) that the fact that the K i D 2 line is completely blocked by the Earth's atmosphere makes it an appealing candidate for satellite and balloon missions. Therefore, it is useful to estimate the possibilities of observing the potassium lines from a stratospheric balloon, such as the Sunrise solar balloon-borne observatory (Berkefeld et al. 2011; Gandorfer et al. 2011), which has had two successful science flights in 2009 and 2013 (Solanki et al. 2010, 2017). For the above reasons we selected K i D 1 and D 2 as candidate lines for the next Sunrise flight, i.e. Sunrise iii.
We plan to perform spectropolarimetric observations of those lines, aiming to infer the thermodynamics and properties of the magnetic field in the upper photosphere, around 600 km above the layer where the continuum optical depth is unity at 500 nm. However, this requires that we demonstrate the feasibility of observing the K i D 2 line from the stratosphere. Therefore, the main target of the present paper is to assess the influence of the O 2 atmospheric absorption on the K i D 2 line.
To this end, we compute the residual effect of the atmospheric O 2 in the middle stratosphere, around 35 km, following the characteristics, for example, the launch site, altitude and flight dates, of the two previous flights of the Sunrise mission (Solanki et al. 2010, 2017). We also quantify additional variations of the O 2 absorption due to, for instance, changes in the altitude of the balloon or the position of the Sun above the horizon, i.e. different airmass. After computing the O 2 transmission profiles for the above scenarios, we apply them to K i D 2 synthetic profiles, aiming to measure the effect on the Stokes spectra with special emphasis on the polarimetric signals.
Method
O 2 transmittance
We aim to compute the effect of the oxygen molecular bands on the solar spectra in the 770 nm window. For this purpose, we employed the Planetary Spectrum Generator (psg) (Villanueva et al. 2017). This tool 1 can be used to generate high-resolution spectra of planetary bodies (e.g. planets, moons, comets, and exoplanets). In addition, the code is able to compute atmospheric transmittances and radiances for various scenarios using the Planetary and Universal Model of Atmospheric Scattering (pumas) presented in Villanueva et al. (2015). This tool performs line-by-line calculations that have been validated and benchmarked with the accurate general line-by-line atmospheric transmittance and radiance model (genln2) (Edwards 1992).
We used psg to compute the O 2 transmittance spectrum. This spectrum provides information on the amount of solar radiation that is absorbed by the Earth's atmosphere due to the presence of oxygen molecules. In this regard, low transmittance values correspond to large O 2 absorption; this absorption is very prominent at 770 nm, where we can find the O 2 A band (e.g. Babcock & Herzberg 1948). The computed wavelength window comprises 20 nm between 755-775 nm with high spectral sampling, similar to that used in Quintero Noda et al. (2017). Additional molecules or aerosols are not considered in this work as O 2 is the main contributor to the spectral window of interest. We computed the atmospheric transmittance assuming hydrostatic equilibrium using a vertical profile with 55 layers where the O 2 abundance is constant for the range of heights of interest. The O 2 molecule considered corresponds to hitran (McClatchey 1973; Rothman et al. 2013) number 7 and takes into account all the isotopes of oxygen. We first performed a comparison between the solar atlas of Delbouille et al. (1973) and the solar spectra computed with psg. We aim to confirm that the program uses the same wavelength reference that we use later when computing the synthetic polarization signals. The mentioned atlas was observed in the early 70s from the Jungfraujoch International Scientific Station (Switzerland). Therefore, we generated the transmittance profile at 770 nm assuming that we are looking up to the Sun from the Earth (see Villanueva et al. 2017, for more information), from the Jungfraujoch observatory located at Switzerland (7.59 • E, 46.33 • N) at 3500 m above the sea level. When synthesizing telluric spectra, psg accesses the accurate nasa-merra2 (Gelaro et al. 2017) meteorological database to obtain vertical profiles (from the surface to 70 km) for the observatory site. This provides extremely realistic atmospheric conditions for any site on the planet with a precision of 30 minutes (from 1981 to date) and with a spatial resolution of 1 km (refined employing usgs-gtopo30 (Gesch & Larson 1996) topographic information).
1 https://psg.gsfc.nasa.gov
We show in the top panel of Figure 1 the solar spectra (without telluric contamination) and the O 2 transmittance computed with the psg tool. The bottom panel shows a comparison between the observed atlas (black) and the synthesized atlas, i.e. solar spectra plus telluric contamination, in orange. Observed and synthetic spectra show very good agreement in amplitude and wavelength, demonstrating the accuracy of the telluric modelling technique. The strong absorption features are dominated by telluric O 2 (shown in blue in the top panel), while most of the remaining spectral lines are of solar origin. The O 2 telluric absorptions are relatively broad and saturated, and several spectral windows of high transmittance, in which the solar radiation is not blocked by the Earth's atmosphere, are present.
Synthesis of the Stokes parameters
We made use of the rh code (Uitenbroek 2001, 2003) to synthesize the solar spectra at 770 nm (see Figure 1 in Quintero Noda et al. 2017). This region contains the upper photospheric K i D 2 line at 7664.90 Å, which is heavily blended with an O 2 line (see the dotted line in Figure 1). We followed the approach of Quintero Noda et al. (2017), computing the Stokes vector in non-local thermodynamic equilibrium and complete redistribution, as partial redistribution effects are not important for the K i lines (Uitenbroek & Bruls 1992). We restricted the wavelength coverage to a value of approximately 60 Å with a spectral sampling of 30 mÅ. The K i atomic model contains 12 levels plus the continuum and was presented in Bruls et al. (1992) as a simplified version of the comprehensive model. We used the original photoionization and collision values included in the mentioned atomic model, and the abundance of K i is taken to be 5.03 (extracted from Asplund et al. 2009). We employed the semi-empirical FALC model (Fontenla et al. 1993), using the microturbulence presented in Figure 2 and extracted from Fontenla et al. (1990). We assumed disc centre observations, i.e. µ = 1, where µ = cos(θ) and θ is the heliocentric angle. In addition, we did not include any further spectral degradation and we focussed only on the polarization signals produced by the Zeeman effect; the scattering polarization signals will be studied in the future.
Our objective is to combine the previous synthetic profiles with the transmittance results from psg, simulating the path of the light, i.e. light generated in the solar atmosphere and perturbed by the Earth's atmosphere. In this regard, we plan to simply multiply the synthetic Stokes parameters by the O 2 transmittance profile computed for various observing conditions. We believe this is a correct approach because, given that we always measure a linear combination of Stokes parameters, i.e. I+S and I−S, the effect of the transmission on that combination is the same as when applying the transmission directly to a given Stokes parameter S.
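A minimal sketch of this bookkeeping is given below (the function and variable names are illustrative, and simple linear interpolation stands in for whatever resampling the actual pipeline uses); it Doppler-shifts the rest-frame synthetic spectra and multiplies each Stokes parameter by the transmittance on a common wavelength grid:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def observe(wl, stokes_rest, transmittance, v_los_kms=0.0):
    """Doppler-shift rest-frame solar Stokes spectra and apply the telluric O2 transmittance.

    wl            : wavelength grid [Angstrom], strictly increasing
    stokes_rest   : dict with arrays for 'I', 'Q', 'U', 'V' on that grid
    transmittance : O2 transmittance on the same grid (1 = no absorption)
    v_los_kms     : LOS velocity, positive = redshift (solar convention)
    """
    shift = 1.0 + v_los_kms / C_KMS
    observed = {}
    for name, spectrum in stokes_rest.items():
        # A solar feature originally at wl0 appears at wl0*shift in the observer frame,
        # while the telluric absorption stays fixed in wavelength.
        shifted = np.interp(wl, wl * shift, spectrum)
        observed[name] = shifted * transmittance
    return observed
```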
Observing conditions
We focus on a reference launch site located at the Esrange Space Center, Kiruna, Sweden. This location has been used twice by the Sunrise mission (see Barthol et al. 2011;Solanki et al. 2017, for more details) and implies a flight path that crosses Greenland and Canada. Therefore, we computed the O 2 transmittance at these three different locations assuming reference coordinates based on the trajectory followed by the Sunrise balloon in 2009 (see Figure 15 of Barthol et al. 2011). We plan to compare the absorption at ground level and at the stratospheric balloon reference flight height, i.e. around 35 km above sea level. In addition, we studied the transmittance dependence on the day and night cycle and changes in the altitude of the balloon. However, we want to clarify that, at these northern coordinates, the Sun is always visible above the horizon even at night time. Thus, we can observe the Sun at all times during the Sunrise flight. Moreover, as all the computations are performed looking at the Sun, day and night periods simply translate into different solar positions above the horizon.
Results
Launch from Sweden
We show in Figure 3 the O 2 transmittance profiles for three reference spatial locations in the northern hemisphere (columns), i.e. Sweden, Greenland, and Canada, computed for day (blue) and night (black) conditions. The latter corresponds to low solar elevation above the horizon, but the Sun is still visible. We present the specific details of those locations in Table 1, where the flight coordinates are estimated from the first Sunrise flight. The elevation of the Sun is defined as α = 90 • for the horizon and α = 0 • for the zenith. We distinguish between day and night periods based on the Sun's position above the horizon, although the Sun never sets at those latitudes in summer.
The O 2 transmittance is always very low at sea level (first row) and is lowest in Sweden. This could be due to the elevation of the Sun above the horizon (see Table 1) but could also be related to the atmospheric properties at the three geographical locations. In particular, it is probable that the definition of ground level for Greenland corresponds to a higher altitude than that for Sweden or Canada. Still, for all the cases, the O 2 band located close to K i D 2 (see dots) impedes the detection of the solar line. If we examine a higher altitude, similar to that expected for a stratospheric balloon flight (second row), we find a completely different transmittance spectrum. The first difference is that the transmittance at the K i D 2 spectral location is larger than 0, reaching up to 60 per cent during the day. But, most importantly, the width of the O 2 bands has severely diminished, producing bands only several mÅ wide. We believe the reasons for this behaviour are the reduction of the atmospheric pressure with height, and consequently of the pressure broadening of the lines (see, for instance, Strong & Plass 1950), and the fact that atmospheric convection in the middle stratosphere is much weaker than at sea level. These effects significantly reduce the width of the O 2 bands, although their absorption is still detectable; the O 2 abundance is almost constant up to 150 km. Figure 3 also reveals the differences between day and night transmittances; the latter has a lower amplitude. The reason for that is the elevation of the Sun above the horizon. This elevation fluctuates between day and night (see the dashed line in Figure 4; we define the zenith as 0 • and the horizon as 90 • ), and so, consequently, does the atmospheric airmass along the line of sight (LOS) of the observer. In addition, we note that, as the reference launch site in Kiruna is located far north, i.e. 68 • N, the Sun never reaches a position higher than 50 • over the horizon even at midday; see the evolution of the observing angle (the zenith corresponds to 0 • ) in Fig. 4. Moreover, we also compute the airmass following Kasten & Young (1989), finding that it closely resembles the pattern displayed by the observing angle (dashed line).
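The narrowing can be illustrated with a deliberately oversimplified single-line Beer-Lambert model (this is not how psg works internally; the line parameters and the two sets of numbers below are purely illustrative assumptions chosen to mimic the qualitative behaviour of a pressure-broadened line at sea level versus roughly 35 km):

```python
import numpy as np

def toy_transmittance(wl, wl0, tau0, gamma):
    """Beer-Lambert transmittance of a single pressure-broadened (Lorentzian) line.
    tau0 scales with the O2 column above the observer; gamma is the Lorentz half width."""
    tau = tau0 * (gamma / np.pi) / ((wl - wl0) ** 2 + gamma ** 2)
    return np.exp(-tau)

wl = np.linspace(7664.0, 7665.8, 4001)   # wavelength grid [Angstrom]
wl0 = 7664.9                              # toy O2 line placed on K I D2
# Illustrative numbers only: between sea level (~1000 hPa) and ~35 km (~5 hPa) the
# pressure, and hence the collisional width, drops by roughly two orders of magnitude,
# and the overlying O2 column shrinks accordingly.
T_ground  = toy_transmittance(wl, wl0, tau0=5.0,  gamma=0.30)    # broad, saturated
T_balloon = toy_transmittance(wl, wl0, tau0=0.05, gamma=0.003)   # narrow, shallow
```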
In order to examine the daily variation of the O 2 transmittance, we selected a reference spatial location, i.e. Sweden, and we computed the transmittance every hour for a day. We picked the same reference date, corresponding to 2017 June 15. Figure 4 shows the maximum O 2 transmittance (squares) of the band that is blended with the K i D 2 solar line for the mentioned period. The transmittance is lower at night (around 10 per cent) while it can reach up to 50−60 per cent for a few hours at midday. Importantly, however, it is always above zero. Therefore, at no time is all solar information lost, so that techniques to recover it can be applied, something that is impossible from the ground, as the transmittance there is zero or very close to it (see Figures 1 and 3). Still, it seems that the optimum period for observing the K i D 2 solar line is around noon rather than at midnight, as the transmittance varies from 60 to 10 per cent depending on the solar position over the horizon.
Stratospheric balloons change altitude with time over a range of a few kilometres in the course of a day, for example between 34−37 km in the case of the Sunrise balloon. We represent in Figure 5 the O 2 transmittance for the band that is blended with the K i D 2 solar line at different altitudes, from 30−40 km above sea level, selecting the same date and geographical position used in the bottom panel of the leftmost column of Figure 3. For both day and night periods, we can see a linear dependence with height; the transmittance values are much larger for the former period, reaching up to 70 per cent if an altitude of 40 km is achieved. However, in spite of this behaviour, we do not detect changes in the width of the O 2 band, suggesting that to first order the lines are optically thin at these altitudes, in contrast to when they are observed from the ground. We performed a linear fit for both cases, finding that the slope for the day case is roughly 3.3 per cent km −1 , while, for the night period, it is approximately 0.9 per cent km −1 . This indicates that we can expect only small variations of the O 2 transmittance with time due to altitude changes, because the previously recorded altitude fluctuations have an amplitude of less than 3 km.
Fig. 6. Comparison between synthetic profiles without considering the O 2 transmittance (black) and after applying it (blue). The grey line indicates the transmittance spectra at 35 km for the reference date 2017 June 15; these spectra are normalized and share the same ordinate axis with the intensity. Each row represents a selected time corresponding to different Sun positions over the horizon; α = 0 • is the zenith. Columns, from left to right, represent the Stokes I, Q, U, V parameters.
K i D 2 polarimetry at 35 km
Solar lines at rest
We represent in Figure 6 the Stokes profiles for a narrow window centred on K i D 2 , synthesized following the method explained in Sec. 2.2. The window contains the line of interest and the photospheric Fe i line at 7664.30 Å. We compare the original profiles (black) and the result of multiplying them by the O 2 transmittance (blue). We select the same conditions as in the previous sections, i.e. Sweden on 2017 June 15, and pick the transmittance for three selected times (rows) that correspond to different Sun positions (see also Figure 4). In addition, as a reference, we add the O 2 transmittance (grey) in the intensity panels (leftmost column). The full width at half maximum of the blended O 2 line is less than 70 mÅ or, assuming that we scan it with a reference spectral sampling of ∆λ = 30 mÅ, less than 3 pixels along the spectral direction, and it seems to be independent of the Sun elevation.
This oxygen molecule band reduces the intensity of the K i D 2 line by up to 0.1 of the continuum intensity (I c ) for the lowest elevation (top row). However, the O 2 profile is much narrower than that of the K i D 2 line, which means that the general shape of the line, in particular the wings, is unaltered. This translates into polarization profiles that barely change, even for low Sun elevations, indicating that the effect of the O 2 band for the presented conditions, i.e. solar lines at rest, is negligible for the polarization profiles. The reason why the effect of the O 2 transmittance is negligible is simply that K i D 2 is a strong line and hence strongly saturated.
Finally, we intentionally include the isolated O 2 band at the right part of the spectrum to show that we could infer the evolution of the molecular line that is blended with K i D 2 if we trace the variations shown by that isolated O 2 band.
Fig. 7. Difference between the original synthetic K i D 2 Stokes profiles and those affected by the O 2 band. We consider different LOS velocities (abscissas) that shift the K i D 2 line with respect to the O 2 band and we represent the maximum difference for the three selected times (Sun positions) presented in Figure 6, i.e. 00:00 (crosses), 05:00 (triangles), and 11:00 UTC (squares). The velocity sign follows the solar convention, i.e. positive velocities are associated with redshifted profiles and downflows at the solar surface.
Wavelength shifted solar lines
We explained in the previous section that the effect of the oxygen molecule is negligible when the K i D 2 line is at rest. However, we want to estimate what happens when we introduce a wavelength shift in the solar spectra with respect to the O 2 band. There are various features that can produce this effect, among others, convection in the solar photosphere, solar rotation and Earth's orbital (and partly rotational) motion, or waves in the solar atmosphere.
We show in Figure 7 the maximum difference between the original synthetic profiles and those affected by the Earth's atmospheric absorption. We select the elevations of the Sun used in the previous section. On this occasion, the effect of the O 2 absorption on the polarization Stokes parameters is not negligible. In the worst scenario, i.e. 00:00 UTC (crosses), we have a maximum difference of 0.02 and 0.01 of I c in Stokes Q and U, respectively, while in Stokes V we detect larger values up to 0.11 of I c .
In order to visualize how this affects the Stokes parameters, we plot in Figure 8 selected profiles that correspond to the Doppler shifts that roughly generate the highest differences, e.g. 3 km/s. Starting with Stokes I (top), we can see that this corresponds to the case in which the O 2 band falls in one of the wings of the K i D 2 line, which is compatible with the results presented in Figure 7. In Stokes V, we detect the largest deviation when the O 2 band modifies one of the lobes. In this case, this is translated into an artificial Stokes V area and amplitude asymmetry with an amplitude reduction of almost 70 per cent; the original Stokes V lobe diminishes from 0.14 to 0.04 of I c . The same effect occurs for the linear polarization profiles (not presented here) when the oxygen band falls at line core wavelengths as it partially removes the central π component. However, we note that those profiles correspond to the worst case scenario, when observing at night with the Sun very close to the horizon (largest airmass). In fact, if we compare the various lines on each panel of Figure 7, we detect much lower differences for observations at noon.
Telluric correction
Wallace et al. (1996) explained that one option for removing the effect of the Earth's telluric absorption consists of observing the spectrum of interest at two different airmasses. Their approach is based on the Beer-Lambert law that relates the attenuation of the light with the properties of the material it is travelling through. In this regard, and focussing on the case of the Earth's atmosphere, we can define the intensity at a given observatory as
$$ I_\lambda(m) = I_\lambda(0) \times T_\lambda(m) , \qquad (1) $$
where I λ (0) is the spectrum produced in the Sun and T λ (m) the attenuation induced by the Earth's atmosphere for a given airmass m. The attenuation can be described in the present case as
$$ T_\lambda(m) = e^{-m\,\tau_{\mathrm{O}_2}(\lambda)} , \qquad (2) $$
where τ O 2 (λ) is the optical depth related to the absorption of the O 2 molecular band. As explained in Section 2.1, we assume that the main contribution to the atmospheric attenuation is the oxygen band, although, in general, additional terms should be included for a different spectral range. The airmass m can be defined as m = sec(θ), where θ is the solar zenith angle. If we perform observations at two different airmasses, for example observing the Sun at different elevations, we can obtain two spectra I λ (m 1 ) and I λ (m 2 ) whose only difference should be the airmasses m 1 and m 2 . Therefore, we can compute the telluric attenuation by simply dividing the two observations performed at different airmasses as follows:
$$ \frac{I_\lambda(m_1)}{I_\lambda(m_2)} = e^{-\tau_{\mathrm{O}_2}(\lambda)\,(m_1 - m_2)} , \qquad (3) $$
as the only unknown of the previous equation is τ O 2 (λ). Finally, after determining τ O 2 (λ), we can derive I λ (0) from Equation 1. In addition, as τ O 2 (λ) does not change strongly on short timescales, in contrast to H 2 O (see Wallace et al. 1996), we only need to estimate τ O 2 (λ) once per day or every two days. We can carry out this estimation while performing, for instance, flat-field calibrations when the Sun elevation is low in the morning and when it is higher around noon; see Figure 4. In order to reinforce the previous argument, we test the method on synthetic spectropolarimetric spectra. We employ the transmittance presented in Figure 4, where m 1 and m 2 are the airmasses at 12:00 UTC and 6:00 UTC, respectively. We start from a Stokes vector that is affected by the O 2 absorption and also Doppler-shifted with a line-of-sight velocity of 3 km/s. This is because we aim to examine this technique for a case similar to that shown in Figure 8. The results of applying the method described above are presented in Figure 9. The first row shows that we can recover the original Stokes I profile almost perfectly; see the similarities between the squares and the solid line. Moreover, if we replace I λ in Equation 1 by the Stokes Q, U and V spectra, we can also recover the polarization signals with high accuracy (see the columns in Figure 9). This means that we can correct the O 2 effect almost perfectly in this ideal case, with unchanging solar spectra and in the absence of noise. The most critical requirement is that the solar spectra remain unchanged between the two measurements. This implies measuring the quiet Sun at the same µ and the same relative velocity (i.e. at the same solar longitude if the relative Sun-Earth velocity is otherwise unchanged), which is approximately the case if we compare observations made at midnight and noon.
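A compact sketch of this two-airmass correction, written directly from Eqs. (1)-(3) (function names are ours; the plane-parallel m = sec θ airmass and noiseless, unchanged solar spectra are assumed, as in the ideal test above):

```python
import numpy as np

def derive_tau(I_m1, I_m2, m1, m2):
    """O2 optical depth from two intensity spectra of the same (unchanged) solar
    target observed at airmasses m1 and m2, i.e. Eq. (3) rearranged."""
    return np.log(I_m2 / I_m1) / (m1 - m2)

def correct(stokes_m, m, tau):
    """Remove the telluric attenuation exp(-m*tau) from any Stokes parameter
    observed at airmass m, i.e. invert Eq. (1)."""
    return stokes_m * np.exp(m * tau)
```

Since τ O 2 is derived from Stokes I, which is strictly positive, the same exponential factor can then be divided out of Q, U and V without ever taking the logarithm of a signed quantity.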
Summary and conclusions
We estimated the O 2 transmittance assuming the conditions expected for the Sunrise iii balloon-borne mission. In addition, these results are also applicable to any other balloon flight that aims to observe the 770 nm spectral region.
We studied the O 2 transmittance dependence on different conditions, such as the altitude and geographical location of the balloon or the Sun elevation. We found that the first condition generates transmittance bands that are very narrow when observing the Sun from the middle stratosphere because of the reduction of atmospheric pressure with height, i.e. the pressure broadening of the spectral lines is much lower (see e.g. Strong & Plass 1950), and the lack of convection and associated turbulence at those atmospheric layers. This reduces the effect of the band on solar lines, opening the possibility of observing the solar K i D 2 line for the first time. Regarding the geographical location, we did not detect large variations between the three examined locations, although the transmittance is in general slightly lower at Sweden, which could be due to geographical differences but also to the ground level reference altitude (probably higher in Greenland). Concerning the Sun elevation, we found that, although the Sun never sets in summer above the Arctic circle, the transmittance is low during nocturnal periods. This indicates that it is better to observe the K i D 2 line close to local noon, when the O 2 transmittance is much larger, up to 60 per cent.
Later, we studied the effect of the O 2 band on synthetic polarimetric spectra, finding that it is significant when we introduce a LOS velocity that shifts the location of the K i D 2 line. This is because the residual O 2 absorption changes the properties of the Stokes profiles, for example the Stokes V amplitude can be reduced, in the worst case, by up to 70 per cent of its original value.
We also tested the method presented in Wallace et al. (1996) for removing the telluric contamination from the observed spectra. This technique has been successfully applied to solar, albeit only spectroscopic, observations in the past (e.g. Livingston & Wallace 1991; Wallace & Livingston 1992; Wallace et al. 1993, 1996). The method is based on the Beer-Lambert law and uses two observations taken at different airmasses for recovering the original spectra. Our results indicate that, for the ideal case studied, we can recover the original solar spectra, including the polarization Stokes profiles, with high accuracy. Thus, we have found that it is feasible to correct the telluric absorption if we periodically observe (e.g. twice a day) the Sun at different elevations, for instance while performing a flat-field calibration, and the airmass of each observation is properly determined.
We conclude that observing the K i D 2 line from a stratospheric balloon is achievable and the spectral properties of the line can be studied. This will allow the scientific exploitation of the data and the discovery of new features in the solar atmosphere, at atmospheric layers that have been scarcely explored with polarimetry.
Fig. 1. Top panel: synthetic solar spectra (purple) and telluric O 2 transmittance (blue) computed with the psg tool. The bottom panel shows the solar atlas from Delbouille et al. (1973) observed at the Jungfraujoch Scientific Station, Switzerland (black) and the synthetic atlas for the same conditions (orange) generated with the psg tool. The vertical dotted line indicates the K i D 2 line located at 7664.90 Å.
Fig. 2. Microturbulence stratification used in this work, extracted from Fontenla et al. (1990).
Fig. 3. Oxygen molecule transmittance in the Earth's northern hemisphere, from left to right, at Sweden, Greenland, and Canada. The top row shows the transmittance at ground level while the bottom row corresponds to the results for an observatory at 35 km above the sea level. Blue designates the transmittance for a reference day time while black corresponds to a reference nocturnal period on 2017 June 15 (see Table 1).
Fig. 4. Daily variation of the oxygen molecule band transmittance (squares) blended with the K i D 2 line at the northern hemisphere (Sweden) for the reference date 2017 June 15. The dashed line indicates the position of the Sun above the horizon, defined as 90 • , while the zenith corresponds to 0 • .
Fig. 5. Altitude dependence of the oxygen molecule transmittance band blended with the K i D 2 line in the northern hemisphere. The geographical location corresponds to Sweden with squares and triangles designating the transmittance at 10:30 and 21:30 UTC, respectively. For more information, see the first two rows in Table 1.
Fig. 8. Comparison between the original synthetic Stokes I (top) and V (bottom) profiles and those affected by the O 2 transmittance. We show the profiles corresponding to the maximum difference (crosses in Figure 7) for a given Doppler shift, e.g. 3 km/s.
Fig. 9. Results of applying the telluric correction on the Stokes (from left to right) I, Q, U, V profiles. The top row corresponds to the spectra observed at lower airmass m 1 , i.e. at 12:00 UTC, while the bottom row designates the spectra at larger airmass m 2 , i.e. 06:00 UTC. Solid lines represent the original profile, the dotted lines display the same profile affected by the O 2 absorption, and squares show the results obtained from the method explained in Section 5.
Table 1. Reference conditions used to simulate a launch from the Esrange Space Center, Kiruna, Sweden (see Figure 3) on 2017 June 15.
Region      Coordinates         Time (UTC)   Period   Sun position (α)
Sweden      (19 • E, 68 • N)    10:30        Day      44.76 •
Sweden      (19 • E, 68 • N)    21:30        Night    87.36 •
Greenland   (317 • E, 72 • N)   15:30        Day      48.81 •
Greenland   (317 • E, 72 • N)   03:30        Night    84.50 •
Canada      (254 • E, 72 • N)   19:00        Day      48.64 •
Canada      (254 • E, 72 • N)   05:00        Night    82.00 •
Acknowledgements. We would like to thank the referee for the helpful and constructive comments that helped improve the manuscript. C. Quintero Noda acknowledges the support of the ISAS/JAXA International Top Young Fellowship (ITYF). This work was supported by the funding for the international collaboration mission (SUNRISE-3) of ISAS/JAXA. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 695075) and has been supported by the BK21 plus programme through the National Research Foundation (NRF) funded by the Ministry of Education of Korea. This work has also been supported by the Spanish Ministry of Economy and Competitiveness through the project ESP-2016-77548-C5-1-R.
. Asensio Ramos, A Socas-Navarro, H López Ariste, A González, M J , ApJ. 6601690Asensio Ramos, A., Socas-Navarro, H., López Ariste, A., & Martínez González, M. J. 2007, ApJ, 660, 1690
. M Asplund, N Grevesse, A J Sauval, P Scott, ARA&A. 47481Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
. H D Babcock, L Herzberg, ApJ. 108167Babcock, H. D. & Herzberg, L. 1948, ApJ, 108, 167
. P Barthol, A Gandorfer, S K Solanki, Sol. Phys. 2681Barthol, P., Gandorfer, A., Solanki, S. K., et al. 2011, Sol. Phys., 268, 1
. T Berkefeld, W Schmidt, D Soltau, Sol. Phys. 268103Berkefeld, T., Schmidt, W., Soltau, D., et al. 2011, Sol. Phys., 268, 103
. J M Borrero, S Jafarzadeh, M Schüssler, S K Solanki, arXiv:1511.04214Space Sci. Rev. Borrero, J. M., Jafarzadeh, S., Schüssler, M., & Solanki, S. K. 2015, Space Sci. Rev.[arXiv:1511.04214]
. J H M J Bruls, R J Rutten, N G Shchukina, A&A. 265237Bruls, J. H. M. J., Rutten, R. J., & Shchukina, N. G. 1992, A&A, 265, 237
. L Delbouille, G Roland, L Neven, Atlas photometrique du spectre solaire de [lambda] 3000 a [lambda] 10000Delbouille, L., Roland, G., & Neven, L. 1973, Atlas photometrique du spectre solaire de [lambda] 3000 a [lambda] 10000
GENLN2 a General Line-by-line Atmospheric Transmittance and Radiance Model: Version 3.0 Description and Users Guide, GENLN2: A General Line-by-line Atmospheric Transmittance and Radiance Model : Version 3.0 Description and Users Guide (Atmospheric Chemistry Division. D Edwards, National Center for Atmospheric ResearchEdwards, D. 1992, GENLN2 a General Line-by-line Atmospheric Transmittance and Radiance Model: Version 3.0 Description and Users Guide, GENLN2: A General Line-by-line Atmospheric Transmittance and Radiance Model : Version 3.0 Description and Users Guide (Atmospheric Chemistry Division, National Center for Atmospheric Research)
. J M Fontenla, E H Avrett, R Loeser, ApJ. 355700Fontenla, J. M., Avrett, E. H., & Loeser, R. 1990, ApJ, 355, 700
. J M Fontenla, E H Avrett, R Loeser, ApJ. 406319Fontenla, J. M., Avrett, E. H., & Loeser, R. 1993, ApJ, 406, 319
. A Gandorfer, B Grauf, P Barthol, Sol. Phys. 26835Gandorfer, A., Grauf, B., Barthol, P., et al. 2011, Sol. Phys., 268, 35
. R Gelaro, W Mccarty, M J Suárez, Journal of Climate. 305419Gelaro, R., McCarty, W., Suárez, M. J., et al. 2017, Journal of Climate, 30, 5419
D B Gesch, K S Larson, Pecora Thirteen, Human Interactions with the Environment -Perspectives from Space. South DakotaSioux Falls677Gesch, D. B. & Larson, K. S. 1996, Pecora Thirteen, Human Interactions with the Environment -Perspectives from Space, South Dakota: Sioux Falls, 677
. F Kasten, A T Young, Appl. Opt. 284735Kasten, F. & Young, A. T. 1989, Appl. Opt., 28, 4735
An atlas of the solar spectrum in the infrared from 1850. W Livingston, L Wallace, to 9000 cm −1 (1.1 to 5.4 micrometerLivingston, W. & Wallace, L. 1991, An atlas of the solar spectrum in the infrared from 1850 to 9000 cm −1 (1.1 to 5.4 micrometer)
AFCRL Atmospheric Absorption Line Parameters Compilation. R Mcclatchey, AFCRL-TR-73-0096Air Force Cambridge Research LaboratoriesMcClatchey, R. 1973, AFCRL Atmospheric Absorption Line Parameters Com- pilation, AFCRL-TR-73-0096 (Air Force Cambridge Research Laboratories)
. C Quintero Noda, H Uitenbroek, Y Katsukawa, MNRAS. 4701453Quintero Noda, C., Uitenbroek, H., Katsukawa, Y., et al. 2017, MNRAS, 470, 1453
Rothman, L., Gordon, I., Babikov, Y., et al. 2013, Journal of Quantitative Spectroscopy and Radiative Transfer, 130, 4 (HITRAN2012 special issue)
Solanki, S. K., Barthol, P., Danilovic, S., et al. 2010, ApJ, 723, L127
. S K Solanki, T L Riethmüller, P Barthol, ApJS. 2292Solanki, S. K., Riethmüller, T. L., Barthol, P., et al. 2017, ApJS, 229, 2
. J O Stenflo, A Gandorfer, C U Keller, A&A. 355781Stenflo, J. O., Gandorfer, A., & Keller, C. U. 2000, A&A, 355, 781
. J Strong, G N Plass, ApJ. 112365Strong, J. & Plass, G. N. 1950, ApJ, 112, 365
. H Uitenbroek, ApJ. 557389Uitenbroek, H. 2001, ApJ, 557, 389
. H Uitenbroek, ApJ. 5921225Uitenbroek, H. 2003, ApJ, 592, 1225
. H Uitenbroek, J H M J Bruls, A&A. 265268Uitenbroek, H. & Bruls, J. H. M. J. 1992, A&A, 265, 268
G L Villanueva, A Mandell, S Protopapa, Planetary Science Vision 2050 Workshop. 19898006LPI ContributionsVillanueva, G. L., Mandell, A., Protopapa, S., et al. 2017, in LPI Contributions, Vol. 1989, Planetary Science Vision 2050 Workshop, 8006
. G L Villanueva, M J Mumma, R E Novak, Science. 348218Villanueva, G. L., Mumma, M. J., Novak, R. E., et al. 2015, Science, 348, 218
. L Wallace, W Livingston, K Hinkle, P Bernath, ApJS. 106165Wallace, L., Livingston, W., Hinkle, K., & Bernath, P. 1996, ApJS, 106, 165
An atlas of a dark sunspot umbral spectrum from. L Wallace, W C Livingston, to 8640 cm −1 (1.16 to 5.1 [micronsWallace, L. & Livingston, W. C. 1992, An atlas of a dark sunspot umbral spec- trum from 1970 to 8640 cm −1 (1.16 to 5.1 [microns])
An atlas of the solar photospheric spectrum in the region from 8900 to 13600 cm −1 (7350 to 11230 Å) with decomposition into solar and atmospheric components and identifications of the main solar features. L Wallace, W C Livingston, K Hinkle, Wallace, L., Livingston, W. C., & Hinkle, K. 1993, An atlas of the solar photo- spheric spectrum in the region from 8900 to 13600 cm −1 (7350 to 11230 Å) with decomposition into solar and atmospheric components and identifica- tions of the main solar features
| []
|
[
"Advances in the ab initio description of nuclear three-cluster sys- tems",
"Advances in the ab initio description of nuclear three-cluster sys- tems"
]
| [
"Carolina Romero-Redondo \nLawrence Livermore National Laboratory\nP.O. Box 808L-414, 94551LivermoreCaliforniaUSA\n",
"Sofia Quaglioni \nLawrence Livermore National Laboratory\nP.O. Box 808L-414, 94551LivermoreCaliforniaUSA\n",
"Petr Navrátil \nTRIUMF\n4004 Wesbrook MallV6T 2A3VancouverBritish ColumbiaCanada\n",
"Guillaume Hupin \nInstitut de Physique Nucléaire\nUniversité Paris-Sud\nIN2P3/CNRS\nF-91406Orsay CedexFrance\n"
]
| [
"Lawrence Livermore National Laboratory\nP.O. Box 808L-414, 94551LivermoreCaliforniaUSA",
"Lawrence Livermore National Laboratory\nP.O. Box 808L-414, 94551LivermoreCaliforniaUSA",
"TRIUMF\n4004 Wesbrook MallV6T 2A3VancouverBritish ColumbiaCanada",
"Institut de Physique Nucléaire\nUniversité Paris-Sud\nIN2P3/CNRS\nF-91406Orsay CedexFrance"
]
| []
| We introduce the extension of the ab initio no-core shell model with continuum to describe three-body cluster systems. We present results for the ground state of 6 He and show improvements with respect to the description obtained within the no-core shell model and the no-core shell model/resonating group methods. | 10.1051/epjconf/201611303004 | [
"https://www.epj-conferences.org/articles/epjconf/pdf/2016/08/epjconf_fb2016_03004.pdf"
]
| 119,269,985 | 1509.00878 | bb464f30c15601dc3008420a16f26b1a9a1b366c |
Advances in the ab initio description of nuclear three-cluster sys- tems
Carolina Romero-Redondo
Lawrence Livermore National Laboratory
P.O. Box 808L-414, 94551LivermoreCaliforniaUSA
Sofia Quaglioni
Lawrence Livermore National Laboratory
P.O. Box 808L-414, 94551LivermoreCaliforniaUSA
Petr Navrátil
TRIUMF
4004 Wesbrook MallV6T 2A3VancouverBritish ColumbiaCanada
Guillaume Hupin
Institut de Physique Nucléaire
Université Paris-Sud
IN2P3/CNRS
F-91406Orsay CedexFrance
Advances in the ab initio description of nuclear three-cluster sys- tems
We introduce the extension of the ab initio no-core shell model with continuum to describe three-body cluster systems. We present results for the ground state of 6 He and show improvements with respect to the description obtained within the no-core shell model and the no-core shell model/resonating group methods.
Introduction
The ab initio no-core shell model/resonating group method (NCSM/RGM) was presented in [1,2] as a technique that is able to describe both structure and reactions in light nuclear systems. Within this approach, the wave function is expanded in a continuous cluster basis using the resonating group method with realistic interactions and a consistent ab initio description of the nucleon clusters.
The method was first introduced in detail for two-body cluster bases and has been shown to work efficiently in different systems [1][2][3][4]. Later, the extension of the method to three-cluster systems was introduced in [5,6]. The capability of ab initio methods to properly describe three-body cluster states is essential for the study of nuclear systems that present such a configuration. Systems of this type appear, e.g., in structure problems of two-nucleon halo nuclei such as 6 He and 11 Li, resonant systems such as 5 H, and reactions with three fragments in their final state such as 3 H( 3 H,2n) 4 He or 3 He( 3 He,2p) 4 He.
Despite the success of the NCSM/RGM in describing the asymptotic behavior of the wave functions, it has been shown that it has limitations when it comes to accurately describing systems at short to medium ranges (up to about 5 fm for the 6 He case). This is due to the fact that, in order to account for all many-body correlations, several excited states of the nuclear clusters must be included in the basis, resulting in an increase of the problem size that goes beyond current computational capabilities. This limitation has been overcome by introducing the ab initio no-core shell model with continuum (NCSMC). Within this method, the wave function is written as a superposition of both continuous NCSM/RGM cluster states and discrete eigenstates of the compound system obtained with the no-core shell model (NCSM). The latter eigenstates compensate for missing cluster excitations, improving the description of the wave function at short to medium range.
The NCSMC was first introduced in [7,8] for binary systems. Its extension to three-cluster systems was recently achieved, and we show here the first results for the 6 He ground state (g.s.).
Formalism
In the NCSMC, the ansatz for the three-cluster many-body wave function is given by
$$ |\Psi^{J^\pi T}\rangle \;=\; \sum_\lambda c_\lambda\, |A\,\lambda\, J^\pi T\rangle \;+\; \sum_\nu \int \mathrm{d}x\,\mathrm{d}y \; x^2 y^2 \, G^{J^\pi T}_\nu(x,y)\, \hat{A}_\nu\, |\Phi^{J^\pi T}_{\nu x y}\rangle , \qquad (1) $$
where c λ and G J π T ν (x, y) are, respectively, discrete and continuous variational amplitudes, |AλJ π T are NCSM eigenstates of the compound nucleus labeled by the set of quantum numbers λ, Â ν is an appropriate intercluster antisymmetrizer introduced to exactly preserve the Pauli exclusion principle, and
$$ |\Phi^{J^\pi T}_{\nu x y}\rangle \;=\; \Big[ \big( |A-a_{23}\;\alpha_1 I_1^{\pi_1} T_1\rangle \big( |a_2\,\alpha_2 I_2^{\pi_2} T_2\rangle\, |a_3\,\alpha_3 I_3^{\pi_3} T_3\rangle \big)^{(s_{23} T_{23})} \big)^{(S\,T)} \big( Y_{\ell_x}(\hat{\eta}_{23})\, Y_{\ell_y}(\hat{\eta}_{1,23}) \big)^{(L)} \Big]^{(J^\pi T)} \times \frac{\delta(x-\eta_{23})}{x\,\eta_{23}}\, \frac{\delta(y-\eta_{1,23})}{y\,\eta_{1,23}} , \qquad (2) $$
are three-body cluster channels of total angular momentum J, parity π and isospin T , where ν represents the set of quantum numbers that describes the channel within the cluster basis. Here, |A − a 23 α 1 I π 1 1 T 1 , |a 2 α 2 I π 2 2 T 2 and |a 3 α 3 I π 3 3 T 3 denote the microscopic (antisymmetric) wave functions of the three nuclear fragments calculated within the NCSM. The Jacobi coordinates describing the relative positions of the clusters are denoted by $\vec{\eta}_{23} = \eta_{23}\,\hat{\eta}_{23}$ and $\vec{\eta}_{1,23} = \eta_{1,23}\,\hat{\eta}_{1,23}$.
We calculate the unknowns of the NCSMC wave function [c λ and G J π T ν (x, y)] by solving the orthogonalized coupled equations obtained by projecting the Schrödinger equation on the model space spanned by NCSM eigenstates and the NCSM/RGM basis |Φ J π T νxy . Those equations are solved by means of the microscopic R-matrix method on a Lagrange mesh [9]. Details on the procedure will be available in [10].
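Schematically, and following the structure of the published two-cluster NCSMC formulation (the three-cluster kernels themselves are given in the references), the coupled equations take the form of a generalized eigenvalue problem for the discrete and continuous amplitudes,

$$ \begin{pmatrix} E_\lambda\,\delta_{\lambda\lambda'} & \bar{h} \\ \bar{h}^\dagger & \overline{\mathcal{H}} \end{pmatrix} \begin{pmatrix} c \\ \chi \end{pmatrix} = E \begin{pmatrix} \mathbb{1} & \bar{g} \\ \bar{g}^\dagger & \mathbb{1} \end{pmatrix} \begin{pmatrix} c \\ \chi \end{pmatrix}, $$

where $E_\lambda$ are the NCSM eigenenergies, $\overline{\mathcal{H}}$ and the identity blocks involve the Hamiltonian and norm kernels of the orthogonalized cluster basis, and $\bar{h}$, $\bar{g}$ are the form factors coupling the two sectors; the scattering boundary conditions are then imposed via the R-matrix method on a Lagrange mesh.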
Application to 6 He
The lightest Borromean nucleus is 6 He [11,12], formed by a 4 He core and two halo neutrons. It is, therefore, an ideal first candidate to be studied within a three-body formalism. Hence, it was used as a test case when the NCSM/RGM formalism for three-cluster dynamics was introduced in [5,6], and it is studied again here in order to benchmark against those results. We describe the 4 He core by its g.s. wave function and couple the three-cluster basis with the 6 He g.s. eigenstate obtained through the NCSM. We use the same potential as in [5,6], i.e., the similarity-renormalization-group (SRG) [13,14] evolved potential obtained from the chiral N 3 LO NN interaction [15] with Λ SRG = 1.5 fm −1 . With this potential, the variational minimum for the NCSM 6 He g.s. is found at an HO frequency of around ℏΩ = 14 MeV, which is used in this work for all calculations. With this soft potential the binding energy can be accurately computed by extrapolating (through an exponential fit) the NCSM results to N max → ∞, hence providing a good benchmark for the newly implemented NCSMC.
From Table 1, we can see that the NCSMC 6 He g.s. energy quickly converges to the NCSM extrapolated value, unlike in the NCSM/RGM, i.e., using only the basis (2) in the expansion of the wave function. This is due to the fact that the 6 He NCSM eigenstate takes into account the six-body correlations and 4 He core polarization that are missing when considering the cluster basis alone. It is also important to note that, in contrast to the behavior offered by the NCSM, the NCSMC presents the correct extended asymptotic behavior of the wave function. In Fig 1 such comparison is shown in a preliminary calculation at an N max = 6 model space.
Finally, we can also compare the probability densities of the 6 He g.s. obtained with the NCSM/RGM and the NCSMC. This comparison is shown in Fig. 2, and it is interesting to find that while the two main configurations (di-neutron and cigar) appear to have the same probability within the NCSM/RGM, the di-neutron probability is enhanced when using the NCSMC. This asymmetry in the strength of the probability peaks is known to be a characteristic of 6 He, and these results show that it is a consequence of the additional six-body correlations.
Conclusions
The NCSMC uses an ansatz wave function that includes both an expansion in a continuous three-cluster basis and in a discrete basis of NCSM eigenstates. This provides a foundation that is capable of describing both short- and long-range characteristics of three-cluster systems. In the case of the 6 He g.s., we could see that this approach provides both the correct binding energy and the extended asymptotic behavior, unlike the NCSM, which does provide the correct binding energy but not the correct asymptotics, or the NCSM/RGM, which does the opposite. Calculations in larger model spaces for both g.s. and continuum states of 6 He are underway.
Figure 2. Probability distribution of the 6 He g.s. wave function in terms of the relative distance between the neutrons (r_nn) and the distance between the center of mass of the neutrons and the 4 He (r_α,nn). The di-neutron and cigar configurations appear to have the same probability within the NCSM/RGM (a), while the di-neutron probability is enhanced when using the NCSMC (b).
Figure 1. Most relevant hyperradial contributions to the 6 He g.s. wave function. Both the contribution from the NCSM wave function and the total NCSMC wave function are shown for an N max = 6 model space. The figure shows how the addition of the three-cluster basis within the NCSMC compensates for the limitations of the NCSM to obtain an extended wave function characteristic of two-neutron halo nuclei. The hyperradial wave functions u_Kν(ρ) are the coefficients of the wave function when expanded in the hyperspherical basis, where K represents the hypermomentum.
Table 1. Energy (in MeV) for the NCSM 4 He g.s. and the 6 He g.s. using the NCSM/RGM, NCSM and NCSMC approaches in terms of the absolute HO model space size N tot = N 0 + N max, where N 0 is the number of oscillator quanta shared by the nucleons in their lowest configuration. For the NCSM, we also show the extrapolated value to N max → ∞ (the extrapolation was performed with an exponential fit).

N tot            4He NCSM      6He NCSM/RGM    6He NCSM      6He NCSMC
8                −28.17        −28.62          −28.95        −29.69
10               −28.22        −28.72          −29.45        −29.86
12               −28.22        −28.70          −29.66        −29.86
Extrapolation    −28.230(5)    -               −29.84(4)     -
Acknowledgements
[1] S. Quaglioni, P. Navrátil, Phys. Rev. Lett. 101, 092501 (2008)
[2] S. Quaglioni, P. Navrátil, Phys. Rev. C 79, 044606 (2009)
[3] P. Navrátil, S. Quaglioni, Phys. Rev. C 83, 044609 (2011)
[4] P. Navrátil, S. Quaglioni, Phys. Rev. Lett. 108, 042503 (2012)
[5] S. Quaglioni, C. Romero-Redondo, P. Navrátil, Phys. Rev. C 88, 034320 (2013)
[6] C. Romero-Redondo, S. Quaglioni, P. Navrátil, G. Hupin, Phys. Rev. Lett. 113, 032503 (2014)
[7] S. Baroni, P. Navrátil, S. Quaglioni, Phys. Rev. Lett. 110, 022505 (2013)
[8] S. Baroni, P. Navrátil, S. Quaglioni, Phys. Rev. C 87, 034326 (2013)
[9] M. Hesse, J.M. Sparenberg, F.V. Raemdonck, D. Baye, Nucl. Phys. A 640, 37 (1998)
[10] C. Romero-Redondo, S. Quaglioni, P. Navrátil, G. Hupin, in preparation (2015)
[11] I. Tanihata, J. Phys. G 22, 157 (1996)
[12] I. Tanihata, H. Hamagaki, O. Hashimoto, Y. Shida, N. Yoshikawa, K. Sugimoto, O. Yamakawa, T. Kobayashi, N. Takahashi, Phys. Rev. Lett. 55, 2676 (1985)
[13] S.K. Bogner, R.J. Furnstahl, R.J. Perry, Phys. Rev. C 75, 061001 (2007)
[14] R. Roth, S. Reinhardt, H. Hergert, Phys. Rev. C 77, 064003 (2008)
[15] D.R. Entem, R. Machleidt, Phys. Rev. C 68, 041001 (2003)
| []
|
[
"ENDOMORPHISMS OF POSITIVE CHARACTERISTIC TORI: ENTROPY AND ZETA FUNCTION",
"ENDOMORPHISMS OF POSITIVE CHARACTERISTIC TORI: ENTROPY AND ZETA FUNCTION"
]
| [
"Keira Gunn ",
"Khoa D Nguyen ",
"J C Saunders "
]
| []
| []
| Let F be a finite field of order q and characteristic p.equipped with the discrete valuation for which 1/t is a uniformizer, and let T F = R F /Z F which has the structure of a compact abelian group. Let d be a positive integer and let A be a d × d-matrix with entries in Z F and non-zero determinant. The multiplication-by-A map is a surjective endomorphism on T d F . First, we compute the entropy of this endomorphism; the result and arguments are analogous to those for the classical case T d = R d /Z d . Second and most importantly, we resolve the algebraicity problem for the Artin-Mazur zeta function of all such endomorphisms. As a consequence of our main result, we provide a complete characterization and an explicit formula related to the entropy when the zeta function is algebraic. | null | [
"https://arxiv.org/pdf/2112.14812v2.pdf"
]
| 245,634,912 | 2112.14812 | 681a23bd996055446d9a0638df116993da57f0d9 |
ENDOMORPHISMS OF POSITIVE CHARACTERISTIC TORI: ENTROPY AND ZETA FUNCTION
3 Jun 2022
Keira Gunn
Khoa D Nguyen
J C Saunders
ENDOMORPHISMS OF POSITIVE CHARACTERISTIC TORI: ENTROPY AND ZETA FUNCTION
3 Jun 2022arXiv:2112.14812v2 [math.NT]
Let F be a finite field of order q and characteristic p.equipped with the discrete valuation for which 1/t is a uniformizer, and let T F = R F /Z F which has the structure of a compact abelian group. Let d be a positive integer and let A be a d × d-matrix with entries in Z F and non-zero determinant. The multiplication-by-A map is a surjective endomorphism on T d F . First, we compute the entropy of this endomorphism; the result and arguments are analogous to those for the classical case T d = R d /Z d . Second and most importantly, we resolve the algebraicity problem for the Artin-Mazur zeta function of all such endomorphisms. As a consequence of our main result, we provide a complete characterization and an explicit formula related to the entropy when the zeta function is algebraic.
Positive characteristic tori and statements of the main results
The tori T d := R d /Z d where d is a positive integer play an important role in number theory, dynamical systems, and many other areas of mathematics. In this paper, we study the entropy and algebraicity of the Artin-Mazur zeta function of a surjective endomorphism on the so called positive characteristic tori.
Throughout this paper, let F be the finite field of order q and characteristic p. Let Z F = F [t] be the polynomial ring over F , Q F = F (t), and
R_F = F((1/t)) = { ∑_{i≤m} a_i t^i : m ∈ Z, a_i ∈ F for i ≤ m }.
The field R_F is equipped with the discrete valuation v : R_F → Z ∪ {∞} given by v(0) = ∞ and v(x) = −m, where x = ∑_{i≤m} a_i t^i with a_m ≠ 0; in fact R_F is the completion of Q_F with respect to this valuation. Let |·| denote the nonarchimedean absolute value |x| = q^{−v(x)} for x ∈ R_F. We fix an algebraic closure of R_F; the absolute value |·| can be extended uniquely to the algebraic closure (see Proposition 2.1). Let T_F = R_F/Z_F and let π : R_F → T_F be the quotient map. Every element α ∈ T_F has the unique preimage α̃ ∈ R_F of the form α̃ = ∑_{i≤−1} a_i t^i. This yields a homeomorphism T_F ≅ ∏_{i≤−1} F of compact abelian groups. Let µ be the probability Haar measure on T_F and let ρ be the metric on T_F given by ρ(α, β) := |α̃ − β̃|. We fix a positive integer d and let µ_d be the product measure on T_F^d. The analytic number theory, more specifically the theory of characters and L-functions, on T_F has been studied since at least 1965 in work of Hayes [Hay65]. Some relatively recent results include work of Liu-Wooley [LW10] on Waring's problem and the circle method in function fields and work of Porritt [Por18] and Bienvenu-Lê [BL19] on correlations between the Möbius function and a character over Z_F. For recent work on the ergodic theory side, we refer the reader to the paper by Bergelson-Leibman [BL16] and its references, in which the authors establish a Weyl-type equidistribution theorem.
Let A ∈ M_d(Z_F) have non-zero determinant. The multiplication-by-A map yields a surjective endomorphism of T_F^d for which µ_d is an invariant measure; we abuse notation by using A to denote this endomorphism as well. Our first result is the following:
Theorem 1.1. Let h(µ d , A) denote the entropy of A with respect to µ d and let h(A) denote the topological entropy of A. Let λ 1 , . . . , λ d denote the eigenvalues of A. We have:
h(A) = h(µ_d, A) = ∑_{i=1}^{d} log max{|λ_i|, 1}.
Remark 1.2. This is the same formula as the entropy of surjective endomorphisms of T d . The proof is not surprising either: we use similar arguments to the classical ones presented in the books by Walters [Wal82] and Viana-Oliveira [VO16] together with several adaptations to the non-archimedean setting of R d F and T d F . What is important is the relationship between the entropy and the Artin-Mazur zeta function in the next main result.
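A small computational aside (not taken from the paper): since the Gauss norm max_i |c_i| of a polynomial over R_F is multiplicative, the quantity ∏_i max{1, |λ_i|} = e^{h(A)} equals the maximal absolute value of the coefficients c_i of the characteristic polynomial of A, so h(A) = (max_i deg c_i) · log q. A sketch over a prime field, with an arbitrary illustrative matrix:

```python
import math
import sympy as sp

t, X = sp.symbols('t X')
p = q = 5                                  # illustrative: F = GF(5)
A = sp.Matrix([[t**2 + 1, 3], [t, 4*t]])   # a matrix over Z_F = GF(5)[t]

coeffs = A.charpoly(X).all_coeffs()        # characteristic polynomial coefficients in F[t]
degs = [sp.Poly(c, t, modulus=p).degree()
        for c in coeffs if not sp.Poly(c, t, modulus=p).is_zero]
max_deg = max(degs)
print("prod max(1,|lambda_i|) =", q**max_deg, "  h(A) =", max_deg * math.log(q))
```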
Let f : X → X be a map from a topological space X to itself. For each k ≥ 1, let N k (f ) denote the number of isolated fixed points of f k . Assume that N k (f ) is finite for every k, then one can define the Artin-Mazur zeta function [AM65]:
ζ_f(z) = exp( ∑_{k=1}^{∞} (N_k(f)/k) z^k ).
When X is a compact differentiable manifold and f is a smooth map such that N k (f ) grows at most exponentially in k, the question of whether ζ f (z) is algebraic is stated in [AM65]. The rationality of ζ f (z) when f is an Axiom A diffeomorphism is established by Manning [Man71] after earlier work by Guckenheimer [Guc70]. On the other hand, when X is an algebraic variety defined over a finite field and f is the Frobenius morphism, the function ζ f (z) is precisely the classical zeta function of the variety X and its rationality is conjectured by Weil [Wei49] and first established by Dwork [Dwo60]. For the dynamics of a univariate rational function, rationality of ζ f (x) is established by Hinkkanen in characteristic zero [Hin94] while Bridy [Bri12,Bri16] obtains both rationality and transcendence results over positive characteristic when f belongs to certain special families of rational functions. As before, let A ∈ M d (Z F ) and we use A to denote the induced endomorphism on T d F . We will show that N k (A) < ∞ for every n and hence one can define the zeta function ζ A (z).
As a consequence of our next main result, we resolve the algebraicity problem for ζ A (z): we provide a complete characterization and an explicit formula when ζ A (z) is algebraic. We need a couple of definitions before stating our result.
Let K be a finite extension of R_F. Let O_K := {α ∈ K : |α| ≤ 1}, O*_K := {α ∈ K : |α| = 1}, and p_K := {α ∈ K : |α| < 1} respectively denote the valuation ring, unit group, and maximal ideal. In particular:
O := O_{R_F} = F[[1/t]] and p := p_{R_F} = (1/t)F[[1/t]] = { ∑_{i≤−1} a_i t^i : a_i ∈ F for all i }.
Note that p is the compact open subset of R F that is both the open ball of radius 1 and closed ball of radius 1/q centered at 0. The field O K /p K is a finite extension of O/p = F and the degree of this extension is called the inertia degree of K/R F [Neu99,p. 150]. Let δ be this inertia degree, then O K /p K is isomorphic to the finite field GF(q δ ). By applying Hensel's lemma [Neu99, for the polynomial X q δ −1 − 1, we have that K contains all the roots of X q δ −1 − 1. These roots together with 0 form a unique copy of GF(q δ ) in K called the Teichmüller representatives. This allows us to regard GF(q δ ) as a subfield of K; in fact GF(q δ ) is exactly the set of all the roots of unity in K together with 0. For every α ∈ O K , we can express uniquely:
(1) α = α (0) + α (1)
where α (0) ∈ GF(q δ ) and α (1) ∈ p K . Definition 1.3. Let α be algebraic over R F such that |α| ≤ 1. Let K be a finite extension of R F containing α. We call α (0) and α (1) in (1) respectively the constant term and p-term of α; they are independent of the choice of K. When |α| = 1, the order of α modulo p means the order of α (0) in the multiplicative group GF (q δ ) * where δ is the inertia degree of K/R F ; this is independent of the choice of K as well. In fact, this order is the smallest positive integer n such that |α n − 1| < 1.
We identify the rational functions in C(z) to the corresponding Laurent series in C((z)).
Definition 1.4. A series f (z) ∈ C((z)) is called D-finite if all of its formal derivatives f (n) (z) for n = 0, 1, . . . span a finite dimensional vectors space over C(z). Equivalently, there exist an integer n ≥ 0 and a 0 (z), . . . , a n (z) ∈ C[z] with a n = 0 such that: a n (z)f (n) (z) + a n−1 f (n−1) (z) + . . . + a 0 (z)f (z) = 0.
Remark 1.5. Suppose that f (z) ∈ C[[z]] is algebraic then f is D-finite, see [Sta80, Theorem 2.1].
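For instance, a tiny sanity check of this remark (not from the paper): the algebraic series f(z) = (1 − z)^{1/3} satisfies the order-one linear differential equation 3(1 − z)f′(z) + f(z) = 0, which sympy confirms:

```python
import sympy as sp

z = sp.symbols('z')
f = (1 - z)**sp.Rational(1, 3)
print(sp.simplify(3*(1 - z)*sp.diff(f, z) + f))   # 0, so f is D-finite
```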
Our next main result is the following:
Theorem 1.6. Let A ∈ M_d(Z_F) and put r(A) = ∏_λ max{1, |λ|}, where λ ranges over all the d eigenvalues of A; we have r(A) = e^{h(A)} when det(A) ≠ 0 thanks to Theorem 1.1. Among the d eigenvalues of A, let µ_1, . . . , µ_M be all the eigenvalues that are roots of unity and let η_1, . . . , η_N be all the eigenvalues that have absolute value 1 and are not roots of unity. For 1 ≤ i ≤ M, let m_i denote the order of µ_i modulo p. For 1 ≤ i ≤ N, let n_i denote the order of η_i modulo p. We have:
(a) Suppose that for every j ∈ {1, . . . , N} there exists i ∈ {1, . . . , M} such that m_i | n_j. Then ζ_A(z) is algebraic and
ζ_A(z) = (1 − r(A)z)^{−1} ∏_{1≤ℓ≤M} ∏_{1≤i_1<i_2<...<i_ℓ≤M} R_{A,i_1,...,i_ℓ}(z),
where R_{A,i_1,...,i_ℓ}(z) := ( 1 − (r(A)z)^{lcm(m_{i_1},...,m_{i_ℓ})} )^{(−1)^{ℓ+1}/lcm(m_{i_1},...,m_{i_ℓ})}.
(b) Otherwise, suppose there exists j ∈ {1, . . . , N} such that for every i ∈ {1, . . . , M} we have m_i ∤ n_j. Then the series ∑_{k=1}^{∞} N_k(A)z^k converges in the open disk {z ∈ C : |z| < 1/r(A)} and is not D-finite. Consequently, the function ζ_A(z) is transcendental.
Remark 1.7. We allow the possibility that either (or both) of M and N is 0. When N = 0, the condition in (a) is vacuously true and ζ_A(z) is algebraic in this case. When N = 0 and M = 0, meaning that none of the eigenvalues of A has absolute value 1, the product ∏_{1≤ℓ≤M} in (a) is the empty product and
ζ_A(z) = 1/(1 − r(A)z)
. When M = 0 and N > 0, the condition in (b) is vacuously true and ζ A (z) is transcendental in this case.
Our results are quite different from results in work of Baake-Lau-Paskunas [BLP10]. In [BLP10], the authors prove that the zeta function of endomorphisms of the classical tori T d are always rational. In our setting, we have cases when the zeta function is rational, transcendental, or algebraic irrational:
Example 1.8. Let F = GF(7) and let A be the diagonal matrix with diagonal entries α, β ∈ GF(7) * where α has order 2 and β has order 3. Then
ζ A (z) = (1 − z 2 ) 1/2 (1 − z 3 ) 1/3 (1 − z)(1 − z 6 ) 1/6
is algebraic irrational.
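A quick symbolic check of this example (an illustration, not part of the paper's argument): here r(A) = 1 and, by Proposition 4.2 below, N_k(A) = 1 when k is divisible by neither 2 nor 3 and N_k(A) = 0 otherwise, so the truncated exponential ∑ N_k z^k/k can be compared against the closed form:

```python
import sympy as sp

z = sp.symbols('z')
N = lambda k: 0 if (k % 2 == 0 or k % 3 == 0) else 1
order = 12

lhs = sp.series(sp.exp(sum(sp.Rational(N(k), k)*z**k for k in range(1, order))), z, 0, order)
rhs = sp.series((1 - z**2)**sp.Rational(1, 2)*(1 - z**3)**sp.Rational(1, 3)
                / ((1 - z)*(1 - z**6)**sp.Rational(1, 6)), z, 0, order)
print(sp.expand(lhs.removeO() - rhs.removeO()))   # 0: the expansions agree through order 11
```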
In work of Bell-Miles-Ward [BMW14], the authors conjecture and obtain some partial results concerning the following Pólya-Carlson type dichotomy [Car21,Póy28] for a slightly different zeta function: it is either rational or admits a natural boundary at its radius of convergence. Conjecture 1.9 (Bell-Miles-Ward, 2014). Let θ : X → X be an automorphism of a compact metric abelian group with the property thatÑ k (θ) < ∞ for every k ≥ 1 whereÑ k (θ) denotes the number of fixed points of θ k . Theñ
ζ̃_θ(z) := exp( ∑_{k=1}^{∞} (Ñ_k(θ)/k) z^k )
is either a rational function or admits a natural boundary.
Remark 1.10. The difference betweenζ θ in 1.9 and the Artin-Mazur zeta function ζ f is that the latter involves the number of isolate fixed points. Example 1.8 is not included in Conjecture 1.9 since A 6 is the identity matrix and henceÑ 6 (A) = ∞ while we have N 6 (A) = 0 (see Lemma 4.1). When A ∈ M d (Z F ) has the property that none of its eigenvalues is a root of unity, one can show that N k (A) =Ñ k (A) and hence ζ A (z) =ζ A (z). Conjecture 1.9 predicts that when M = 0 and N > 0 in Theorem 1.6, the zeta function ζ A (z) =ζ A (z) admits the circle of radius 1/r(A) as a natural boundary. We can only prove this in some special cases and leave it for future work.
For the proof of Theorem 1.6, we first derive a formula for N_k(A) and it turns out that one needs to study |λ^k − 1| where λ is an eigenvalue of A. When |λ| ≠ 1, one immediately has |λ^k − 1| = max{1, |λ|}^k. However, when |λ| = 1 (i.e. λ is among the µ_i's and η_j's), a more refined analysis is necessary to study |λ^k − 1|. After that, part (a) can be proved by a direct computation. On the other hand, the proof of part (b) is more intricate. We first assume that the series ∑_{k≥1} N_k(A)z^k is D-finite, then use a certain linear recurrence relation satisfied by D-finite power series to contradict the peculiar value of N_k(A) at certain k.
Acknowledgements. The first author is partially supported by a Vanier Canada Graduate Scholarship. The second and third authors are partially supported by an NSERC Discovery Grant and a CRC Research Stipend. We are grateful to Professors Jason Bell, Michael Singer, and Tom Ward for useful comments that help improve the paper.
Notes added in May 2022. This paper is superseded by [BGNS] by Bell and the authors and no longer intended for publication. Inspired by the earlier work [BNZ20,BNZ], the paper [BGNS] establishes a general Pólya-Carlson criterion and applies this to confirm that the zeta function ζ A (z) admits the circle of radius 1/r(A) as a natural boundary in the transcendence case (see Remark 1.10).
Normed vector spaces and linear maps
Throughout this section, let K be a field that is complete with respect to a nontrivial absolute value | · |; nontriviality means that there exists x ∈ K * such that |x| = 1. We have:
Proposition 2.1. Let E/K be a finite extension of degree n. Then | · | can be extended in a unique way to an absolute value on E and this extension is given by the formula:
|α| = |N_{E/K}(α)|^{1/n} for every α ∈ E.
The field E is complete with respect to this extended absolute value.
Proof. See [Neu99,.
We now fix an algebraic closure of K and extend |·| to an absolute value on this algebraic closure thanks to Proposition 2.1. For a vector space V over K, a norm on V is a function ‖·‖ : V → R_{≥0} such that:
• ‖x‖ = 0 iff x = 0.
• ‖cx‖ = |c| · ‖x‖ for every c ∈ K and x ∈ V.
• ‖x + y‖ ≤ ‖x‖ + ‖y‖ for every x, y ∈ V.
Two norms ‖·‖ and ‖·‖′ on V are said to be equivalent if there exists a positive constant C such that
(1/C)‖x‖ ≤ ‖x‖′ ≤ C‖x‖
for every x ∈ V . It is well-known that any two norms on a finite dimensional vector space V are equivalent to each other and V is complete with respect to any norm, see [Neu99,.
Proposition 2.2. Let V be a vector space over K of finite dimension d > 0. Let ℓ : V → V be an invertible K-linear map such that there exist λ ∈ K * and a basis x 1 , . . . , x d of V over K with: ℓ(x 1 ) = λx 1 and ℓ(x i ) = λx i + x i−1 for 2 ≤ i ≤ d;
in other words, the matrix of ℓ with respect to x 1 , . . . , x d is one single Jordan block with eigenvalue λ. Let δ > 0. Then there exists a norm · on V such that:
(2) (1 − δ)|λ| · ‖x‖ ≤ ‖ℓ(x)‖ ≤ (1 + δ)|λ| · ‖x‖ for every x ∈ V.
Proof. We proceed by induction on d. The case d = 1 is obvious since we can take · to be any norm and we have ℓ(x 1 ) = |λ| x 1 . Let d ≥ 2 and suppose the proposition holds for any vector space of dimension at most d − 1. Let V ′ = Span(x 1 , . . . , x d−1 ). By the induction hypothesis, there exists a norm · ′ on V ′ such that
(3) (1 − δ)|λ| · x ′ ′ ≤ ℓ(x ′ ) ′ ≤ (1 + δ)|λ| · x ′ ′ for every x ′ ∈ V ′ .
Let M be a positive number such that:
(4) δ|λ|M ≥ x d−1 ′ .
Every x ∈ V can be written uniquely as x = ax d + x ′ where a ∈ K and x ′ ∈ V ′ , then we define the norm · on V by the formula:
x = |a|M + x ′ ′ .
Note that ℓ(x) = aλx d + ax d−1 + ℓ(x ′ ) and ℓ(x) = |λ||a|M + ℓ(x ′ ) + ax d−1 ′ . Therefore:
ℓ(x) ≥ |λ||a|M + ℓ(x ′ ) ′ − |a| · x d−1 ′ ≥ (1 − δ)|λ||a|M + (1 − δ)|λ| · x ′ ′ = (1 − δ)|λ| · x
where the last inequality follows from (3) and (4). The desired upper bound on ℓ(x) is obtained in a similar way:
ℓ(x) ≤ |λ||a|M + ℓ(x ′ ) ′ + |a| · x d−1 ′ ≤ (1 + δ)|λ||a|M + (1 + δ)|λ| · x ′ ′ = (1 + δ)|λ| · x
and we finish the proof.
Proposition 2.3. Let V be a vector space over K of finite dimension d > 0. Let ℓ : V → V be an invertible K-linear map such that the characteristic polynomial P(X) of ℓ is a power of an irreducible polynomial in K[X]. By Proposition 2.1, all the roots of P have the same absolute value, denoted by θ. Let δ > 0. Then there exists a norm ‖·‖ on V such that (1 − δ)θ‖x‖ ≤ ‖ℓ(x)‖ ≤ (1 + δ)θ‖x‖ for every x ∈ V.
Proof. Let E be the splitting field of P (X) over K. Let V E = E ⊗ K V and we still use ℓ to denote the induced linear operator on V E . In the Jordan canonical form of ℓ, let s denote the number of Jordan blocks. Then we have a basis x 1,1 , . . . , x 1,d1 , . . . , x s,1 , . . . , x s,ds of V E over E such that for each 1 ≤ i ≤ s, the map ℓ maps V E,i := Span E (x i,1 , . . . , x i,di ) to itself and the matrix representation of ℓ with respect to x i,1 , . . . , x i,di is the i-th Jordan block. By Proposition 2.2, there exists a norm · i on V E,i such that
(1 − δ)θ x i ≤ ℓ(x) i ≤ (1 + δ)θ x i for every x ∈ V E,i . We can now define · on V E = V E,1 ⊕· · ·⊕V E,s as · 1 +· · ·+ · s .
Then the restriction of · on V is the desired norm.
Corollary 2.4. Let V be a vector space over K of finite dimension d > 0. Let ℓ : V → V be an invertible K-linear map. Then there exist a positive integer s, subspaces V 1 , . . . , V s of V , and positive numbers θ 1 , . . . , θ s with the following properties:
(i) ℓ(V_i) ⊆ V_i for 1 ≤ i ≤ s and V = V_1 ⊕ · · · ⊕ V_s.
(ii) The multiset {|λ| : λ an eigenvalue of ℓ, counted with multiplicity}, of order d, is equal to the multiset {θ_1, . . . , θ_1, θ_2, . . . , θ_2, . . . , θ_s, . . . , θ_s}, in which the number of times θ_i appears is dim(V_i), for 1 ≤ i ≤ s.
(iii) For every δ > 0 and every 1 ≤ i ≤ s, there exists a norm ‖·‖_i on V_i such that (1 − δ)θ_i‖x‖_i ≤ ‖ℓ(x)‖_i ≤ (1 + δ)θ_i‖x‖_i for every x ∈ V_i.
Proof. By [DF04,p. 424], there exist ℓ-invariant subspaces V 1 , . . . , V s of V such that V = V 1 ⊕ · · · ⊕ V s and for 1 ≤ i ≤ s, the characteristic polynomial P i of the restriction of ℓ to V i is a power of an irreducible factor over K of the characteristic polynomial of ℓ. Let θ i denote the common absolute value of the roots of P i . Then we apply Proposition 2.3 and finish the proof.
The proof of Theorem 1.1
Recall from Section 1 that π : R F → T F denotes the quotient map,
p := p_{R_F} = (1/t)F[[1/t]] = { ∑_{i≤−1} a_i t^i : a_i ∈ F for all i },
every element α ∈ T_F has the unique preimage α̃ ∈ R_F of the form
α̃ = ∑_{i≤−1} a_i t^i ∈ p,
µ denotes the probability Haar measure on T F , and ρ is the metric on T F given by ρ(α, β) = |α −β|. Letμ be the Haar measure on R F normalized so thatμ(D F ) = 1. Therefore, we have that D F and T F are isometric as metric spaces and isomorphic as probability spaces. Let d be a positive integer. On T d F and R d F we have the respective product measures µ d andμ d . Let | · | (d) be the norm on R d F given by:
|(x 1 , . . . , x d )| (d) = max 1≤i≤d |x i |.
Then the induced metric ρ^{(d)} on T_F^d is ρ^{(d)}((α_1, . . . , α_d), (β_1, . . . , β_d)) = max_{1≤i≤d} |α̃_i − β̃_i|.
Proposition 3.1. Let V be a vector space over R_F of dimension d. Let ‖·‖ be a norm on V and let η be a Haar measure on V. There exist positive constants C_1 and C_2 such that the open ball B(r−) := {x ∈ V : ‖x‖ < r} and the closed ball B(r) := {x ∈ V : ‖x‖ ≤ r} satisfy C_1 r^d < η(B(r−)), η(B(r)) < C_2 r^d for every r > 0.
Proof. After choosing a basis, we may identify V as R d F ; recall the norm | · | (d) above. By uniqueness up to scaling of Haar measures, we may assume that η is the Haar measure normalized so that the set
B ′ := {(x 1 , . . . , x d ) ∈ R d F : |(x 1 , . . . , x d )| (d) = max 1≤i≤d |x i | ≤ 1}
has η(B ′ ) = 1. Since · and | · | (d) are equivalent to each other, there exist positive C 3 and C 4 such that both B(r − ) and B(r) contain
B ′ (C 3 r) := {(x 1 , . . . , x d ) ∈ R d F : |(x 1 , . . . , x d )| (d) = max 1≤i≤d |x i | ≤ C 3 r}
and are contained in
B ′ (C 4 r) = {(x 1 , . . . , x d ) ∈ R d F : |(x 1 , . . . , x d )| (d) = max 1≤i≤d |x i | ≤ C 4 r}.
Let q m (respectively q n ) be the largest (respectively smallest) power of q that is smaller than C 3 r (respectively larger than C 4 r). Then we have:
η(B ′ (C 3 r)) ≥ q md > (C 3 r/q) d and η(B ′ (C 4 r)) ≤ q nd < (C 4 qr) d .
This finishes the proof.
We apply Corollary 2.4 for the vector space R d F and the multiplication-by-A map to get the invariant subspaces V 1 , . . . , V s and positive numbers θ 1 , . . . , θ s . Fix a Haar measure η i on V i and let η := η 1 × · · · × η s which is a Haar measure on R d F . Let c > 0 such thatμ d = cη.
Fix δ > 0, we assume that δ is sufficiently small so that (1 + δ)θ i < 1 whenever θ i < 1. For 1 ≤ i ≤ s, let · i be a norm on V i as given in Corollary 2.4. Every
x ∈ R d F can be written uniquely as x = x 1 + . . . + x s with x i ∈ V i for 1 ≤ i ≤ s and we define the norm · on R d F by the formula:
x = max 1≤i≤s x i i .
Since | · | (d) and · are equivalent to each other, the induced metric τ on T d F given by:
τ ((α 1 , . . . , α d ), (β 1 , . . . , β d )) :
= (α 1 −β 1 , . . . ,α d −β d ) is equivalent to ρ (d) .
Lemma 3.2. We still use π to denote the quotient map R d F → T d F . There exists a positive constant C 5 such that the following hold.
(i) For any x ∈ p d and y ∈ R d F , if x − y ≤ C 5 then y ∈ p d . (ii) For any x, y ∈ R d F such that x − y ≤ C 5 and τ (π(Ax), π(Ay)) ≤ C 5 , we have τ (π(Ax), π(Ay)) = Ax − Ay .
Proof. For part (i), we can characterize the set p d as the set of x ∈ R d F such that |x| (d) ≤ 1/q. Hence when x − y is sufficiently small, we have that |x − y| (d) ≤ 1/q thanks to equivalence of these norms. Hence x − y ∈ p d and we have y ∈ p d .
We now consider part (ii). Since |z| (d) ≥ 1 for every non-zero z ∈ Z d F and since · and | · | (d) are equivalent, there is a positive constant C 6 such that z ≥ C 6 for every non-zero z ∈ Z d F . There exists C 7 such that Aw ≤ C 7 w for every w ∈ R d F ; for instance we may take C 7 = (1 + δ) max 1≤i≤s θ i thanks to the definition of · and properties of the · i 's in Corollary 2.4.
We now choose C 5 to be any positive constant such that C 5 < C6 C7+1 . Let x, y ∈ R d F satisfying conditions in the statement of the lemma. We have C 5 ≥ τ (π(Ax), π(Ay)) = Ax − Ay + z for some z ∈ Z d F . If z = 0 then we have
C 7 C 5 ≥ C 7 x − y ≥ Ax − Ay ≥ z − Ax − Ay + z ≥ C 6 − C 5 ,
contradicting the choice of C 5 . Hence z = 0 and we are done.
Proof of Theorem 1.1. Let α = (α 1 , . . . , α d ) ∈ T d F and let x = (α 1 , . . . ,α d ) which is the preimage of α in p d . Let ǫ > 0 and n ≥ 1. All the implicit constants below might depend on the choice of the norms · i 's hence depending on δ but they are independent of ǫ and n.
Let
B(α, ǫ, n) := {β = (β 1 , . . . , β d ) ∈ T d F : ρ (d) (A j α, A j β) < ǫ for j = 0, 1, .
. . , n − 1}. We aim to obtain an upper bound on µ d (B(α, ǫ, n)). Thanks to equivalence between ρ (d) and τ , there exists a positive constant C 8 such that B(α, ǫ, n) is contained in B ′ (α, ǫ, n) := {β = (β 1 , . . . , β d ) ∈ T d F : τ (A j α, A j β) < C 8 ǫ for j = 0, 1, . . . , n − 1}. For β = (β 1 , . . . , β d ) ∈ B ′ (α, ǫ, n), let y = (β 1 , . . . ,β d ) and we have x − y = τ (α, β) < C 8 ǫ. When ǫ is sufficiently small so that C 8 ǫ is smaller than the constant C 5 in Lemma 3.2, we can apply this lemma repeatedly to get B ′ (α, ǫ, n) = {π(y) : y ∈ p d and A j x − A j y < C 8 ǫ for j = 0, 1, . . . , n − 1}.
By Lemma 3.2, the condition y ∈ p d is automatic once we have x − y < C 8 ǫ < C 5 and x ∈ p d . Let x, ǫ, n)).
B ′ (x, ǫ, n) := {y ∈ R d F : A j x − A j y < C 8 ǫ for j = 0, 1, . . . , n − 1}, we have µ d (B ′ (α, ǫ, n)) =μ d (B ′ (x, ǫ, n)) = cη(B ′ (
We express x = x 1 + . . . + x s and y = y 1 + . . . + y s where each x i , y i ∈ V i . The condition in the description ofB ′ (x, ǫ, n) is equivalent to x i − y i i < C 8 ǫ and A j x i − A j y i i < C 8 ǫ for every 1 ≤ i ≤ s and 1 ≤ j ≤ n − 1. We use Corollary 2.4 to have:
(5) ((1 − δ)θ i ) j x i − y i i ≤ A j x i − A j y i i ≤ ((1 + δ)θ i ) j x i − y i i .
Let I = {i ∈ {1, . . . , s} : θ i ≥ 1} and since we choose δ sufficiently small so that (1 + δ)θ i < 1 whenever θ i < 1, inequality (5) implies that the setB ′ (x, ǫ, n) is contained in the set:
{y = y 1 + . . . + y s : x i − y i i < C 8 ǫ((1 − δ)θ i ) −(n−1) for i ∈ I and x i − y i i < C 8 ǫ for i / ∈ I}.
Let d i = dim(V i ) for 1 ≤ i ≤ s. By Proposition 3.1, there exists a constant C 9 such that:
(6) µ d (B ′ (α, ǫ, n)) = cη(B ′ (x, ǫ, n)) < C 9 i∈I (C 8 ǫ) di ((1 − δ)θ i ) −di(n−1) .
Put h + (µ d , A, x, ǫ) = lim sup n→∞ − log(µ d (B(α, ǫ, n))) n , then (6) implies:
i∈I d i log(1 − δ) + i∈I d i log θ i ≤ h + (µ, A, x, ǫ).
Recall that our only assumption on ǫ is that it is sufficiently small so that C 8 ǫ < C 5 . For the other inequality, we argue in a similar way. There exists a constant C 10 such that set B(α, ǫ, n) contains the set:
B ′′ (α, ǫ, n) := {β = (β 1 , . . . , β d ) ∈ T d F : τ (A j α, A j β) < C 10 ǫ for 0 ≤ j ≤ n − 1}. And when ǫ is sufficiently small so that C 10 ǫ < C 5 , we apply Lemma 3.2 repeatedly to get B ′′ (α, ǫ, n) = {π(y) : y ∈ p d and A j x − A j y < C 10 ǫ for j = 0, 1, . . . , n − 1}.
Then consider x, ǫ, n)). Arguing as before, the setB ′′ (x, ǫ, n) contains the set:
B ′′ (x, ǫ, n) := {y ∈ R d F : A j x − A j y < C 10 ǫ for j = 0, 1, . . . , n − 1}, we have µ d (B ′′ (α, ǫ, n)) =μ d (B ′′ (x, ǫ, n)) = cη(B ′′ ({y = y 1 + . . . + y s : x i − y i i < C 10 ǫ((1 + δ)θ i ) −(n−1) for i ∈ I and x i − y i i < C 10 ǫ for i / ∈ I}.
Then we can use Proposition 3.1 to get a constant C 11 such that: x, ǫ, n)).
C 11 i∈I (C 10 ǫ) di ((1 + δ)θ i ) −di(n−1) < η(B ′′ (
This implies
h + (µ, A, x, ǫ) ≤ i∈I d i log(1 + δ) + i∈I d i log θ i
when ǫ is sufficiently small. Therefore
i∈I d i log(1 − δ) + i∈I d i log θ i ≤ lim ǫ→0 + h + (µ, A, x, ǫ) ≤ i∈I d i log(1 + δ) + i∈I d i log θ i .
Since δ can be arbitrarily small, we conclude that
lim ǫ→0 + h + (µ, A, x, ǫ) = i∈I d i log θ i = d i=1 log max{|λ i |, 1}
where the last equality follows from Property (ii) in Corollary 2.4. By the Brin-Katok theorem (see [BK83] and [VO16, pp. 262-263]), we have:
h(µ_d, A) = ∑_{i=1}^{d} log max{|λ_i|, 1}.
It is well-known that h(A) = h(µ_d, A) [Wal82, p. 197] and this finishes the proof.
4. The proof of Theorem 1.6
Throughout this section, we assume the notation in the statement of Theorem 1.6. Let I denote the identity matrix in M_d(Z_F). The formula below for N_k(A) is well-known in the classical case [BLP10].
Lemma 4.1. Let B ∈ M_d(Z_F). The number of isolated fixed points N_1(B) of the multiplication-by-B map B : T_F^d → T_F^d is |det(B − I)|. Consequently, N_k(A) = |det(A^k − I)| for every k ≥ 1.
Proof. When det(B − I) = 0, there is a non-zero x ∈ R_F^d such that Bx = x. Then for any fixed point y ∈ T_F^d, the points y + cx for c ∈ R_F are fixed. By choosing c in an arbitrarily small neighborhood of 0, we have that y is not isolated. Hence N_1(B) = 0.
Suppose det(B − I) = 0. There is a 1-1 correspondence between the set of fixed points of B and the set Z d
F /(B − I)Z d F . Since Z F is a PID, we obtain the Smith Normal Form of B − I that is a diagonal matrix with entries b 1 , . . . , b d ∈ Z F \ {0} and a Z F -basis x 1 , . . . , x d of Z d F so that b 1 x 1 , . . . , b d x d is a Z F -basis of (B − I)Z F . Therefore the number of fixed points of B is: d i=1 card(Z F /b i Z F ) = d i=1 |b i | = | det(B − I)|.
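For illustration (a toy case, not from the paper): since |x| = q^{deg x} for non-zero x ∈ Z_F, Lemma 4.1 says N_1(B) = q^{deg det(B−I)}, which is easy to evaluate for a small matrix over GF(q)[t]:

```python
import sympy as sp

t = sp.symbols('t')
q = 2                                        # illustrative: F = GF(2)
B = sp.Matrix([[t, 1], [1, t + 1]])          # a toy matrix over Z_F = GF(2)[t]

d = sp.Poly(sp.det(B - sp.eye(2)), t, modulus=q)
print(d, "so N_1(B) =", q**d.degree())       # det(B - I) = t^2 + t + 1 mod 2, so N_1(B) = 4
```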
We fix once and for all a finite extension K of R F containing all the eigenvalues of A and let δ be the inertia degree of K/R F . For each µ i in the (possibly empty) multiset {µ 1 , . . . , µ M } of eigenvalues of A that are roots of unity, we have the decomposition:
µ i = µ i,(0) + µ i,(1)
with µ i,(0) ∈ GF(q δ ) * and µ i,(1) ∈ p K as in (1); in fact µ i,(1) = 0 since µ i is a root of unity. Likewise, for each η i in the (possibly empty) multiset {η 1 , . . . , η N }, we have:
η i = η i,(0) + η i,(1) with η i,(0) ∈ GF(q δ ) * and η i,(1) ∈ p K \ {0}.
Proposition 4.2. Let v p denote the p-adic valuation on Z. Recall that the orders of µ i,(0) and η j,(0) in GF(q δ ) * are respectively denoted m i and n j for 1 ≤ i ≤ M and 1 ≤ j ≤ N ; each of the m i 's and n j 's is coprime to p. Let k be a positive integer, we have:
(i) For 1 ≤ i ≤ M, |µ_i^k − 1| = 0 if k ≡ 0 mod m_i, and |µ_i^k − 1| = 1 otherwise.
(ii) For 1 ≤ j ≤ N, |η_j^k − 1| = |η_{j,(1)}|^{p^{v_p(k)}} if k ≡ 0 mod n_j, and |η_j^k − 1| = 1 otherwise.
(iii) N_k(A) = |det(A^k − I)| = r(A)^k ∏_{i=1}^{M} a_{i,k} ∏_{j=1}^{N} b_{j,k}^{p^{v_p(k)}}, where a_{i,k} = 0 if k ≡ 0 mod m_i and a_{i,k} = 1 otherwise, and b_{j,k} = |η_{j,(1)}| if k ≡ 0 mod n_j and b_{j,k} = 1 otherwise, for 1 ≤ i ≤ M and 1 ≤ j ≤ N.
Proof. Part (i) is easy: µ k i − 1 = µ k i,(0) − 1 is an element of GF(q δ ) and it is 0 exactly when k ≡ 0 mod m i . For part (ii), when k ≡ 0 mod n j , we have:
η k j − 1 ≡ η k j,(0) − 1 ≡ 0 mod p K ,
hence |η k j − 1| = 1. Now suppose k ≡ 0 mod n j but k ≡ 0 mod p, we have:
η k j − 1 = (η j,(0) + η j,(1) ) k − 1 = kη k−1 j,(0) η j,(1) + k ℓ=2 k ℓ η k−ℓ j,(0) η ℓ j,(1)
and since |kη k−1 j,(0) η j,(1) | = |η j,(1) | is strictly larger than the absolute value of each of the remaining terms, we have:
|η k j − 1| = |η j,(1) |.
Finally, suppose k ≡ 0 mod n j . Since gcd(n j , p) = 1, we can write k = k 0 p vp(k) where k 0 ≡ 0 mod n j and k 0 ≡ 0 mod p. We have:
|η k j − 1| = |η k0 j − 1| p vp (k) = |η j,
(1) | p vp (k) and this finishes the proof of part (ii). Part (iii) follows from parts (i), (ii), and the definition of r(A).
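For concreteness, a small helper that evaluates the closed formula of part (iii) (an illustration only; the orders m_i and n_j, the values |η_{j,(1)}|, and r(A) have to be supplied for the matrix at hand):

```python
def p_adic_val(k, p):
    """v_p(k) for a positive integer k."""
    v = 0
    while k % p == 0:
        k //= p
        v += 1
    return v

def N_k(k, rA, p, m_orders, eta_data):
    """N_k(A) = r(A)^k * prod_i a_{i,k} * prod_j b_{j,k}^(p^{v_p(k)}), as in Proposition 4.2(iii).
    m_orders: the orders m_i of the root-of-unity eigenvalues modulo p;
    eta_data: pairs (n_j, |eta_{j,(1)}|) for the unimodular eigenvalues that are not roots of unity."""
    if any(k % m == 0 for m in m_orders):
        return 0.0                       # some a_{i,k} = 0
    value = float(rA) ** k
    for n, abs_eta1 in eta_data:
        if k % n == 0:                   # otherwise b_{j,k} = 1
            value *= abs_eta1 ** (p ** p_adic_val(k, p))
    return value
```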
Proof of Theorem 1.6. First, we prove part (a). We are given that for every j ∈ {1, . . . , N }, there exists i ∈ {1, . . . , M } such that m i | n j .
Let k ≥ 1. If m i | k for some i then N k (A) = 0 by part (c) of Proposition 4.2. If m i ∤ k for every i ∈ {1, . . . , M } then n j ∤ k for every j ∈ {1, . . . , N } thanks to the above assumption, then we have N k (A) = r(A) k by Proposition 4.2. Therefore
∞ k=1 N k (A) k z k is equal to: k≥1 mi∤k for 1≤i≤M N k (A) k z k = k≥1 mi∤k for 1≤i≤M r(A) k k z k = k≥1 r(A) k k z k − k≥1 mi|k for some 1≤i≤M r(A) k k z k = − log(1 − r(A)z) − M ℓ=1 1≤i1<...<i ℓ ≤M (−1) ℓ−1 k≥1 lcm(mi 1 ,...,mi ℓ )|k r(A) k k z k = − log(1 − r(A)z) + M ℓ=1 1≤i1<...<i ℓ ≤M (−1) ℓ+1 lcm(m i1 , . . . , m i ℓ ) log 1 − (r(A)z) lcm(mi 1 ,...,mi ℓ )
where the third "=" follows from the inclusion-exclusion principle. This finishes the proof of part (a). For part (b), without loss of generality, we assume that m i ∤ n 1 for 1 ≤ i ≤ M . Put
f(z) := ∑_{k=1}^{∞} N_k(A) z^k.
Proposition 4.2 gives that |N k (A)| ≤ r(A) k , hence f is convergent in the disk of radius 1/r(A). Assume that f is D-finite and we arrive at a contradiction. Consider
(7) ∑_{k=1}^{∞} c_k z^k = f(z/r(A)),
which is D-finite. Let τ denote the ramification index of K/R_F; then each |η_{j,(1)}| has the form 1/q^{d_j/τ}, where d_j is a positive integer [Neu99, p. 150]. Combining this with (7) and Proposition 4.2, we have that the c_k's belong to the number field E := Q(p^{1/τ}). Let |·|_p denote the p-adic absolute value on Q; then |·|_p extends uniquely to an absolute value on E since there is only one prime ideal of the ring of integers of E lying above p. Put Q := ∏_{1≤j≤N} |η_{j,(1)}| and Q_1 := ∏_{1≤j≤N, n_j | n_1} |η_{j,(1)}|. Since both Q and Q_1 are powers of 1/q^{1/τ} with positive integer exponents, we have:
(8) |Q|_p, |Q_1|_p > 1.
Since m i ∤ n 1 for every i, Proposition 4.2 and (7) yield:
(9) c_{n_1 p^ℓ} = Q_1^{p^ℓ} for every integer ℓ ≥ 0. On the other hand, Proposition 4.2 and (7) also yield:
(10) |c_k|_p ≤ |Q|_p^{p^{v_p(k)}} for every integer k > 1.
The idea to finish the proof is as follows. D-finiteness of the series ∞ k=1 c k z k implies a strong restriction on the "growth" of the coefficients c k 's at least through a recurrence relation satisfied by the c k 's. This growth could be in terms of local data such as absolute values of the c k 's or global data such as Weil heights of the c k 's [BNZ20]. It is indeed the |c k | p 's that will give us the desired contradiction.
The key observation is that when ℓ is large |c n1p ℓ | p = |Q 1 | p ℓ p is exponential in p ℓ thanks to (8) and (9) while the "nearby" coefficients c n1p ℓ −n for a bounded positive integer n have small p-adic absolute values thanks to (10) since v p (n 1 p ℓ −n) is small compared to ℓ.
Since ∑_{k=1}^{∞} c_k z^k ∈ E[[z]] is D-finite, there exist a positive integer s and polynomials P_0(z), . . . , P_s(z) ∈ E[z] such that P_0 ≠ 0 and
(11) P_0(k)c_k + P_1(k)c_{k−1} + . . . + P_s(k)c_{k−s} = 0
for all sufficiently large k [Sta80]. In the following, ℓ denotes a large positive integer and the implied constants in the various estimates are independent of ℓ. Consider k = n_1 p^ℓ; then the highest power of p dividing any of the k − i = n_1 p^ℓ − i for 1 ≤ i ≤ s is at most the largest power of p in {1, 2, . . . , s}. Combining this with (10), we have:
(12) |P i (n 1 p ℓ )c n1p ℓ −i | p ≪ 1 for 1 ≤ i ≤ s. Now (9), (11), and (12) imply:
(13) |P_0(n_1 p^ℓ)|_p ≪ |Q_1|_p^{−p^ℓ}.
This means for the infinitely many positive integers k of the form n 1 p ℓ , we have that |P 0 (k)| p is exponentially small in k. This implies that k is unusually close to a root of P 0 with respect to the p-adic absolute value. One can use the product formula to arrive at a contradiction, as follows.
Let M E = M 0 E ∪ M ∞ E be the set of all places of E where M 0 E consists of the finite places and M ∞ E denotes the set of all the infinite places [BG06, Chapter 1]. For every w ∈ M E , we normalize | · | w as in [BG06, Chapter 1] and the product formula holds. We still use p to denote the only place of E lying above p and the above | · | p has already been normalized according to [BG06, Chapter 1]. We have:
(14) ∏_{w∈M_E^∞} |P_0(n_1 p^ℓ)|_w ≪ (n_1 p^ℓ)^{deg(P_0)} and ∏_{w∈M_E^0∖{p}} |P_0(n_1 p^ℓ)|_w ≪ 1.
When ℓ is sufficiently large and P_0(n_1 p^ℓ) ≠ 0, we have that (8), (13) and (14) contradict the product formula ∏_{w∈M_E} |P_0(n_1 p^ℓ)|_w = 1, and this finishes the proof that f(z) = ∑_{k=1}^{∞} N_k(A)z^k is not D-finite. The transcendence of ζ_A(z) follows immediately: if ζ_A(z) were algebraic then f(z) = z ζ′_A(z)/ζ_A(z) would be algebraic and hence D-finite, see Remark 1.5.
[AM65] M. Artin and B. Mazur, On periodic points, Ann. of Math. (2) 81 (1965), 82-99.
[BG06] E. Bombieri and W. Gubler, Heights in Diophantine geometry, New Mathematical Monographs, vol. 4, Cambridge University Press, Cambridge, 2006.
[BGNS] J. P. Bell, K. Gunn, K. D. Nguyen, and J. C. Saunders, A general criterion for the Pólya-Carlson dichotomy and application, available on the arXiv, 2022.
[BK83] M. Brin and A. Katok, On local entropy, Geometric dynamics (Rio de Janeiro, 1981), Lecture Notes in Math., no. 1007, Springer-Verlag, 1983, pp. 30-38.
[BL16] V. Bergelson and A. Leibman, A Weyl-type equidistribution theorem in finite characteristic, Adv. Math. 289 (2016), 928-950.
[BL19] P.-Y. Bienvenu and T.-H. Le, Linear and quadratic uniformity of the Möbius function over Fq[t], Mathematika 65 (2019), 505-529.
[BLP10] M. Baake, E. Lau, and V. Paskunas, A note on the dynamical zeta function of general toral endomorphisms, Monatsh. Math. 161 (2010), 33-42.
[BMW14] J. Bell, R. Miles, and T. Ward, Towards a Pólya-Carlson dichotomy for algebraic dynamics, Indag. Math. (N.S.) 25 (2014), 652-668.
[BNZ] J. P. Bell, K. D. Nguyen, and U. Zannier, D-finiteness, rationality, and height II: lower bounds over a set of positive density, arXiv:2205.02145.
[BNZ20] J. P. Bell, K. D. Nguyen, and U. Zannier, D-finiteness, rationality, and height, Trans. Amer. Math. Soc. 373 (2020), 4889-4906.
[Bri12] A. Bridy, Transcendence of the Artin-Mazur zeta function for polynomial maps of A^1(Fp), Acta Arith. 156 (2012), 293-300.
[Bri16] A. Bridy, The Artin-Mazur zeta function of a dynamically affine rational map in positive characteristic, J. Théor. Nombres Bordeaux 28 (2016), no. 2, 301-324.
[Car21] F. Carlson, Über ganzwertige Funktionen, Math. Z. 11 (1921), 1-23.
[DF04] D. S. Dummit and R. M. Foote, Abstract algebra, third ed., Wiley, 2004.
[Dwo60] B. Dwork, On the rationality of the zeta function of an algebraic variety, Amer. J. Math. 82 (1960), 631-648.
[Guc70] J. Guckenheimer, Axiom A + No Cycles ⇒ ζ_f(t) rational, Bull. Amer. Math. Soc. 76 (1970), 592-594.
[Hay65] D. R. Hayes, The distribution of irreducibles in GF[q, x], Trans. Amer. Math. Soc. 117 (1965), 101-127.
[Hin94] A. Hinkkanen, Zeta functions of rational functions are rational, Ann. Acad. Sci. Fenn. Ser. A I Math. 19 (1994), 3-10.
[LW10] Y.-R. Liu and T. Wooley, Waring's problem in function fields, J. reine angew. Math. 638 (2010), 1-67.
[Man71] A. Manning, Axiom A diffeomorphisms have rational zeta functions, Bull. Lond. Math. Soc. 3 (1971), 215-220.
[Neu99] J. Neukirch, Algebraic Number Theory, Grundlehren der mathematischen Wissenschaften, vol. 322, Springer-Verlag, 1999. Translated from the German by N. Schappacher.
[Por18] S. Porritt, A note on exponential-Möbius sums over Fq[t], Finite Fields Appl. 51 (2018), 298-305.
[Póy28] G. Pólya, Über gewisse notwendige Determinantenkriterien für Fortsetzbarkeit einer Potenzreihe, Math. Ann. 99 (1928), 687-706.
[Sta80] R. Stanley, Differentiably finite power series, European J. Combin. 1 (1980), 175-188.
[VO16] M. Viana and K. Oliveira, Foundations of Ergodic Theory, Cambridge Studies in Advanced Mathematics, vol. 151, Cambridge University Press, Cambridge, 2016.
[Wal82] P. Walters, An Introduction to Ergodic Theory, Graduate Texts in Mathematics, vol. 79, Springer-Verlag, New York, 1982.
[Wei49] A. Weil, Numbers of solutions of equations in finite fields, Bull. Amer. Math. Soc. 55 (1949), 497-508.
Keira Gunn, Department of Mathematics and Statistics, University of Calgary, AB T2N 1N4, Canada. Email address: [email protected]
Khoa D. Nguyen, Department of Mathematics and Statistics, University of Calgary, AB T2N 1N4, Canada. Email address: [email protected]
J. C. Saunders, Department of Mathematics and Statistics, University of Calgary, AB T2N 1N4, Canada. Email address: [email protected]
| []
|
[
"Strain Influence on Optical Absorption of Giant Semiconductor Colloidal Quantum Dots",
"Strain Influence on Optical Absorption of Giant Semiconductor Colloidal Quantum Dots"
]
| [
"Tudor E Pahomi \nFaculty of Physics\nUniversity of Bucharest\nMăgurele -Bucharest\nRO-077125EURomania\n",
"Tiberius O Cheche [email protected] \nFaculty of Physics\nUniversity of Bucharest\nMăgurele -Bucharest\nRO-077125EURomania\n"
]
| [
"Faculty of Physics\nUniversity of Bucharest\nMăgurele -Bucharest\nRO-077125EURomania",
"Faculty of Physics\nUniversity of Bucharest\nMăgurele -Bucharest\nRO-077125EURomania"
]
| []
| The lattice mismatch strain field of core/multishell structures with spherical symmetry is modeled by a linear continuum elasticity approach. The effect of the strain on the energy structure and linear optical absorption in large core/shell/shell spherical semiconductor quantum dots is analyzed. Localization of the photoexcited carriers induced by coating is found to play an important role in explaining the optical stability of large CdSe/CdS/ZnS and ZnTe/ZnSe/ZnS quantum dots. | 10.1016/j.cplett.2014.07.078 | [
"https://arxiv.org/pdf/1408.4972v1.pdf"
]
| 97,922,340 | 1408.4972 | 00fc591dc3d75fc6f8d1b448e6e581ee4b717a53 |
Strain Influence on Optical Absorption of Giant Semiconductor Colloidal Quantum Dots
Tudor E Pahomi
Faculty of Physics
University of Bucharest
Măgurele -Bucharest
RO-077125EURomania
Tiberius O Cheche [email protected]
Faculty of Physics
University of Bucharest
Măgurele -Bucharest
RO-077125EURomania
Strain Influence on Optical Absorption of Giant Semiconductor Colloidal Quantum Dots
1
The lattice mismatch strain field of core/multishell structures with spherical symmetry is modeled by a linear continuum elasticity approach. The effect of the strain on the energy structure and linear optical absorption in large core/shell/shell spherical semiconductor quantum dots is analyzed. Localization of the photoexcited carriers induced by coating is found to play an important role in explaining the optical stability of large CdSe/CdS/ZnS and ZnTe/ZnSe/ZnS quantum dots.
Introduction
As 'The Next Big Thing' in photovoltaics [1], the colloidal multishell semiconductor quantum dots (QDs) have led to the development of high-efficiency solar cells. To overcome the crystal irregularities induced by the lattice mismatch in the synthesis of these colloidal nanocrystals, the use of a strain-adapting intermediate shell in core/shell (CS) QDs has been proposed. Thus, 'giant' core/shell/shell (CSS) QDs of 18-19 monolayers shell thickness of are synthesized [2,3]. There are several theoretical studies of multi-component nanocrystals, in which the role of the strain is considered by first-principle calculations, by using, for example, the density-functional tight-binding method [4] or local density approximation [5] or densityfunctional theory [6]. Unfortunately, limitations of these ab-initio calculations (e.g., bandgap underestimation) make difficult comparison of their results with the experiment. More important, the main problem of the first-principle calculations, the computational cost, can make the method inadequate for larger structures, such as the large CSS QDs. On the other hand, the widely used for analyzing the linear elasticity of epitaxial strained heterointerfaces, the valence force field method (see, e.g., Ref. [7]) is dependent on a priori information regarding the interface structure and surface passivation. The continuum elasticity approach in the limits of homogeneous and isotropic materials has been shown to be in good agreement with the valence force field models * E-mail: [email protected]
Tel.: +40 724 536 908 for semiconductor QDs of spherical shape and cubic symmetry (see, e.g., Ref. [8]). In this context, we propose a continuum elasticity model for the lattice mismatch strain field in such nanocrystals. Keeping justified simplicity, based on our strain field model, we consider a twoband model within the effective mass approximation to theoretically investigate the energy structure and light absorption of a CSS QD with thick shells. In our model, we consider ideal multilayer structures. We assume the defects and impurities with low concentration are located at the interfaces, as reported by experiment, (see, e.g., Ref. [9]), and consequently, do not significantly influence the lattice mismatch strain field.
Theoretical model
Strain field and the band lineup in the presence of strain
First, we describe our method for the calculation of the lattice mismatch strain field. For spherical core/shell nanocrystals the displacement (u) has radial symmetry, that is, the field is irrotational, and within the continuum elasticity approach the equilibrium equation is simply div grad u = 0 [10]. A linear stress (σ_ij)-strain (ε_ij) tensor relation is used to obtain the strain field.
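For orientation, a textbook linear-elasticity result (not the paper's own expressions): for a purely radial displacement u = u(r) r̂, the equilibrium equation above is solved in each homogeneous region i by the Lamé form

u_i(r) = A_i r + B_i / r² ,    ε_rr = A_i − 2B_i / r³ ,    ε_θθ = ε_φφ = A_i + B_i / r³ ,

so that the hydrostatic strain e_hyd = ε_rr + ε_θθ + ε_φφ = 3A_i is constant within each region; the constants A_i and B_i are fixed by the lattice mismatch and by the continuity conditions at the interfaces.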
For a CSS QD with radii r_1 (for the core), r_2 (for core + middle shell), and R (for the total radius), the strain field is obtained by solving the equilibrium equation in each region with continuity conditions at the interfaces; by reducing the number of shells to the case of one shell, or of a core embedded in an infinite matrix, we recover the results of the previous works, Ref. [11] and Ref. [12], respectively. The method can also be applied to cylindrical multilayer structures. An irrotational displacement field of the form u = (u(r), 0, const.) can be used to evaluate the strain field in: (i) a two-dimensional circular multilayer structure or (ii) a three-dimensional multilayer structure, by assuming a certain form of the tensor e_zz for each component (core, shells) of the structure (see an example in Ref. [11]). Second, we model the heterostructure band lineup, which is crucial for obtaining accurate predictions of the energy structure within the effective mass approach. In semiconductor QDs, the lattice-mismatch strain induces deformations that shift both the valence band (VB) and the conduction band (CB). Thus, the values of the VB and CB extrema at the Γ point (we consider direct-band semiconductors) are given by the equation [13]:
E_{v,c} = E^u_{v,c} + a_{v,c} e_hyd , (2)
where the unstrained values are related by E^u_c = E^u_v + E_g, E_g is the unstrained bandgap, a_{v,c} is the volume deformation potential (subscript v for VB, c for CB), and e_hyd is the hydrostatic strain.
Single particle states
The single particle states are obtained by solving the Schrödinger equation for the envelope wave function,
) ( ) ( r r ψ ψ E H = within the one-band effective Hamiltonian, ) ( )] ( 2 [ 2 1 * r V p r m H + = − , where ) ( * r m
is the photoexcited carrier r-dependent effective mass, and ) (r V is the step confinement potential generated by the band-offset of the materials in presence of the strain. The solution is a product of radial function and spherical harmonics,
) , ( ) ( ) ( ϕ θ ψ m l l nlm Y r R = r
. The radial solution is a linear combinations of spherical or modified spherical Bessel functions (see Appendix B, Eqs. (B. 1-4)). Imposing the physical conditions of continuity, we obtain the transcendental equation valid for a general form of the three-region step confinement potential (see Fig. 1 and details in Appendix B,
( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) 1 2 1 1 1 2 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 1 2 1 ' ' ' ' ' ' r f r f m r f r f m r f r f m r f r f m r f F m r f m r f F m r f m B A B B A A B A B B A A B C B B C B C B B C − − = − − ,(3)
where
)] ( ) ( ) ( ) ( [ )] ( ' ) ( ) ( ) ( ' [ 2 2 1 2 2 1 2 2 1 2 2 1 r f R f R f r f r f R f R f r f F C C C C C C C C C − − = , C B A f ,, 2 ,
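For orientation, a minimal numerical stand-in for this kind of matching condition (not the full three-region Eq. (3)): the lowest l = 0 level of a single spherical well of radius a with a BenDaniel-Duke effective-mass step, obtained by root-finding on the continuity of R and of (1/m*) dR/dr. All parameter values below are illustrative, not the ones used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

hbar2_2m0 = 0.0381          # hbar^2 / (2 m0) in eV nm^2
a, V0 = 2.0, 0.5            # well radius (nm) and confinement offset (eV), illustrative
m_in, m_out = 0.13, 0.21    # effective masses inside/outside (units of m0), illustrative

def matching(E):
    k = np.sqrt(E * m_in / hbar2_2m0)              # inside:  u(r) ~ sin(k r)
    kappa = np.sqrt((V0 - E) * m_out / hbar2_2m0)  # outside: u(r) ~ exp(-kappa r)
    return (k / np.tan(k * a) - 1.0 / a) / m_in - (-kappa - 1.0 / a) / m_out

E0 = brentq(matching, 1e-4, V0 - 1e-4)
print(f"lowest l = 0 confinement energy ~ {E0 * 1e3:.0f} meV")
```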
Excitonic effect
A more accurate description of the optical properties requires estimation of the excitonic effect. This task is more complex as beyond the electron-hole exchange interaction (EHEI) and correlation interactions, the polarization charge induced at the interfaces and the screened dielectric constant should be taken into account. However, the Coulomb electron-hole interaction is usually the leading term of the carrier interaction in excitonic systems. Thus, modeling the excitonic effect by Coulomb electron-hole interaction mediated by a homogenized screened dielectric constant is at least satisfactory in estimating the absorption in CSS QDs. We consider a simplified two-band Hamiltonian, by keeping only the kinetic part and Coulomb electron-hole interaction. Neglecting in a first approximation EHEI is adequate. For example, in spherical InAs QD of radius 3nm it is of 2.093meV comparatively to the Coulomb interaction of 60.6meV [14] and of order 0.1 meV in CdSe/CdS QD with thick shell [15]. In our large QDs, the Coulomb interaction is of order of ten meV. We write the exciton state as a configuration interaction expansion,
|Ψ⟩ = ∑_{i,j} C_ij c_i^+ h_j^+ |0⟩
, with 0 the ground state (no excited electron or hole particle), and + i c ( + i h ) creation operator of the electron (hole) state "i". With the algebra in the second quantization one obtains the secular equation (Appendix C, Eqs. (C. 10-12)):
∑_{k,l} [ (E^e_i + E^h_j − E) δ_ik δ_jl + V^{eh}_{ijlk} ] C_kl = 0 , (4)
[Figure: band-lineup profiles (confinement potentials V_c and V_v versus r) for the ZnTe/ZnSe/ZnS structure, panel (b).]
Optical absorption
We find the exciton linear absorption coefficient for a single QD at low temperatures is given by (Appendix C, Eqs. C. (1-9, 14-17)):
α_QD(ω) = α_0(ω) ∑_τ | ∑_{i,j} C^{(τ)}_{ij} ⟨ψ^e_i|ψ^h_j⟩ |² γ / [ (ħω − E_τ)² + γ² ] , (5)
where α_0 includes the bulk dependence of the material parameters (see Appendix C, Eq. (C.17)), E_τ is the energy of the exciton in state τ, and γ is the homogeneous electronic broadening. The multishell character is reflected by the excitonic optical matrix element
f_{0τ} = | ∑_{i,j} C^{(τ)}_{ij} ⟨ψ^e_i|ψ^h_j⟩ |² , (6)
a quantity that can be related to the exciton oscillator strength. As the parameters entering Eq. (5) characterizes the QD, we name it single QD absorption coefficient (see Appendix C, Eq. (C.17)).
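As a small illustration of how Eq. (5) is evaluated in practice (a sketch with made-up exciton energies, matrix elements and broadening, not the parameters of the paper, and omitting the α_0 prefactor):

```python
import numpy as np

gamma = 0.010                                   # homogeneous broadening (eV), illustrative
E_tau = np.array([2.10, 2.16, 2.25])            # exciton energies E_tau (eV), illustrative
f_tau = np.array([1.00, 0.40, 0.15])            # excitonic matrix elements f_{0,tau}, illustrative

def alpha_QD(hw):
    """Relative single-QD absorption of Eq. (5): a sum of Lorentzians weighted by f_tau."""
    hw = np.atleast_1d(hw)[:, None]
    return np.sum(f_tau * gamma / ((hw - E_tau) ** 2 + gamma ** 2), axis=1)

hw = np.linspace(2.0, 2.4, 5)
print(np.round(alpha_QD(hw), 2))
```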
For the absorption in colloidal QD solutions, the measured quantity in experiment, we introduce the absorption coefficient:
( ) [ ] QD R QD QD sol e c R c α α 2 3 / 2 2 3 / 1 1 4 1 ln − − − − = ,(7)
where QD c is solution concentration. As the parameters entering Eq.
Thus, comparing Eq. (5) and Eq. (8), we obtain that α_sol(τ), the colloidal absorption coefficient corresponding to the exciton state τ at resonance, in the limit of dilute solutions, is proportional to f_{0τ}.
Results and discussion
Energy structure
In what follows, we apply the above theory to predict the energy structure and the fundamental excitonic absorption (FEA) of CSS QDs. In principle, the continuum strain approach works for thicker shells, consequently, we model the 'giant' CSS QDs from Refs. [3,16]. We consider spherical type-I CdSe/CdS/ZnS QDs with core radius of 2nm and the middle shell thickness of 11ML (hereafter denoted as I QD ) and spherical type-II ZnTe/ZnSe/ZnS QDs with core radius of 2.2nm and the middle shell thickness of 6ML (hereafter denoted as II QD ). We take in our estimation shell thickness large enough to keep premises of continuum elasticity approach (in the modeling it is at least of 6ML). I QD is interesting in applications for its optical and chemical stability [2,3], and II QD , following the excited charge separation, for its photovoltaic properties [16].
In the first step of the application of the theory, we obtain the strain field of the structures. In the second step, we characterize the single particle states in a one-band approximation with the potential band offset built with the strain band lineups. In the II-VI semiconductor heterostructures, owing to the large bandgap, the CB-VB admixture is insignificant. In addition, for such heterostructures, crossing of the heavy and light holes is expected to occur only beyond the first few excited states [17]. Thus, we first find the form of the spherical Bessel functions f_{1,2}^{A,B,C} (see Appendix B; material parameters in Table B.2). For the monolayer thickness, we take the values 0.4 nm for CdS [15], 0.33 nm for ZnSe [19], and consider 0.33 nm for ZnS. In the calculation, we find that the first four hole states are heavy hole states and the fifth is the first light hole energy level, for both QD_I and QD_II, and the two-band approximation is justified. This characteristic (common to the wide bandgap semiconductor heterostructures) guarantees an accurate prediction of the first several single particle states by a two-band effective mass approach. The results obtained in this applicative step can be summarized as follows:
(i) Expected red-shift with the first shell thickness is found. The energy structure calculation shows the single particle fundamental interband transition (in absence of excitonic effect and for infinite well:
E_nL^α = ℏ² k_nL² / (2 m_α^* R²)
with k_nL the n-th zero of the spherical Bessel function of order L, and α = e, h for electron and hole, respectively. For the fundamental state L = 0, n = 1.) increases with the coating: from 302 nm for a CdSe spherical nanocrystal with (core) radius 2 nm to 587 nm for QD_0^I (notation for the QD with core radius of 2 nm and CdS shell thickness of 11 ML); from 320 nm for ZnTe spherical nanocrystals with (core) radius 2.2 nm to 529 nm for QD_0^II (notation for the QD with core radius of 2.2 nm and ZnSe shell thickness of 6 ML).
(ii) Expected slight variation of the single particle energy with the ZnS shell thickness is obtained. The single particle energies are shown in Fig. 2 for QD_I and QD_II as a function of the total radius R. We obtain that the single particle fundamental interband transition is blue-shifted upon coating, a tendency which asymptotically reduces for both structures.
(iv) By varying the shell thickness of both the middle and the outermost shell, the strain field model predicts that a band lineup that favors the hole escape from the core is not possible for either QD_I or QD_II. Thus, we find that, by a mechanism based on the epitaxial strain, the hole cannot be extracted from the core of QD_II.
It is worth mentioning that in our structures the interlevel single particle spacing in the energy domain we analyze (see Fig. 2) is larger than 10 meV, and consequently larger than the estimated EHEI of 0.1 meV for large QDs. Thus, the exciton approximation with the Coulomb electron-hole interaction as the leading term and neglected EHEI is justified. For the estimation of FEA, we take the basis set of configurations corresponding to the first four energy levels obtained from Eq. (3).
The configurations have a similar structure for type-I and type-II CSS QDs. These configurations are obtained from the (2L+1)-fold degenerate electron and hole states with (n, L) = (1, 0), (2, 1), (3, 2), plus the (4, 3) electron states and the (4, 0) hole state. There are 160 configurations, obtained by replacing one of the (1+3+5+1) VB orbitals with one of the (1+3+5+7) CB orbitals. In the basis we work with, the convergence of the excitonic ground state (X_g) energy is checked.
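The configuration counting above can be checked directly; the short sketch below enumerates the (2L+1)-fold degenerate orbitals of the listed (n, L) levels and confirms the 10 × 16 = 160 configurations.

```python
# Hole (VB) levels: (n, L) = (1,0), (2,1), (3,2), (4,0) -> 1+3+5+1 = 10 orbitals
# Electron (CB) levels: (n, L) = (1,0), (2,1), (3,2), (4,3) -> 1+3+5+7 = 16 orbitals
hole_levels = [(1, 0), (2, 1), (3, 2), (4, 0)]
electron_levels = [(1, 0), (2, 1), (3, 2), (4, 3)]

hole_orbitals = [(n, L, m) for (n, L) in hole_levels for m in range(-L, L + 1)]
electron_orbitals = [(n, L, m) for (n, L) in electron_levels for m in range(-L, L + 1)]

# One electron-hole pair per configuration.
configurations = [(e, h) for e in electron_orbitals for h in hole_orbitals]
print(len(hole_orbitals), len(electron_orbitals), len(configurations))  # 10 16 160
```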
Excitonic effect and optical absorption
Next, we consider the excitonic effect and try to explain the chemical and optical stability reported for CSS QDs. The optical stability manifests itself as weakly affected absorption and improved fluorescence quantum yield in overcoated or thick-shell CS QDs, such as CdSe/CdS [2,3,20], CdZnSe/ZnSe [21], and ZnTe/ZnSe [16]. To evaluate the excitonic effect with our model we need the value of the screened dielectric constant. For this estimation, we set its value by fitting the experimental FEA reported for thick multilayer nanocrystals, namely, Ref. [3] for QD_I and Ref. [16] for QD_II.
We first consider modeling the optical absorption of QD_I. FEA for QD_0^Ia (notation for a CdSe/CdS QD with core radius of 2 nm and CdS shell thickness of 19 ML) is about 620 nm (Ref. [3]) and it reflects the core absorption. The volume-weighted average of the static dielectric constants of the two components is ε_av(QD_0^Ia) = 8.9. By our excitonic model for QD_0^Ia, one obtains that the homogenized screened dielectric constant that fits this absorption line is ε(QD_0^Ia) = 7.2
(smaller in the QD than in the corresponding bulk material, in accord with the general agreement; see, e.g., Ref. [22] or Ref. [23]). Next, to analyze the ZnS coating effect on the QD_I absorption, we heuristically estimate the dielectric constant in QD_x^I by ε(QD_x^I) = ε(QD_0^Ia) ε_av(QD_x^I)/ε_av(QD_0^Ia) and compute the FEA variation with the ZnS shell thickness. Fig. 3a shows that E_FEA(QD_x^I) (the FEA of QD_x^I) is blue-shifted compared to E_FEA(QD_0^I) and asymptotically blue-shifted with the ZnS overcoating. Thus, a relatively weak change of E_FEA(QD_x^I)
with the ZnS overcoating as reported by experiment [3] is obtained. This behavior is primarily the result of the lattice-mismatch strain and secondly of the excitonic effect.
We continue the discussion and consider modeling the optical absorption of QD_II. FEA for QD_0^IIa (notation for a ZnTe/ZnSe QD with core radius of 2.2 nm and ZnSe shell thickness of 6 ML) is about 570 nm [16]. We obtain that the homogenized screened dielectric constant that fits this absorption line is ε(QD_0^IIa) = 7. The volume-weighted average of the dielectric constants of the two components is ε_av(QD_0^IIa) = 8.9. For QD_x^II, we consider again ε(QD_x^II) = ε(QD_0^IIa) ε_av(QD_x^II)/ε_av(QD_0^IIa) and analyze the ZnS shell thickness effect on FEA. The effect on FEA is weak, as reported by experiment [16]. Differently from the QD_x^I case, the single QD absorption coefficient decreases by one order of magnitude with ZnS coating and less with overcoating (see Fig. 3b). This is the result of the orbital overlap decreasing with the ZnS coating or overcoating. We predict that α_sol of dilute solutions also strongly decreases with the ZnS coating, but, as in the type-I case, it is almost constant with the ZnS overcoating, see Fig. 3b. This dependence can be related to the protecting effect of the ZnS overcoating reported by experiment [16]. From Eq. (6)
Emission considerations
Emission modeling is a more complex process. The only emission related quantity found in our model, to the band-edge [25,26]. Thus, the luminescence is on/off when: (i) by Auger recombination of the photoexcited carriers on the trapping states the QD becomes neutral/charged [27] or (ii) charge fluctuations make the trapping sites inactive/active [20,28]. The on/off dynamics is explained by statistic models, such as the spectral diffusion models [29]. Relevant for the present work is that in the experiment one observes the light intensity fluctuation in CS QDs is reduced with the shell thickness [20]. According to the earlier mentioned localization of the photoexcited carriers in the I QD with the ZnS coating, our model best fits the model assuming a tunneling barrier between the photoexcited carrier and the trap states, similar to that proposed in Refs. [28] and [30]. QD core coating induces larger photoexcited carrier-trap separation and determines lowering of the trapping probability. In QDs with thicker shell(s), the tunneling is less probable and a continuous luminescence (nonblinking) is expected under continuous photoexcitation of QDs. Thickness irregularity of the shell(s) obtained as the chemical synthesis result can be a physical factor that induces the random ionization and neutralization of the trapping states.
Conclusions
In this investigation, we developed a continuum elasticity model of the strain field for isotropic, homogeneous, finite-size multilayer structures and applied it to large spherical CSS QD nanocrystals. The quantum treatment based on the proposed strain field model can explain the optical stability of the overcoated core-shell QDs. According to our estimates, the measured absorption coefficient in colloidal QD solutions is practically insensitive to the overcoating for the core-shell QDs analyzed. The most important finding is the one related to the localization of the photoexcited carriers. According to our predictions, in both the thick-shell QD_I and QD_II we analyzed, the photoexcited carriers are moved away from the surface and interfaces. The hole is strongly confined in the core. The electron is less confined than the hole in the core of QD_I, but it is strictly localized in the middle shell of the QD_II we discussed. This shielding of the photoexcited carriers from the surface and interfaces plays an important role in explaining the nonradiative recombination in core-shell(s) QDs. Thus, our model suggests that the nonblinking of thick-shell QDs is the result of the low tunneling rate through the barriers separating the carriers from the surface- or interface-located traps, which would lead to lower Auger recombination.
In essence, our continuum model of the strain field for homogeneous and isotropic elastic multilayer structures, when implemented in the specific quantum mechanics of the multishell semiconductor nanocrystals, is able to predict the main characteristics of the fundamental absorption in the thick-shell QD_I and QD_II we considered. We believe it is a useful framework in which improved modeling (multi-band treatment, multi-exciton generation, electron-phonon interaction, blinking statistics, and relaxation dynamics) can overcome the present computational limits of first-principles calculations and give a more accurate description of the complex optical processes in multilayer QDs.
Acknowledgements. Thanks are due to Yia-Chung Chang for useful discussions.
Appendix A: Strain field
The displacement field in spherical coordinates is of the form u = (u_r(r), 0, 0), and consequently it is irrotational. To find the strain tensor from the equilibrium equation grad div u = 0 [1] for spherical multilayer structures within the continuum elasticity model, we
seek solutions of the form
u_r^X(r) = X_1 r + X_2 / r²
with: X = A, X 1 = A 1 , X 2 = A 2 =0 for core, X = B, X 1 = B 1 , X 2 = B 2 for the first shell, X = C, X 1 = C 1 , X 2 = C 2 for the second shell, etc. With Eqs.
(1) from the main text, we compute the strain tensor components from the above form of u(r), and
Eqs. (A.1)-(A.8) give the components e_rr, e_θθ = e_φφ and the hydrostatic strain e_hyd in the core (A) and in the two shells (B, C), expressed through S_A, S_B, S_C, the mismatches ε_1, ε_2, the radii r_1, r_2, R, and the elastic constants E_A,B,C and ν_A,B,C.
If the elastic constants are the same,
E_A = E_B = E_C = E and ν_A = ν_B = ν_C = ν, Eqs. (A.1)-(A.8) reduce to the corresponding equal-constant expressions for e_rr, e_θθ = e_φφ, and e_hyd in the core and shells, Eqs. (A.9)-(A.11). The envelope-function Schrödinger equation is
[p²/2m(r) + V(r)] ψ(r) = E ψ(r),   (B.1)
where m(r)
is the carrier r-dependent effective mass. The solution separates,
ψ_nlm(r) = R_l(r) Y_l^m(θ, φ)
and the radial one-particle Schrödinger equation reads:
(d/dr)(r² dR_l/dr) − l(l+1) R_l + (2 m r²/ℏ²) [E − V(r)] R_l = 0.   (B.2)
To solve Eq. (B.2), we introduce the notations,
ρ = k_i · r and R_l(r) = v_l(ρ), where k_i² = 2 m_i |E − V_i| / ℏ², with index i labelling the core region (0 ≤ r < r_1), the first shell (r_1 ≤ r < r_2), and the outermost shell (r_2 ≤ r < R)
, are linear combinations of these functions as follows:
R_l^A(r) = A_1 f_1^A(r),                  0 ≤ r < r_1,
R_l^B(r) = B_1 f_1^B(r) + B_2 f_2^B(r),   r_1 ≤ r < r_2,   (B.4)
R_l^C(r) = C_1 f_1^C(r) + C_2 f_2^C(r),   r_2 ≤ r < R,
with A_1^l, B_{1,2}^l, C_{1,2}^l
constants, and with the superscript η standing for A, B, C; the functions f_{1,2}^η are given explicitly in Table B.1, for both electrons and holes, for the two types of core/shell/shell (CSS) quantum dots (QDs) we considered. Imposing the physical conditions of continuity (the index l omitted in the notation),
R^A(r_1) = R^B(r_1),   R^B(r_2) = R^C(r_2),   R(R) = 0,
for electron and hole in each region, according to Eq. (3) from the main text, for the two types of CSS QDs considered, CdSe/CdS/ZnS and ZnTe/ZnSe/ZnS, one obtains the single particle energies. In Fig. B.1, we represent the charge density (orbitals) for the two types of nanocrystal. We can see that in ZnTe/6ZnSe/ZnS the carrier separation is enhanced. The shape of the orbitals is drawn according to the chosen quantization axis z.
Footnotes to Table B.2: a Ref. [4]; b Ref. [5]; c Ref. [6]; d Ref. [7]; e Ref. [8]; f Ref. [9]; g Ref. [10]; h Ref. [11]; i calculated with [12]; j Ref. [13]; k Ref. [14]; n Ref. [15]; p Ref. [16].

Table B.1 (regions: A, 0 ≤ r < r_1; B, r_1 ≤ r < r_2; C, r_2 ≤ r < R):

CdSe/CdS/ZnS, electron or hole:
  f_1^A = j_l(k_A r);
  f_1^B = j_l(k_B r) if E > V_{c,v}^B, i_l(k_B r) if 0 < E < V_{c,v}^B;
  f_2^B = y_l(k_B r) if E > V_{c,v}^B, k_l(k_B r) if 0 < E < V_{c,v}^B;
  f_1^C = j_l(k_C r) if E > V_{c,v}^C, i_l(k_C r) if 0 < E < V_{c,v}^C;
  f_2^C = y_l(k_C r) if E > V_{c,v}^C, k_l(k_C r) if 0 < E < V_{c,v}^C.

ZnTe/ZnSe/ZnS, electron:
  f_1^A = j_l(k_A r) if E > V_c^A, i_l(k_A r) if 0 < E < V_c^A;
  f_1^B = j_l(k_B r);   f_2^B = y_l(k_B r);
  f_1^C = j_l(k_C r) if E > V_c^C, i_l(k_C r) if 0 < E < V_c^C;
  f_2^C = y_l(k_C r) if E > V_c^C, k_l(k_C r) if 0 < E < V_c^C.

ZnTe/ZnSe/ZnS, hole:
  f_1^A = j_l(k_A r);
  f_1^B = j_l(k_B r) if E > V_v^B, i_l(k_B r) if 0 < E < V_v^B;
  f_2^B = y_l(k_B r) if E > V_v^B, k_l(k_B r) if 0 < E < V_v^B;
  f_1^C = j_l(k_C r) if E > V_v^C, i_l(k_C r) if 0 < E < V_v^C;
  f_2^C = y_l(k_C r) if E > V_v^C, k_l(k_C r) if 0 < E < V_v^C.
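A sketch of how Table B.1 and the matching conditions can be turned into single-particle energies. The band offsets and radii below are placeholders (the electron masses are the values of Table B.2), the interface conditions are assumed here to be the usual BenDaniel-Duke ones (continuity of R and of R'/m*), and the levels are located as zeros of the 5x5 boundary determinant; this is an illustration, not the paper's actual parameter set.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, spherical_in, spherical_kn
from scipy.optimize import brentq

HB2_2M0 = 0.0381  # hbar^2/(2 m0) in eV nm^2

# Placeholder three-region electron potential (eV) and effective masses (m0),
# loosely CdSe/CdS/ZnS-like; radii in nm.
V = [0.0, 0.30, 1.00]          # core, middle shell, outer shell offsets (assumed)
m = [0.15, 0.22, 0.34]         # electron masses from Table B.2
r1, r2, R = 2.0, 6.4, 7.0
l = 0                          # s states

def f_pair(region, r, E):
    """Return (f1, f1', f2, f2') at radius r in the given region (as in Table B.1)."""
    k = np.sqrt(m[region] * abs(E - V[region]) / HB2_2M0)
    if E > V[region]:
        return (spherical_jn(l, k * r), k * spherical_jn(l, k * r, derivative=True),
                spherical_yn(l, k * r), k * spherical_yn(l, k * r, derivative=True))
    return (spherical_in(l, k * r), k * spherical_in(l, k * r, derivative=True),
            spherical_kn(l, k * r), k * spherical_kn(l, k * r, derivative=True))

def boundary_det(E):
    """Determinant of the matching conditions for the coefficients (A1, B1, B2, C1, C2)."""
    fA, dA, _, _ = f_pair(0, r1, E)
    f1B, d1B, f2B, d2B = f_pair(1, r1, E)
    g1B, e1B, g2B, e2B = f_pair(1, r2, E)
    f1C, d1C, f2C, d2C = f_pair(2, r2, E)
    g1C, _, g2C, _ = f_pair(2, R, E)
    M = np.array([
        [fA,        -f1B,        -f2B,        0.0,         0.0],        # R continuous at r1
        [dA / m[0], -d1B / m[1], -d2B / m[1], 0.0,         0.0],        # R'/m continuous at r1
        [0.0,        g1B,         g2B,       -f1C,        -f2C],        # R continuous at r2
        [0.0,        e1B / m[1],  e2B / m[1], -d1C / m[2], -d2C / m[2]],
        [0.0,        0.0,         0.0,         g1C,         g2C],       # R(R) = 0
    ])
    return np.linalg.det(M)

# Scan for sign changes of the determinant; skip brackets containing a band offset.
E_grid = np.linspace(0.01, 0.95, 400)
d = np.array([boundary_det(E) for E in E_grid])
offsets = np.array(V)
levels = [brentq(boundary_det, E_grid[i], E_grid[i + 1])
          for i in range(len(E_grid) - 1)
          if d[i] * d[i + 1] < 0
          and not np.any((offsets > E_grid[i]) & (offsets < E_grid[i + 1]))]
print("electron s levels (eV):", np.round(levels, 4))
```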
Regarding the radial distribution probability density of the photoexcited charges, we have
∑ ∫ = Ω = 1 , 2 2 2 ) ( ) ( ) ( ) ( j i i X ij X d r C r D g∑ = = 1 , 2 ) ( ) ( j i i i X ij X r C r g g α α α ψ ψ , (B.6)
where ψ_i^α are the envelope wave functions and α = e, h; γ is the homogeneous electronic broadening. In the applicative part of the main text (see Eq. (5)), we estimate E_p = 20.4 eV for CdSe/11CdS/ZnS and E_p = 19.1 eV for ZnTe/6ZnSe/ZnS from Ref. [10].
The colloidal absorption coefficient is obtained from a probabilistic hitting target model (C.20)
of core+middle shell+outermost shell), we impose the following boundary conditions: (i) continuous stress at the interfaces, (ii) zero pressure outside QD, and (iii) shrink-fit induced by the lattices mismatch (which connects the continuum elastic and the discrete crystalline approaches). The corresponding algebraic equations are: relative lattice mismatches, and A, B, and C denote the core, middle, and outermost shell, respectively. Detailed expressions of the strain tensor components are presented in Appendix A. When adapting our analytic expressions for two
relative dielectric constant.
( 7 )
7characterizes the colloidal QD solutions, we name it colloidal absorption coefficient. Derivation is given in Appendix C (Eqs. (B. 18-20)
and hole confined by the above II I V , potentials. The analytical results obtained for the radial functions are presented in Appendix B(Table B.1). Then, we compute the single particle energy of the electron and hole by using in Eq. carrier type. In the approximation of unmixed light and heavy hole states, the bulk heavy-hole hh m and light-hole lh m masses assumed by the parabolic dispersion of the one-band model are[18]: parameters, and 0 m is the electron mass. As the limit of large nanocrystals is envisaged, we consider the bulk values of the material parameters (their values are presented in Appendix B
QD_x^II (notation for ZnTe/6ZnSe/ZnS with x ML of ZnS). We name the coating with x = 6 ML as ZnS coating and x > 6 ML as ZnS overcoating. (iii) Specific localization of the photoexcited electron and hole is found. Thus, the electron and hole are located in the core for QD_I, while the hole is located in the core and the electron in the shell for QD_II. For a more comprehensive picture of the carrier localization, useful in engineering such nanocrystals, in Appendix B (Fig. B.1) we represent the charge density (orbitals) for the two nanocrystal types.
Fig. 2. Energy of the first four electron (green color) and hole (red color) single particle states in CSS QDs of total radius R: (a) CdSe/11CdS/ZnS QDs; (b) ZnTe/6ZnSe/ZnS QDs. Continuous lines with up (down) triangle symbols show the band lineup in the presence of lattice-mismatch strain for electron (hole) states: ZnTe, blue; ZnSe, orange; ZnS, violet in figure (a), and CdSe, blue; CdS, orange; ZnS, violet in figure (b). The insets are for lineup guidance. The zero reference is the ZnTe CB edge, see Fig. 1.
QD absorption from Eq. (5) for FEA) obtained within our ideal model is decreasing with either ZnS coating or overcoating. On the other hand, the colloidal absorption coefficient, also shown inFig. 3afor dilute colloidal QD solutions, is weakly changed by either ZnS coating or overcoating. This is in accord with the reported optical stability of such CSS QD[3]. From Eq. (6) we obtain g X f 0 is very slightly varying with either ZnS coating or overcoating.
Fig. 3. FEA (black circles; red-bordered labels give the energy expressed in units of nm). The parameter set we use also reproduces the absorption line of approximately 500 nm associated in Ref. [3] with the CdS bulk bandgap (the s-s orbital combination having probability 0.98).
shifted with the ZnS overcoating. Thus, our modeling confirms that, as a result of the lattice-mismatch strain and excitonic effect, as in the QD_x^I case, the influence of ZnS overcoating on
the ZnS overcoating. As expected, similarly to the type-I CSS QD, to the s-s orbital combination having probability of 0.98, with the photoexcited electron located in the middle ZnSe shell and hole located in the core. By our model, we can obtain useful information about the photoexcited carriers localization and optical absorption characteristics. The results regarding photoexcited carriers localization are presented in Appendix B (Table B.3). In Fig. 4, we also represented the radial distribution probability density of the photoexcited charges in the excitonic ground state (see details in Appendix B, Eqs. (B. 5-6)). We find the ZnS either coating or overcoating effect on radius expectation value is not significant for I QD (ZnS coating weakly moves the electron to the center of QD). On the other hand, for II QD , the hole radius expectation value is practically not affected by the ZnS either coating or overcoating, but the ZnS coating or overcoating has a strong effect on the electron localization, namely, the electron moves to the middle of the ZnSe shell. Thus, according to our model in both I QD and II QD the electron and hole is not confined in the proximity of the interfaces and surface.
Fig. 4. Photoexcited carrier distribution probability density for electron (e) and hole (h): (a) CdSe/11CdS/xZnS CSS QD; (b) ZnTe/6ZnSe/xZnS CSS QD. The hole is localized in the core, while the electron is localized in the core for the type-I CSS QD and in the middle shell for the type-II CSS QD. Current continuity is apparent in the represented distribution probability.
to the rate of radiative exciton recombination by spontaneous emission, see, e.g., Ref.[24]) is practically not changed by either ZnS coating or overcoating. If this finding could explain the stable emission intensity observed in ultra-thick-shell CdSe/CdS[20], it cannot explain the blinking suppression induced by coating in such nanocrystals. The surface and interface impurities and interface defects and, in addition, very likely as result of the synthesis, the irregular thickness of the shells can not be neglected any longer. They act as trapping states of the photoexcited carriers. Several mechanisms explain blinking in QDs by competition between the intercept of the photoexcited carrier by trapping states and its relaxation
expressions for two shells are adapted to the case of one shell or core embedded in infinite matrix, we recover the results of previous works, Ref.[2] and Ref.[3], respectively. If the core (or CS QD) is embedded in infinite matrix we get that the matrix is not strained.Appendix B: One-band Schrödinger equationSchrödinger equation for the envelope wave function within the one-band effective Hamiltonian is
standing for the materials A (core), B (first shell), C (second shell), in the separate cases of electron and hole. Handling the substitution, Eq. (B.2) reduces to the spherical Bessel differential equation: sign corresponds to E > V i and the lower to E < V i , and the solutions of Eq. (B.
superscript η stands for A, B, C, and the indexing quantum number l is omitted for the f functions). They are given explicitly in
valid for a general form of the three-region step confinement potential (see main text, Eq.(3)). From condition of normalization
Fig. B.1. Electron (green color) and hole (red color) orbitals for type-I CdSe/11CdS/12ZnS (c-h) and type-II ZnTe/6ZnSe/12ZnS (i-n). (a) and (b) show the nanocrystals without orbitals for the type-I and type-II CSS QDs, respectively. The figures are denoted according to the following quantum numbers: (c, i) n=1, l=0, m=0; (d, j) n=2, l=1, m=0; (e, k) n=2, l=1, m=±1; (f, l) n=3, l=2, m=0; (g, m) n=3, l=2, m=±1; (h, n) n=3, l=2, m=±2. The outermost shell is not represented in figures (c-n) for a better view of the orbitals.
exciton ground state, g X . The radius expectation value of the photoexcited electron and hole is obtained with
in contact and assume each QD is an absorber of volume QD V . At the low power densities of the field the assumption of a linear relation between the polarization and the electric field is a good approximation for the description of excitonic optical absorption. It can be obtained using either first order complex susceptibility or Fermi's golden rule, by treating the QD-field interaction as a perturbation. Next, we consider Fermi's golden rule treatment for the excitonic absorption. In macroscopic nonconducting media the gradient of the energy density of in z direction is (see. e.g., Ref. [17]): of energy per time unit can be written by considering the loss of energy in a single is the rate of change of the number of photons. From Eqs. (C.. (7.14) in Ref. [18]), then the intensity of the electromagnetic wave is (Eq. (7.13) in Ref. Eq. (7.5) in Ref. [18]). From Eq. (C.3) and the derivative of Eq. (C.4) one obtains R can be obtained following the standard textbook derivation of the Fermi golden rule. Next, for the safety of correct factors of the Dirac delta function in the expression of single QD absorption coefficient, we point out the steps of this derivation. Thus, the semi-classical QD-field interaction is written as the momentum of electrons that fill the VB at T=0K. Then, by applying the time-dependent perturbation theory, in the limit of large time and by using the Dirac delta function definition ( )
integrals of the Coulomb matrix elements eh ijlk V factorize for electron and hole and they are computed analytically by using Gaunt's formula.Exciton formation implies the initial state is the ground state, the last equality is obtained by making use of the slow spatial variation of the envelope wave functions over regions of the unit cell size and the orthonormality of the Bloch cell wave functions. By introducing the Kane momentum matrix element, polarization unit vector, e, parallel to the quantization axis, z, for example,
(
the target is the QD) by neglecting the light scattering. We divide the solution volume in where D is the effective path length of the light passing through a single QD) and we consider a light beam which propagates perpendicular to the grid cubes surface.We write the light intensity after it passes through a large number of grid cubes aligned parallel to the direction of light propagation. The light absorbed by the first grid cube is probabilistically calculated as ( is the probability of the light beam to hit the QD inside the grid cube. After light passes through the first grid cube is the light path length, iterate N times Eq. (C.39)with the light intensity absorption written with Eq. (C.19) and find the absorption coefficient for a dilute solution of colloidal QDs is given by
(Table B.1); the f are Bessel functions given explicitly in Appendix B, Table B.1; the indices l for the order of the Bessel functions and the star for the effective mass are omitted, and the prime is used to denote the first radial derivative.
Fig. 1. Schematic band lineups in CSS QDs in the presence of strain: (a) V_I(r) for CdSe/CdS/ZnS; (b) V_II(r) for ZnTe/ZnSe/ZnS.
From the strain components one then finds the stress tensor σ_ij by applying Hooke's law [1],
σ_ij = [E/(1+ν)] [ e_ij + ν/(1−2ν) e_ll δ_ij ],
where E is the Young modulus and ν is the Poisson ratio. For two-shell spherical structures, we obtain the corresponding non-zero components of the strain tensor; for the core, e_rr^A = e_θθ^A = e_φφ^A.
Table B.1. The explicit spherical Bessel functions f_{1,2}^{A,B,C}.
Table B.2. Material parameters used in the work.

                    ZnTe       ZnSe       CdSe       CdS        ZnS
a (Å)               6.08 a     5.65 a     6.05 b     5.82 b     5.40 c
E (10^10 N m^-2)    4.17 a     4.51 a     2.87 d     3.26 d     5.55 c
ν                   0.363 a    0.376 a    0.408 d    0.410 d    0.384 c
E_gap (eV)          2.25 e     2.69 e     1.74 e     2.49 e     3.61 e
E_v (eV)            -5.34 e    -6.07 e    -6.00 e    -6.42 e    -6.6 e
a_v (eV)            0.79 f     1.65 f     0.9 b      0.4 b      2.31 f
a_c (eV)            -5.83 f    -4.17 f    -2.00 b    -2.54 b    -4.09 f
γ_1                 3.74 g     3.77 g     3.33 h     4.11 d     2.54 g
γ_2                 1.07 g     1.24 g     1.11 h     0.77 d     0.75 g
γ_3                 1.64 g     1.67 g     1.11 h     1.53 d     1.09 g
m_lh (i)            0.152      0.148      0.18       0.15       0.225
m_hh (i)            1.092      1.292      0.90       0.60       1.582
m_el                0.20 j     0.21 j     0.15 k     0.22 k     0.34 n
ε (p)               7.4        9.1        10         8.9        9
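As a small worked example of how the lattice constants of Table B.2 enter the strain model, the sketch below computes relative lattice mismatches at the two interfaces of the structures considered; the sign convention ε = (a_outer − a_inner)/a_inner is an assumption made here for illustration.

```python
# Lattice constants a (Angstrom) from Table B.2.
a = {"ZnTe": 6.08, "ZnSe": 5.65, "CdSe": 6.05, "CdS": 5.82, "ZnS": 5.40}

def mismatch(inner, outer):
    """Relative lattice mismatch between an inner layer and the layer grown on it."""
    return (a[outer] - a[inner]) / a[inner]

for core, shell1, shell2 in [("CdSe", "CdS", "ZnS"), ("ZnTe", "ZnSe", "ZnS")]:
    eps1 = mismatch(core, shell1)    # core / middle-shell interface
    eps2 = mismatch(shell1, shell2)  # middle-shell / outer-shell interface
    print(f"{core}/{shell1}/{shell2}: eps1 = {eps1:+.3%}, eps2 = {eps2:+.3%}")
```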
Table B.3. Radius expectation value of the electron, r_e(X_g), and hole, r_h(X_g), for the type-I CdSe/CdS/ZnS QD and the type-II ZnTe/ZnSe/ZnS QD in the ground excitonic state, expressed in nm.

x (ML)                  0        6        12       18
Type-I   r_h(X_g)       1.146    1.152    1.155    1.156
Type-I   r_e(X_g)       1.899    1.890    1.878    1.873
Type-II  r_h(X_g)       1.202    1.214    1.214    1.215
Type-II  r_e(X_g)       2.700    3.463    3.522    3.527

Appendix C: Linear absorption coefficient for single quantum dot
In the following derivation, we approximate the homogeneous medium as formed by QDs
The excitonic spinless QD Hamiltonian, formed by the kinetic part and the Coulomb electron-hole interaction, is written in the second quantization as in Ref. [19] (Eq. (C.10)), where the first and second terms stand for the electron and hole kinetic energies and the third for the electron-hole Coulomb interaction; c_m^+ (c_m) are the creation (annihilation) operators for the electron state m in the CB and h_m^+ (h_m) for the hole state m in the VB. The Coulomb matrix element is
V_mnpq^eh = [e²/(4π ε_0 ε_r)] ∫∫ dr_e dr_h ψ_m^e*(r_e) ψ_n^h*(r_h) |r_e − r_h|^{-1} ψ_p^h(r_h) ψ_q^e(r_e),   (C.11)
with ε_r the screened relative dielectric constant, which holds for a homogenized value. By applying all the bra configurations ⟨0| h_j c_i to the QD Schrödinger equation H_D Ψ = E Ψ, one obtains the secular equation (C.12), i.e., Eq. (4) of the main text.
[1] P.V. Kamat, J Phys Chem Lett 4 (2013) 908.
[2] F. Garcıa-Santamarıa, Y. Chen, J. Vela, R.D. Schaller, J.A. Hollingsworth, V.I. Klimov, Nano Lett 9 (2009) 3482.
[3] Y. Chen, J. Vela, H. Htoon, J.L. Casson, D.J. Werder, D.A. Bussian, V.I. Klimov, J.A. Hollingsworth, J Am Chem Soc 130 (2008) 5026.
[4] P. Sarkar, M. Springborg, G. Seifert, Chem Phys Lett 405 (2005) 103.
[5] J.B. Li, L.W. Wang, Appl Phys Lett 84 (2004) 3648.
[6] S.Y. Yang, D. Prendergast, J.B. Neaton, Nano Lett 10 (2010) 3156.
[7] J. Gronqvist, N. Sondergaard, F. Boxberg, T. Guhr, S. Aberg, H.Q. Xu, J Appl Phys 106 (2009) 053508.
[8] C. Pryor, J. Kim, L.W. Wang, A.J. Williamson, A. Zunger, J Appl Phys 83 (1998) 2548.
[9] M. Jones, S.S. Lo, G.D. Scholes, P Natl Acad Sci USA 106 (2009) 3011.
[10] L.D. Landau, E.M. Lifshitz, Theory of Elasticity, Pergamon, 1970.
[11] T.O. Cheche, V. Barna, Y.C. Chang, Superlattice Microst 60 (2013) 475.
[12] M. Grundmann, O. Stier, D. Bimberg, Phys Rev B 52 (1995) 11969.
[13] C.G. Van de Walle, Phys Rev B Condens Matter 39 (1989) 1871.
[14] J.W. Luo, G. Bester, A. Zunger, New J Phys 11 (2009) 123024.
[15] S. Brovelli, R.D. Schaller, S.A. Crooker, F. Garcia-Santamaria, Y. Chen, R. Viswanatha, J.A. Hollingsworth, H. Htoon, V.I. Klimov, Nat Commun 2 (2011) 280.
[16] J. Bang, J. Park, J.H. Lee, N. Won, J. Nam, J. Lim, B.Y. Chang, H.J. Lee, B. Chon, J. Shin, J.B. Park, J.H. Choi, K. Cho, S.M. Park, T. Joo, S. Kim, Chem Mater 22 (2010) 233.
[17] P.C. Sercel, K.J. Vahala, Phys Rev B 42 (1990) 3690.
[18] A. Baldereschi, N.O. Lipari, Phys Rev B 8 (1973) 2697.
[19] A.M. Smith, A.M. Mohs, S. Nie, Nat Nanotechnol 4 (2009) 56.
[20] C. Galland, Y. Ghosh, A. Steinbruck, J.A. Hollingsworth, H. Htoon, V.I. Klimov, Nat Commun 3 (2012) 908.
[21] X.Y. Wang, X.F. Ren, K. Kahen, M.A. Hahn, M. Rajeswaran, S. Maccagnano-Zacher, J. Silcox, G.E. Cragg, A.L. Efros, T.D. Krauss, Nature 459 (2009) 686.
[22] L.E. Brus, J Chem Phys 80 (1984) 4403.
[23] L.W. Wang, A. Zunger, Phys Rev Lett 73 (1994) 1039.
[24] Z. Hens, I. Moreels, J Mater Chem 22 (2012) 10406.
[25] P. Frantsuzov, M. Kuno, B. Janko, R.A. Marcus, Nat Phys 4 (2008) 519.
[26] X.S. Xu, Sci Rep 4 (2014) 5039.
[27] A.L. Efros, M. Rosen, Phys Rev Lett 78 (1997) 1110.
[28] M. Kuno, D.P. Fromm, H.F. Hamann, A. Gallagher, D.J. Nesbitt, J Chem Phys 112 (2000) 3117.
[29] J. Tang, R.A. Marcus, Phys Rev Lett 95 (2005) 107401.
[30] K.T. Shimizu, R.G. Neuhauser, C.A. Leatherdale, S.A. Empedocles, W.K. Woo, M.G. Bawendi, Phys Rev B 63 (2001) 205316.

Appendix references:
[1] L.D. Landau, E.M. Lifshitz, Theory of Elasticity, Pergamon, 1970.
[2] T.O. Cheche, V. Barna, Y.C. Chang, Superlattice Microst 60 (2013) 475.
[3] M. Grundmann, O. Stier, D. Bimberg, Phys Rev B 52 (1995) 11969.
[4] D. Belincourt, H. Jaffe, L.R. Shiozawa, Phys Rev 129 (1963) 1009.
[5] Y.H. Li, X.G. Gong, S.H. Wei, Phys Rev B 73 (2006).
[6] R.B. Hall, J.D. Meakin, Thin Solid Films 63 (1979) 203.
[7] S. Adachi, Properties of Group-IV, III-V and II-VI Semiconductors, John Wiley & Sons, Chichester, 2005.
[8] S.S. Lo, T. Mirkovic, C.H. Chuang, C. Burda, G.D. Scholes, Adv Mater 23 (2011) 180.
[9] C.G. Van de Walle, Phys Rev B Condens Matter 39 (1989) 1871.
[10] P. Lawaetz, Phys Rev B 4 (1971) 3460.
[11] D. Mourad, J.P. Richters, L. Gerard, R. Andre, J. Bleuse, H. Mariette, arXiv:1208.2188v2.
[12] A. Baldereschi, N.O. Lipari, Phys Rev B 8 (1973) 2697.
[13] J. Singh, Physics of Semiconductors and Their Heterostructures, McGraw-Hill, New York, 1993.
[14] J. Singh, Semiconductor Optoelectronics: Physics and Technology, McGraw-Hill, New York, 1995.
[15] B. Barman, K.C. Sarma, Chalcogenide Lett 8 (2011) 171.
[16] D.W. Palmer, www.semiconductors.co.uk, 2008.03.
[17] V. Mitin, V. Kochelap, M.A. Stroscio, Quantum Heterostructures: Microelectronics and Optoelectronics, Cambridge University Press, Cambridge, 1999.
[18] J.D. Jackson, Classical Electrodynamics, John Wiley & Sons, New York, 1999.
[19] P. Hawrylak, Phys Rev B 60 (1999) 5597.
[20] G. Grosso, G.P. Parravicini, Solid State Physics, Academic Press, Eastbourne, 2003.
| []
|
[
"COMPLETENESS OF THE BISPECTRUM ON COMPACT GROUPS *",
"COMPLETENESS OF THE BISPECTRUM ON COMPACT GROUPS *"
]
| [
"Ramakrishna Kakarala "
]
| []
| []
| This paper derives completeness properties of the bispectrum for compact groups and their homogeneous spaces. The bispectrum is the Fourier transform of the triple correlation, just as the magnitude-squared spectrum is the Fourier transform of the autocorrelation. The bispectrum has been applied in time series analysis to measure non-Gaussianity and non-linearity. It has also been applied to provide orientation and position independent character recognition, as well as to analyze statistical properties of the cosmic microwave background radiation; in both cases, the data may be defined on a sphere. On the real line, it is known that the bispectrum is not only invariant under translation of the underlying function, but in many cases of interest, it is also complete, in that the function may be recovered uniquely up to a translation from its bispectrum. This paper extends the completeness theory of the bispectrum to compact groups and their homogeneous spaces, including the sphere. The main result, which depends on Tannaka-Krein duality theory, shows that every function whose Fourier coefficient matrices are always nonsingular is completely determined by its bispectrum, up to a single group action. Furthermore, algorithms are described for reconstructing functions defined on SU (2) and SO(3) from their bispectra. | null | [
"https://arxiv.org/pdf/0902.0196v2.pdf"
]
| 18,425,284 | 0902.0196 | 7a2c7bcbd832dfbb35d42c4f215c3cb5a4ef48dc |
COMPLETENESS OF THE BISPECTRUM ON COMPACT GROUPS *
2 Feb 2009
Ramakrishna Kakarala
COMPLETENESS OF THE BISPECTRUM ON COMPACT GROUPS *
2 Feb 2009bispectrumtriple correlationpattern recognitioninvariantsTannaka-Krein duality AMS subject classifications 68T1043A7714L24
This paper derives completeness properties of the bispectrum for compact groups and their homogeneous spaces. The bispectrum is the Fourier transform of the triple correlation, just as the magnitude-squared spectrum is the Fourier transform of the autocorrelation. The bispectrum has been applied in time series analysis to measure non-Gaussianity and non-linearity. It has also been applied to provide orientation and position independent character recognition, as well as to analyze statistical properties of the cosmic microwave background radiation; in both cases, the data may be defined on a sphere. On the real line, it is known that the bispectrum is not only invariant under translation of the underlying function, but in many cases of interest, it is also complete, in that the function may be recovered uniquely up to a translation from its bispectrum. This paper extends the completeness theory of the bispectrum to compact groups and their homogeneous spaces, including the sphere. The main result, which depends on Tannaka-Krein duality theory, shows that every function whose Fourier coefficient matrices are always nonsingular is completely determined by its bispectrum, up to a single group action. Furthermore, algorithms are described for reconstructing functions defined on SU (2) and SO(3) from their bispectra.
1. Introduction. The triple correlation of a complex-valued function on the real line is the integral of that function multiplied by two independently-shifted copies of itself:
a 3,f (s 1 , s 2 ) = ∞ −∞ f * (x)f (x + s 1 )f (x + s 2 )dx. (1.1)
It is easily seen that the triple correlation does not change if the function is translated. The Fourier transform of triple correlation is the bispectrum. If the Fourier transform of f is denoted F = F {f }, then the bispectrum is
A 3,f (u, v) = F {a 3,f (s 1 , s 2 )} = F (u)F (v)F * (u + v) (1.2)
The triple correlation extends the concept of autocorrelation (denoted here a 2,f ) which correlates a function with a single shifted copy of itself, thereby enhancing the function's latent periodicities:
a 2,f (s) = ∞ −∞ f * (x)f (x + s)dx. (1.3)
The Fourier transform of the autocorrelation is A 2,f = F {a 2,f } = |F | 2 , which obviously lacks phase information and therefore provides a limited analysis of a function's structure. In contrast, eq. (1.2) shows that the bispectrum contains both magnitude and phase information, while still being invariant to translation. These properties suggest applications in invariant matching for pattern recognition. More importantly for matching, the bispectrum in many cases of interest is not only invariant but also provides a complete description of the function: the function may be reconstructed from it, up to a single unknown translation [11].
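A small numerical check of these statements in the discrete, cyclic setting (sums over Z_N replacing the integrals; this example is not from the paper): the two-dimensional DFT of the triple correlation equals F(u)F(v)F*(u+v), and both are unchanged when f is cyclically shifted.

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
f = rng.standard_normal(N)                     # a test signal on Z_N

def triple_correlation(f):
    N = len(f)
    a3 = np.zeros((N, N), dtype=complex)
    for s1 in range(N):
        for s2 in range(N):
            # sum_x conj(f(x)) f(x+s1) f(x+s2)
            a3[s1, s2] = np.sum(np.conj(f) * np.roll(f, -s1) * np.roll(f, -s2))
    return a3

def bispectrum(f):
    F = np.fft.fft(f)
    u, v = np.meshgrid(np.arange(len(f)), np.arange(len(f)), indexing="ij")
    return F[u] * F[v] * np.conj(F[(u + v) % len(f)])

# Bispectrum = Fourier transform of the triple correlation (eq. (1.2)).
assert np.allclose(np.fft.fft2(triple_correlation(f)), bispectrum(f))

# Translation invariance: a cyclically shifted copy has the same bispectrum.
g = np.roll(f, 5)
assert np.allclose(bispectrum(f), bispectrum(g))
print("bispectrum matches FFT of triple correlation and is shift invariant")
```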
The bispectrum was perhaps first investigated by statisticians examining the cumulant structure of non-Gaussian random processes [3]. It is well known that the third order cumulant, of which the triple correlation is a sample, has zero expected value for a Gaussian process. Hence, the bispectrum is a tool for measuring non-Gaussianity. The bispectrum was also independently studied by physicists as a tool for spectroscopy. H. Gamo [6] in 1963 described an apparatus for measuring the triple correlation of a laser beam, and also showed how phase information can be completely recovered from the real part of the bispectrum-up to sign reversal and linear offset. Gamo's is perhaps the first completeness result along the lines of what is explored in detail in this paper. However, Gamo's method implicitly requires the Fourier transform to never be zero at any frequency. This requirement was relaxed, and the class of functions which are known to be completely identified by their bispectra was considerably expanded, by the study of Yellott and Iverson [11]: for example, every integrable function with compact support is completely determined, up to a translation, by its bispectrum.
The statistical and physical applications described above are for data defined on Euclidean domains R n . The bispectrum also finds applications on non-Euclidean domains such as the sphere S 2 . For example, in astrophysics, the Cosmic Background Radiation (CBR) may be modelled as a function defined on a sphere. X. Luo [16] calculates the bispectrum of spherical CBR functions and examines the properties the bispectral coefficients have under cosmological scenarios such as inflation and late-time phase transitions.
The bispectrum's invariance and completeness have motivated researchers in pattern recognition to apply it for template matching and shape recognition. For example, R. Kondor [14,Ch. 8] shows how position and orientation independent optical character recognition can be accomplished by projecting the character from the plane on to the sphere, and subsequently using the bispectrum on the sphere for invariant matching. Kondor's results are discussed further in Section 5.
Despite the interest in applying the bispectrum for non-Euclidean domains, little has been published about important properties such as completeness. The contribution of this paper is to derive the completeness theory of the bispectrum for (noncommutative) compact groups and their homogeneous spaces. In order to construct the bispectrum on groups, we require concepts from harmonic analysis using group representation theory. Those concepts are presented in the next section. Then, a matrix form of the bispectrum which proves convenient for analysis is demonstrated. The matrix formulation allows a relatively simple criterion for completeness: it is shown that functions defined on compact groups that have nonsingular Fourier transform coefficients are completely determined by their bispectra. This result depends on the well-known Tannaka-Krein duality theory of compact group representations. The completeness result is extended to homogeneous spaces using the Iwahori-Sugiura duality theorem [9]. Reconstruction algorithms for functions defined on SU (2) and SO(3) in particular are described, expanding on material in a previous paper [13].
Preliminaries.
Let us review the basic concepts of representation theory for compact groups. For more details, the reader may consult Chevalley ([4], Chapter VI). Let G be a compact group. An n-dimensional unitary representation of G is a continuous homomorphism D of G into the group U (n, C) of n × n unitary matrices. The following operations are all defined on representations: complex-conjugation, direct sum, and tensor product. The complex conjugate of any representation is simply that representation D * whose matrices are complex-conjugates of those of D. Let D 1 , D 2 be respectively an n-dimensional and an m-dimensional representation; their direct sum D 1 ⊕ D 2 is the (n + m)-dimensional representation which maps each element g to the block-diagonal matrix D 1 (g) ⊕ D 2 (g) with D 1 (g) in the upper left corner and D 2 (g) in the lower right corner. Similarly, the tensor product D 1 ⊗ D 2 of D 1 and D 2 is an nm-dimensional representation whose matrices are n × n blocks, each block of size m × m, where for each g the (i, j)-th block is the matrix D 2 (g) multiplied by the (i, j)-th coefficient of D 1 (g). We reserve the symbol 1 for the trivial representation that maps all of G into the number 1.
We now define equivalence and reducibility of representations. Two representations D 1 and D 2 are equivalent if there exists a unitary matrix C such that D 1 (g) = CD 2 (g)C † for all g ∈ G, with † denoting the matrix conjugate transpose. A representation D is reducible if there exists a matrix C and two representations D 1 ,
D 2 such that D(g) = C [D 1 (g) ⊕ D 2 (g)] C † for all g ∈ G; otherwise D is irreducible. Every representation D is the direct sum of irreducible unitary representations.
The set G of all equivalence classes of irreducible unitary representations of G is called the dual object of G. Intuitively, G is the "frequency" domain of G. In general, G is not a group, unless G is abelian. In what follows, we denote elements of G, which are equivalence classes, by Greek letters such as α, β, etc. For each α ∈ G, let dim(α) denote the common dimension of the representations in α. Let {D α } α∈G denote any set of representations that contain exactly one member in each equivalence class in G. We call any such set a selection. A given selection has two properties, both consequences of the classical result known as the Peter-Weyl theorem: (1) there are at most countably many representations in any selection, and in particular there are finitely many if, and only if, G is finite, i.e., G is finite if and only if G is finite;
(2) the set of all matrix coefficients d pq α (·), where α ∈ G and 1 ≤ p, q ≤ dim(α), from any selection forms an orthogonal basis for the Hilbert space L 2 (G).
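To make these operations concrete for the smallest noncommutative compact group, the following sketch (not part of the paper) constructs the three irreducible unitary representations of the symmetric group S3 - trivial, sign, and the 2-dimensional standard representation obtained by restricting the permutation matrices to the plane orthogonal to (1, 1, 1) - verifies the homomorphism and unitarity properties, and uses character orthogonality to decompose the tensor product of the standard representation with itself.

```python
import numpy as np
from itertools import permutations

group = list(permutations(range(3)))                      # elements of S3

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0                                     # maps e_i to e_{p(i)}
    return M

def sign(p):
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

# Orthonormal basis of the plane x + y + z = 0 gives the 2-D standard irrep.
B = np.array([[1, -1, 0], [1, 1, -2]], dtype=float)
B /= np.linalg.norm(B, axis=1, keepdims=True)

irreps = {
    "trivial":  {p: np.array([[1.0]]) for p in group},
    "sign":     {p: np.array([[float(sign(p))]]) for p in group},
    "standard": {p: B @ perm_matrix(p) @ B.T for p in group},
}

# Homomorphism and unitarity checks for the standard representation.
D = irreps["standard"]
compose = lambda p, q: tuple(p[q[i]] for i in range(3))   # (pq)(i) = p(q(i))
assert all(np.allclose(D[compose(p, q)], D[p] @ D[q]) for p in group for q in group)
assert all(np.allclose(D[p] @ D[p].T, np.eye(2)) for p in group)

# Decompose standard (x) standard via character orthogonality:
# multiplicity of alpha = (1/|G|) sum_g chi_tensor(g) conj(chi_alpha(g)).
chi_tensor = {p: np.trace(np.kron(D[p], D[p])) for p in group}
for name, rep in irreps.items():
    mult = sum(chi_tensor[p] * np.conj(np.trace(rep[p])) for p in group) / len(group)
    print(f"multiplicity of {name} in standard (x) standard: {mult.real:.1f}")
```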
Duality theory.
Our main tool for proving completeness results is the duality theorem due to T. Tannaka and M. Krein. There are several formulations of that result, of which Chevalley's [4, pp 188-203] is the most convenient for our purposes. The formulation is as follows. Let Θ(G) be the representative algebra of G, which is the algebra of complex-valued functions on G that is generated by the set of matrix coefficients of any selection {D α } α∈G . In fact, Θ(G) is independent of the selection that is used. It is well-known that every function f ∈ Θ(G) may be expressed in exactly one way as a finite linear combination of the set of matrix coefficients from any given selection ( [4, pg 189]). The structure of Θ(G) may be understood by considering algebra homomorphisms, which are maps ω : Θ(G) → C that are both linear and multiplicative, i.e., ω(
c 1 f 1 + c 2 f 2 ) = c 1 ω(f 1 ) + c 2 ω(f 2 ) and ω(f 1 f 2 ) = ω(f 1 )ω(f 2 )
for all scalars c 1 , c 2 , and functions f 1 , f 2 in Θ(G). The set of all algebra homomorphisms is denoted Ω(G). Clearly, for every g ∈ G, the map ω g (f ) = f (g) is an algebra homomorphism. Note that ω g (f * ) = ω g (f ) * , i.e., ω g preserves complex-conjugation. In fact, the converse is also true, an identification that is essential to the duality between groups and their representations; see [4, pg 211] for details and proofs.
Theorem 2.1 (Tannaka-Krein). To every algebra homomorphism ω ∈ Ω(G) that preserves complex-conjugation, i.e., ω(f * ) = ω(f ) * , there corresponds a unique element g ∈ G such that ω(f ) = f (g) for all f ∈ Θ(G).
We recast this duality theorem in a slightly different form (see also [20, pp 303-306]). Let {D α } α∈G be any selection of irreducible representations, and let {U (α)} α∈G be a corresponding sequence of unitary matrices, such that for each α the matrix U (α) has the same dimension as the representation D α . We determine the necessary and sufficient conditions under which the latter sequence arises from the former by an element of Θ(G), i.e., when U (α) = ω(D α ) for some fixed homomorphism ω ∈ Ω(G). Consider the tensor product D σ ⊗ D δ of any two representations in our selection; that representation is, in general, reducible, and we write its decomposition into irreducibles (taken from our selection) as follows:
D σ ⊗ D δ = C σδ [D α1 ⊕ · · · ⊕ D α k ] C † σδ . (2.1)
The indices α 1 , . . ., α k appearing on the right are unique up to permutation ([4, pg 175]). Suppose now that there exists ω such that U (α) = ω(D α ) for all α. By applying ω to both sides of eq. (2.1), and using the fact that ω is both linear and multiplicative, we obtain that (writing U (α) for ω(D α ))
U (σ) ⊗ U (δ) = C σδ [U (α 1 ) ⊕ · · · ⊕ U (α k )] C † σδ (2.2) Equation (2.
2) is not only necessary, but also sufficient, as the following result shows.
Theorem 2.2. Let {D α } α∈G and {U (α)} α∈G be as above. If, whenever eq. (2.1) is true, we have that eq. (2.2) is also true for same σ, δ, and matrix C σδ , then there exists a fixed g ∈ G such that U (α) = D α (g) for all α.
Proof. To any sequence {U (α)} αG , there exists a unique linear map ω : Θ(G) → C such that U (α) = ω(D α ). To see this, note that the set of matrix coefficients in any selection is linearly independent, and furthermore, any function f ∈ Θ(G) may be written uniquely as a finite linear combination of the matrix coefficients. Thus it is always possible to construct a linear map ω : Θ(G) → C that gives any desired set of values to the corresponding coefficient functions; in particular, there exists an ω such that ω(D α ) = U (α). We now show that ω is multiplicative and conjugate preserving. Applying the linear map ω to both sides of eq. (2.1) results in the following identity:
ω(D σ ⊗ D δ ) = C σδ [ω(D α1 ) ⊕ · · · ⊕ ω(D α k )] C † σδ . (2.3)
Substituting the matrices U on the right side and using (2.2) reveals that
ω(D σ ⊗ D δ ) = ω(D σ ) ⊗ ω(D δ ). (2.4)
Consequently, ω is multiplicative. To prove that ω preserves conjugation, note that the equation D α D † α = I implies that ω(D α )ω(D † α ) = I for all α. Since the matrices U (α) = ω(D α ) are unitary, we also have ω(D α )ω(D α ) † = I. Matrix inverses are unique, and thus ω(D † α ) = ω(D † α ), showing that ω preserves conjugation. By the Tannaka-Krein theorem, there exists a unique g ∈ G such that U (α) = ω(D α ) = D α (g) for all α.
3. Bispectrum. We use this result to establish sufficient conditions for a function to be described uniquely by its bispectrum. It is convenient to establish the Fourier transform domain for compact groups. Let {D α } α∈G be any selection of irreducible representations. The Fourier transform of any f in L 1 (G) is the matrix-valued function F , such that for each α ∈ G, we have
F (α) = G f (g)D α (g) † dg. (3.1)
Here the integral uses the Haar measure dg on G; because G is compact, dg is both left and right invariant. For f ∈ L 1 (G), the triple correlation a 3,f is defined as follows:
a 3,f (g 1 , g 2 ) = G f (g) * f (gg 1 )f (gg 2 )dg. (3.2)
(Compare with eq. (1.1)). Note that the triple-correlation is invariant under left-translation, i.e., if there exists x such that r(g) = s(xg) for all g, then a 3,r = a 3,s . This follows directly from the left-invariance of the Haar measure dg. Similarly, we may define a right-translation invariant version of eq. (3.2) by integrating f (g) * f (g 1 g)f (g 2 g), but we will not pursue this minor variation in what follows.
Because f ∈ L 1 (G), we have that a 3,f is a function in L 1 (G× G). It is known that any irreducible representation of G × G is equivalent to a tensor product D σ ⊗ D δ , where D σ ⊗ D δ are irreducible representations of G ([17, pg 45]). Thus the Fourier transform of a 3,f with respect to the selection {D α } α∈G is the function on G × G that is defined as follows:
A 3,f (σ, δ) = G G a 3,f (g 1 , g 2 ) D σ (g 1 ) † ⊗ D δ (g 2 ) † dg 1 dg 2 . (3.3)
There exists a convenient formula for computing A 3,f . Lemma 3.1. For any pair σ, δ, let C σδ be the matrix and let α 1 , . . ., α k be the indices appearing in eq. (2.1). Then
A 3,f (σ, δ) = [F (σ) ⊗ F (δ)] C σδ F (α 1 ) † ⊕ · · · ⊕ F (α k ) † C † σδ . (3.4) Proof.
Since a 3,f is integrable, we use the Fubini theorem to interchange the order of integration in the following derivation:
A 3,f (σ, δ) = ∫ G ∫ G a 3,f (g 1 , g 2 ) D σ (g 1 ) † ⊗ D δ (g 2 ) † dg 1 dg 2 = ∫ G ∫ G ∫ G f (g) * f (gg 1 )f (gg 2 ) D σ (g 1 ) † ⊗ D δ (g 2 ) † dg dg 1 dg 2 = ∫ G f (g) * [ ∫ G ∫ G f (gg 1 )f (gg 2 ) D σ (g 1 ) † ⊗ D δ (g 2 ) † dg 1 dg 2 ] dg.
By making a change of variables, we find that the double integral inside simplifies as follows:
G G f (gg 1 )f (gg 2 ) D σ (g 1 ) † ⊗ D δ (g 2 ) † dg 1 dg 2 = [F (σ) ⊗ F (δ)] [D σ (g) ⊗ D δ (g)] .
Upon substituting into the expression for A 3,f , we find that
A 3,f = [F (σ) ⊗ F (δ)] G f (g) * [D σ (g) ⊗ D δ (g)] dg. (3.5)
Upon substituting the tensor product decomposition (2.1) into the above, we obtain
A 3,f (σ, δ) = [F (σ) ⊗ F (δ)] C σδ G f (g) * (D α1 (g) ⊕ · · · ⊕ D α k (g)) dg C † σδ . (3.6)
After evaluating the integral, the result (3.4) follows.
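The ingredients of the lemma can be checked numerically on a finite group, where the Haar integral becomes a normalized sum. The sketch below is illustrative only: rather than verifying the full decomposition (3.4), which requires the Clebsch-Gordan matrices C σδ, it verifies the translation behaviour F -> F D(x) of the Fourier coefficients and the left-translation invariance of the triple correlation, using the 2-dimensional representation of S3.

```python
import numpy as np
from itertools import permutations

group = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))     # (pq)(i) = p(q(i))

# 2-D standard irreducible representation of S3 (permutation matrices restricted
# to the plane orthogonal to (1,1,1), in an orthonormal basis).
B = np.array([[1, -1, 0], [1, 1, -2]], dtype=float)
B /= np.linalg.norm(B, axis=1, keepdims=True)
def D(p):
    P = np.zeros((3, 3))
    for i, j in enumerate(p):
        P[j, i] = 1.0
    return B @ P @ B.T

rng = np.random.default_rng(2)
f = {g: rng.standard_normal() for g in group}               # a function on S3

def fourier(f):
    """F = (1/|G|) sum_g f(g) D(g)^dagger, eq. (3.1) with normalized counting measure."""
    return sum(f[g] * D(g).T.conj() for g in group) / len(group)

def triple_correlation(f):
    """a3(g1, g2) = (1/|G|) sum_g f(g)* f(g g1) f(g g2), eq. (3.2)."""
    return {(g1, g2): sum(np.conj(f[g]) * f[compose(g, g1)] * f[compose(g, g2)]
                          for g in group) / len(group)
            for g1 in group for g2 in group}

x = group[3]                                                # an arbitrary translation
f_shift = {g: f[compose(x, g)] for g in group}              # s(g) = r(xg)

# Translation property used in the proof of Theorem 3.2: S = R D(x).
assert np.allclose(fourier(f_shift), fourier(f) @ D(x))
# R R^dagger and the triple correlation are invariant under the translation.
R, S = fourier(f), fourier(f_shift)
assert np.allclose(R @ R.T.conj(), S @ S.T.conj())
a3, a3_shift = triple_correlation(f), triple_correlation(f_shift)
assert all(np.isclose(a3[k], a3_shift[k]) for k in a3)
print("Fourier coefficients translate as F -> F D(x); a3 is translation invariant")
```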
The lemma helps to quickly establish the basic completeness result for the bispectrum on compact groups.
Theorem 3.2. Let G be any compact group, and let r in L 1 (G) be such that its Fourier coefficients R(α) are nonsingular for all α ∈ G. Then a 3,s = a 3,r for some s ∈ L 1 (G) if and only if there exists x ∈ G such that s(g) = r(xg) for all g.
Proof. If s(g) = r(xg), then the translation-invariance of the triple correlation implies that a 3,r = a 3,s . We now prove the converse. Let s be such that a 3,s = a 3,r ; then A 3,r = A 3,s , and by Lemma 3.1, we obtain that for σ, δ that
[R(σ) ⊗ R(δ)] C σδ R(α 1 ) † ⊕ · · · ⊕ R(α k ) † = [S(σ) ⊗ S(δ)] C σδ S(α 1 ) † ⊕ · · · ⊕ S(α k ) † (3.7)
Set σ = δ = 1, where 1 is the trivial representation g → 1 of G. Both R(1) and S(1) are complex numbers, and the equality above becomes
R(1)R(1)R(1) * = S(1)S(1)S(1) * . (3.8) Thus R(1) = S(1). Now set δ = 1; for any σ, we have D σ ⊗ D 1 = D σ , and thus eq. (3.7) becomes [R(σ) ⊗ R(1)] R(σ) † = [S(σ) ⊗ S(1)] S(σ) † . (3.9)
By assumption R(1) = S(1) is a non-zero scalar, and we cancel it from both sides to obtain that R(σ)R(σ) † = S(σ)S(σ) † for all σ. Such an equality between matrices holds if and only if there exists a unitary matrix U (σ) such that S(σ) = R(σ)U (σ). Substituting for S in (3.7) yields, upon rearranging terms,
[R(σ) ⊗ R(δ)] C σδ R(α 1 ) † ⊕ · · · ⊕ R(α k ) † C † σδ = [R(σ) ⊗ R(δ)] [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † R(α 1 ) † ⊕ · · · ⊕ R(α k ) † C † σδ
We cancel the nonsingular matrices R(σ) from both sides and rearrange the remaining terms to obtain the identity
U (σ) ⊗ U (δ) = C σδ [U (α 1 ) ⊕ · · · ⊕ U (α k )] C † σδ (3.10)
Since the identity above holds for all σ, δ, Theorem 2.2 implies that there exists
x ∈ G such that U (σ) = D σ (x) for all σ. Thus S(σ) = R(σ)D σ (x)
for all σ, and the translation property of the Fourier transform now implies that s(g) = r(xg) for all g.
The hypothesis that all coefficients R(σ) are nonsingular is satisfied generically, in the sense that almost every n × n matrix is nonsingular with respect to the Lebesgue measure on the set of n × n matrices. Nevertheless, it is desireable to weaken the hypothesis, to include for example functions on G that are invariant under the translations of a normal subgroup N of G. We prove a result for this case.
We review some facts concerning group representations and normal subgroups ([8, pg 64]). Let N be a closed normal subgroup of G. Any irreducible representation D of the quotient group G/N extends to an irreducible representation D of G by composition: D = D • π, where π is the canonical coset map π : G → G/N . The converse is also true: any representation D of G such that D(n) = I for all n ∈ N is of the form D = D • π for some representation D of G/N . Moreover, letting (G/N ) represent the dual object of the group G/N , the set
G[N ] = {D = D • π, D ∈ G/N }, (3.11)
is closed under both conjugation and tensor-product decomposition, i.e., the tensor product of any two representations from the set decomposes into irreducible representations that are also contained in the set. Conversely, to each subset of G that contains 1 and that is closed under both conjugation and tensor-product decomposition, there corresponds a unique closed and normal subgroup N of G such that
A = G[N ]. Now let f be a function in L 2 (G) that is invariant under N , i.e., f (ng) = f (gn) = f (g) for all n ∈ N . If α ∈ G[N ], then the Fourier coefficient matrix F (α) is a zero matrix.
To prove this, note that the Peter-Weyl theorem implies that the matrix coefficients d pq α (·) from any selection { D α } α∈( G/N) form an orthogonal basis for L 2 (G/N ); thus the corresponding functions d pq i = d pq i • π on G form an orthogonal basis for the closed subspace in L 2 (G) of functions invariant under N . Consequenctly, any Ninvariant function in L 2 (G) has zero inner product with the coefficients of D α when α ∈ G[N ]. We use those facts to produce a stronger version of Theorem 3.2.
Theorem 3.3. Let r ∈ L 2 (G) be such that its Fourier coefficients R satisfy the following conditions:
1. Each R(α) is either zero or nonsingular; 2. The set of α such that R(α) is non-singular includes 1, and is closed under conjugation and tensor product decomposition. Then there exists a normal subgroup N of G such that r is N -invariant, and furthermore r is uniquely determined up to left translation by its bispectrum A 3,f .
Proof. As discussed above, the set of α such that R(α) is nonsingular corresponds to G[N ] for some normal subgroup N of G. Furthermore, r = r̃ • π for a unique function r̃ on G/N . We obtain A 3,r̃ from A 3,r by restricting the latter to the arguments (σ, δ) for which R(σ) and R(δ) are nonsingular. Theorem 3.2 now shows that r̃ is uniquely determined up to a left translation by A 3,r̃ , and thus r = r̃ • π is uniquely determined up to a left translation by A 3,r . Remark. The hypotheses of Theorems 3.2 and 3.3 have an interesting interpretation in the context of the Tauberian theorems for compact groups. The latter theorems determine what functions lie in the span of translates of a single function f in L 1 (G). Edwards ([5, pp 121-125]) describes one such result:
If f 1 , f 2 in L 1 (G) have Fourier transforms F 1 , F 2 , such that F 2 (α) = F 1 (α)M (α) for each α, where M (α)
is an arbitrary matrix whose dimensions match that of F 1 (α), then f 2 lies in the span of left translates of f 1 , i.e., f 2 may be approximated arbitrarily closely in L 1 by linear combinations of left translates of f 1 . Suppose now that f 1 satisfies the hypothesis of Theorem 3.2, i.e., F 1 (α) is nonsingular for all α. Then the aforementioned Tauberian theorem implies that any function f 2 ∈ L 1 (G) lies in the span of translates of f 1 , i.e., the translates of f 1 span L 1 (G). Similarly, if f 1 satisfies the hypothesis of Theorem 3.3, then the translates of f 1 span the closed subspace of L 1 (G) that consists of functions invariant under some fixed normal subgroup N . As our theorems show, the bispectrum of f 1 identifies exactly which functions are its translates.
4. Homogeneous spaces. The definition of a homogeneous space is as follows. Let G be any topological group and X any topological space. We say that G acts (on the right) on X if for each g ∈ G there exists a homeomorphism τ g : X → X, such that τ e (x) = x for the identity e in G, and furthermore, for g 1 , g 2 in G, we have τ g1g2 (x) = τ g2 (τ g1 (x)). The group G acts transitively on X if for each x 1 , x 2 in X, there exists g ∈ G such that τ g (x 1 ) = x 2 . The space X is a homogeneous space for G if G acts on X transitively and continuously. An important example of a homogeneous space is the quotient space of right cosets G\H = {Hg : g ∈ G} of a closed subgroup H in G. In fact, it is a theorem that any locally compact homogeneous space X of a separable and locally compact group G can be represented as a quotient space G\H for some closed subgroup H of G ([2, pg 124]).
Our goal in this section is to investigate the bispectrum's completeness for functions on arbitrary homogeneous spaces of compact groups. By the result cited above, we lose no generality by focusing on spaces of the form G\H, where G is some compact group and H some closed subgroup of G. To any function f on G\H there corresponds a unique function f on G such that f = f • π, where π : G → G\H is the canonical coset map; conversely, to any function f on G that is invariant under left H-translations, i.e., f (hg) = f (g) for all g ∈ G and h ∈ H, there corresponds a unique function f on G\H such that f = f • π. Thus we lose no generality by further restricting our study of functions on homogeneous spaces to functions on G that are left H-invariant for some closed subgroup H.
Our main tool for proving completeness results is the Iwahori-Sugiura duality theorem for homogeneous spaces of compact groups [9]. Let G be any compact group, {D α } α∈G be any selection of irreducible representations, and Θ(G) the representative algebra of G. We describe an equivalent formulation of the Iwahori-Sugiura theorem that is analogous to Theorem 2.2. Several preliminary results are required for the new formulation, with some of the longer proofs being relegated to the appendices; in particular, the proof of Lemma 4.2 is given in Appendix A. Let G, H, and {D α } α∈G be as before. Let us define a corresponding sequence of matrices {P α } α∈G as follows:
P α = H D α (h)dh, (4.2)
where dh denotes the normalized Haar measure on H. It is easy to show that each P α is a projection, i.e., a self-adjoint matrix such that P α P α = P α ( [8, pg 190]). Moreover, the projection matrices as defined above inherit some of the tensor product properties of the corresponding representations ( [8, pg 190]). Lemma 4.3. Let {P α } α∈G be as above. For each σ, δ, let C σδ be the Clebsch-Gordan matrix and α 1 , . . ., α k be the indices in the tensor product decomposition in eq. (2.1). Then
P σ ⊗ P δ = C σδ [P α1 ⊕ · · · ⊕ P α k ] C † σδ [P σ ⊗ P δ ] , = [P σ ⊗ P δ ] C σδ [P α1 ⊕ · · · ⊕ P α k ] C † σδ .
It proves convenient to apply the following similarity transformations to the P matrices. For each α, let rank(α) denote the rank of P α , and let I(rank(α)) be the diagonal matrix whose first rank(α) diagonal entries (from the upper left) are 1, and the rest are 0. Then there exists a unitary matrix U (α) such that ( [15, pg 195]):
U (α)P α U (α) † = I(rank(α)), (4.3)
If we apply the same similarity transformation to the representation D α , then it is easily seen that
U (α)D α (h)U (α) † = ⊕ rank(α) q=1 1(h) ⊕ D H α (h), h ∈ H. (4.4)
In the decomposition above, 1 is the trivial representation of H, and the last term D H α is some unitary representation of H that does not contain 1.
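As a concrete illustration of eq. (4.2) and the reduction (4.3), the sketch below takes G = SO(3) with its self-representation D(g) = g and H the subgroup of rotations about the z-axis (the example revisited later in this section); these choices are ours, made only for concreteness.

```python
# Illustration of P = \int_H D(h) dh (eq. 4.2) and U P U^dagger = I(rank(P))
# (eq. 4.3) for the circle subgroup H of SO(3) fixing the z-axis.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Average D(h) over the (discretized) circle subgroup.
N = 360
P = sum(rot_z(2 * np.pi * k / N) for k in range(N)) / N

assert np.allclose(P @ P, P)            # P is a projection
assert np.allclose(P, P.T)              # and self-adjoint

# A unitary U with U P U^T = I(rank(P)), as in eq. (4.3):
eigvals, eigvecs = np.linalg.eigh(P)    # eigenvalues in ascending order
U = eigvecs[:, ::-1].T                  # put the 1-eigenspace first
rank = int(round(eigvals.sum()))
I_rank = np.diag([1.0] * rank + [0.0] * (P.shape[0] - rank))
assert np.allclose(U @ P @ U.T, I_rank)
```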
Rather than starting with an arbitrary selection of {D α } α∈G , suppose now that we choose one in which each matrix D α (h) is exactly equal to a direct sum where the first rank(α) representations that appear in the sum are 1, i.e.,
D α (h) = ⊕ rank(α) q=1 1(h) ⊕ D H α (h), h ∈ H. (4.5)
We can always obtain such a convenient selection (that is what we shall call it henceforth) from a given one by applying similarity transformations as described above. For a convenient selection, the projection matrices in eq. (4.2) are simply P α = I(rank(α)) for all α. Proof (of Lemma 4.4). Each matrix P α D α = I(rank(α))D α has its first rank(α) rows equal to those of D α , while the remaining rows are identically zero. Moreover, P α D α (hg) = P α D α (g) for all h and g (simply substitute H D α dh for P α and use the translation invariance of the Haar measure dh), and thus the nonzero coefficient functions in each P α D α are left H-invariant. We now show the converse: any left H-invariant coefficient d pq α of D α is one of the nonzero coefficients in P α D α . Left H-invariance requires that
d pq α (g) = d pq α (hg) = Σ_{ℓ=1}^{dim(α)} d pℓ α (h) d ℓq α (g). (4.6)
The linear independence of the coefficients implies that d pℓ α (h) = 1 for all h if ℓ = p, and d pℓ α (h) = 0 for all h if ℓ ≠ p. But the assumption on D α requires that d pp α (h) = 1 on H only if p ≤ rank(α), and thus any left H-invariant coefficient d pq α must appear in one of the first rank(α) rows of P α D α .
Since the left H-invariant coefficients are a basis for Θ H (G), any linear map ω : Θ H (G) → C is uniquely determined by the values that it gives to those coefficients.
For each matrix P α D α , the map ω produces a corresponding matrix ω(P α D α ). We now determine conditions in terms of the matrices ω(P D) under which ω is not only linear but also multiplicative and conjugate-preserving. In the following, we use the standard inner product < ζ 1 , ζ 2 > = ζ 1 ζ † 2 for complex-valued row vectors ζ 1 , ζ 2 , and the standard norm ∥ζ∥ = (< ζ, ζ >) 1/2 . Theorem 4.5. Let {D α } α∈G be a convenient selection and {P α } α∈G be its projections. Any linear map ω : Θ H (G) → C is both multiplicative and conjugate-preserving if and only if the following two conditions hold for all σ, δ, α in G:
ω(P σ D σ ) ⊗ ω(P δ D δ ) = [P σ ⊗ P δ ] C σδ [ω(P α1 D α1 ) ⊕ · · · ⊕ ω(P α k D α k )] C † σδ ; (4.7)
ω(P α D α )ω(P α D α ) † = P α . (4.8)
Let f be a function in L 1 (G) such that f (hg) = f (g) for all h in a given closed subgroup H of G; the translation property of the Fourier transform ensures that each Fourier coefficient satisfies F (α) = F (α)D α (h) for all h in H. Integrating over h, we find that F (α) = F (α)P α for all α. We say that each Fourier coefficient F (α) is of maximal H-rank if the rank of F (α) equals the rank of P α . We now show that if f is any left H-invariant function whose Fourier coefficients F all have maximal rank, then f is uniquely determined by its bispectrum A 3,f up to a left translation. The proof of our assertion uses the standard notation from linear algebra [15]. For each matrix A, let image(A) and ker(A) denote respectively the image and kernel of A. For each α ∈ G, let H α denote the Hilbert space on which the corresponding representations D α act. The proof is given in Appendix C. In Theorem 4.6 we did not require that the function s also be left H-invariant. (Equality of bispectra may hold regardless of whether both functions are H-invariant.) Suppose now that two left H-invariant functions r, s are such that both have maximal H-rank coefficients and both have exactly the same bispectrum. The theorem just proved demonstrates that under those conditions, there exists x ∈ G such that s(g) = r(xg) for all g. Yet the element x cannot be arbitrary, for s is left H-invariant, and thus s(hg) = s(g), implying that r(xhg) = r(xg) for all h ∈ H and g ∈ G. But since r is also left H-invariant, we must have r(xg) = r(hxg), and thus r(xhg) = r(hxg) for all g and h. The last identity is always satisfied if x lies in the normalizer of H in G, which is the subgroup N H of G defined as follows:
N H = {x ∈ G : xH = Hx}.
Proof (of Theorem 4.7). The "if" assertion is shown above, so we prove the "only if" part. Suppose that a 3,r = a 3,s , and that r, s both have maximal H-rank coefficients. Under those conditions, Theorem 4.6 shows that there exists x ∈ G such that r(g) = s(xg) for all g. Then R(α) = S(α)D α (x) for all α ∈ G. Furthermore, the left invariance of r implies that R(α) = R(α)P α for each α. Thus S(α)D α (x) = S(α)D α (x)P α for each α, and combining that with the identity S(α) = S(α)P α yields S(α)P α D α (x) = S(α)P α D α (x)P α , and thus S(α) [P α D α (x) − P α D α (x)P α ] = 0. By the maximal H-rank hypothesis, we obtain that
P α D α (x) = P α D α (x)P α . (4.10)
Since P α = I(rank(α)) for a convenient selection, the element x satisfies the above equality if and only if the unitary matrix D α (x) is the direct sum of two smaller unitary matrices, the first with dimensions rank(α) × rank(α) and the second with dimensions (n − rank(α)) × (n − rank(α)). For such an x, it follows for any h ∈ H that
P α D α (x) † D α (h)D α (x) = P α D α (x −1 hx) = P α .
In the interesting special case when G is the group SO(3) and H is the subgroup of rotations that fix the z-axis, we have that N H = H. In that case, if r is any left H-invariant function with maximal H-rank coefficients, then there are no other left H-invariant functions with the same bispectrum besides r itself. However, that does not mean that the bispectrum uniquely determines r: any function s such that s(g) = r(xg) on G has the same bispectrum, although s is not necessarily H-invariant.
If G = SO(3) and H as above, then the maximal H-rank condition is easy to satisfy. Here it is well-known that rank(P α ) = 1 for all α ∈ G ( [19]). Thus an arbitrary left H-invariant function r has maximal H-rank coefficients if for all α, the matrix R(α) contains at least one nonzero coefficient. That is evidently true if any noise is present in measuring r.
5. Reconstruction algorithms. The completeness theory for arbitrary compact groups in the preceding sections can be refined further for the special case when the group is SU (2), which is the group of all 2 × 2 unitary matrices with determinant +1. The group SU (2) arises frequently in applications because it is a double-covering of the rotation group SO(3), and in many problems it is more convenient to model three-dimensional rotations by elements of SU (2) rather than the corresponding elements of SO(3)-one reason is that the addition of rotations is much simpler for SU (2) (Cayley-Klein parameters) than for SO(3) (Euler parameterization). The representation theory of SU (2) is known in extensive detail, and we take advantage of the special properties of SU (2)'s irreducible representations to analyze the bispectrum of bandlimited functions. The latter are functions whose Fourier coefficients are identically zero except for a finite set of indices. One reason why bandlimited functions are important is that any L 2 -function can be approximated as closely as desired in the L 2 -metric by a bandlimited function.
The irreducible representations of SU (2) have several properties that simplify our analysis of the bandlimited case ([18, Chapter 2]). First, there exists one and only one irreducible representation (up to equivalence) in each dimension. It is thus possible to index the set of all irreducible representations (modulo equivalence) by the nonnegative integers, in such a way that for each ℓ ≥ 0, the representation D ℓ has dimension ℓ + 1. With that indexing, D 0 is the trivial representation g → 1, and D 1 is the self-representation g → g. Furthermore, for any nonnegative integers p, q, the tensor product D p ⊗ D q reduces explicitly as follows:
D p ⊗ D q = C pq [D p+q ⊕ D p+q−2 ⊕ D p+q−4 ⊕ · · · ⊕ D |p−q| ] C † pq . (5.1)
The unitary matrix C pq above is the Clebsch-Gordan matrix for p and q.
We now review briefly the methods of Fourier analysis on SU (2). Let f be any L 2 function on SU (2). Its Fourier coefficients are the matrices
F (ℓ) = G f (g)D ℓ (g) † dg; ℓ ≥ 0. (5.2)
The function f is the limit in L 2 of the series
Σ_{ℓ=0}^{∞} (ℓ + 1) Tr [F (ℓ)D ℓ (g)] . (5.3)
By Lemma 3.1 we know that the bispectrum on SU (2) has the form
A 3,f (p, q) = [F (p) ⊗ F (q)] C pq [F (p + q) † ⊕ F (p + q − 2) † ⊕ · · · ⊕ F (|p − q|) † ] C † pq .
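Written directly from the display above, the bispectrum is straightforward to assemble once the coefficient matrices and the Clebsch-Gordan matrices are available. In the sketch below both are treated as given inputs (constructing true Clebsch-Gordan matrices is a separate task), so the function is only a transcription of the formula, not a complete pipeline.

```python
# Matrix bispectrum on SU(2), transcribed from the displayed formula.
import numpy as np
from scipy.linalg import block_diag

def bispectrum_su2(F, C, p, q):
    """A_{3,f}(p,q) = [F(p) (x) F(q)] C_pq [F(p+q)^+ (+) ... (+) F(|p-q|)^+] C_pq^+.

    F : dict mapping l -> (l+1) x (l+1) Fourier coefficient matrix
    C : dict mapping (p, q) -> unitary Clebsch-Gordan matrix of size (p+1)(q+1)
    """
    terms = [F[l].conj().T for l in range(p + q, abs(p - q) - 1, -2)]
    return np.kron(F[p], F[q]) @ C[(p, q)] @ block_diag(*terms) @ C[(p, q)].conj().T
```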
We now devise an algorithm for recovering any real-valued bandlimited function with nonsingular coefficients from its bispectrum. Our algorithm makes use of the following facts from matrix theory. First, any positive definite matrix H has a unique "positive square root", i.e., a positive-definite matrix H 1/2 + such that H 1/2 + H 1/2 + = H. Second, any nonsingular matrix A has a unique polar decomposition A = H + U , where H + = (AA † ) 1/2 + and U is a unitary matrix. We require one last fact: the coefficient matrix F (1) of any real-valued function f on SU (2) has nonnegative determinant. To see that, recall from above that D 1 is the self-representation of SU (2), and thus
F (1) = G f (g)D 1 (g) † dg = G f (g) [ d 11 1 (g) * −d 21 1 (g) * ; d 21 1 (g) d 11 1 (g) ] dg. (5.5)
By evaluating the matrix coefficients above, and using the assumption of real f , we find that det[F (1)] ≥ 0.
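The two matrix-theoretic facts invoked here, the positive square root and the polar decomposition, are easy to realize numerically; the sketch below (illustrative only, with a random test matrix) mirrors the eigendecomposition construction described in this section.

```python
# Positive square root and polar decomposition A = H_plus U, checked on a
# random complex matrix.
import numpy as np

def positive_sqrt(H):
    """Unique positive square root of a positive definite (Hermitian) H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

def polar(A):
    """Polar decomposition A = H_plus @ U with H_plus positive definite, U unitary."""
    H_plus = positive_sqrt(A @ A.conj().T)
    U = np.linalg.solve(H_plus, A)      # U = H_plus^{-1} A
    return H_plus, U

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H_plus, U = polar(A)
assert np.allclose(H_plus @ U, A)
assert np.allclose(U @ U.conj().T, np.eye(2))
```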
Putting all the facts above together, we obtain the following result. Proposition 5.1. Let L > 0, and let f be any real-valued function on SU (2) whose Fourier coefficients are such that F (ℓ) is a nonsingular matrix for each ℓ ≤ L, and furthermore, F (ℓ) = 0 if ℓ > L. Then f can be uniquely recovered up to a left translation from its bispectrum A 3,f .
Proof. Since f is real-valued, it follows that F (0) is a real number. Equation (5) shows that A 3,f (0, 0) = F (0) 3 , and thus we obtain F (0) by taking cube roots. By assumption, F (0) is nonzero, and thus we obtain from (5) that
A 3,f (1, 0) / F (0) = F (1)F (1) † . (5.6)
The matrix on the right hand side above is positive definite. Let F̂(1) be its positive square root as constructed above. In polar form F (1) = F̂(1)U , and thus F̂(1) = F (1)U † . The determinant of F (1) is positive, as is the determinant of F̂(1), and thus det U † = +1. Consequently, U † ∈ SU (2), and we may write F̂(1) = F (1)D 1 (x) for x = U † . If L = 1, then we are done. Otherwise, the following algorithm produces matrices F̂(2), . . . , F̂(L), such that F̂(ℓ) = F (ℓ)D ℓ (x) for the same x and for all 2 ≤ ℓ ≤ L. Since we know F̂(1) and A 3,f (1, 1), we obtain F̂(2) from the upper-left 3 × 3 submatrix of the following 4 × 4 matrix:
C † 11 [F̂(1) −1 ⊗ F̂(1) −1 ] A 3,f (1, 1) C 11 . (5.7)
The reason we use the matrix above is as follows. All terms above are known, and if we substitute forF (1) and A 3,f (1, 1), then we find that
C † 11 [F̂(1) −1 ⊗ F̂(1) −1 ] A 3,f (1, 1) C 11 = C † 11 [D 1 (x) † ⊗ D 1 (x) † ] [F (1) −1 ⊗ F (1) −1 ] [F (1) ⊗ F (1)] C 11 [F (2) † ⊕ F (0)] C † 11 C 11 .
Cancelling terms, and using the reduction formula (5.1), shows that
C † 11 [F̂(1) −1 ⊗ F̂(1) −1 ] A 3,f (1, 1) C 11 = [D 2 (x) † F (2) † ] ⊕ F (0). (5.8)
The upper left 3 × 3 submatrix of the right hand side is exactly the matrix [F (2)D 2 (x)] † , and we set its adjoint equal to F̂(2). Having obtained F̂(2) in that way, we obtain F̂(ℓ) for any ℓ > 2 from the upper (ℓ + 1) × (ℓ + 1) submatrix of the following matrix
C † (ℓ−1)1 [F̂(ℓ − 1) −1 ⊗ F̂(1) −1 ] A 3,f (ℓ − 1, 1) C (ℓ−1)1 . (5.9)
The same argument as above shows that
C † (ℓ−1)1 [F̂(ℓ − 1) −1 ⊗ F̂(1) −1 ] A 3,f (ℓ − 1, 1) C (ℓ−1)1 = [D ℓ (x) † F (ℓ) † ] ⊕ F̂(ℓ − 2) † .
We set F̂(ℓ) equal to the adjoint of the upper (ℓ + 1) × (ℓ + 1) submatrix of the right hand side, and thus obtain that F̂(ℓ) = F (ℓ)D ℓ (x). On doing so for all ℓ ≤ L, the function f̂ on SU (2) obtained by Fourier series expansion with the coefficients F (0), F̂(1), . . . , F̂(L) is such that f̂(g) = f (xg) for all g.
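The recursion in the proof can be summarized compactly. In the sketch below the bispectrum slices A[(p, q)] and the Clebsch-Gordan matrices C[(p, q)] are assumed to be supplied by the caller, and A[(0, 0)] is taken to be the real scalar F(0)^3; the code is an outline of the algebra above rather than a tested reconstruction pipeline.

```python
# Outline of the recursion in Proposition 5.1: recover hat_F(l) = F(l) D_l(x).
import numpy as np
from scipy.linalg import sqrtm

def reconstruct_coefficients(A, C, L):
    F0 = np.cbrt(A[(0, 0)])                          # A(0,0) = F(0)^3, a real scalar
    hat_F = {0: F0, 1: sqrtm(A[(1, 0)] / F0)}        # A(1,0)/F(0) = F(1) F(1)^+
    for l in range(2, L + 1):
        M = (C[(l - 1, 1)].conj().T
             @ np.kron(np.linalg.inv(hat_F[l - 1]), np.linalg.inv(hat_F[1]))
             @ A[(l - 1, 1)] @ C[(l - 1, 1)])
        hat_F[l] = M[: l + 1, : l + 1].conj().T      # adjoint of the upper block
    return hat_F
```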
It is easy to prove the same completeness result for bandlimited functions on SO(3). We describe the few differences that exist, drawing on standard facts about representations of SO(3) ([18, Chap II]). First, the irreducible representations of SO(3) occur only in odd dimensions, and there is exactly one representation (modulo equivalence) in each odd dimension. Thus we may list any selection of irreducible representations as {D ℓ } ∞ ℓ=0 , where for each ℓ, the representation D ℓ has dimension (2ℓ + 1). In that indexing, D 0 is the trivial representation, and D 1 is equivalent to the self-representation g → g of SO(3), i.e., D 1 (g) = U gU † for some unitary matrix U . (Recall that SO(3) is the set of all real-valued 3 × 3 orthogonal matrices with determinant +1.) For each n, m, the tensor-product D n ⊗ D m reduces explicitly as follows:
D n ⊗ D m = C nm [D n+m ⊕ D n+m−1 ⊕ · · · ⊕ D |n−m| ] C † nm . (5.10)
With the formula above, it is easy to see that the recursive algorithm given in the proof of Proposition 5.1 generalizes to recover all real-valued bandlimited functions on SO(3). To initialize the algorithm, we require an estimate F̂(1) of the first coefficient F (1) from the data F (1)F (1) † , such that F̂(1) = F (1)D 1 (g) for some element g of SO(3). Assuming that F (1) is nonsingular, we obtain the estimate as follows. The representation D 1 is such that D 1 (g) = U gU † , where U is fixed as g varies in SO(3). Thus
F (1) = G f (g)D 1 (g) † dg = U [ G f (g)g † dg ] U † .
Let F s (1) denote the matrix that results by evaluating the integral in brackets. Since f is real-valued, and every matrix g has real coefficients, the matrix
F (1) = U F s (1)U † = U F̂ s (1)gU † = U F̂ s (1)U † U gU † = F̂(1)D 1 (g). (5.11)
The assumption that det[F (1)] > 0 is not critical. We use it only to obtain that det[V ] = +1 for V = F̂ s (1) −1 F s (1); more generally, it suffices that det[F (1)] is known, and the algorithm described above is used.
6. Applications. As mentioned in the introduction, the invariance and completeness properties of the bispectrum lend themselves to applications in pattern matching problems. One particular application is described here. R. Kondor [14] demonstrates how translation-and rotation-invariant matching of hand-written characters is accomplished with bispectral invariants. To do so, Kondor notes that, for practical purposes, the characters themselves may defined as intensity-valued functions on a compact patch on R 2 of radius 1. A transformation may be constructed that maps the planar patch to the upper hemisphere of the sphere S 2 as follows:
ω : (r, φ R 2 ) → (θ, φ S 2 ) with r = sin(θ); φ R 2 = φ S 2 . (6.1)
The subscripts denote the domain of the angle involved, whether it be the plane R 2 or the sphere S 2 . Kondor shows that rigid body motions on the patch, each of which consists of a rotation by an angle α and a translation by a vector T = (t x , t y ) with T ≤ 1, may be mapped to 3-D rotations through the use of Euler angles (θ, φ, ψ) as follows α = ψ; t x = sin θ cos φ; t y = sin θ sin φ. (6.2) This mapping produces a local isomorphism between planar rigid motions and spherical rotations.
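The two coordinate changes (6.1) and (6.2) are elementary; a small sketch of them, with angles in radians and the stated conventions (unit-radius patch, ||T|| ≤ 1) assumed, is given below.

```python
# Mappings (6.1) and (6.2): planar patch -> sphere, planar rigid motion -> Euler angles.
import numpy as np

def patch_to_sphere(r, phi):
    """(r, phi) on the planar patch -> (theta, phi) on the upper hemisphere."""
    return np.arcsin(r), phi

def rigid_motion_to_euler(alpha, t_x, t_y):
    """Rotation alpha and translation (t_x, t_y) -> Euler angles (theta, phi, psi)."""
    theta = np.arcsin(np.hypot(t_x, t_y))   # sin(theta) = ||T||
    phi = np.arctan2(t_y, t_x)
    return theta, phi, alpha                 # psi = alpha
```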
By using the transformation (6.1), every intensity function defined on the planar patch may be converted to a function on S 2 . The problem of finding rigid-motion invariants on R 2 now becomes one of finding rotation invariants on the sphere S 2 . Since the sphere is a homogeneous space for SO (3), every function f on S 2 may in turn be lifted to a function f on SO(3) using the "north-pole" mapping: if z = [0, 0, 1], then f (R) = f (Rz) for every R ∈ SO(3). We may now construct the bispectrum of f from eq. (3.4) using the Fourier transform on SO(3), which may be calculated using spherical harmonic basis functions. Kondor calculates bispectral invariants in this way, and shows, in an experiment using 1000 hand-written characters from a standard dataset, that the invariants perform well in matching over arbitrary orientations and starting positions of the characters.
A second application of the bispectrum occurs in astrophysical models of primordial fluctuations, as mentioned in the introduction. Cosmic inflation [7] predicts a Gaussian pattern of temperature anisotropies in the cosmic microwave background radiation (CBR). The CBR anisotropy is a function defined on S 2 , and therefore we may calculate its bispectrum using eq. (3.4). If the anisotropy is Gaussian, then the expected value of the angular bispectrum is zero. However, as X. Luo [16] shows, the stochastic nature of anisotropies means that cosmic variance makes it difficult to extract non-Gaussian structure from CBR data. In that paper, as in much of the physics literature, expressions of the bispectrum follow the "summation notation", which implicitly focuses attention at the level of individual elements of bispectral matrices. It is hoped that the approach of this paper, including the matrix form derived in (3.4), proves useful in allowing insight into higher level properties, such as matrix rank, decomposition, and completeness.
Both of the applications mentioned apply the bispectrum to functions on the sphere. Healy et al [10] describe a fast "divide-and-conquer" discrete Legendre transform, which leads to an "FFT" on S 2 . They show how this transform leads to efficient computation of bispectrum on the sphere.
7. Summary and future directions. This paper derives completeness properties of the bispectrum for functions defined on compact groups and their homogeneous spaces. A matrix form of the bispectrum is derived, and it is shown that every function with nonsingular coefficients is completely determined, up to a group translation, by its bispectrum. A reconstruction algorithm for functions defined on the groups SU (2) and SO(3) is described.
Results similar to those in this paper may be established for non-compact, noncommutative groups [12]. Those results rely on the duality theorem of Tatsuuma. The Tannaka-Krein duality theorem, which is central to this paper, has been extended to compact groupoids [1]. It would be interesting to see if a corresponding bispectral theory may be constructed there.
Appendix B. Proof of Theorem 4.5. If ω preserves multiplication and complex-conjugation, then the Iwahori-Sugiura theorem shows that there exists a unique coset Hg such that ω(P α D α ) = P α D α (Hg) for all α. From this, equations (4.7) and (4.8) follow immediately. Suppose now that ω is some linear map that also satisfies eq. (4.7). Applying Lemma 4.3 to both sides of the tensor product decomposition in eq. (2.1) yields
(P σ D σ ) ⊗ (P δ D δ ) = [P σ ⊗ P δ ] C σδ [(P α1 D α1 ) ⊕ · · · ⊕ (P α k D α k )] C † σδ . (B.1)
Now apply ω to both sides to obtain
ω ((P σ D σ ) ⊗ (P δ D δ )) = [P σ ⊗ P δ ] C σδ [ω(P α1 D α1 ) ⊕ · · · ⊕ ω(P α k D α k )] C † σδ .
Because eq. (4.7) holds, we obtain that for all σ, δ,
ω ((P σ D σ ) ⊗ (P δ D δ )) = ω(P σ D σ ) ⊗ ω(P δ D δ ). (B.2)
Thus ω is multiplicative. Suppose now that the linear and multiplicative map ω also satisfies eq. (4.8). Applying ω to both sides of the identity (P α D α )(P α D α ) † = P α yields
ω(P α D α )ω (P α D α ) † = P α . (B.3)
We show that ω (P α D α ) † = ω(P α D α ) † , proving that ω preserves conjugation. Let ζ α be any nonzero row of the matrix P α D α . We establish the following three equalities:
< ω(ζ α ), ω(ζ α ) > = 1, (B.4) < ω(ζ α ), ω(ζ * α ) * > = 1, (B.5) < ω(ζ * α ), ω(ζ * α ) > = 1. (B.6)
The first equality (B.4) follows from (4.8) (recall that we are working with a convenient selection, for which P α = I(rank(α))). The second is derived from eq. (B.3). The final equality requires more work, but is a straightforward consequence of (4.8) and the linearity of ω. We give its proof later, but for now assume that it is true. The three equalities above imply that
< ω(ζ α ), ω(ζ * α ) * > = ∥ω(ζ α )∥ ∥ω(ζ * α ) * ∥. (B.7)
The Cauchy-Schwarz inequality shows that the identity above holds if and only if ω(ζ α ) = cω(ζ * α ) * , and from eq. (B.5) we see that c = 1. Thus ω(ζ α ) = ω(ζ * α ) * . Since the preceding argument applies to any nonzero row ζ α of any matrix P α D α , it follows that ω preserves conjugation. Now to prove (B.6). For any representation D α in our selection, the conjugate representation D * α is also irreducible, and there are two cases: (i) D * α = A α D α A † α for some unitary matrix A α ; (ii) D * α = A β D β A † β where β ≠ α. Assume that the first case is true. It is easy to show that any matrix A α expressing the equivalence of conjugate representations is symmetric, and thus A † α = A * α ([5, pg 15]). Furthermore, only the first rank(α) rows of D α are H-invariant, and that must also be true of the matrix D * α = A α D α A * α . Thus A α transforms the first rank(α) rows among themselves, which means that A α must have the block-diagonal form A α = A α,1 ⊕ A α,2 , where A α,1 is a symmetric unitary matrix with dimensions rank(α) × rank(α). Thus P α A α = A α P α . Putting those facts together, we obtain the following identity by virtue of ω's linearity:
ω(P α D * α ) = ω(P α A α D α A * α ) = A α ω(P α D α )A * α . (B.8)
By using the identity above and eq. (4.8), we find that ω(P α D * α )ω(P α D * α ) † = P α . Noting that P α = I(rank(α)), the previous equality for matrices implies eq. (B.6) for their nonzero rows. Case (ii) is similar.
Appendix C. Proof of Theorem 4.6.
If r and s are left translates of each other, then a 3,r = a 3,s , as follows from the definition of triple correlation and the left invariance of Haar measure. Now suppose that a 3,r = a 3,s . Lemma 3.1 shows that for all σ, δ,
[R(σ) ⊗ R(δ)] C σδ [R(α 1 ) † ⊕ · · · ⊕ R(α k ) † ] C † σδ = [S(σ) ⊗ S(δ)] C σδ [S(α 1 ) † ⊕ · · · ⊕ S(α k ) † ] C † σδ . (C.1)
If we set σ = δ = 1 and apply the same argument used in the proof of Theorem 3.2, we obtain R(1) = S(1). The maximal H-rank assumption implies that the scalar R(1) = S(1) is nonzero. Now set δ = 1 in (C.1) above. Cancelling R(1) = S(1) from both sides shows that R(σ)R(σ) † = S(σ)S(σ) † . (C.2) Hence, we have for each σ that S(σ) = R(σ)U (σ) for some unitary matrix U (σ). Substituting into eq. (C.1) reveals that
[R(σ) ⊗ R(δ)] C σδ [R(α 1 ) † ⊕ · · · ⊕ R(α k ) † ] C † σδ = [R(σ) ⊗ R(δ)] [U (σ) ⊗ U (δ)] C σδ [U (α 1 ) † ⊕ · · · ⊕ U (α k ) † ] [R(α 1 ) † ⊕ · · · ⊕ R(α k ) † ] C † σδ .
We cancel C † σδ from both sides. The identity R(α) = R(α)P α implies that R(α) † = P α R(α) † , and thus image(R(α) † ) ⊂ P α (H α ). But the rank of R(α) † equals that of P α , and thus image(R(α) † ) = P α (H α ). The last identity implies that [R(σ) ⊗ R(δ)] C σδ [P α1 ⊕ · · · ⊕ P α k ] = [R(σ) ⊗ R(δ)] [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † [P α1 ⊕ · · · ⊕ P α k ] .
Multiplying both sides from the right by C † σδ [P σ ⊗ P δ ] and using Lemma 4.3 reveals that [R(σ) ⊗ R(δ)] [P σ ⊗ P δ ] = [R(σ) ⊗ R(δ)] [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † C † σδ [P σ ⊗ P δ ] .
We substitute R(σ) = R(σ)P σ and R(δ) = R(δ)P σ into the leftmost tensor product term on the right hand side of the equation above and simplify, to obtain
[R(σ) ⊗ R(δ)] [P σ ⊗ P δ ] = [R(σ) ⊗ R(δ)] [P σ ⊗ P δ ] [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † C † σδ [P σ ⊗ P δ ] .
For each α, the identity R(α)P α = R(α) together with the assumption that R(α) has maximal H-rank imply that R(α) is one-to-one on P α (H α ). Thus the equation above implies that
P σ ⊗ P δ = [P σ ⊗ P δ ] [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † C † σδ [P σ ⊗ P δ ]
The matrix in between the two orthogonal projections on the right hand side is unitary; it is easily seen that for any unitary matrix U and any orthogonal projection P , the matrix equation P = P U P holds only if U P = P . Thus we obtain P σ ⊗ P δ = [U (σ) ⊗ U (δ)] C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † C † σδ [P σ ⊗ P δ ] (C.3)
Rearranging terms, we obtain the following identity:
U (σ) † ⊗ U (δ) † [P σ ⊗ P δ ] = C σδ U (α 1 ) † ⊕ · · · ⊕ U (α k ) † C † σδ [P σ ⊗ P δ ].
Substituting from Lemma 4.3 in the right hand side, and subsequently taking the matrix adjoint of both sides, reveals that [(P σ U (σ)) ⊗ (P δ U (δ))] = [P σ ⊗ P δ ] C σδ [(P α1 U (α 1 )) ⊕ · · · ⊕ (P α k U (α k ))] C † σδ .
Theorem 4.5, together with the Iwahori-Sugiura Theorem, shows that the identity above holds if and only if there exists a coset Hx such that P α U (α) = P α D α (Hx) for all α ∈ G. Thus for each α we have the string of identities S(α) = R(α)U (α) = R(α)P α U (α) = R(α)P α D α (Hx) = R(α)D α (x). (C.4)
The translation property of the Fourier transform now shows that s(g) = r(xg) for all g.
We use two important properties of the Fourier transform in what follows ([5, pp 73-78]): (i) the Fourier transform of any f ∈ L 1 (G) determines f uniquely up to a set of Haar measure zero; (ii) s(g) = r(xg) for all g if, and only if, S(α) = R(α)D α (x) for all α.
For any closed subgroup H of G, let Θ H (G) denote the subalgebra of Θ(G) consisting of functions that are invariant under left H-translations. For each f ∈ Θ H (G), let f (Hg) denote the common value given to elements of the coset Hg by f . The algebraic structure of Θ H (G) is revealed to a large extent by the multiplicative linear functionals ω : Θ H (G) → C, i.e., algebra homomorphisms of Θ H (G). The Iwahori-Sugiura theorem characterizes those algebra homomorphisms that preserve conjugation. Theorem 4.1 (Iwahori-Sugiura). To each algebra homomorphism ω : Θ H (G) → C that preserves conjugation, there corresponds a unique coset Hg in the quotient space G\H such that for all f ∈ Θ H (G), ω(f ) = f (Hg). (4.1)
Lemma 4.2. Any function f ∈ Θ H (G) can be expressed as a unique finite linear combination of the left H-invariant matrix coefficients of a given selection.
Lemma 4.4. Let {D α } α∈G be a convenient selection and {P α } α∈G be its projections. The nonzero coefficients in the matrices {P α D α } are precisely those coefficients of the selection that are left H-invariant.
In eq. (4.7), the matrix C σδ and the indices α 1 , . . . , α k are as in eq. (2.1). The proof is given in Appendix B. Let f be a function in L 1 (G) such that f (hg) = f (g) for all h in a given closed subgroup H of G. The translation property of the Fourier transform ensures that each Fourier coefficient F (α) satisfies the identity F (α) = F (α)D α (h) for all h in H.
Theorem 4.6. Let G be any compact group, and let H be any closed subgroup of G. Let r ∈ L 1 (G) be invariant under left H-translations. If the Fourier coefficients {R(α)} α∈G all have maximal H-rank, then a 3,r = a 3,s for some s ∈ L 1 (G) if and only if there exists x ∈ G such that s(g) = r(xg) for all g.
(The normalizer of H is the largest subgroup N H of G in which H itself is a normal subgroup.) In fact, we show that x must lie in N H in the following theorem.
Theorem 4.7. Let r, s in L 1 (G) be two left H-invariant functions whose Fourier coefficients R(α) and S(α) both have maximal H-rank for all α. Then a 3,r = a 3,s if and only if s(g) = r(xg) for some x ∈ N H .
We now see by Lemma 4.4 that P α D α (x −1 hx) = P α if and only if x −1 hx ∈ H. The last inclusion holds for all h ∈ H, and thus x −1 Hx = H, or equivalently, x ∈ N H .
That is, H 1/2 + is a positive-definite matrix such that H 1/2 + H 1/2 + = H ([15, pg 181]). The square root is constructed explicitly by diagonalizing H, i.e., finding a unitary matrix U such that H = U DU † , where D is the diagonal matrix of eigenvalues (here, all nonnegative), and setting H 1/2 + = U D 1/2 U † , with D 1/2 the diagonal matrix containing the positive square roots of the eigenvalues. Although the positive square root H 1/2 + is unique, there are in fact several possible matrix square roots, each formed by setting H 1/2 Q = U QU † , where U is the unitary matrix reducing H to diagonal form, and Q is a diagonal matrix whose entries are either the positive or negative square roots of the eigenvalues λ 1 , . . . , λ n of H: Q = diag(± √ λ 1 , . . . , ± √ λ n ). We have H 1/2 Q H 1/2 Q = H for any such Q. The second fact from matrix theory that we use is that any nonsingular matrix A has a unique polar decomposition A = H + U , where H + = (AA † ) 1/2 + , and U is a unitary matrix. The polar decomposition is unique in the sense that if A = H + U = H ′ + U ′ for positive definite matrices H + , H ′ + , and unitary matrices U , U ′ , then H + = H ′ + and U = U ′ . It is easy to see that H + as chosen above is such that det(H + ) = |det A|. If the determinant of A is real, then we may choose a square root H of (AA † ) such that A = HU , where U is unitary and det(H) = det(A). The last observation becomes important in our analysis of SO(3) below.
Let F̂(1) denote the positive square root constructed above. In polar form F (1) = F̂(1)U , and thus F̂(1) = F (1)U † . The determinant of F (1) is positive, as is the determinant of F̂(1).
F s (1) has only real coefficients. Thus the determinant of F (1) = U F s (1)U † is a real number. Assume for the moment that det[F (1)] = det[F s (1)] > 0. Let F̂(1) and F̂ s (1) denote respectively the (unique) positive square roots of F (1)F (1) † and F s (1)F s (1) † . Since F (1)F (1) † = U F s (1)F s (1) † U † , it is easily seen that F̂(1) = U F̂ s (1)U † . Now consider the polar decomposition F s (1) = HV , where H is positive definite and V is unitary. Recall from the earlier discussion for SU (2) that H = [F s (1)F s (1) † ] 1/2 + , and thus H = F̂ s (1). Since F s (1) is real-valued, V must be a real-valued orthogonal matrix. Matching determinants on both sides of the equation F s (1) = F̂ s (1)V reveals that det[V ] = +1, and thus V = g, for some g ∈ SO(3). Substitution reveals that
det[V ] = +1, where V = F̂ s (1) −1 F s (1). Instead of selecting F̂(1) to be the positive definite square root of F (1)F (1) † , we may choose F̂(1) to be any square root such that det[F̂(1)] = det[F (1)], e.g., by multiplying the top row of the positive definite square root matrix by −1 if necessary. We do not know det[F (1)] a priori, but if we store it as "side information" along with the bispectrum, then we obtain a complete rotation-invariant description for any real-valued bandlimited function on SO(3). Note that det[F (1)] remains invariant under translation on SO(3), i.e., if f (g) = s(hg), then F (1) = S(1)D 1 (h), but since det[D 1 (h)] = +1, we obtain that det[F (1)] = det[S(1)]. To sum up, any real-valued bandlimited function f on SO(3), whose coefficient matrices are all nonsingular up to the bandlimit, can be recovered completely, up to a single translation on SO(3), if both its bispectrum and the value of det[F (1)] are known, and the algorithm described above is used.
Appendix A. Proof of Lemma 4.2. Since Θ H (G) ⊂ Θ(G), each f ∈ Θ H (G) is a unique finite linear combination of matrix coefficients d pq α (not necessarily H-invariant). The product rule D α (hg) = D α (h)D α (g) for representation matrices and the linear independence of the matrix coefficients imply that in the equation above, d pp α (h) = 1 and d pℓ α (h) = 0 if ℓ ≠ p. Thus d pq α (hg) = d pq α (g) for every coefficient function in eq. (A.1).
Acknowledgments. I thank the numerous people who wrote for a copy of my Ph.D. dissertation[12], in which this work was first presented. This work was influenced in many ways by the suggestions and insights of my late supervisor, Professor Bruce M. Bennett.
M Amini, Tannaka-krein duality for compact groupoids i, representation theory. 214M. Amini, Tannaka-krein duality for compact groupoids i, representation theory, Advances in Mathematics, 214 (2007), pp. 78-91.
A O Barut, R Raczka, Theory of group representations and applications. Singapore2nd ed.A. O. Barut and R. Raczka, Theory of group representations and applications, World Sci- entific, Singapore, 2nd ed., 1986.
Some history of higher-order statistics and spectra. D Brillinger, Proceedings of IEEE Workshop on Higher Order Spectral Analysis. IEEE Workshop on Higher Order Spectral AnalysisD. Brillinger, Some history of higher-order statistics and spectra, in Proceedings of IEEE Workshop on Higher Order Spectral Analysis, 1989.
C Chevalley, Theory of Lie groups. Princeton University PressC. Chevalley, Theory of Lie groups, Princeton University Press, 1946.
Integration and harmonic analysis on groups. R E Edwards, Cambridge University PressCambridge, MAR. E. Edwards, Integration and harmonic analysis on groups, Cambridge University Press, Cambridge, MA, 1972.
Triple correlator of photoelectric fluctuations as a spectroscopic tool. H Gamo, Journal of Applied Physics. 34H. Gamo, Triple correlator of photoelectric fluctuations as a spectroscopic tool, Journal of Applied Physics, 34 (1963), pp. 875-876.
A H Guth, The inflationary universe: the quest for a new theory of cosmic origins. Perseus, Cambridge, MAA. H. Guth, The inflationary universe: the quest for a new theory of cosmic origins, Perseus, Cambridge, MA, 1997.
Abstract harmonic analysis. E Hewitt, K A Ross, Springer-VerlagIIBerlinE. Hewitt and K. A. Ross, Abstract harmonic analysis, vol. II, Springer-Verlag, Berlin, 1970.
A duality theorem for homogeneous manifolds of compact lie groups. N Iwahori, M Sugiura, Osaka Journal of Mathematics. 3N. Iwahori and M. Sugiura, A duality theorem for homogeneous manifolds of compact lie groups, Osaka Journal of Mathematics, 3 (1966), pp. 139-153.
Ffts for the 2-sphereimprovements and varations, The journal of Fourier analysis and applications. D M HealyJr, D Rockmore, P Kostelec, S S B Moore, 9D. M. Healy Jr., D. Rockmore, P. Kostelec, and S. S. B. Moore, Ffts for the 2-sphere- improvements and varations, The journal of Fourier analysis and applications, 9 (2003), pp. 341-385.
Uniqueness theorems for generalized autocorrelation functions. J I YellottJr, G J Iverson, Journal of the Optical Society of America A. 9J. I. Yellott Jr. and G. J. Iverson, Uniqueness theorems for generalized autocorrelation functions, Journal of the Optical Society of America A, 9 (1992), pp. 388-401.
Triple correlation on groups. R Kakarala, University of California, IrvinePhD thesisR. Kakarala, Triple correlation on groups, PhD thesis, University of California, Irvine, 1992.
Bispectral techniques for spherical functions. R Kakarala, B M Bennett, G J Iverson, M D'zmura, Proceedings of ICASSP. ICASSP4R. Kakarala, B. M. Bennett, G. J. Iverson, and M. D'Zmura, Bispectral techniques for spherical functions, in Proceedings of ICASSP, vol. 4, 1993, pp. 216-219.
Group theoretical methods in machine learning. R Kondor, Columbia UniversityPhD thesisR. Kondor, Group theoretical methods in machine learning, PhD thesis, Columbia University, 2008.
The theory of matrices. P Lancaster, M Tismenetsky, Academic PressSan Diego2nd ed.P. Lancaster and M. Tismenetsky, The theory of matrices, Academic Press, San Diego, 2nd ed., 1985.
The angular bispectrum of the cosmic microwave background. X Luo, Astrophysical Journal. 427X. Luo, The angular bispectrum of the cosmic microwave background, Astrophysical Journal, 427 (1994), pp. L71-L74.
M A Naimark, A I Stern, Theory of group representations. New YorkSpringer-VerlagM. A. Naimark and A. I. Stern, Theory of group representations, Springer-Verlag, New York, 1982.
H Sugiura, Unitary group representations and harmonic analysis. New YorkHalsted PressH. Sugiura, Unitary group representations and harmonic analysis, Halsted Press, New York, 1975.
D A Varshalovich, A N Moskalev, V K Kershonskii, Quantum theory of angular momentum. SingaporeWorld ScientificD. A. Varshalovich, A. N. Moskalev, and V. K. Kershonskii, Quantum theory of angular momentum, World Scientific, Singapore, 1988.
D P Zelobenko, Compact Lie groups and their representations. Providence, RIAmerican Mathematical SocietyD. P. Zelobenko, Compact Lie groups and their representations, American Mathematical Society, Providence, RI, 1973.
| []
|
[
"CBA: Contextual Quality Adaptation for Adaptive Bitrate Video Streaming (Extended Version)",
"CBA: Contextual Quality Adaptation for Adaptive Bitrate Video Streaming (Extended Version)"
]
| [
"Bastian Alt \nBioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany\n",
"Trevor Ballard [email protected] \nBioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany\n",
"Ralf Steinmetz \nBioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany\n",
"Heinz Koeppl \nBioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany\n",
"Amr Rizk amr.rizk|[email protected] \nBioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany\n"
]
| [
"Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany",
"Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany",
"Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany",
"Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany",
"Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)\nTechnische Universität Darmstadt\nGermany"
]
| []
| Recent advances in quality adaptation algorithms leave adaptive bitrate (ABR) streaming architectures at a crossroads: When determining the sustainable video quality one may either rely on the information gathered at the client vantage point or on server and network assistance. The fundamental problem here is to determine how valuable either information is for the adaptation decision. This problem becomes particularly hard in future Internet settings such as Named Data Networking (NDN) where the notion of a network connection does not exist.In this paper, we provide a fresh view on ABR quality adaptation for QoE maximization, which we formalize as a decision problem under uncertainty, and for which we contribute a sparse Bayesian contextual bandit algorithm denoted CBA. This allows taking high-dimensional streaming context information, including client-measured variables and network assistance, to find online the most valuable information for the quality adaptation. Since sparse Bayesian estimation is computationally expensive, we develop a fast new inference scheme to support online video adaptation. We perform an extensive evaluation of our adaptation algorithm in the particularly challenging setting of NDN, where we use an emulation testbed to demonstrate the efficacy of CBA compared to state-of-the-art algorithms. | 10.1109/infocom.2019.8737418 | [
"https://arxiv.org/pdf/1901.05712v1.pdf"
]
| 58,014,102 | 1901.05712 | f62a825fa33a97968f9ca5f04a70941d8e4aec73 |
CBA: Contextual Quality Adaptation for Adaptive Bitrate Video Streaming (Extended Version)
Bastian Alt
Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)
Technische Universität Darmstadt
Germany
Trevor Ballard [email protected]
Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)
Technische Universität Darmstadt
Germany
Ralf Steinmetz
Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)
Technische Universität Darmstadt
Germany
Heinz Koeppl
Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)
Technische Universität Darmstadt
Germany
Amr Rizk amr.rizk|[email protected]
Bioinspired Communication Systems Lab (BCS) ‡ Multimedia Communications Lab (KOM)
Technische Universität Darmstadt
Germany
CBA: Contextual Quality Adaptation for Adaptive Bitrate Video Streaming (Extended Version)
Recent advances in quality adaptation algorithms leave adaptive bitrate (ABR) streaming architectures at a crossroads: When determining the sustainable video quality one may either rely on the information gathered at the client vantage point or on server and network assistance. The fundamental problem here is to determine how valuable either information is for the adaptation decision. This problem becomes particularly hard in future Internet settings such as Named Data Networking (NDN) where the notion of a network connection does not exist.In this paper, we provide a fresh view on ABR quality adaptation for QoE maximization, which we formalize as a decision problem under uncertainty, and for which we contribute a sparse Bayesian contextual bandit algorithm denoted CBA. This allows taking high-dimensional streaming context information, including client-measured variables and network assistance, to find online the most valuable information for the quality adaptation. Since sparse Bayesian estimation is computationally expensive, we develop a fast new inference scheme to support online video adaptation. We perform an extensive evaluation of our adaptation algorithm in the particularly challenging setting of NDN, where we use an emulation testbed to demonstrate the efficacy of CBA compared to state-of-the-art algorithms.
I. INTRODUCTION
Video streaming services such as Netflix, YouTube, and Twitch, which constitute an overwhelming share of current Internet traffic, use adaptive bitrate streaming algorithms that try to find the most suitable video quality representation given the client's networking conditions. Current architectures use Dynamic Adaptive Streaming over HTTP (DASH) in conjunction with client-driven algorithms to adjust the quality bitrate of each video segment based on various signals, such as measured throughput, buffer filling, and derivatives thereof. In contrast, new architectures such as SAND [1] introduce network-assisted streaming via DASH-enabled network elements that provide the client with guidance, such as accurate throughput measurements and source recommendations. Given the various adaptation algorithms that exist in addition to client-side and network-assisted information, a fundamental question arises on the importance of this context information for the Quality of Experience (QoE) of the video stream. This work has been funded by the German Research Foundation (DFG) as part of the projects B4 and C3 within the Collaborative Research Center (CRC) 1053 -MAKI. + The first two authors equally contributed major parts of this article.
The problem of video quality adaptation is aggravated in Future Internet architectures such as Named Data Networking (NDN). In NDN, content is requested by name rather than location, and each node within the network will either return the requested content or forward the request. Routers are equipped with caches to hold frequently-requested content, thereby reducing the round-trip-time (RTT) of the request while simultaneously saving other network links from redundant content requests. Several attempts to make DASH-style streaming possible over NDN exist, e.g., [2], for which the key difficulty is that traditional algorithms rarely play to the strengths of NDN where the notion of a connection does not exist. Throughput, for example, is not a trivial signal in NDN as data may not be coming from the same source.
In this paper, we closely look at the problem of using context information available to the client for video quality adaptation. Note that our problem description is agnostic to the underlying networking paradigm, making it a good fit to traditional IP-based video streaming as well as NDN. In essence, we consider the fundamental problem of sequential decision-making under uncertainty where the client uses network context information received with every fetched video segment. In Fig. 1 we show a sketch where the client adaptation algorithm decides on the quality of the next segment based on a high-dimensional network context. We model the client's decision on a video segment quality as a contextual multi-armed bandit problem aiming to optimize an objective QoE metric that comprises (i) the average video quality bitrate, (ii) the quality degradation, and (iii) the video stalling.
One major challenge with incorporating high-dimensional network context information in video quality adaptation is extracting the information that is most relevant to the sought QoE metric. We note that the interactions within this context space become complicated given the NDN architecture, where the network topology and cache states influence the streaming session. Our approach introduces a sparse Bayesian contextual bandit algorithm that is fast enough to run online during video playback. The rationale behind the sparsity is that the given information, including network-assisted and client-side measured signals such as buffer filling and throughput, constitutes a high-dimensional context which is difficult to model in detail. Our intuition is that, depending on the client's network context, only a few input variables have a significant impact on QoE. Note, however, that sparse Bayesian estimation is usually computationally expensive. Hence, we develop here a fast new inference scheme to support online quality adaptation.
Our contributions in this paper can be summarized as:
• We formulate the quality adaptation decision for QoE maximization in ABR video streaming as a contextual multi-armed bandit problem.
• We provide a sparse Bayesian contextual bandit algorithm, denoted CBA, which is computationally fast enough to provide real-world video players with quality adaptation decisions based on the network context.
• We show emulation testbed results and demonstrate the fundamental differences to the established state-of-the-art quality adaptation algorithms, especially given an NDN architecture.
The developed software is provided here 1 . The remainder of this paper is organized as follows: In Sect. II, we review relevant related work on ABR video streaming and contextual bandits. In Sect. III, we present the relevant background on ABR video streaming. In Sect. IV, we model the quality adaptation problem as a contextual multi-armed bandit problem before providing a fast contextual bandit algorithm for high-dimensional information. In Sect. V, we show how ABR streaming uses CBA and define a QoE-based reward. We describe the evaluation testbed before providing emulation results in Sect. VI. Section VII concludes the paper.
II. RELATED WORK
In the following, we split the state-of-the-art related work into two categories; i.e., work on ABR quality adaptation, especially in NDN, and related work on contextual bandit algorithms with high-dimensional covariates.
Significant amounts of research have been given to finding streaming architectures capable of satisfying high bitrate and minimal rebuffering requirements at scale. CDN brokers such as Conviva [3] allow content producers to easily use multiple CDNs, and are becoming crucial to meet user demand [4]. Furthermore, the use of network assistance in CDNs has received significant attention recently as a method of directly providing network details to DASH players. SAND [1] is an ISO standard which permits DASH enabled in-network entities to communicate with clients and offer them QoS information. SDNDASH [5] is another such architecture aiming to maintain QoE stability across clients, as clients without network assistance information are prone to misjudge current network conditions, causing QoE to oscillate. Beyond HTTP, the capabilities of promising new network paradigms such as NDN pose challenges to video streaming. The authors of [2] compare three state-of-the-art DASH adaptation algorithms over NDN and TCP/IP, finding NDN performance to notably exceed that of TCP/IP given certain network conditions. New adaptation algorithms specific to NDN have also been proposed, such as NDNLive [6], which uses a simple RTT mechanism to stream live content with minimal rebuffering.
In this work, we model the video quality adaptation problem as a contextual bandit problem assuming a linear parametrization, which has successfully been used, e.g., for ad placement [7]. Another promising approach is based on cost-sensitive classification in the bandit setting [8]. Recently, [9] has discussed the use of variational inference in the bandit setting, wherein Thompson sampling is considered to cope with the exploration-exploitation trade-off. By assuming a highdimensional linear parametrization, we make use of sparse estimation techniques. High-dimensional information arises in video streaming due to the network context. Sparsity has been a major topic in statistical modeling and many Bayesian approaches have been proposed. Traditionally, double exponential priors which correspond to 1 regularization have been used. However, these priors often fail due to limited flexibility in their shrinkage behavior. Other approaches that induce sparsity include 'spike-and-slab' priors [10] and continuous shrinkage priors. Between these two, continuous shrinkage priors have the benefit of often being computationally faster [11]. For our approach we use the Three Parameter Beta Normal (TPBN) continuous shrinkage prior introduced by [11], which generalizes diverse shrinkage priors, e.g, the horseshoe prior [12], the Strawderman-Berger prior, the normal-exponentialgamma prior, and the normal-gamma prior.
III. ADAPTIVE BITRATE STREAMING: DECISIONS UNDER UNCERTAINTY
In this section, we review the established model for quality adaptation in ABR video streaming and highlight the changes that arise when streaming over NDN.
A. Adaptive Bitrate Streaming: A Primer
In adaptive bitrate streaming, the content provider offers multiple qualities of the same content to clients, who decide which one to pick according to their own client-side logic. Each video is divided into T consecutive segments which represents some fixed L seconds of content. These segments are encoded at multiple bitrates corresponding to the perceived average segment quality. In practice, segment lengths are often chosen to be two to ten seconds [13] with several distinct quality levels to choose from, such as 720p and 1080p. Let V represent the set of all available video qualities, such that V = {v(1), v (2), ..., v (K )} and v (i) > v (j) for all i > j; i.e., a higher index indicates a higher bitrate and better quality. Let the t-th segment encoded at the i-th quality be denoted s t (i).
Received video segments are placed into a playback buffer which contains downloaded, unplayed video segments. Let the number of seconds in the buffer when segment t is received be B t , and let the playback buffer size BUF_MAX be the maximum allowed seconds of video in the buffer. By convention, we define B 0 = 0 and write the recursion of the buffer filling as B t = max{B t−1 +L−ξ t (i), L}, where ξ t (i) denotes the fetch time for s t (i). A stalling event is ascribed to the t-th segment when ξ t (i) > B t−1 . Note that the recursion above holds only if B t−1 + L < BUF_MAX; i.e., the client is blocked from fetching new segments if the playback buffer is full. If this occurs, the client idles for exactly L seconds before resuming segment fetching. In some related work [13], BUF_MAX is chosen between 10 and 30 seconds.
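A minimal sketch of this buffer dynamic, with fetch times drawn at random purely for illustration, is given below; it counts a stalling event whenever a segment's fetch time exceeds the buffer level at the start of the download, and idles for L seconds when the buffer cannot accommodate another segment.

```python
# Playback buffer recursion B_t = max{B_{t-1} + L - xi_t(i), L} with stalling
# and full-buffer idling; fetch times are random placeholders.
import random

L, BUF_MAX = 2.0, 30.0           # segment length and maximum buffer (seconds)
buffer_level, stalls = 0.0, 0    # B_0 = 0

for t in range(1, 101):
    if buffer_level + L >= BUF_MAX:              # buffer full: idle for L seconds
        buffer_level -= L
    fetch_time = random.uniform(0.5, 4.0)        # xi_t(i), illustrative only
    if fetch_time > buffer_level:
        stalls += 1                               # stalling ascribed to segment t
    buffer_level = max(buffer_level + L - fetch_time, L)

print(f"stalling events: {stalls}")
```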
To allow the client to select a segment in the i-th quality, the client fetches a Media Presentation Description (MPD), an XML-like file with information on the available video segments and quality levels, during session initialization. After obtaining the MPD, the client may begin to request each segment according to its adaptation algorithm. In general, uncertainty exists over the segment fetch time. The most prevalent quality adaptation algorithms take throughput estimates [14] or the current buffer filling B t [15], or combinations and functions thereof to make a decision on the quality of the next segment s t+1 (i). The decision aims to find the segment quality which maximizes a QoE metric, such as the average video bitrate, or compound metrics taking the bitrate, bitrate variations, and stalling events into account.
B. Streaming over Named Data Networking
In NDN, consumers or clients issue interests which are forwarded to content producers, i.e., origin servers, via cachingenabled network routers. These interests are eventually answered with data provided by the producer or an intermediary router cache. To request a video, a consumer will first issue an interest for the MPD of the video. Each s t (i) is given a name in the MPD, e.g., of the form /video ID/quality level/segment number. The client issues an interest for each data packet when requesting a particular segment. Since NDN data packets are of a small, fixed size, higher-quality video segments will require more data packets to encode. We do not permit the client to drop frames, so all data packets belonging to some segment s t (i) must be in the playback buffer to watch that segment.
IV. A FAST CONTEXTUAL BANDIT ALGORITHM FOR HIGH DIMENSIONAL COVARIATES
In this work, we model the problem of video quality adaptation as a sequential decision-making problem under uncertainty, for which a successful framework is given by the multi-armed bandit problem dating back to [16]. The contextual bandit problem [17] is an extension to the classic problem, where additional information is revealed sequentially. The decision-making can therefore be seen as a sequential game.
At decision step t, i.e., at the t-th segment, a learner observes a D-dimensional context variable x_t(a) ∈ R^D for a set of K actions a ∈ {1, ..., K}. Here, the actions map to the K video qualities that the client chooses from. The client chooses an action a_t, for which it observes a reward r_t(a_t). This reward can be measured in terms of low-level metrics such as fetching time or, as we consider later, QoE. The decision making is performed over a typically unknown decision horizon T, i.e., t ∈ {1, ..., T}. Therefore, the learner tries to maximize the cumulative reward Σ_{t=1}^{T} r_t(a_t) until the decision horizon. It is important to note that after each decision the learner only observes the reward r_t(a_t) associated with the played action a_t; hypothetical rewards for other actions a ≠ a_t are not revealed to the learner. Next, we model the contextual bandit problem under the linearizability assumption, as introduced in [18]. Here, we assume that a parameter β*(a) ∈ R^D controls the mean reward of each action a at decision step t as E[r_t(a)] = x_t(a)^T β*(a). We introduce the regret R_T of an algorithm to evaluate its performance as
R_T = Σ_{t=1}^{T} r_t(a_t*) − Σ_{t=1}^{T} r_t(a_t),   (1)
with a_t* = arg max_a x_t(a)^T β*(a). The regret compares the cumulative reward of the algorithm against the cumulative reward achievable in hindsight. In order to develop algorithms with a small regret in the linear setting, many different strategies have been proposed. Such algorithms include techniques based on forced sampling [19], Thompson sampling [20], and the upper confidence bound (UCB) [7], [18], [21], [22].
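To make the regret definition (1) concrete, the following toy simulation draws contexts and mean rewards under the linearizability assumption and accumulates the regret of a deliberately naive random policy; the dimensions, noise level, and the policy itself are illustrative assumptions, not the algorithm proposed in this paper.

import numpy as np

rng = np.random.default_rng(0)
T, K, D = 1000, 5, 10
beta_star = rng.normal(size=(K, D))          # unknown per-action parameters beta*(a)

regret = 0.0
for t in range(T):
    x = rng.normal(size=(K, D))              # one context vector per action
    mean_rewards = np.einsum("kd,kd->k", x, beta_star)   # E[r_t(a)] = x_t(a)^T beta*(a)
    a = rng.integers(K)                      # naive policy: pick an arm uniformly at random
    reward = mean_rewards[a] + rng.normal(scale=0.1)     # only r_t(a_t) would be observed
    regret += mean_rewards.max() - mean_rewards[a]       # r_t(a_t*) - r_t(a_t) in expectation

print(f"cumulative (pseudo-)regret of the random policy: {regret:.1f}")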
Network-assisted video streaming environments provide high-dimensional context information, so it is natural to assume a sparse parameter β*(a). We therefore impose a sparsity-inducing prior on the sought regression coefficients β(a). To cope with the contextual bandit setting, we start with the Bayes-UCB algorithm with linear bandits introduced in [23] and develop a version which fits the given problem. Since previously developed sparse Bayesian inference algorithms are computationally expensive, we develop a fast new inference scheme for the contextual bandit setting.
A. The Contextual Bayes-UCB Algorithm -CBA
The Contextual Bayes-UCB algorithm (CBA-UCB) selects in each round the action a which maximizes the index
q_t(a) = Q(1 − 1/(αt), ζ_{t−1}(a)),   (2)
where α is a width parameter for the UCB and Q(t, ρ) is the quantile function associated with the distribution ρ, i.e., P(X ≤ Q(t, ρ)) = t, with X ∼ ρ. Additionally, we denote ζ t−1 (a) as the posterior distribution of the mean reward
ζ_{t−1}(a) = p(x_t(a)^T β(a) | D_{t−1}(a)),   (3)
where D t−1 (a) is the set of data points of contexts and rewards for which action a was previously played
D_{t−1}(a) = {(x_{t′}(a), r_{t′}(a_{t′})) : a_{t′} = a, 1 ≤ t′ ≤ t − 1}.   (4)
In the following subsections, we derive a Gaussian distribution for the posterior distribution of the regression coefficients, p(β(a) | D_{t−1}(a)) = N(β(a) | μ_{β(a)}, Σ_{β(a)}). In this case the index in (2) reduces to

q_t(a) = x_t(a)^T μ_{β(a)} + Q(1 − 1/(αt), N(0, 1)) √(x_t(a)^T Σ_{β(a)} x_t(a)),   (5)

Assuming a linear regression model r = Xβ + ε, with i.i.d. noise ε ∼ N(ε | 0, I_M/σ⁻²), the regression response r follows the likelihood

p(r | β, σ⁻²) = ∏_{m=1}^{M} p(r_m | β, σ⁻²) = ∏_{m=1}^{M} N(r_m | x_m^T β, 1/σ⁻²),
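A minimal sketch of how the index (5) could be evaluated and used to pick an action is given below; the posterior means and covariances are placeholders (in the full algorithm they come from the sparse Bayesian regression developed next), SciPy's erfinv stands in for the standard normal quantile, and the dimensions and α are illustrative.

import numpy as np
from scipy.special import erfinv

def cba_ucb_index(x, mu, Sigma, t, alpha=1.0):
    """q_t(a) = x^T mu + sqrt(2) * erfinv(1 - 2/(alpha*t)) * sqrt(x^T Sigma x), cf. Eq. (5)."""
    quantile = np.sqrt(2.0) * erfinv(1.0 - 2.0 / (alpha * t))
    return x @ mu + quantile * np.sqrt(x @ Sigma @ x)

D, K, t = 10, 5, 20
rng = np.random.default_rng(1)
contexts = rng.normal(size=(K, D))
mu = [np.zeros(D) for _ in range(K)]     # placeholder posterior means per action
Sigma = [np.eye(D) for _ in range(K)]    # placeholder posterior covariances per action

indices = [cba_ucb_index(contexts[a], mu[a], Sigma[a], t) for a in range(K)]
a_t = int(np.argmax(indices))            # action played at decision step t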
where σ −2 is the noise precision for the regression problem. For the application of video streaming with high-dimensional context information, we use a sparsity inducing prior over the regression coefficients β β β to find the most valuable context information. We use here the Three Parameter Beta Normal (TPBN) continuous shrinkage prior introduced by [11] , which puts on each regression coefficient β j , j ∈ {1, ... , D}, the following hierarchical prior
β_j ∼ N(β_j | 0, τ_j/σ⁻²),  τ_j ∼ Gam(τ_j | a₀, λ_j),  λ_j ∼ Gam(λ_j | b₀, φ),   (6)
For readability, we drop the dependency on a of the regression coefficients β(a) in the following. Here, τ_j is a Gamma-distributed³ continuous shrinkage parameter that shrinks β_j as τ_j gets small. The parameter λ_j controls τ_j via a global shrinkage parameter φ. For appropriate choices of the hyper-parameters a₀ and b₀, different shrinkage priors are obtained; for example, we use a₀ = 1/2, b₀ = 1/2, which corresponds to the horseshoe prior [12]. For notational simplicity, we collect the parameters λ_j, τ_j for the context dimensions j ∈ {1, ..., D} in the column vectors λ, τ, respectively.
For the estimation of the global shrinkage parameter φ, an additional hierarchy is used: φ ∼ Gam(φ | 1/2, ω) and ω ∼ Gam(ω | 1/2, 1). For the noise precision, a Gamma prior is used, σ⁻² ∼ Gam(σ⁻² | c₀/2, d₀/2), with hyper-parameters c₀ and d₀. The graphical model [24] of this generative model is depicted in Fig. 3.
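For intuition, the sketch below samples coefficients from the TPBN hierarchy in (6) with the horseshoe choice a₀ = b₀ = 1/2; Gamma variates follow the shape/rate convention of the text (NumPy expects shape/scale, hence the 1/rate arguments), and the noise precision is fixed to 1 here instead of drawing it from its vague Gamma prior.

import numpy as np

rng = np.random.default_rng(2)
D = 20
a0 = b0 = 0.5                                        # horseshoe-type hyper-parameters

omega = rng.gamma(shape=0.5, scale=1.0)              # omega ~ Gam(1/2, 1)
phi = rng.gamma(shape=0.5, scale=1.0 / omega)        # phi ~ Gam(1/2, omega)
lam = rng.gamma(shape=b0, scale=1.0 / phi, size=D)   # lambda_j ~ Gam(b0, phi)
tau = rng.gamma(shape=a0, scale=1.0 / lam)           # tau_j ~ Gam(a0, lambda_j)
noise_prec = 1.0                                     # sigma^{-2}; fixed for this illustration
beta = rng.normal(loc=0.0, scale=np.sqrt(tau / noise_prec))  # beta_j ~ N(0, tau_j / sigma^{-2})
print(np.round(beta, 3))                             # most coefficients are strongly shrunk toward zero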
C. Variational Bayesian Inference (VB)
In the following, we review the general approximate inference scheme of mean field variational Bayes (VB) and the application to the linear regression with TPBN prior as proposed in [11]. Thereafter, we leverage stochastic variational inference (SVI) to develop a new contextual bandit algorithm.
Since exact inference of the posterior distribution p(β, σ⁻², τ, λ, φ, ω | r) is intractable [25], we apply approximate inference in the form of variational Bayes (VB). We use a mean field variational approximation, with q(β, σ⁻², τ, λ, φ, ω) = q(β) q(σ⁻²) q(τ) q(λ) q(φ) q(ω)
for the approximate distribution. The variational distributions are obtained by minimizing the Kullback-Leibler (KL) divergence between the variational distribution and the intractable posterior distribution
KL(q(β, σ⁻², τ, λ, φ, ω) ‖ p(β, σ⁻², τ, λ, φ, ω | r)) = E_q[log q(β, σ⁻², τ, λ, φ, ω)] − E_q[log p(β, σ⁻², τ, λ, φ, ω, r)] + log p(r).   (7)
By Jensen's inequality, a lower bound on the marginal likelihood (evidence) can be found
L(q) = E_q[log p(β, σ⁻², τ, λ, φ, ω, r)] − E_q[log q(β, σ⁻², τ, λ, φ, ω)] ≤ log p(r).   (8)
Algorithm 1: MAIN-ROUTINE CBA-UCB with Gaussian Posterior
  Input: T (decision horizon), α ∈ R⁺ (UCB width parameter), μ_{β(a)}, Σ_{β(a)} (initial parameters for all actions a ∈ {1, ..., K})
  for t = 1 to T do
    observe contexts x_t(a), ∀a ∈ {1, ..., K}
    for each action a = 1 to K do
      compute q_t(a) = x_t(a)^T μ_{β(a)} + √2 erf⁻¹(1 − 2/(αt)) √(x_t(a)^T Σ_{β(a)} x_t(a))
    end
    play action a_t = arg max_{a=1,...,K} q_t(a)
    observe reward r_t(a_t)
    call a subroutine to update the estimates of μ_{β(a_t)} and Σ_{β(a_t)}
  end

Algorithm 2: SUB-ROUTINE SVI
  Input: X (M × D design matrix), r (M × 1 response vector), a₀, b₀, c₀, d₀ (hyper-parameters), γ_n (step size schedule)
  Output: μ_β, Σ_β (updated parameters)
  initialize all natural parameters η_β, η_{σ⁻²}, η_{τ,j}, η_{λ,j}, η_φ, η_ω
  n ← 1 (iteration step)
  while ELBO not converged do
    draw a random sample (x_m, r_m) from (X, r)
    calculate the intermediate estimates with Eq. (15)
    perform the gradient update with Eq. (14) and step size γ_n
    update the variational parameters with Eq. (16)
    n ← n + 1
  end
  return μ_β, Σ_β

Algorithm 3: SUB-ROUTINE VB
  Input: X (M × D design matrix), r (M × 1 response vector), a₀, b₀, c₀, d₀ (hyper-parameters)
  Output: μ_β, Σ_β (updated parameters)
  initialize the variational parameters with Eq. (11)
  while ELBO not converged do
    update the variational parameters with Eq. (10)
    update the variational moments with Eq. (11)
  end
  return μ_β, Σ_β

Algorithm 4: SUB-ROUTINE OS-SVI
  Input: x_m (context vector for the last played action), r_m (reward for the last played action), a₀, b₀, c₀, d₀ (hyper-parameters), γ (step size), t (current decision step)
  Output: μ_β, Σ_β (updated parameters)
  calculate the intermediate estimates with Eq. (15) and M ← t
  do the gradient update with Eq. (14) and step size γ
  update the variational parameters with Eq. (16)
  return μ_β, Σ_β

The evidence lower bound (ELBO) L(q) is used for solving the optimization problem over the KL divergence (7), since maximizing L(q) is equivalent to minimizing the KL divergence. Using calculus of variations [25], the solution of the optimization problem can be found with the following optimal variational distributions⁴

q(β) = N(β | μ_β, Σ_β),
q(σ⁻²) = Gam(σ⁻² | c*, d*),
q(τ) = ∏_{j=1}^{D} GIG(τ_j | p_{τ,j}, a_{τ,j}, b_{τ,j}),
q(λ) = ∏_{j=1}^{D} Gam(λ_j | a_{λ,j}, b_{λ,j}),
q(φ) = Gam(φ | a_φ, b_φ),
q(ω) = Gam(ω | a_ω, b_ω),   (9)
4 GIG (x | p, a, b) denotes the generalized inverse Gaussian distribution, see Appendix A.
with the parameters of the variational distributions and the moments ⟨β⟩ = μ_β, ⟨ββ^T⟩ = Σ_β + μ_β μ_β^T,

μ_β = (X^T X + T⁻¹)⁻¹ X^T r,
Σ_β = ⟨σ⁻²⟩⁻¹ (X^T X + T⁻¹)⁻¹,  T⁻¹ = diag(⟨τ₁⁻¹⟩, ..., ⟨τ_D⁻¹⟩),
c* = (M + D + c₀)/2,
d* = (r^T r − 2 r^T X⟨β⟩ + Σ_{m=1}^{M} x_m^T ⟨ββ^T⟩ x_m + Σ_{j=1}^{D} ⟨β_j²⟩⟨τ_j⁻¹⟩ + d₀)/2,
p_{τ,j} = a₀ − 1/2,  a_{τ,j} = 2⟨λ_j⟩,  b_{τ,j} = ⟨β_j²⟩⟨σ⁻²⟩,
a_{λ,j} = a₀ + b₀,  b_{λ,j} = ⟨τ_j⟩ + ⟨φ⟩,
a_φ = D b₀ + 1/2,  b_φ = ⟨ω⟩ + Σ_{j=1}^{D} ⟨λ_j⟩,
a_ω = 1,  b_ω = ⟨φ⟩ + 1,   (10)

⟨σ⁻²⟩ = c*/d*,  ⟨λ_j⟩ = a_{λ,j}/b_{λ,j},  ⟨φ⟩ = a_φ/b_φ,  ⟨ω⟩ = a_ω/b_ω,
v_j = √(a_{τ,j} b_{τ,j}),
⟨τ_j⟩ = (b_{τ,j}/a_{τ,j})^{1/2} K_{p_{τ,j}+1}(v_j) / K_{p_{τ,j}}(v_j),
⟨τ_j⁻¹⟩ = (a_{τ,j}/b_{τ,j})^{1/2} K_{1−p_{τ,j}}(v_j) / K_{−p_{τ,j}}(v_j),   (11)
where K_p(·) is the modified Bessel function of the second kind. The calculation of the ELBO L is provided in Appendix B. Fig. 4 shows the probabilistic graphical model of the mean field approximation for the generative model. Note the factorization of the random variables, which enables tractable posterior inference, in comparison to the probabilistic graphical model for the coupled Bayesian regression in Fig. 3. A local optimum of the ELBO L can be found by cycling through the coupled moments of the variational distributions. This corresponds to a coordinate ascent algorithm on L. The corresponding algorithm is shown in Fig. 2 Alg. 3.
D. Stochastic Variational Inference (SVI)
Next, we present a new posterior inference scheme with TPBN prior based on stochastic variational inference (SVI) [26]. We optimize the ELBO L using stochastic approximation [27], where we calculate the natural gradient with respect to the natural parameters η of the exponential-family mean field variational distributions.
Consider the mean field approximation q(θ) = ∏_m q(θ_m) for the intractable posterior distribution p(θ | D), where θ and D denote the tuple of parameters and the data, respectively. For each factor q(θ_m), assuming it belongs to the exponential family, the probability density is

q(θ_m) = h(θ_m) exp{η^T S(θ_m) − A(η)}.

Here, h(θ_m) denotes the base measure, η are the natural parameters, S(θ_m) are the sufficient statistics, and A(η) is the log-normalizer.
We compute the natural gradient of the ELBO L with respect to the natural parameters of the factorized variational distributions for each variational factor q(θ_m). The natural gradient computes to

∇̂_η L = η̄ − η,   (12)
where η̄ = E_q[η′]. The parameter η′ is the natural parameter of the full conditional distribution p(θ_m | θ_{−m}, D), where θ_{−m} denotes the tuple of all variables but θ_m. Using a gradient update, the variational approximation can be found as
η^(n+1) = η^(n) + γ_n ∇̂_η L,   (13)
where n denotes the iteration step of the algorithm and γ n is a step size parameter.
Random subsampling of the data enables constructing a stochastic approximation algorithm. For this, η̄ is replaced by an unbiased estimate η̂, which yields a stochastic gradient ascent algorithm on the ELBO L of the form

η^(n+1) = (1 − γ_n) η^(n) + γ_n η̂.   (14)
For the step size γ_n we use 1/n. In the case of the regression problem, we sample one data point (x_m, r_m) from the set of observed data points and replicate it M times to calculate η̂. The intermediate estimates of the natural parameters are then obtained by
η̂_β = (⟨σ⁻²⟩ M x_m r_m,  −(1/2)⟨σ⁻²⟩(M x_m x_m^T + T⁻¹)),
η̂_{σ⁻²} = ((M + D + c₀)/2 − 1,  −(M r_m² − 2 M r_m x_m^T⟨β⟩ + M x_m^T⟨ββ^T⟩x_m + Σ_{j=1}^{D} ⟨β_j²⟩⟨τ_j⁻¹⟩ + d₀)/2),
η̂_{τ,j} = (a₀ − 3/2,  −⟨λ_j⟩,  ⟨β_j²⟩⟨σ⁻²⟩/2),
η̂_{λ,j} = (a₀ + b₀ − 1,  −⟨τ_j⟩ − ⟨φ⟩),
η̂_φ = (D b₀ − 1/2,  −⟨ω⟩ − Σ_{j=1}^{D} ⟨λ_j⟩),
η̂_ω = (0,  −⟨φ⟩ − 1).   (15)
The derivation is provided in Appendix C. The transformation from the natural parametrization to the variational parametrization is calculated using
(μ_β, Σ_β) = (−(1/2)[η_β^(2)]⁻¹ η_β^(1),  −(1/2)[η_β^(2)]⁻¹),
(c*, d*) = (η_{σ⁻²}^(1) + 1,  −η_{σ⁻²}^(2)),
(p_{τ,j}, a_{τ,j}, b_{τ,j}) = (η_{τ,j}^(1) + 1,  −2η_{τ,j}^(2),  2η_{τ,j}^(3)),
(a_{λ,j}, b_{λ,j}) = (η_{λ,j}^(1) + 1,  −η_{λ,j}^(2)),
(a_φ, b_φ) = (η_φ^(1) + 1,  −η_φ^(2)),
(a_ω, b_ω) = (η_ω^(1) + 1,  −η_ω^(2)),   (16)
and the moments can then be calculated with (11). We denote by η^(i) the i-th element of the tuple of natural parameters η. The gradient update (14) with random subsampling is performed until the ELBO L converges. For an algorithmic description of SVI see Fig. 2 Alg. 2.
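A minimal sketch of the stochastic natural-gradient update (14) with step size γ_n = 1/n is shown below for a single exponential-family factor; the "intermediate estimate" values are stand-ins for the quantities computed via (15), and the toy target is purely illustrative.

import numpy as np

def svi_step(eta, eta_hat, n):
    """eta^(n+1) = (1 - gamma_n) * eta^(n) + gamma_n * eta_hat, with gamma_n = 1/n, cf. Eq. (14)."""
    gamma_n = 1.0 / n
    return (1.0 - gamma_n) * eta + gamma_n * eta_hat

rng = np.random.default_rng(4)
eta = np.array([0.0, -0.5])                  # current natural parameters of one factor
for n in range(1, 200):
    # noisy unbiased estimate of the full-conditional natural parameters (stand-in for Eq. (15))
    eta_hat = np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=2)
    eta = svi_step(eta, eta_hat, n)

print(eta)   # converges toward the target natural parameters [1.0, -0.5]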
E. One Step Stochastic Variational Inference (OS-SVI)
Since the optimization of the ELBO L until convergence with both VB and SVI is computationally expensive, we present a novel one-step SVI (OS-SVI) algorithm for the bandit setting. In each round of OS-SVI the learner observes a context and a reward (x_t(a), r_t(a)) based on the taken action a_t = a. This data point is used to update the variational parameters of the a-th regression coefficients β(a) by going one step in the direction of the natural gradient of the ELBO L; for this, the intermediate estimates (15) are calculated based on t replicates of the observed data point (x_t(a), r_t(a)). Thereafter, the stochastic gradient update is performed with (14). By transforming the natural parameters back to their corresponding parametric form (16), the updated mean μ_{β(a)} and covariance matrix Σ_{β(a)} can be found. This update step is computationally significantly faster than using VB or SVI. The OS-SVI subroutine is described in Fig. 2 Alg. 4.
F. Accuracy and Computational Speed of the CBA-UCB Algorithms
For the numerical evaluation of the CBA-UCB with Three Parameter Beta Normal prior, we first create data based on the linearizability assumption. We use a problem with decision horizon T = 1000, D = 20 dimensions, and K = 20 actions. We use two experimental setups, one with a dense regression coefficient vector and one with a sparse regression coefficient vector, i.e., only five regression coefficients are nonzero.
We compare the developed algorithm CBA-UCB using the variants VB, SVI and OS-SVI with two base-line algorithms: LinUCB [7] and CGP-UCB [22]. For the CGP-UCB, we use independent linear kernels for every action. Fig. 5 and Fig. 6 show the average regret (1) for the dense and the sparse setting, respectively. For the sparse setting expected in high-dimensional problems such as network-assisted video streaming, CBA-UCB with VB yields the smallest regret. We observe in Fig. 5 that in the dense setting CGP-UCB obtains a high performance which is closely followed by CBA-UCB with VB. Note that CGP-UCB performs well, as Gaussian process regression with a linear kernel corresponds to a dense Bayesian regression with marginalized regression coefficients, and therefore matches the model under which the dense data has been created.
In Tab. 1 we show the run-times of the algorithms, where we observe that the run-times for CBA-UCB with VB / SVI and the CGP-UCB baseline are impractically high. Further, this run-time performance deteriorates as the dimensions of the context grow, since the computational bottleneck of both VB and SVI is the repeated inversion of D × D matrices, see Fig. 7. Fig. 8 shows the scaling of the run-time with the decision horizon T with an identical setup as in Tab. 1. CGP-UCB scales badly with T, as the kernel matrix of size M_a × M_a is inverted at every time step. Here, M_a denotes the number of already observed contexts and rewards for decision a. Since the decision making has to be made in the order of a few hundred milliseconds for video streaming applications, neither CBA-UCB with VB nor CGP-UCB can be computed within this timing restriction. Therefore, we resort to the OS-SVI variant of the CBA algorithm, which empirically obtains a much smaller regret than the fast LinUCB baseline algorithm, but still retains a comparable run-time performance⁵. This renders the use of CBA with One Step Stochastic Variational Inference for network-assisted video quality adaptation feasible.
V. VIDEO QUALITY ADAPTATION AS A CONTEXTUAL BANDIT PROBLEM
In the following, we model ABR streaming as a contextual bandit problem where we use our developed CBA algorithm for video quality adaptation. The action set corresponds to the set of available bitrates V such that action a t ∈ {1, ... , K } represents the decision to request quality v (a t ) for the t-th segment; i.e., to request the segment s t (a t ). Below we formulate a real-valued segment-based QoE function to represent the reward r t (a t ) obtained by performing a t . Furthermore, we let x t (a) represent the network context vector corresponding to an available action a at segment t. At each t, therefore, there will be K unique context vectors available.
A. Online Video Quality Adaptation using CBA

CBA performs online video quality adaptation by calculating the index presented in (5) for each available action after observing the context vector x_t(a) of the action to determine the optimal bitrate to request for the next segment t. There are no constraints on the contents of the context vectors, allowing CBA to learn with any information available in the networking environment. Furthermore, each context feature may be either global or action-specific; for example, the current buffer filling percentage or the last 50 packet RTTs at bitrate v(a), respectively. The action a_t with the largest computed index is chosen, and a request goes out for s_t(a_t). Once s_t(a_t) is received, its QoE value below is calculated and fed to CBA as the reward r_t(a_t). CBA then updates its internal parameters before observing the next set of context vectors and repeating the process for segment t + 1, until the video ends at segment T.
The performance of CBA depends upon several hyperparameters. In the description in Fig. 2, Alg. 1, we choose α = 1 as it was shown to yield the most promising results [23]. As mentioned in Sect. IV, we use a₀ = b₀ = 1/2 to obtain the horseshoe shrinkage prior. We let c₀ = d₀ = 10⁻⁶; we choose c₀ and d₀ to be small nonzero values such that a vague prior is obtained.
B. Reward Formulation: Objective QoE
The calculated QoE metric is the feedback used by CBA to optimize the quality adaptation strategy. As QoE scores for a video segment may vary among users, we resort in this work to an objective QoE metric similar to [28] which is derived from the following set of factors:
1) Video quality: The bitrate of the segment, v(a_t) ∈ V.
2) Decline in quality: If the current segment is at a lower bitrate than the previous one, [v(a_{t−1}) − v(a_t)]⁺ for two back-to-back segments, where [x]⁺ denotes max{x, 0}.
3) Rebuffer time: The amount of time spent with an empty buffer after choosing v(a_t).
The rationale behind using the decline in quality, in contrast to the related work that counts quality variations, is that we do not want to penalize CBA if the player strives for higher qualities without risk of rebuffering. The importance of each component may vary based on the specific user or context, so, similar to [28], we define the QoE of a segment as a weighted sum of the above factors. Let the rebuffer time G(v(a_t)) be the amount of time spent rebuffering after choosing v(a_t). We define the QoE then as:
QoE(s_t(a_t)) = w₁ v(a_t) − w₂ [v(a_{t−1}) − v(a_t)]⁺ − w₃ G(v(a_t)),   (17)
where w₁, w₂, and w₃ are non-negative weights corresponding to the importance of the video quality, decline in quality, and rebuffer time, respectively. For a comparison of several instantiations of these weights, see [28]. Note that the above QoE metric is independent from CBA; the bandit is only given the scalar result of the calculation. CBA is able to take arbitrary QoE metrics as input as long as these comprise a real-valued function producing the reward.
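The per-segment reward in (17) is straightforward to compute; the sketch below is one way to do so, with the weights set to the HIGH_QUALITY_WEIGHTS values used later and the rebuffer-time measurement assumed to be supplied by the player.

def segment_qoe(bitrate, prev_bitrate, rebuffer_time, w1=6.0, w2=2.0, w3=2.0):
    """QoE(s_t(a_t)) per Eq. (17); bitrates in Mbps, rebuffer time in seconds."""
    quality_decline = max(prev_bitrate - bitrate, 0.0)   # [v(a_{t-1}) - v(a_t)]^+
    return w1 * bitrate - w2 * quality_decline - w3 * rebuffer_time

# e.g. dropping from 3.0 Mbps to 2.1 Mbps with 0.4 s of rebuffering:
reward = segment_qoe(bitrate=2.1, prev_bitrate=3.0, rebuffer_time=0.4)
print(reward)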
VI. EVALUATION OF QUALITY ADAPTATION IN NDN
To evaluate the performance of CBA and compare it with its throughput-based (TBA) and buffer-based (BBA) adaptation peers, we emulate two NDN topologies: the doubles topology, shown in Fig. 9, and the full topology, shown in Fig. 10. The topologies are built using an extension of the Containernet project⁷ which allows the execution of Docker hosts as nodes in the Mininet emulator. The NDN clients use a DASH player implemented with libdash, based on the code from [2], with Interest Control Protocol (ICP) parameters of γ_ICP = 2, β_ICP = 0.5, and initialWindow = 300. We note that traffic burstiness can vary significantly depending on the ICP parameters used.
The clients begin playback simultaneously, where they stream the first 200 seconds of the BigBuckBunny video encoded in two-second H.264-AVC segments offered at the K = 5 quality bitrates {1, 1.5, 2.1, 3, 3.5}Mbps, with a playback buffer size of 30 seconds. All containers run instances of the NDN Forwarding Daemon (NFD) with the access strategy, and repo-ng is used to host the video on the servers and caches.
In the following, we compare the performance of CBA in the VB and OS-SVI variants, in addition to the baseline algorithm LinUCB [7]. We also examine the performance of two state-of-the-art BBA and TBA algorithms, i.e., BOLA [15] and PANDA [14], respectively. There are many adaptation algorithms in the literature, some of which use BBA and TBA simultaneously, including [28], [29], [30], and [31]; however, BOLA and PANDA were chosen because they are widely used and achieve state-of-the-art performance in standard HTTP environments. Buffer filling percentage and quality-specific segment packet RTTs are provided to the client as context. Furthermore, we added a numHops tag to each Data packet to track the number of hops from the Data origin to the consumer.
We track the RTTs and number of hops of the last 50 packets of each segment received by the client, in accordance with measurements from [32]. If a segment does not contain 50 packets, results from the existing packets are resampled. As a result, each CBA algorithm is given a D = 101-dimensional context vector consisting of the buffer fill percentage, packet RTTs, and numHops for each of the K = 5 available qualities.
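One possible way to assemble such a context vector (1 buffer value + 50 RTTs + 50 numHops values per quality) is sketched below; the resampling of short segments and the example measurements are assumptions for illustration.

import numpy as np

def build_context(buffer_fill_pct, rtts, num_hops, n=50, rng=np.random.default_rng()):
    """Return a (1 + 2n)-dimensional context vector for one quality level."""
    rtts = np.asarray(rtts, dtype=float)
    num_hops = np.asarray(num_hops, dtype=float)
    if len(rtts) < n:                       # resample existing packets if fewer than n
        rtts = rng.choice(rtts, size=n)
    if len(num_hops) < n:
        num_hops = rng.choice(num_hops, size=n)
    return np.concatenate(([buffer_fill_pct], rtts[-n:], num_hops[-n:]))

x = build_context(buffer_fill_pct=0.62, rtts=[12.1, 13.4, 11.8], num_hops=[2, 2, 3])
assert x.shape == (101,)                    # D = 101 as used in the evaluation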
A. Results on the Doubles Topology
We modulate the capacity of the bottleneck link using truncated normal distributions. The link capacity is hence drawn with a mean of 7 Mbps, where it stays unchanged for a period whose length is drawn with a mean of 5 s. The weights in Eq. (17) are set to w₁ = 6, w₂ = 2, and w₃ = 2, emphasizing the importance of the average quality bitrate without allowing a large amount of rebuffering to take place. We note that the use of subjective quality evaluation tests for different users to map these weights to QoE metrics via, e.g., the mean opinion score (MOS), is out of the scope of this work. Examining Tab. 1, we see that the one-step CBA-OS-SVI yields a significantly higher average bitrate. This is expected based on the QoE definition (17), but we might expect CBA-VB to pick high bitrates as well. However, we observe that the parameter update time for CBA-VB is 20 times greater than that of CBA-OS-SVI; this puts a delay of one-sixth of each segment length on average between receiving one segment and requesting another. Looking at CBA-VB in Fig. 11, we see that CBA-VB accumulates a much larger rebuffer time than the other methods. Hence, CBA-VB is forced to request lower bitrates to cope with the extra rebuffer time incurred by updating its parameters. In addition, note that LinUCB fails to select high bitrates despite having a very small parameter update time, implying that LinUCB is not adequately fitting the context to the QoE and is instead accumulating a large amount of regret. This is corroborated by its cumulative QoE depicted in Fig. 11, which is nearly as poor as that of CBA-VB. By inducing sparsity on the priors and using just one sample, CBA-OS-SVI successfully extracts the most salient features quickly enough to obtain the highest cumulative QoE of all algorithms tested.
Interestingly, the CBA approaches shown in Fig. 1 also result in the lowest number of quality switches, though our QoE metric does not severely penalize quality variation. We see that the magnitude of their quality switches is also nearly half that of the other algorithms.
Concerning the rebuffering behavior, we observe rebuffering ratios of {4.5%, 8.4%, 11.4%, 17.6%, 32.9%} for LinUCB, BOLA, PANDA, CBA-OS-SVI, and CBA-VB, respectively. We trace some of the rebuffering events to the ICP congestion control in NDN. Note that tuning the impact of rebuffering on the adaptation decision is not a trivial task [2]. Fortunately, this is not hardwired in CBA but rather given through (17). Hence, in contrast to state-of-the-art adaptation algorithms, CBA could learn to filter the contextual information that is most important for rebuffering by tweaking the QoE metric used.
An important consideration when choosing a quality adaptation algorithm is fairness among clients while simultaneously streaming over common links. While this is taken care of in DASH by the underlying TCP congestion control, we empirically show here how the ON-OFF segment request behavior, when paired with the considered quality adaptation algorithms, impacts the QoE fairness in NDN. This is fundamentally different from considering bandwidth sharing fairness in NDN, e.g., in [2]. Here we are interested in QoE fairness since the QoE metric, and not the bandwidth share, is the main driver of the quality adaptation algorithm. Fig. 12 shows the regret of QoE fairness between both clients, where a larger regret indicates a greater difference in QoE between both clients up to a particular segment. Note that the regret is defined as a cumulative metric similar to (1). In accordance with the discussion in [33], the fairness measure used here is the entropy of the relative QoE of the two clients, H_B(QoE_client1(t) / (QoE_client1(t) + QoE_client2(t))), where H_B(·) denotes the binary entropy and the QoE is given by (17). The regret is calculated with respect to the optimal fairness of H_B*(QoE_client1(t) / (QoE_client1(t) + QoE_client2(t))) = 1. Observe that the CBA algorithms attain a significantly lower QoE fairness regret than the other techniques.
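A small sketch of this fairness-regret computation is given below; the per-segment QoE trajectories are assumed to be positive here purely for illustration, and the clipping guards against numerically degenerate ratios.

import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def fairness_regret(qoe1, qoe2):
    """Cumulative shortfall of H_B(QoE1/(QoE1+QoE2)) from the optimum value 1."""
    qoe1, qoe2 = np.asarray(qoe1, float), np.asarray(qoe2, float)
    ratio = qoe1 / (qoe1 + qoe2)
    return np.cumsum(1.0 - binary_entropy(ratio))

regret = fairness_regret([5.0, 11.0, 18.0], [4.0, 8.0, 10.0])
print(regret)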
B. Results on the Full Topology
To evaluate the capacity of CBA to adapt to different reward functions in complex environments, we compare performance with the full topology on two sets of weights in Eq. 17: HIGH_QUALITY_WEIGHTS sets w 1 = 6, w 2 = 2, and w 3 = 2, identical to those used in the evaluation on the doubles topology; conversely, NO_REBUFFERING_WEIGHTS sets w 1 = 1, w 2 = 1, and w 3 = 3, placing greater importance on continuous playback at the expense of video quality. We evaluate each algorithm with each weighting scheme for 30 epochs, where one epoch corresponds to streaming 200 seconds of the BigBuckBunny video. All clients use the same adaptation algorithm and weighting scheme within an epoch, and bandits begin each epoch with no previous context information.
Inspecting Tab. 2, we observe that the performance statistics among algorithms, even with different weighting schemes, are much closer than for the doubles topology. We attribute this to the use of a more complicated topology in which many more clients are sharing network resources, resulting in fewer and less predictable resources for each client. Furthermore, the average bitrate for the bandit algorithms does not change significantly across weighting schemes, and either stays the same or increases when using NO_REBUFFERING_WEIGHTS. This may seem contradictory, but, analyzing part (a) of Figs. 13 and 14, we note that CBA-OS-SVI tended to choose much lower bitrates with NO_REBUFFERING_WEIGHTS, and therefore accrued less rebuffer time in part (b), than with HIGH_QUALITY_WEIGHTS, indicating that CBA-OS-SVI successfully adapted to either weighting scheme within the playback window. Similarly to the doubles topology, LinUCB failed to map the context to either weighting scheme, selecting higher bitrates and rebuffering longer with NO_REBUFFERING_WEIGHTS. Note that, for either CBA-OS-SVI or LinUCB, the cumulative rebuffer time in part (b) of Figs. 13 and 14 tapers off roughly halfway through the video, as either algorithm learns to request more appropriate bitrates. Interestingly, CBA-VB also fails to adapt to either weighting scheme, performing nearly identically in either case. This is a byproduct of the excessive parameter update time for CBA-VB in Tab. 2, which stems from the unpredictable nature of a larger network and the computational strain of performing up to 7 CBA-VB parameter updates simultaneously on the test machine. CBA-VB is therefore spending over half of the length of each segment deciding on which segment to request next, causing long rebuffering times in part (b) of Figs. 13 and 14, culminating in very low QoE scores regardless of the weighting scheme used. This obfuscates the underlying QoE function, preventing CBA-VB from differentiating between the weights in either case within the time allotted. In a real-world scenario, where each client is an independent machine, we expect that CBA-VB, as well as CBA-OS-SVI and LinUCB to a lesser extent, would have parameter update times comparable to those in the doubles topology, resulting in better performance; however, we note that evaluation in such an environment is out of the scope of this work.
Again, we see in Tab. 2 that CBA-OS-SVI switches qualities least frequently despite neither weighting scheme explicitly penalizing quality variation. Furthermore, according to parts (c) and (d) of Fig. 13 and Fig. 14, CBA-OS-SVI and CBA-VB are both stable in the number of quality switches and the quality switch magnitude across epochs, even under different weighting schemes, as opposed to the other algorithms tested.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we contributed a sparse Bayesian contextual bandit algorithm for quality adaptation in adaptive video streaming, denoted CBA. In contrast to state-of-the-art adaptation algorithms, we take high-dimensional video streaming context information and enforce sparsity to shrink the impact of unimportant features. In this setting, streaming context information includes client-measured variables, such as throughput and buffer filling, as well as network assistance information. Since sparse Bayesian estimation is computationally expensive, we developed a fast new inference scheme to support online video quality adaptation. Furthermore, the provided algorithm is naturally applicable to different adaptive video streaming settings such as DASH over NDN. Finally, we provided NDN emulation results showing that CBA yields higher QoE and better QoE fairness between simultaneous streaming sessions compared to throughput- and buffer-based video quality adaptation algorithms.
APPENDIX A THE GENERALIZED INVERSE GAUSSIAN
The probability density function of a generalized inverse Gaussian (GIG) distribution is
GIG(x | p, a, b) = (a/b)^{p/2} (2 K_p(√(ab)))⁻¹ x^{p−1} exp{−(ax + b/x)/2}.   (18)

The GIG distribution with parameters θ = [p, a, b]^T is a member of the exponential family with base measure h(x) = 1, natural parameters η(θ) = [p − 1, −a/2, b/2]^T, sufficient statistics S(x) = [log(x), x, 1/x]^T, and log-normalizer A(η) = log((−η^(2)/η^(3))^{(η^(1)+1)/2} {2 K_{η^(1)+1}(√(−4η^(2)η^(3)))}⁻¹). The inverse transform of the natural parameters is obtained by θ(η) = [η^(1) + 1, −2η^(2), 2η^(3)]^T.
APPENDIX B CALCULATION OF THE ELBO
Here, we present the calculation of the ELBO. The joint distributions involved in the calculation of the evidence lower bound (8) factorize as

p(β, σ⁻², τ, λ, φ, ω, r) = p(r | β, σ⁻²) p(β | σ⁻², τ) p(σ⁻²) p(τ | λ) p(λ | φ) p(φ | ω) p(ω)   (19)

and

q(β, σ⁻², τ, λ, φ, ω) = q(β) q(σ⁻²) q(τ) q(λ) q(φ) q(ω).   (20)

Denoting ⟨·⟩ as the expectation w.r.t. the distribution q, the evidence lower bound (8) is

L(q) = ⟨log p(r | β, σ⁻²)⟩ + ⟨log p(β | σ⁻², τ)⟩ + ⟨log p(σ⁻²)⟩ + ⟨log p(τ | λ)⟩ + ⟨log p(λ | φ)⟩ + ⟨log p(φ | ω)⟩ + ⟨log p(ω)⟩ − ⟨log q(β)⟩ − ⟨log q(σ⁻²)⟩ − ⟨log q(τ)⟩ − ⟨log q(λ)⟩ − ⟨log q(φ)⟩ − ⟨log q(ω)⟩.   (21)

The expected values of the individual terms of (19) and (20) needed for (21) follow from the variational distributions in (9); for example, ⟨log q(σ⁻²)⟩ = c* log(d*) − log Γ(c*) + (c* − 1)⟨log σ⁻²⟩ − d*⟨σ⁻²⟩.
APPENDIX C CALCULATION FOR THE INTERMEDIATE ESTIMATES OF THE
NATURAL PARAMETERS
In order to calculate (14), we need to calculate the intermediate parameters η̂ for all parameters, i.e., β, σ⁻², τ, λ, φ and ω. For this we calculate η̄ = E_q[η′], where η′ is the natural parameter of the full conditional for the corresponding variational factor. Therefore, we compute the full conditionals of the parameters

p(β | r, X, σ⁻², τ) = N(β | μ_β, Σ_β),
p(σ⁻² | r, X, β, τ) = Gam(σ⁻² | c′, d′),
p(τ | β, σ⁻², λ) = ∏_{j=1}^{D} GIG(τ_j | p_{τ,j}, a_{τ,j}, b_{τ,j}),
p(λ | τ, φ) = ∏_{j=1}^{D} Gam(λ_j | a_{λ,j}, b_{λ,j}),
p(φ | λ, ω) = Gam(φ | a_φ, b_φ),
p(ω | φ) = Gam(ω | a_ω, b_ω),   (25)

with the parameters

μ_β = (X^T X + T⁻¹)⁻¹ X^T r,
Σ_β = (1/σ⁻²) (X^T X + T⁻¹)⁻¹,
Fig. 1: A standard client-based and/or network-assisted ABR streaming model (black) with the proposed Context-based Adaptation (CBA, dotted). In CBA, high-dimensional context features from the network, along with client-side information, undergo sparsity enforcement to shrink the impact of unimportant features.
where the quantile function computes to √2 erf⁻¹(1 − 2/(αt)), with the inverse error function erf⁻¹(·). The algorithm for CBA-UCB is depicted in Fig. 2, Alg. 1.

B. Generative model of the linear rewards

Here, we derive the posterior inference for the regression coefficients β(a). The posterior distributions are calculated for each of the K actions. For the inference of the posterior (3), we use Bayesian regression to infer the posterior of the regression coefficients² β = [β₁, ..., β_D]^T. We use the data D_{t−1}(a), which is a set of M previously observed contexts X = [x₁, ..., x_M]^T and rewards r = [r₁, ..., r_M]^T when taking action a.
Fig. 2: The CBA-UCB algorithm with three Bayesian inference schemes for the regression coefficients: Variational Bayesian Inference (VB), Stochastic Variational Inference (SVI) and One Step Stochastic Variational Inference (OS-SVI).
Fig. 3: Probabilistic graphical model for the Bayesian regression in Sect. IV-B with Three Parameter Beta Normal prior using factor graph notation. (Deterministic functions are depicted in diamond-shaped nodes and 'dot' denotes the inner product.)
Fig. 4: Probabilistic graphical model using a mean field approximation for the Bayesian regression (see Sect. IV-C).
Fig. 5: Average regret for our contextual bandit algorithms vs. the baseline (CGP and LinUCB) for a dense linear model.
Fig. 6: Average regret for our contextual bandit algorithms vs. the baseline (CGP and LinUCB) for a sparse linear model.
Fig. 7: Run-time vs. context dimensions D for a sparse linear model, with K = 20 actions and a decision horizon of T = 100. CBA-UCB with SVI not shown for clarity.
Fig. 8: Run-time vs. decision horizon T for a sparse linear model, with K = 20 actions and D = 20 features. CBA-UCB with SVI not shown for clarity.
Fig. 9: Emulation testbed for the doubles topology. Client link capacity follows a bandwidth trace, server links have a capacity of 20 Mbps, and the internal cache link has a capacity of 1000 Mbps. Caches can store up to 1500 Data chunks.
Fig. 10: Emulation testbed for the full topology. Client and server links have a capacity of 20 Mbps, and the internal cache links have a capacity of 1000 Mbps. Caches can store up to 1500 Data chunks.
Fig. 12: QoE fairness evaluation on the doubles topology.
Fig. 13: Results for the full topology with HIGH_QUALITY_WEIGHTS: cumulative rebuffer time during playback, CCDF of the number of quality switches per epoch, and CCDF of the average magnitude of quality switches per epoch.
Fig. 14: Results for the full topology with NO_REBUFFERING_WEIGHTS: cumulative rebuffer time during playback, CCDF of the number of quality switches per epoch, and CCDF of the average magnitude of quality switches per epoch.
Tab. 1: Run-times for N = 100 simulations of the CBA algorithms compared to the baseline algorithms CGP-UCB and LinUCB. Simulations executed on an Intel® Xeon® E5-2680 v3 @ 2.5 GHz machine.

Algorithm   | Sparse Setting | Dense Setting
CGP-UCB     | 638.68 s       | 643.44 s
LinUCB      | 31.24 s        | 30.70 s
CBA-OS-SVI  | 91.40 s        | 89.56 s
CBA-SVI     | 3784.00 s      | 4081.74 s
CBA-VB      | 1434.11 s      | 1760.83 s
Tab. 2: Client 1 streaming statistics on the full topology.

Algorithm   | Bitrate [Mbps] | Quality switches [#] | Switch magnitude [Mbps] | Parameter update time [ms]

HIGH_QUALITY_WEIGHTS
CBA-OS-SVI  | 1.55 | 5  | 0.82 | 53
CBA-VB      | 1.52 | 15 | 1.16 | 1254
LinUCB      | 1.27 | 17 | 1.01 | 11
BOLA        | 1.96 | 8  | 0.63 | —
PANDA       | 1.15 | 18 | 0.56 | —

NO_REBUFFERING_WEIGHTS
CBA-OS-SVI  | 1.55 | 6  | 0.93 | 55
CBA-VB      | 1.68 | 12 | 1.08 | 1362
LinUCB      | 1.43 | 22 | 1.04 | 16
BOLA        | 1.92 | 12 | 0.71 | —
PANDA       | 1.13 | 17 | 0.70 | —
https://github.com/arizk/cba-pipeline-public
We use the shape and rate parametrization of the Gamma distribution.
For updating CBA-UCB with OS-SVI or LinUCB we only have to invert a D × D matrix once after a decision.
https://github.com/containernet/containernet
Next, we transform the parameters of the full conditionals into the exponential family parametrization. The inverse transform of the natural parameters of the full conditionals is given by the same relations as in (16). Taking the expected value w.r.t. the variational distribution q, we calculate the parameters η̄ = E_q[η′], e.g., η̄_β = (⟨σ⁻²⟩ X^T r, −(1/2)⟨σ⁻²⟩(X^T X + T⁻¹)). Replicating one data point (x_m, r_m) M times yields the intermediate estimates η̂ given in (15).
Information technology, dynamic adaptive streaming over HTTP (DASH), part 5: Server and network assisted DASH (SAND). ISO/IEC 23009-5:2017"Information technology, dynamic adaptive streaming over HTTP (DASH), part 5: Server and network assisted DASH (SAND)," ISO/IEC 23009-5:2017, Feb. 2017.
Dynamic Adaptive Video Streaming: Towards a Systematic Comparison of ICN and TCP/IP. J Samain, IEEE Trans. Multimedia. 1910J. Samain et al., "Dynamic Adaptive Video Streaming: Towards a Systematic Comparison of ICN and TCP/IP," IEEE Trans. Multimedia, vol. 19, no. 10, pp. 2166-2181, Oct 2017.
Conviva. "Conviva," https://www.conviva.com.
The impact of brokers on the future of content delivery. M K Mukerjee, I N Bozkurt, B Maggs, S Seshan, H Zhang, Proc. 15th ACM Workshop Hot Topics Net. 15th ACM Workshop Hot Topics NetACMM. K. Mukerjee, I. N. Bozkurt, B. Maggs, S. Seshan, and H. Zhang, "The impact of brokers on the future of content delivery," in Proc. 15th ACM Workshop Hot Topics Net. ACM, 2016, pp. 127-133.
Sdndash: Improving qoe of http adaptive streaming using software defined networking. A Bentaleb, A C Begen, R Zimmermann, Proc. ACM Multimedia. ACM MultimediaNew York, NY, USAACMA. Bentaleb, A. C. Begen, and R. Zimmermann, "Sdndash: Improving qoe of http adaptive streaming using software defined networking," in Proc. ACM Multimedia, New York, NY, USA, 2016, MM '16, pp. 1296- 1305, ACM.
When video streaming meets named data networking: A case study. L Wang, I Moiseenko, D Wang, IEEE 18th Internat. Conf. High Perf. L. Wang, I. Moiseenko, and D. Wang, "When video streaming meets named data networking: A case study," in IEEE 18th Internat. Conf. High Perf. Comput. Commun., Dec 2016, pp. 166-173.
A contextual-bandit approach to personalized news article recommendation. L Li, W Chu, J Langford, R E Schapire, Proc. 19th Internat. Conf. WWW. ACM. 19th Internat. Conf. WWW. ACML. Li, W. Chu, J. Langford, and R. E. Schapire, "A contextual-bandit approach to personalized news article recommendation," in Proc. 19th Internat. Conf. WWW. ACM, 2010, pp. 661-670.
Taming the monster: A fast and simple algorithm for contextual bandits. A Agarwal, D Hsu, S Kale, J Langford, L Li, R Schapire, Internat. Conf. Mach. Learn. A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. Schapire, "Taming the monster: A fast and simple algorithm for contextual bandits," in Internat. Conf. Mach. Learn., 2014, pp. 1638-1646.
Variational inference for the multi-armed contextual bandit. I Urteaga, C Wiggins, in AI Stat. I. Urteaga and C. Wiggins, "Variational inference for the multi-armed contextual bandit," in AI Stat., 2018, pp. 698-706.
Spike and slab variable selection: frequentist and Bayesian strategies. H Ishwaran, J S Rao, Ann. Stat. 332H. Ishwaran, J. S. Rao, et al., "Spike and slab variable selection: frequentist and Bayesian strategies," Ann. Stat., vol. 33, no. 2, pp. 730- 773, 2005.
Generalized beta mixtures of gaussians. A Armagan, M Clyde, D B Dunson, Adv. Neural Inform. Proc. Sys. A. Armagan, M. Clyde, and D. B. Dunson, "Generalized beta mixtures of gaussians," in Adv. Neural Inform. Proc. Sys., 2011, pp. 523-531.
Handling sparsity via the horseshoe. C M Carvalho, N G Polson, J G Scott, in AI Stat. C. M. Carvalho, N. G. Polson, and J. G. Scott, "Handling sparsity via the horseshoe," in AI Stat., 2009, pp. 73-80.
Where are the sweet spots?: A systematic approach to reproducible dash player comparisons. D Stohr, Proc. ACM Multimedia, 2017, MM '17. ACM Multimedia, 2017, MM '17D. Stohr et al., "Where are the sweet spots?: A systematic approach to reproducible dash player comparisons," in Proc. ACM Multimedia, 2017, MM '17, pp. 1113-1121.
Probe and adapt: Rate adaptation for http video streaming at scale. Z Li, IEEE J. Sel. Areas Commun. 324Z. Li et al., "Probe and adapt: Rate adaptation for http video streaming at scale," IEEE J. Sel. Areas Commun., vol. 32, no. 4, pp. 719-733, Apr. 2014.
Bola: Near-optimal bitrate adaptation for online videos. K Spiteri, R Urgaonkar, R K Sitaraman, Proc. IEEE INFOCOM. IEEE INFOCOMK. Spiteri, R. Urgaonkar, and R. K. Sitaraman, "Bola: Near-optimal bitrate adaptation for online videos," in Proc. IEEE INFOCOM, Apr. 2016, pp. 1-9.
Some aspects of the sequential design of experiments. H Robbins, Bull. American Math. Soc. 585H. Robbins et al., "Some aspects of the sequential design of experi- ments," Bull. American Math. Soc., vol. 58, no. 5, pp. 527-535, 1952.
Regret analysis of stochastic and nonstochastic multi-armed bandit problems. S Bubeck, N Cesa-Bianchi, Foundat. Trends Mach. Learn. 51S. Bubeck, N. Cesa-Bianchi, et al., "Regret analysis of stochastic and nonstochastic multi-armed bandit problems," Foundat. Trends Mach. Learn., vol. 5, no. 1, pp. 1-122, 2012.
Using confidence bounds for exploitation-exploration tradeoffs. P Auer, J. Mach. Learn. Res. 3P. Auer, "Using confidence bounds for exploitation-exploration trade- offs," J. Mach. Learn. Res., vol. 3, no. Nov, pp. 397-422, 2002.
Online decision-making with highdimensional covariates. H Bastani, M Bayati, SSRN. H. Bastani and M. Bayati, "Online decision-making with high- dimensional covariates," SSRN, 2015.
Thompson sampling for contextual bandits with linear payoffs. S Agrawal, N Goyal, Internat. Conf. Mach. Learn. S. Agrawal and N. Goyal, "Thompson sampling for contextual bandits with linear payoffs," in Internat. Conf. Mach. Learn., 2013, pp. 127-135.
Contextual bandits with linear payoff functions. W Chu, L Li, L Reyzin, R Schapire, in AI Stat. W. Chu, L. Li, L. Reyzin, and R. Schapire, "Contextual bandits with linear payoff functions," in AI Stat., 2011, pp. 208-214.
Contextual gaussian process bandit optimization. A Krause, C S Ong, Adv. Neural Inform. Proc. Sys. A. Krause and C. S. Ong, "Contextual gaussian process bandit optimization," in Adv. Neural Inform. Proc. Sys., 2011, pp. 2447-2455.
On Bayesian upper confidence bounds for bandit problems. E Kaufmann, O Cappé, A Garivier, in AI Stat. E. Kaufmann, O. Cappé, and A. Garivier, "On Bayesian upper confi- dence bounds for bandit problems," in AI Stat., 2012, pp. 592-600.
Directed factor graph notation for generative models. L Dietz, Max Planck Inst. Informatics. L. Dietz, "Directed factor graph notation for generative models," Max Planck Inst. Informatics, 2010.
C M Bishop, Pattern Recognition and Machine Learning. SpringerC. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
Stochastic variational inference. M D Hoffman, D M Blei, C Wang, J Paisley, J. Mach. Learn. Res. 141M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley, "Stochastic variational inference," J. Mach. Learn. Res., vol. 14, no. 1, pp. 1303- 1347, 2013.
A stochastic approximation method. H Robbins, S Monro, Ann. Math. Stat. H. Robbins and S. Monro, "A stochastic approximation method," Ann. Math. Stat., pp. 400-407, 1951.
A control-theoretic approach for dynamic adaptive video streaming over http. X Yin, A Jindal, V Sekar, B Sinopoli, Proc. ACM SIGCOMM. ACM SIGCOMMX. Yin, A. Jindal, V. Sekar, and B. Sinopoli, "A control-theoretic approach for dynamic adaptive video streaming over http," in Proc. ACM SIGCOMM, 2015, pp. 325-338.
Neural adaptive video streaming with pensieve. H Mao, R Netravali, M Alizadeh, Proc. Conf. ConfH. Mao, R. Netravali, and M. Alizadeh, "Neural adaptive video streaming with pensieve," in Proc. Conf. ACM Special Interest Group Data Commun. ACM, 2017, pp. 197-210.
Oboe: Auto-tuning video abr algorithms to network conditions. Z Akhtar, Proc. Conf. ConfZ. Akhtar et al., "Oboe: Auto-tuning video abr algorithms to network conditions," in Proc. Conf. ACM Special Interest Group Data Commun. ACM, 2018, pp. 44-58.
Want to play dash?: A game theoretic approach for adaptive streaming over http. A Bentaleb, A C Begen, S Harous, R Zimmermann, Proc. ACM Multimedia Sys. Conf. ACM Multimedia Sys. ConfNew York, NY, USAACMA. Bentaleb, A. C. Begen, S. Harous, and R. Zimmermann, "Want to play dash?: A game theoretic approach for adaptive streaming over http," in Proc. ACM Multimedia Sys. Conf., New York, NY, USA, 2018, MMSys '18, pp. 13-26, ACM.
Design and analysis of qoeaware quality adaptation for dash: A spectrum-based approach. C Wang, D Bhat, A Rizk, M Zink, ACM Trans. Multimedia Comput. Commun. Appl. 133s24C. Wang, D. Bhat, A. Rizk, and M. Zink, "Design and analysis of qoe- aware quality adaptation for dash: A spectrum-based approach," ACM Trans. Multimedia Comput. Commun. Appl., vol. 13, no. 3s, pp. 45:1- 45:24, July 2017.
An axiomatic theory of fairness in network resource allocation. T Lan, D Kao, M Chiang, A Sabharwal, Proc. IEEE INFOCOM. IEEE INFOCOMT. Lan, D. Kao, M. Chiang, and A. Sabharwal, "An axiomatic theory of fairness in network resource allocation," in Proc. IEEE INFOCOM, Mar. 2010, pp. 1-9.
| [
"https://github.com/arizk/cba-pipeline-public",
"https://github.com/containernet/containernet"
]
|
[
"On the Deployment of Distributed Antennas for Wireless Power Transfer with Safety Electromagnetic Radiation Level Requirement",
"On the Deployment of Distributed Antennas for Wireless Power Transfer with Safety Electromagnetic Radiation Level Requirement"
]
| [
"Chao Zhang ",
"Guanghe Zhao "
]
| []
| []
| The extremely low efficiency is regarded as the bottleneck of Wireless Power Transfer (WPT) technology. To tackle this problem, either enlarging the transfer power or changing the infrastructure of WPT system could be an intuitively proposed way. However, the drastically important issue on the user exposure of electromagnetic radiation is rarely considered while we try to improve the efficiency of WPT. In this paper, a Distributed Antenna Power Beacon (DA-PB) based WPT system where these antennas are uniformly distributed on a circle is analyzed and optimized with the safety electromagnetic radiation level (SERL) requirement. In this model, three key questions are intended to be answered: 1) With the SERL, what is the performance of the harvested power at the users ? 2) How do we configure the parameters to maximize the efficiency of WPT? 3) Under the same constraints, does the DA-PB still have performance gain than the Co-located Antenna PB (CA-PB)? First, the minimum antenna height of DA-PB is derived to make the radio frequency (RF) electromagnetic radiation power density at any location of the charging cell lower than the SERL published by the Federal Communications Commission (FCC). Second, the closed-form expressions of average harvested Direct Current (DC) power per user in the charging cell for pass-loss exponent 2 and 4 are also provided. In order to maximize the average efficiency of WPT, the optimal radii for distributed antennas elements (DAEs) are derived when the passloss exponent takes the typical value 2 and 4. For comparison, the CA-PB is also analyzed as a benchmark. Simulation results verify our derived theoretical results. And it is shown that the proposed DA-PB indeed achieves larger average harvested DC power than CA-PB and can improve the efficiency of WPT.Index Terms-Wireless power transfer, average harvested DC power, average efficiency of WPT, antenna height, antenna location optimization. | null | [
"https://arxiv.org/pdf/1703.02284v1.pdf"
]
| 15,569,972 | 1703.02284 | cc2dabf04e112450e93a7cbb2565fc76dfa236fa |
On the Deployment of Distributed Antennas for Wireless Power Transfer with Safety Electromagnetic Radiation Level Requirement
7 Mar 2017
Chao Zhang
Guanghe Zhao
On the Deployment of Distributed Antennas for Wireless Power Transfer with Safety Electromagnetic Radiation Level Requirement
7 Mar 20171
The extremely low efficiency is regarded as the bottleneck of Wireless Power Transfer (WPT) technology. To tackle this problem, either enlarging the transfer power or changing the infrastructure of WPT system could be an intuitively proposed way. However, the drastically important issue on the user exposure of electromagnetic radiation is rarely considered while we try to improve the efficiency of WPT. In this paper, a Distributed Antenna Power Beacon (DA-PB) based WPT system where these antennas are uniformly distributed on a circle is analyzed and optimized with the safety electromagnetic radiation level (SERL) requirement. In this model, three key questions are intended to be answered: 1) With the SERL, what is the performance of the harvested power at the users ? 2) How do we configure the parameters to maximize the efficiency of WPT? 3) Under the same constraints, does the DA-PB still have performance gain than the Co-located Antenna PB (CA-PB)? First, the minimum antenna height of DA-PB is derived to make the radio frequency (RF) electromagnetic radiation power density at any location of the charging cell lower than the SERL published by the Federal Communications Commission (FCC). Second, the closed-form expressions of average harvested Direct Current (DC) power per user in the charging cell for pass-loss exponent 2 and 4 are also provided. In order to maximize the average efficiency of WPT, the optimal radii for distributed antennas elements (DAEs) are derived when the passloss exponent takes the typical value 2 and 4. For comparison, the CA-PB is also analyzed as a benchmark. Simulation results verify our derived theoretical results. And it is shown that the proposed DA-PB indeed achieves larger average harvested DC power than CA-PB and can improve the efficiency of WPT.Index Terms-Wireless power transfer, average harvested DC power, average efficiency of WPT, antenna height, antenna location optimization.
I. INTRODUCTION
Despite the significant advances in Wireless Power Transfer (WPT), there are still open issues, summarized as follows. First, the transfer distance in WPT is stringently limited and desperately needs to be increased. It is known that the signal power attenuates with the transfer distance raised to the path-loss exponent. In order to obtain a viable received power, the distance is generally kept very small, which restricts the application of WPT in electronics such as portable and wearable devices. Second, the wireless power transfer efficiency, which is becoming a vital metric, is extremely low in the current state of the art and also needs to be improved.
A. Context and Motivation
Wireless power transfer (WPT) has recently drawn more and more attention because it enables proactive energy replenishment of user terminals. There are two related research topics, i.e., simultaneous wireless information and power transfer (SWIPT) and PB-assisted wirelessly powered communication networks (PB-assisted WPCN). Studies of SWIPT can be found in [1]-[4] and the references therein. Going beyond point-to-point SWIPT, the authors in [5] proposed an iterative dynamic power splitting algorithm to maximize the receiving signal-to-noise ratio (SNR) at the destination node for multi-relay networks with wireless energy harvesting. SWIPT is suitable for the case where users are close to the base station (BS). This is due to the fact that the operating power of the energy harvesting component is generally much higher than that of the information decoding component [6]. Compared with SWIPT, a PB-assisted WPCN system generally has a larger coverage region. Furthermore, the users in a PB-assisted WPCN tend to harvest more energy.
The other research topic focuses on the PB-assisted WPCN. Three different configurations for a wireless-powered cellular network were investigated in [7]. The first was a full-duplex BS with energy transfer in the downlink and information transfer in the uplink; in the second configuration, distributed PBs were exploited to power the user nodes, and the power harvested at the user was used to transmit information to the BS; in the third configuration, distributed PBs and distributed antenna elements (DAEs) were considered. The authors argued that by exploiting distributed PBs, the system performance could be significantly improved. However, [7] did not consider the RF electromagnetic radiation, which is an indispensable consideration that draws more and more attention in practice. In [8], the authors proposed a novel multi-user scheduling strategy, i.e., opportunistic scheduling, and analyzed its performance gain over round-robin scheduling in two systems, namely systems with homogeneous and heterogeneous users. It is worth pointing out that safe radiation levels were considered in [8]. The authors in [9] proposed an adaptively directional WPT scheme for the power beacon to improve the efficiency in a large WPT system. Specifically, the power beacon can adaptively perform energy beamforming according to the number of users and their locations in order to direct the power toward the users within the charging region of the power beacon. Unfortunately, the authors in [9] did not consider the electromagnetic exposure either.
As a mature technology, Distributed Antenna Systems (DAS) have been shown to significantly increase coverage as well as improve system throughput [7], [10]-[13]. A uniform circular layout (UCL) of DAEs was generally exploited to analyze the performance of DAS together with a circular cell [7], [10]-[13]. In this paper, we build on the work on DAS and investigate the optimal deployment of PB DAEs with a uniform circular layout.
B. Contributions and Organization of the Paper
The contributions of this paper are summarized as follows:
• A novel deployment architecture of antennas for the PB is proposed to implement efficient WPT. Considering the radio frequency (RF) electromagnetic radiation safety level drafted by the Federal Communications Commission (FCC), we derive the closed-form expression of the DA-PB antenna height that makes the RF electromagnetic radiation power density at any location of the charging cell lower than the safety level specified by the FCC.
• For the proposed DA-PB, we give the closed-form result of the average harvested DC power per user in the charging cell when the path-loss exponent takes the typical values 2 and 4, which are typical for suburban areas and urban cities, respectively.
• In order to maximize the average efficiency of wireless power transfer, we derive the optimal radii of the distributed antennas of the DA-PB when the path-loss exponent takes the typical values 2 and 4.
The remainder of the paper is organized as follows. Section II elaborates the system model. The calculation of the antenna height of the DA-PB and the performance analysis are presented in Section III. In Section IV, in order to maximize the average efficiency of WPT when using the DA-PB, we derive the optimal radii of the distributed antennas when the path-loss exponent takes the typical values 2 and 4. Simulation results and discussion are presented in Section V. Finally, Section VI concludes the paper, followed by detailed derivations of some results relegated to the appendices.
Notation: For a complex variable x, operators ℜ{x}, |x| and arg(x) denote its real part, amplitude and phase, respectively. E y {x} stands for the statistical expectation of real random variable x with respect to y and x ∼ U(a, b) denotes that x is a random variable following the uniform distribution in the interval from a to b. Finally, P out−x stands for the average harvested DC power per user, where x ∈ {CA, DA} stands for the deployment structure of the PB antennas ('CA' for co-located antennas and 'DA' for distributed antennas). η x stands for the average efficiency of WPT, where the meaning of x is similar to that in P out−x .
II. SYSTEM MODEL
As depicted in Fig. 1, we assume that the region covered by the PB is a circle with radius R. Suppose the PB has N antennas with total power P and each user has a single antenna. For the convenience of illustration, N equals 4 in Fig. 1. The users, whose height is assumed to be zero, are uniformly distributed in the charging cell. Specifically, in Fig. 1(a), the PB with multiple antennas is located at the center of the circle and the distances between those antennas are extremely small compared to the distances from the PB to the users. Thus it can be regarded as the so-called Co-located Antenna Power Beacon (CA-PB). We denote the antenna height of CA-PB as h_C. In contrast, the PB in Fig. 1(b) is the Distributed Antenna Power Beacon (DA-PB). The distributed antenna elements (DAEs) of DA-PB are uniformly deployed on a circle of radius r and might be connected to a central power source through power lines, or to different power sources. We further assume that the PB has no knowledge of the channel state information (CSI) between the PB and the users, so equal power allocation among the antennas is considered in this paper, i.e., the transmit power of each antenna is P/N. Suppose all antennas of DA-PB have the same height h_D. To keep the radiation power density at any location in the charging cell below the safe electromagnetic radiation level (SERL) published by the FCC, we should set h_C and h_D carefully.
A. Signal Propagation Model
The power transmitted by each antenna of PB can be aggregated at the user. The RF signal transmitted by the i th antenna at time slot t can be expressed as
s_i(t) = √(2P_i) ℜ{x_i(t) e^{j2πft}},   (1)
where P i denotes the transmit power of the i th antenna, f refers to the carrier frequency, and x i (t) is the complex baseband signal with bandwidth B Hz and unit power, i.e., |x i (t)| 2 = 1. It is assumed that B ≪ f . For a fixed user, the received signal at the user is
r(t) = Σ_{i=1}^{N} √(2P_i c|h_i(t)|²/d_i^α) ℜ{x_i(t) e^{j[2πft + θ_i(t)]}} + n(t),   (2)
where c stands for the constant scaling factor, and d_i, θ_i(t), and |h_i(t)|² denote the distance, the phase shift, and the power gain of the fast-fading channel from the i-th antenna to the user, respectively. Additionally, n(t) is the additive white Gaussian noise (AWGN) at the user at time slot t. Compared to the received RF signal, the noise power is usually very small and can thus be neglected. Therefore
r(t) ≈ Σ_{i=1}^{N} √(2P_i c|h_i(t)|²/d_i^α) ℜ{x_i(t) e^{j[2πft + θ_i(t)]}} = Σ_{i=1}^{N} √(2P_i c|h_i(t)|²/d_i^α) cos[2πft + arg(x_i(t)) + θ_i(t)].   (3)
At the energy receiver of the user, the received RF signal first goes through the nonlinear Schottky diode, so the output current includes the DC component as well as the harmonic components at kf (k ≥ 1). According to Shockley's diode equation [14], the output current after the Schottky diode at time slot t is
i(t) = I_s (e^{r(t)/(ρV_T)} − 1) = Σ_{k=1}^{∞} (I_s/(k!(ρV_T)^k)) r^k(t) ≈ (I_s/(2(ρV_T)²)) r²(t),   (4)
where I_s denotes the reverse saturation current of the diode, ρ is the ideality factor of the diode, and V_T refers to the thermal voltage. The second equality in (4) is derived by exploiting the Taylor series expansion of the exponential function. After rectifying, we only consider the quadratic term of the output signal because the coefficients of the higher-order (k > 2) terms in (4) are very small [4], [8]. After that, the output current i(t) is fed into the low-pass filter (LPF). Then the direct-current signal without high-frequency components is
i_dc(t) = (I_s c/(2(ρV_T)²)) { Σ_{i=1}^{N} P_i|h_i(t)|²/d_i^α + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} √(P_iP_j|h_i(t)|²|h_j(t)|²/(d_i^α d_j^α)) × [cos(arg(x_i(t)) + θ_i(t)) cos(arg(x_j(t)) + θ_j(t)) + sin(arg(x_i(t)) + θ_i(t)) sin(arg(x_j(t)) + θ_j(t))] },   (5)
in which θ_i(t) and |h_i(t)|² are the phase and the power gain of the fast Rayleigh fading channel, respectively. θ_i(t) is a uniformly distributed variable, i.e., θ_i(t) ∼ U(−π, π), and |h_i(t)|² is a random variable following the exponential distribution [15]. In addition, {θ_1(t), θ_2(t), ..., θ_N(t)} and {|h_1(t)|², |h_2(t)|², ..., |h_N(t)|²} are independent and identically distributed (i.i.d.), respectively. Note that θ_i(t) and |h_i(t)|² are independent of d_i and x_i(t). (The ideality factor of the diode, ρ, generally has a range between 1 and 2, which depends upon the operating conditions and physical construction.) The probability density function (PDF) of |h_i(t)|² is
f_{|h_i(t)|²}(ζ) = (1/σ_h²) e^{−ζ/σ_h²} if ζ > 0, and 0 otherwise,   (6)
where σ 2 h denotes the mean of the random variable |h i (t)| 2 . After averaging the random phase θ i (t) and |h i (t)| 2 , we get the average DC current as
ī_dc(t) = (I_s c/(2(ρV_T)²)) Σ_{i=1}^{N} E_{|h_i(t)|²}[P_i|h_i(t)|²/d_i^α] = (I_s cσ_h²/(2(ρV_T)²)) Σ_{i=1}^{N} P_i/d_i^α.   (7)
Finally, the DC current is converted to the DC power and then stored in the rechargeable battery. The power charged to the battery is generally linearly proportional to the input DC with the scaling factor being energy transfer efficiency 0 < ξ < 1.
Thus the ergodic harvested DC power P_out(x, y, 0, t) for the user at the coordinate (x, y, 0) is given by
P_out(x, y, 0, t) = ξ ī_dc(t) = K_0 Σ_{i=1}^{N} P_i/d_i^α,   (8)
where K_0 ≜ ξI_s cσ_h²/(2(ρV_T)²) is a constant. Note that (8) is actually the sum of the average received powers transmitted from the different antennas; thus we have completed the proof of our assumption. It is worth mentioning that (8) is similar to the expressions in [7][9][16], which verifies our assumption and derivation. In addition, we assume quasi-static block fading, and the channel gain from an antenna to the user is independent from block to block. Therefore, for convenience of illustration, we discard the index t in the remainder of the paper.
Remark 1. (Technology of Maximizing Instantaneous Harvested DC Power):
We admit that by elaborately designing the power allocation and transmission phase in (5), the instantaneous harvested DC power of a user can be maximized (see [17] and references therein), but this will need estimation and feedback of the instantaneous CSI. First, the estimation and feedback are generally not as accurate as expected, which hinders us from getting optimal system performance and even deteriorates the system performance; Second, estimation and feedback of CSI increase the system overhead. Thus, in this paper, we consider the ergodic harvested DC power of (5). Note that the DA-PB without any extra estimation and feedback of CSI discussed in this paper is quite easy to implement in practice.
B. Radio Frequency Electromagnetic Radiation
Considering the safety levels of human exposure to RF electromagnetic fields, we place the antennas at heights h_C and h_D for CA-PB and DA-PB, respectively. Generally speaking, because the industrial, scientific, and medical (ISM) frequency band is open and free, WPT can be performed in practice on ISM bands such as 2.45 GHz and 5.8 GHz [8]. The radiation power density is computed via Ψ = P_r/(4πd²) (see [18], p. 32), where Ψ, P_r, and d are the radiation power density at distance d from the power beacon, the power beacon transmit power, and the distance between the user and the power beacon, respectively.
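As a quick numerical illustration of the density formula and of the height constraint in (9) below, consider the following sketch. It is an added illustration, not part of the original analysis; the parameter values and helper names are assumptions chosen to mirror the simulation setup.

```python
# Sketch (illustrative only): far-field power density Psi = P_r/(4*pi*d^2)
# and the minimum CA-PB antenna height implied by P < 4*pi*h_C^2*Psi_0.
import math

PSI_0 = 10.0          # assumed safety level, W/m^2 (IEEE C95.1-style)
P = 200.0             # assumed total transmit power of the PB, W

def power_density(P_r, d):
    """Radiation power density at distance d from an isotropic source."""
    return P_r / (4.0 * math.pi * d ** 2)

# (9): P < 4*pi*h_C^2*Psi_0  =>  h_C > sqrt(P / (4*pi*Psi_0))
h_C_min = math.sqrt(P / (4.0 * math.pi * PSI_0))
print(f"minimum CA-PB antenna height: {h_C_min:.3f} m")

# With h_C = 7.75 m (the value used later in the simulations), the point
# directly below the PB is the worst case in the charging cell:
print(f"density below PB at h_C = 7.75 m: {power_density(P, 7.75):.3f} W/m^2")
```

For the parameters above the worst-case density is about 0.265 W/m^2, far below the assumed 10 W/m^2 limit, which matches the order of magnitude reported later in the simulation section.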
III. ANTENNA HEIGHT OF PB AND PERFORMANCE ANALYSIS
In this section, considering the equal power allocation among antennas, we first derive the minimum antenna height of PB in order to protect users from being hurt by RF electromagnetic radiation. Then, we analyze the average harvested DC power per user in the charging cell and the average efficiency of WPT for CA-PB and our proposed DA-PB. In addition, the users follow the uniform distribution in the charging cell. The system performance is characterized by the average harvested DC power per user and the average efficiency of WPT. Specifically, we average the resultant ergodic harvested DC power in the whole cell and yield the average harvested DC power per user. The average efficiency of WPT, which can be exploited to judge what kind of deployment is more energy efficient, is defined as the ratio of average harvested DC power per user and the total transmit power of PB.
A. Antenna Height of PB
1) CA-PB: For CA-PB, the transmit power P and the antenna height h_C of the power beacon should be limited by (9) in order to avoid an exclusion zone in the charging cell (we are only interested in the disc at height zero, because the user height is assumed to be zero). Here, avoiding an exclusion zone means making the radiation power density at any location in the charging cell lower than the safety level.
P < 4πh_C²Ψ_0,   (9)
where Ψ 0 denotes the safety radiation level 2 given by FCC. This result can offer useful directions when deploying the CA-PB antennas to avoid exclusion zone in the charging cell.
2) DA-PB: For DA-PB with uniform circular layout (UCL) of DAEs and equal power allocation (see Fig.1(b)), without loss of generality, we assume the coordinates of the DAEs are listed as follows. For the convenience of expression, we have assigned a number for each DAE. Specifically, we denote the coordinate of DAE i as
O_i = (r cos(2π(i−1)/N), r sin(2π(i−1)/N), h_D).   (10)
We aim to derive the expression of h D in the remainder of this subsection with which to avoid exclusion zone in the charging cell. In other words, the maximal radiation density in the charging cell must be lower than the safety level. For DA-PB with DAEs located as (10), because of the symmetry property, the coordinates of maximal radiation density in the disc must be as follows
E_i = (ν* cos(2π(i−1)/N), ν* sin(2π(i−1)/N), 0),   (11)
where ν* is the distance between the coordinates of maximal radiation density and the charging cell center. All the radiation power densities at the points E_i are the same, so we only consider the first coordinate of maximal radiation density without loss of generality. The equality of the maximal electromagnetic radiation density in the charging cell for CA-PB and DA-PB guarantees a fair comparison of their performance. In addition, suppose the maximal radiation density is lower than the safety level given by the FCC. We then directly obtain
P/(4πh_C²) = Σ_{i=1}^{N} P / {4πN[(ν* − r cos(2π(i−1)/N))² + (r sin(2π(i−1)/N))² + h_D²]}.   (12)
It is hard to give a closed-form expression of ν ⋆ and h D from (12). However, when N → ∞, we get explicit simple analytical expressions for ν ⋆ and h D , i.e.,
ν* = 0 for 0 ≤ r ≤ h_C/√2, and ν* = √(r² − (h_C²/(2r))²) for h_C/√2 ≤ r ≤ R,   (13)
and
h_D = √(h_C² − r²) for 0 ≤ r ≤ h_C/√2, and h_D = h_C²/(2r) for h_C/√2 ≤ r ≤ R.   (14)
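The piecewise expressions (13) and (14) are straightforward to evaluate numerically. The following sketch is an added illustration (not the authors' code) and uses assumed parameter values mirroring the simulation setup.

```python
# Sketch (illustrative only): DA-PB antenna height h_D(r), eq. (14), and the
# location nu*(r) of maximal radiation density, eq. (13), for N -> infinity.
import math

h_C, R = 7.75, 30.0   # assumed CA-PB antenna height and cell radius (m)

def h_D(r):
    """Antenna height of DA-PB as a function of the DAE radius r, eq. (14)."""
    if r <= h_C / math.sqrt(2.0):
        return math.sqrt(h_C ** 2 - r ** 2)
    return h_C ** 2 / (2.0 * r)

def nu_star(r):
    """Radius at which the radiation density is maximal, eq. (13)."""
    if r <= h_C / math.sqrt(2.0):
        return 0.0
    return math.sqrt(r ** 2 - (h_C ** 2 / (2.0 * r)) ** 2)

for r in (2.0, h_C / math.sqrt(2.0), 10.0, 20.0, 30.0):
    print(f"r = {r:6.2f} m  ->  h_D = {h_D(r):6.3f} m,  nu* = {nu_star(r):6.3f} m")
```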
The detailed derivation process can be found in Appendix A. Note that ν* is a piecewise function of the DAE radius r (specifically, a non-decreasing one) and is continuous at r = h_C/√2. In contrast, h_D is a decreasing function of r and is also continuous at r = h_C/√2.
B. Average Harvested DC Power
1) CA-PB: Compared with the distance between the antennas and the user, the distance between the antennas of CA-PB is much smaller, so we regard all the PB antennas as co-located in order to simplify the analysis. Thus the distances between the different antennas and the user are the same. Without loss of generality, we assume the coordinate of the user is (x, y, 0). The distance between the CA-PB antennas and the user is denoted as d_0 = √(x² + y² + h_C²). By virtue of (8), the ergodic harvested DC power of the user at (x, y, 0) is
P_out−CA(x, y, 0) = K_0 P/d_0^α.   (15)
Assuming that the users are uniformly distributed in the charging cell, we directly obtain the probability density function (PDF) of a user node located at the coordinate (x, y, 0):
f(x, y, 0) = 1/(πR²) if x² + y² ≤ R², and 0 otherwise.   (16)
So the average harvested DC power per user in the charging cell is
P̄_out−CA = K_0 (2P/((α − 2)R²)) [1/h_C^{α−2} − 1/(R² + h_C²)^{α/2−1}].   (17)
For the special case, when α takes the value 2, we get
P̄_out−CA = (K_0 P/R²) ln(1 + R²/h_C²).   (18)
It is obvious that the average harvested DC power per user for CA-PB linearly increases as the transmit power goes up.
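To make (17)-(18) concrete, a small added sketch is given below. It is an illustration rather than the authors' code; the constant K_0 is assembled from assumed values that mirror the simulation parameters.

```python
# Sketch (illustrative only): average harvested DC power per user for CA-PB.
import math

xi, Is, c, sigma_h2 = 0.85, 1e-3, 1.0, 1.0
rho, V_T = 1.0, 28.85e-3
K0 = xi * Is * c * sigma_h2 / (2.0 * (rho * V_T) ** 2)   # K0 of eq. (8)

def p_out_ca(P, R, h_C, alpha):
    """Average harvested DC power per user in the charging cell, eqs. (17)-(18)."""
    if alpha == 2:
        return K0 * P / R ** 2 * math.log(1.0 + R ** 2 / h_C ** 2)       # (18)
    return K0 * 2.0 * P / ((alpha - 2.0) * R ** 2) * (
        1.0 / h_C ** (alpha - 2) - 1.0 / (R ** 2 + h_C ** 2) ** (alpha / 2 - 1)
    )                                                                      # (17)

for alpha in (2, 4):
    print(f"alpha = {alpha}: P_out_CA = {p_out_ca(20.0, 30.0, 7.75, alpha):.4e} W")
```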
2) DA-PB: Compared with CA-PB, the distances between the DAEs and user in the DA-PB are usually different.
d_i = √((x − r cos(2π(i−1)/N))² + (y − r sin(2π(i−1)/N))² + h_D²), ∀i ∈ [1, N], denotes the distance between DAE i and the user.
P_out−DA(x, y, 0) = (K_0 P/N) Σ_{i=1}^{N} 1/d_i^α.   (19)
By averaging (19), we get
P̄_out−DA = ∫_0^{2π} ∫_0^R (K_0 P/(NπR²)) Σ_{i=1}^{N} (1/d_i^α) ρ dρ dθ = (K_0 P/(πR²)) ∫_0^{2π} ∫_0^R (ρ/d_1^α) dρ dθ ≜ (K_0 P/(πR²)) Q,   (20)
in which the symmetry property has been used to get (20). Q is intractable but we get an explicit closed-form expression when α takes the typical value 2 and 4 as follows
Q = π ln[(R² + h_D² − r² + √((R² + h_D² − r²)² + 4r²h_D²))/(2h_D²)] for α = 2, and
Q = π [R² − h_D² − r² + √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)] / [2h_D² √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)] for α = 4.   (21)
The detailed derivation process of (21) is presented in Appendix B. Note that the P out−DA is also proportional to the transmit power because the definite integral Q in (20) is actually a constant and power independent.
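The closed-form Q in (21) can be plugged into (20) directly. The following added sketch (with illustrative parameter values and an assumed K_0) evaluates the DA-PB average harvested power for both path-loss exponents.

```python
# Sketch (illustrative only): closed-form Q of eq. (21) and the resulting
# average harvested DC power of eq. (20) for DA-PB; h_D follows eq. (14).
import math

def Q_closed_form(R, r, h_D, alpha):
    A = R ** 2 + h_D ** 2 - r ** 2
    B = math.sqrt(R ** 4 + R ** 2 * (2 * h_D ** 2 - 2 * r ** 2) + (r ** 2 + h_D ** 2) ** 2)
    if alpha == 2:
        return math.pi * math.log((A + math.sqrt(A ** 2 + 4 * r ** 2 * h_D ** 2)) / (2 * h_D ** 2))
    if alpha == 4:
        return math.pi * (R ** 2 - h_D ** 2 - r ** 2 + B) / (2 * h_D ** 2 * B)
    raise ValueError("closed form only for alpha in {2, 4}")

K0, P, R, h_C, r = 0.51, 20.0, 30.0, 7.75, 20.0   # assumed values
h_D = h_C ** 2 / (2 * r)                          # eq. (14), r > h_C/sqrt(2)
for alpha in (2, 4):
    p_da = K0 * P / (math.pi * R ** 2) * Q_closed_form(R, r, h_D, alpha)
    print(f"alpha = {alpha}: average DA-PB power = {p_da:.4e} W")
```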
On the other hand, let r MS denote the distance between the user and the cell center, then
P_out−DA(r_MS) = Σ_{i=1}^{N} K_0 P / {N [r_MS² + r² − 2rr_MS cos(2π(i−1)/N) + h_D²]^{α/2}}.   (22)
When N → ∞, we get
lim_{N→∞} P_out−DA(r_MS) = lim_{N→∞} (K_0 P/(2π)) Σ_{i=1}^{N} (2π/N) [r_MS² + r² − 2rr_MS cos(2π(i−1)/N) + h_D²]^{−α/2}
= (K_0 P/(2π)) ∫_0^{2π} dθ/(r_MS² + r² − 2rr_MS cos θ + h_D²)^{α/2}
= K_0 P [(r_MS² + r² + h_D²)² − 4r²r_MS²]^{−α/4} × P_{α/2−1}((r_MS² + r² + h_D²)/√((r_MS² + r² + h_D²)² − 4r²r_MS²)),   (23)
where P_·(·) denotes the Legendre function [20] and P_a(b) = F(−a, a + 1; 1; (1 − b)/2), where F(·, ·; ·; ·) is the Gauss hypergeometric function [20]. This function can be calculated using any standard mathematical software package such as MATLAB or MAPLE. Note that we have used ([21], (2.5.16.38)) to get the last equality in (23).
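As an added cross-check (not from the paper), the limit (23) can be evaluated both by direct numerical integration and through the hypergeometric representation of the Legendre function; scipy is assumed to be available, and the parameter values are illustrative.

```python
# Sketch (illustrative only): comparing the integral definition in (23) with
# its Legendre/hypergeometric closed form.
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

K0, P, alpha = 0.51, 20.0, 3.0
r, h_D, r_MS = 20.0, 1.5, 10.0

# Direct integral: K0*P/(2*pi) * int_0^{2pi} (...)^(-alpha/2) dtheta
integrand = lambda t: (r_MS**2 + r**2 - 2*r*r_MS*math.cos(t) + h_D**2) ** (-alpha / 2)
direct = K0 * P / (2 * math.pi) * quad(integrand, 0.0, 2.0 * math.pi)[0]

# Closed form (23): Legendre P_{alpha/2-1}(z), z = S / sqrt(S^2 - 4 r^2 r_MS^2)
S = r_MS**2 + r**2 + h_D**2
D = math.sqrt(S**2 - 4 * r**2 * r_MS**2)
nu = alpha / 2 - 1
legendre = hyp2f1(-nu, nu + 1, 1, (1 - S / D) / 2)   # P_a(b) = F(-a, a+1; 1; (1-b)/2)
closed = K0 * P * D ** (-alpha / 2) * legendre

print(f"direct integral: {direct:.6e}   closed form (23): {closed:.6e}")
```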
C. Average Efficiency of WPT
In our system, the average efficiency of WPT is defined as the ratio of the average harvested DC power per user to the PB transmit power. The average efficiency of WPT is an important metric when judging which deployment of PB antennas is more energy efficient.
1) CA-PB: Note that all the antennas simulcast energy signal to the user, thus the total transmit power is P . The average efficiency of WPT for CA-PB is
η_CA ≜ P̄_out−CA/P = (2K_0/((α − 2)R²)) [1/h_C^{α−2} − 1/(R² + h_C²)^{α/2−1}].   (24)
From the result above, we can argue that the average efficiency of WPT for CA-PB is determined by the antenna height of the PB and the path-loss exponent.
2) DA-PB: Similarly, the average efficiency of DA-PB is
η_DA ≜ P̄_out−DA/P = (K_0/(πR²)) Q.   (25)
Note that Q is related to the path-loss exponent, the antenna height of DA-PB, and the DAE radius, so we can optimize the location of the DAEs to maximize the average efficiency of WPT for DA-PB.
IV. LOCATION OPTIMIZATION OF CIRCULAR PB DISTRIBUTED ANTENNAS
In order to satisfy the Friis equation, we require h_D ≥ d_ref, where d_ref is a reference distance for the antenna far field. According to (14), this gives
h_C²/(2R) ≥ d_ref.   (26)
Without loss of generality, we use d_ref = 1 throughout this paper, thus h_C ≥ √(2R). On the other hand, in order to improve the average efficiency of WPT, a lower antenna height is better, but it must still satisfy the safety radiation level limited by the FCC. Given this, we assume that h_C < R; an antenna height of the BS lower than the cell radius is a common assumption in the wireless communications literature. From the above, we only focus on √(2R) ≤ h_C < R from now on to continue our analysis.
A. Path-Loss Exponent 2
When α = 2, in order to maximize the average efficiency of WPT, we formulate an optimization problem to get the optimal DAE radius as follows
P1: max_r Υ_1(r)  s.t. 0 ≤ r ≤ R,   (27)
where
Υ_1(r) = (K_0/R²) ln[(R² + h_D² − r² + √((R² + h_D² − r²)² + 4r²h_D²))/(2h_D²)]
and h D is given by (14). We get the closed-form expression of the optimal DAE radius as follows
r* = (1/2)√(R² + √(R⁴ + 4h_C⁴)).   (28)
The detailed derivation process can be found in Appendix C.
From this concise result, we can easily find that the optimal DAE radius is determined only by the size of the cell, i.e., the radius of the cell, and the CA-PB antenna height.
Note that h C is essentially determined by the safety level of radiation power density and total transmit power. This can be explained by (9).
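A small added sketch, using the simulation values for R and h_C, evaluates (28) and cross-checks it against a coarse grid search over the objective Υ_1(r) of problem P1; it is an illustration, not the authors' code.

```python
# Sketch (illustrative only): closed-form optimal DAE radius for alpha = 2.
import math

R, h_C = 30.0, 7.75

def h_D(r):
    """DA-PB antenna height, eq. (14)."""
    return math.sqrt(h_C**2 - r**2) if r <= h_C / math.sqrt(2) else h_C**2 / (2 * r)

def upsilon1(r):
    """Objective of P1 up to the constant K0."""
    hd = h_D(r)
    A = R**2 + hd**2 - r**2
    return math.log((A + math.sqrt(A**2 + 4 * r**2 * hd**2)) / (2 * hd**2)) / R**2

r_star = 0.5 * math.sqrt(R**2 + math.sqrt(R**4 + 4 * h_C**4))   # eq. (28)
grid_best = max((i * R / 10000 for i in range(1, 10001)), key=upsilon1)
print(f"closed form r* = {r_star:.3f} m, grid search = {grid_best:.3f} m")
```

For these values both approaches give a radius of roughly 21 m, close to the r = 2R/3 = 20 m used in the simulations.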
B. Path-Loss Exponent 4
Similarly to that when α = 2, we formulate an optimization problem to get the optimal DAE radius for α = 4 as follows
P2: max_r Υ_2(r)  s.t. 0 ≤ r ≤ R,   (29)
where Υ_2(r) = (K_0/(2R²)) [R² − h_D² − r² + √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)] / [h_D² √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)].
We reformulate the above optimization problem into finding the desired real root in the range (h_C²/2, R²) of the following eighth-order polynomial equation
p(x) = 0,   (30)
where p(x) = 256x⁸ − 768R²x⁷ + 128(6R⁴ + h_C⁴)x⁶ + (224h_C⁴R² − 256R⁶)x⁵ − 192R⁴h_C⁴x⁴ − 32R²h_C⁴(R⁴ + 2h_C⁴)x³ − 8h_C⁸(4R⁴ + h_C⁴)x² − 10R²h_C¹²x − h_C¹⁶.
The proof can be referred to Appendix D. It is easy to show that p(h_C²/2) < 0 and p(R²) > 0, so there must be at least one real root for x ∈ (h_C²/2, R²) under the condition √(2R) ≤ h_C < R.
However, it is nontrivial to prove the uniqueness of the real root of the above equation; we admit that we cannot prove it directly. Next, we present some alternative methods to help bracket the real roots of the above equation. Note that for √(2R) ≤ h_C < R, only the coefficients of the eighth-order and sixth-order terms are positive, and the other coefficients are negative. According to Descartes' rule of signs [22], the number of positive real roots of the above single-variable polynomial is either equal to the number of sign differences between consecutive nonzero coefficients, or is less than it by an even number; multiple roots of the same value are counted separately. So it is easy to argue that (30) has one or three positive real roots. We further determine the number of real roots of (30) in the range x ∈ (h_C²/2, R²) by Sturm's Theorem [23].
First, we get the Sturm sequence of p(x) as:
p_0(x) = p(x), p_1(x) = p'(x), p_2(x) = −rem(p_0, p_1) = p_1(x)q_0(x) − p_0(x), p_3(x) = −rem(p_1, p_2) = p_2(x)q_1(x) − p_1(x), ..., 0 = −rem(p_{m−1}, p_m),
where rem(p_i, p_j) and q_i are the remainder and the quotient of the polynomial long division of p_i by p_j, and m is the minimal number of polynomial divisions (never greater than deg(p), the degree of p) needed to obtain a zero remainder. Then, let σ(ς) denote the number of sign changes (ignoring zeroes) in the Sturm sequence [p_0(ς), p_1(ς), p_2(ς), ..., p_m(ς)]. Finally, according to Sturm's Theorem, the number of distinct real roots of p(x) in the half-open interval (h_C²/2, R²] is σ(h_C²/2) − σ(R²). Sturm's Theorem can help us to quickly determine how many real roots of p(x) exist in the range (h_C²/2, R²] by numerical rather than symbolic computation.
C. Algorithm of Optimizing DAE Radius for Path-Loss Exponent 4
The optimal DAE radius can be calculated by the following numerical iterative method. First, use Sturm's Theorem to determine the number of real roots of (30) in the range (h_C²/2, R²). Then, find all the real roots of p(x) in the range (h_C²/2, R²). Finally, we can get the optimal DAE radius r*. case 1: If there is only one real root in (h_C²/2, R²), denoted as x_1, then the optimal DAE radius is
r* = √x_1.   (31)
case 2: If there are κ (κ = 2 or 3) real roots in (h_C²/2, R²), denoted as {x_i, i ≤ κ}, the optimal radius is
r* = arg max_{r_i, i ≤ κ} Υ_2(r_i),   (32)
where r_i = √x_i, i ≤ κ.
The detailed numerical solving process of the optimal DAE radius r ⋆ is summarized in Algorithm 1.
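The following added Python sketch mirrors the spirit of Algorithm 1 (Sturm-sequence root counting followed by bisection). The tolerance, the assumed constant K_0, and the interval-splitting strategy are implementation assumptions rather than the paper's exact procedure.

```python
# Sketch (illustrative only): count the real roots of p(x) in (h_C^2/2, R^2)
# with a Sturm chain, isolate them, refine by bisection, pick argmax Upsilon_2.
import math
import numpy as np

R, h_C = 30.0, 7.75
K0 = 0.51  # assumed constant; it does not affect the argmax

# Coefficients of p(x), eq. (30), highest degree first.
p = np.array([256.0, -768*R**2, 128*(6*R**4 + h_C**4), 224*h_C**4*R**2 - 256*R**6,
              -192*R**4*h_C**4, -32*R**2*h_C**4*(R**4 + 2*h_C**4),
              -8*h_C**8*(4*R**4 + h_C**4), -10*R**2*h_C**12, -h_C**16])

def sturm_chain(poly):
    chain = [poly, np.polyder(poly)]
    while chain[-1].size > 1:
        rem = -np.polydiv(chain[-2], chain[-1])[1]
        if not np.any(rem):
            break
        chain.append(rem)
    return chain

def sign_changes(chain, x):
    vals = [v for v in (np.polyval(q, x) for q in chain) if v != 0.0]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

def roots_in(chain, a, b):
    return sign_changes(chain, a) - sign_changes(chain, b)

def bisect(poly, a, b, tol=1e-10):
    fa = np.polyval(poly, a)
    while b - a > tol * max(1.0, abs(b)):
        m = 0.5 * (a + b)
        fm = np.polyval(poly, m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def upsilon2(r):
    hd = h_C**2 / (2*r) if r > h_C / math.sqrt(2) else math.sqrt(h_C**2 - r**2)
    B = math.sqrt(R**4 + R**2*(2*hd**2 - 2*r**2) + (r**2 + hd**2)**2)
    return K0 * (R**2 - hd**2 - r**2 + B) / (2 * R**2 * hd**2 * B)

chain = sturm_chain(p)
lo, hi = h_C**2 / 2, R**2
n = roots_in(chain, lo, hi)

# Isolate the n roots by repeated halving, then refine each by bisection.
stack, intervals = [(lo, hi)], []
while stack:
    a, b = stack.pop()
    k = roots_in(chain, a, b)
    if k == 1:
        intervals.append((a, b))
    elif k > 1:
        m = 0.5 * (a + b)
        stack += [(a, m), (m, b)]
candidates = [math.sqrt(bisect(p, a, b)) for a, b in intervals]
r_star = max(candidates, key=upsilon2)
print(f"{n} real root(s) in (h_C^2/2, R^2); optimal DAE radius r* = {r_star:.3f} m")
```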
V. SIMULATION RESULTS AND DISCUSSION
In this section, we present simulation results and discussion. Specifically, for CA-PB and DA-PB, we give the simulation results of the antenna height, the average harvested DC power, and the average WPT efficiency, as well as their theoretical values. Parameters used in the simulations are presented in Table I unless stated otherwise. The transmit power of the PB in our simulation experiments follows [8]. Note that for the maximal transmit power P = 200 W and the CA-PB antenna height h_C = 7.75 m, the maximal radiation power density in the charging cell is 0.265 W/m², which is much lower than 10 W/m², the safety level limited by the FCC. Thus the parameters in our simulation experiments are reasonable.
Algorithm 1 Finding the optimal r* using the bisection method based on Sturm's Theorem
1: Obtain the Sturm sequence [p_0(x), p_1(x), p_2(x), ..., p_m(x)] and determine the number of real roots of p(x) in (h_C²/2, R²), i.e., n = σ(h_C²/2) − σ(R²).
2: If n = 1, set a_1 = h_C²/2, b_1 = R², then skip to step 5 to find the real root x_1, thus r* = √x_1.
3: Else (n = 2 or 3), isolate the interval (h_C²/2, R²) of real roots, resulting in n distinct intervals (a_1, b_1), ..., (a_n, b_n), each of which contains only one real root and with no intersection among different intervals.
4: Go to step 5 to find all the real roots {x_i, i ≤ n} in (a_i, b_i), i ≤ n.
5: For i = 1 to n
6: Find the root x_i in (a_i, b_i) by bisection as follows:
7: a = a_i, b = b_i, tolerance ε > 0.
8: While |a − b| > ε
9: Begin
10: If p((a + b)/2) = 0, then skip to step 15.
11: Elseif p(a)p((a + b)/2) > 0, then a = (a + b)/2.
12: Else, b = (a + b)/2.
13: Endif
14: End while loop
15: x_i = (a + b)/2.
16: End for loop
A. Antenna Height of PB
As shown in Fig. 2, we illustrate the antenna height of DA-PB as the DAE radius becomes larger. Markers in Fig. 2 are obtained by exhaustive search of equation (12), while lines are plotted by (14). It is demonstrated that the closed-form result of the antenna height is extremely close to the value obtained by exhaustive search of equation (12) for N = 100 (a large-scale antenna array), which verifies the closed-form result (14). On one hand, the antenna height of DA-PB is a decreasing function of the DAE radius; on the other hand, the height of CA-PB in our simulations satisfies the Friis equation, i.e., h_C²/(2R) ≥ 1. For the convenience of comparison, we give the results for different antenna heights of CA-PB. It is worth mentioning that a lower h_C will surely improve the efficiency of WPT, but it must be designed carefully together with the transmit power in order to satisfy the radiation safety level.
B. Average Harvested DC Power
In Fig.3, we present the simulation results in comparison with the theoretical values. Simulation results are obtained by random realizations of fast fading channel and user locations while theoretical results are obtained by (20) while h D is given by exhaustive search of equation (12). It is obvious that simulation results are perfectly consistent with our derived theoretical values. On one hand, it is found that the average harvested DC power for both CA-PB and DA-PB are proportional to the transmit power which can be demonstrated by (17) and (20), respectively; On the other hand, by using DA-PB, the average harvested DC power is larger.
We can see from Fig. 4 that the result in (23) for N → ∞ is extremely consistent with the simulation result when N equals 100. Obviously, Fig. 4 shows that the ergodic harvested DC power is higher when r_MS is close to the DAE radius r. What is more, for either r_MS > r or r_MS < r, the ergodic harvested DC power is a convex function of r_MS. As expected, the smaller the path-loss exponent, the higher the ergodic harvested DC power that users can harvest.
In Fig.5, we present the analytical results (i.e., h D in (20) are given by (14)) as well as simulation results and theoretical values for the average harvested DC power when N becomes larger. Many interesting phenomena can be found from the figure. First, for path-loss exponents 2 and 4, the average harvested DC powers by using DA-PB are greater than that by using CA-PB; Second, when the number of DAEs is about 80, the result we derive under the assumption that N → ∞ is extremely close to the simulation result, which indicates that the analytical result can be applied in large scale antenna arrays; Finally, the average harvested DC power by using CA-PB is invariant, while the average power harvested by using DA-PB increases when N goes up. This phenomenon shows that by using multi-antennas, the performance gain of our proposed DA-PB can be improved further. In contrast, there is no performance gain when CA-PB uses multiple omnidirectional antennas. As a matter of fact, the antenna height h C also has an effect on the average harvested DC power. The result in Fig.6 illustrates the effect. Specifically, larger h C means larger average distances between PB antennas and users, which decreases the average harvested DC power. Even though all the values of average harvested DC power decrease when h C gradually increases, DA-PB strictly outperforms CA-PB for any arbitrary h C .
C. Average Efficiency of PB
In order to verify the optimal DAE radius, we present the simulation results in Fig. 7. Specifically, the magenta hollow circles denote the theoretical values of the average efficiency of WPT when the antenna number is 100, while the blue solid line and dashed line denote analytical results for path-loss exponents 2 and 4, respectively. For path-loss exponent 2, the black solid circle is the optimal DAE radius obtained by (28), while for path-loss exponent 4, the black solid diamond marks the optimal DAE radius obtained using Algorithm 1. It is obvious that the optimal radii are consistent with the simulation results. Obviously, DA-PB is strictly better than CA-PB for any DAE radius. Note that the efficiency is lower than one percent; this can be explained as follows.
In this paper, in order to satisfy the Friis Equation as well as use simplified path-loss formula, we assume h C ≥ √ 2R. However, h C could be smaller in practice as long as to restrict the transmit power to satisfy the safety radiation. Thus the average efficiency of WPT could be larger in practice.
Compared to CA-PB, DA-PB has other advantages. In Fig. 8, with the average harvested DC power fixed at 0 dBm (i.e., 1 mW), we find that the transmit power can be dramatically reduced by using DA-PB. There is an optimal DAE radius that minimizes the transmit power. Compared with the case of CA-PB, for path-loss exponent 2 it is easy to find that 3 dB of transmit power can be saved, while more than 15 dB can be saved when the path-loss exponent is 4. This again demonstrates that DA-PB is better than CA-PB. As we can see from Fig. 9, the cumulative distribution function (CDF) of the WPT efficiency of CA-PB is significantly steeper than that of DA-PB for path-loss exponents 2 and 4. This indicates that there is a larger area in which users can harvest more power by using DA-PB than by using CA-PB. The efficiency of CA-PB is much lower than that of DA-PB. For example, when the path-loss exponent is 2, the probabilities of the efficiency being larger than 0.5 percent are 0.2 for DA-PB and 0.05 for CA-PB, respectively. This phenomenon can be explained as follows. First, CA-PB with a longer average propagation distance means higher propagation path-loss, which reduces the WPT efficiency; second, by using DA-PB, the average distance between DAEs and users is shortened, which decreases the path-loss of the power transfer and eventually increases the WPT efficiency. Note that the WPT efficiency can be further improved by lowering h_C, as long as the transmit power is restricted to satisfy the radiation safety level.
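A minimal added Monte-Carlo sketch in the spirit of Fig. 9 is shown below. The fading is averaged out analytically through (8), so only the user location is sampled; all parameter values, including K_0, are assumptions mirroring the simulation setup.

```python
# Sketch (illustrative only): per-user WPT efficiency for CA-PB and DA-PB.
import math
import random

random.seed(1)
K0, R, h_C, N, r, alpha = 0.51, 30.0, 7.75, 100, 20.0, 4
h_D = h_C**2 / (2*r)

def efficiency(x, y, dist):
    if dist == "CA":
        d2 = x*x + y*y + h_C*h_C
        return K0 / d2 ** (alpha/2)
    s = 0.0
    for i in range(N):
        phi = 2*math.pi*i/N
        d2 = (x - r*math.cos(phi))**2 + (y - r*math.sin(phi))**2 + h_D**2
        s += 1.0 / d2 ** (alpha/2)
    return K0 * s / N

def sample_user():
    while True:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x*x + y*y <= R*R:
            return x, y

users = [sample_user() for _ in range(2000)]
for dist in ("CA", "DA"):
    effs = sorted(efficiency(x, y, dist) for x, y in users)
    median = effs[len(effs)//2]
    print(f"{dist}-PB: median efficiency = {100*median:.4f} %")
```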
VI. CONCLUSION
In this paper, we consider a novel antenna deployment of PB, i.e., DA-PB. We derive the antenna height of DA-PB to protect users from being hurt by RF electromagnetic radiation. Besides, we get the average harvested DC power per user in the charging cell. In order to maximize the average efficiency of DA-PB, we get the optimal DAE radius of circularly distributed PB antennas. Finally, simulation results verify the theoretical results and show that the proposed DA-PB indeed achieves larger average harvested DC power per user and average efficiency of WPT than conventional CA-PB. These useful observations can give operators valuable directions when exploiting PBs in WPT or future Wireless Powered Communications Network (WPCN).
APPENDIX A
The radiation power density at the coordinate of (ν, 0, 0) is
Ψ_d(ν) = Σ_{i=1}^{N} P / {4πN[(ν − r cos(2π(i−1)/N))² + (r sin(2π(i−1)/N))² + h_D²]} = (P/(8π²)) Σ_{i=1}^{N} (2π/N) / (r² + ν² − 2rν cos(2π(i−1)/N) + h_D²),   (33)
so
ν* = arg max_{ν ∈ [0,R]} Ψ_d(ν).   (34)
It is very hard to get ν* from (34), so we cannot give a closed-form expression of h_D for arbitrary N from (12). For N → ∞, the radiation power density is
lim_{N→∞} Ψ_d(ν) = (P/(8π²)) ∫_0^{2π} dθ/(r² + ν² − 2rν cos θ + h_D²) = (P/(4π)) · 1/√((r² + ν² + h_D²)² − 4r²ν²).   (35)
Thus, maximizing Ψ_d(ν) over ν is equivalent to minimizing
f(ν) = (r² + ν² + h_D²)² − 4r²ν².   (36)
Let t = ν²; we have f(t) = t² + 2(h_D² − r²)t + (r² + h_D²)². With f'(t) = 0, we get
t* = r² − h_D².   (37)
case 1: If t* > 0, we argue that ν* = √t* = √(r² − h_D²)
, thus the maximal radiation power density is P 8πrhD . According to (12), we have
h_D = h_C²/(2r).   (38)
case 2: If t ⋆ ≤ 0, we argue that ν ⋆ = 0. Similarly to Case 1, we get
h_D = √(h_C² − r²).   (39)
Q = 2∫_0^R [∫_0^π ρ/(ρ² + r² + h_D² − 2ρr cos θ) dθ] dρ = ∫_0^R 2πρ/√((ρ² + r² + h_D²)² − 4ρ²r²) dρ = (with t = ρ²) ∫_0^{R²} π/√(t² + 2(h_D² − r²)t + (r² + h_D²)²) dt = π[arcsinh((R² + h_D² − r²)/(2rh_D)) − arcsinh((h_D² − r²)/(2rh_D))].   (42)
h_11(r) = 2r√((R² + h_C² − 2r²)² + (2h_C² − 4r²)²(h_C² − r²))(R² − h_C²) + R²(R² − 2r² + h_C² − 2r²)² + (2h_C² − 4r²)²(h_C² − r²).   (46)
h_12(r) = (h_C⁴/(32r⁷))[(4r²R² + h_C⁴ − 4r⁴)² + 16r⁴h_C⁴] + (4r²R² + h_C⁴ − 4r⁴)√((4r²R² + h_C⁴ − 4r⁴)² + 16r⁴h_C⁴) − (h_C⁴ + 4r⁴)√((4r²R² + h_C⁴ − 4r⁴)² + 16r⁴h_C⁴) − (h_C⁴ + 4r⁴)(4r²R² + h_C⁴ − 4r⁴) + 4r²h_C⁴.   (47)
h_21(r) = 2r(R² + h_C²)√(R⁴ + R²(2h_C² − 4r²) + h_C⁴) + (R² + h_C²)(h_C² − r²) − 2R²√(R⁴ + R²(2h_C² − 4r²) + h_C⁴) + 2R²(h_C² − r²) + R²(R² − 2r²) + h_C⁴.   (54)
From the above, we conclude
ν* = 0 for 0 ≤ r ≤ h_C/√2, and ν* = √(r² − (h_C²/(2r))²) for h_C/√2 ≤ r ≤ R,   (40)
and
h_D = √(h_C² − r²) for 0 ≤ r ≤ h_C/√2, and h_D = h_C²/(2r) for h_C/√2 ≤ r ≤ R.   (41)
Thus this ends the proof.
APPENDIX B
It is difficult to give a closed-form expression of Q for an arbitrary path-loss exponent α, but we get closed-form results when α takes the typical values 2 and 4. Specifically, for the special case α = 2, Q is derived as (42), where ([20], (3.661.4)) and ([20], (2.261)) were exploited to derive the inner integral I and the last equality in (42), respectively. With arcsinh(x) = ln(x + √(x² + 1)), and after some algebraic manipulations, we finally get
Q = π ln[(R² + h_D² − r² + √((R² + h_D² − r²)² + 4r²h_D²))/(2h_D²)].   (43)
For α = 4, a similar derivation procedure can be followed to get Q. This ends the proof.
APPENDIX C For the special case α = 2, the optimization problem P1 can be reduced to the following problem
max_r f_1(r)  s.t. 0 ≤ r ≤ R,   (44)
where f_1(r) = ln[(R² + h_D² − r² + √((R² + h_D² − r²)² + 4r²h_D²))/(2h_D²)] and h_D is given by (14). For the convenience of calculation, let a ≜ R² + h_D² − r², b ≜ h_D², and c ≜ 2rh_D; thus the first-order derivative of f_1(r) is given by
f'_1(r) = {[a'√(a² + c²) + aa' + cc']b − [a√(a² + c²) + a² + c²]b'} / {[a + √(a² + c²)] b √(a² + c²)}.   (45)
With the denominator always larger than zero, we only consider the numerator. case 1: When r ∈ (0, h_C/√2), denote the numerator as h_11(r) in (46). For any h_C ∈ [√(2R), R), it is easy to show that h_11(r) > 0 always holds; therefore, for r ∈ (0, h_C/√2), f'_1(r) > 0 always holds. Note that f_1(r) attains a minimal value at r = 0, so we discard it and only focus on r > 0 from now on. case 2: When r ∈ (h_C/√2, R), denote the numerator as h_12(r) in (47). Discarding the positive terms and after some algebraic manipulations, we get
I_1(r) = 4r²(R² − 2r²)√((4r²R² + h_C⁴ − 4r⁴)² + 16r⁴h_C⁴) + 4r²R² + h_C⁴ − 4r⁴ + 4r²h_C⁴.   (48)
With the variable substitution x = r 2 , let I 1 (x) = 0. We get
4x² − 2R²x − h_C⁴ = 0.   (49)
Note that x is larger than zero, so
x_1 = (R² + √(R⁴ + 4h_C⁴))/4.   (50)
For any h_C ∈ [√(2R), R), it is easy to show that x_1 ∈ (h_C²/2, R²). Therefore, the uniqueness of the root of the equation f'_1(r) = 0 in the range (h_C/√2, R) has been demonstrated. It is easy to show that f'_1(r)|_{r→(h_C/√2)^-} = f'_1(r)|_{r→(h_C/√2)^+} > 0, so f'_1(r) is continuous at r = h_C/√2.
On one hand, since f'_1(r) > 0 for r ∈ (0, h_C/√2), as proved above, the optimal DAE radius must lie in the range (h_C/√2, R). On the other hand, f'_1(r)|_{r→R^-} < 0, so f_1(R) is certainly not the maximal value. Therefore
r* = √x_1 = (1/2)√(R² + √(R⁴ + 4h_C⁴)).   (51)
This ends the proof.
APPENDIX D
For the special case α = 4, a similar derivation procedure can be followed to get the optimal DAE radius. The optimization problem P2 can be reduced to the following problem
max_r f_2(r)  s.t. 0 ≤ r ≤ R,   (52)
where f_2(r) = [R² − h_D² − r² + √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)] / [h_D² √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²)] and h_D is given by (14). Let a ≜ h_D², b ≜ √(R⁴ + R²(2h_D² − 2r²) + (r² + h_D²)²), and c ≜ R² − h_D² − r²; thus the first-order derivative of f_2(r) is given by
f'_2(r) = [(c' + b')ab − (c + b)(a'b + ab')] / (ab)².   (53)
case 1: When r ∈ (0, h_C/√2), denote the numerator as h_21(r) in (54). Similar to the case α = 2, for r ∈ (0, h_C/√2) it is easy to prove that f'_2(r) > 0 always holds, so we discard r = 0 and only focus on r > 0 from now on. case 2: When r ∈ (h_C/√2, R), denote the numerator as h_22(r) in (55), where A ≜ [R⁴ + R²(h_C⁴/(2r²) − 2r²) + (r² + h_C⁴/(4r²))²]^{−1/2}. Discarding the positive terms and after some algebraic manipulations, we get
I_2(r) = [16r⁴R⁴ + 8r²R²(h_C⁴ − 4r⁴) + (4r⁴ + h_C⁴)²]^{3/2} + [16r⁴R⁴ + 8r²R²(h_C⁴ − 4r⁴) + (4r⁴ + h_C⁴)²](4r²R² − 8r⁴) − (4r²R² − h_C⁴ − 4r⁴)[−4r²R²(h_C⁴ + 4r⁴) + 16r⁸ − h_C⁸].   (56)
With the variable substitution x = r², let I_2(x) = 0. We get
256x⁸ − 768R²x⁷ + 128(6R⁴ + h_C⁴)x⁶ + (224h_C⁴R² − 256R⁶)x⁵ − 192R⁴h_C⁴x⁴ − 32R²h_C⁴(R⁴ + 2h_C⁴)x³ − 8h_C⁸(4R⁴ + h_C⁴)x² − 10R²h_C¹²x − h_C¹⁶ = 0.   (57)
Similar to the case α = 2, it can be proved that the optimal real root must lie in the range (h_C²/2, R²). Therefore, the optimal DAE radius must be one of the square roots of the real roots of the above eighth-order equation in (h_C²/2, R²). This ends the proof.
Fig. 1. System model: (a) CA-PB with multi-antennas; (b) our proposed DA-PB with multi-antennas.
Fig. 2. The antenna height of DA-PB versus DAE radius r, where N = 100.
Fig. 3. The average harvested DC power per user versus transmit power P, where N = 100 and r = 2R/3.
Fig. 4. The ergodic harvested DC power of one user versus the distance between the user and the cell center r_MS for different path-loss exponents α = 2, 3, 4, where N = 100, r = 2R/3, P = 20 W.
Fig. 5. The average harvested DC power per user versus antenna number N, where r = 2R/3, P = 20 W.
Fig. 6. The average harvested DC power per user versus antenna height h_C, where N = 100, r = 2R/3, P = 20 W.
Fig. 7. The average efficiency of WPT versus DAE radius r, where N = 100.
Fig. 8. The transmit power versus the DAE radius r with the average harvested DC power fixed at 1 mW.
Fig. 9. The cumulative distribution function of the WPT efficiency for users in the charging cell, where r = 2R/3.
ACKNOWLEDGMENT The work is supported by Natural Science Basic Research Plan in Shaanxi Province of China (Program No.2015JQ6234) and the National Natural Science Foundation of China (NSFC) under Grant (No.61461136001).
TABLE I
PARAMETER SETTING USED IN THE SIMULATION EXPERIMENTS
Symbol | Definition                                    | Value   | Unit
h_C    | Height of CA-PB Antennas                      | 7.75    | m
r      | Radius of UCL Distributed Antennas            | 20      | m
R      | Radius of Circular Coverage                   | 30      | m
I_s    | Reverse Saturation Current of Schottky Diode  | 1       | mA
N      | Number of Power Beacon Antennas               | 100     | -
P      | Transmit Power of the Power Beacon            | 20-200  | W
c      | Constant Scaling Factor                       | 1       | -
V_T    | Thermal Voltage                               | 28.85   | mV
α      | Path-Loss Exponent                            | 2 or 4  | -
ρ      | Quality Factor of Schottky Diode              | 1       | -
ξ      | Coefficient of Energy Conversion              | 0.85    | -
σ_h²   | Average Multi-Path Gain                       | 1       | -
h_22(r) = A { (h_C⁴/(2r³))[R⁴ + R²(h_C⁴/(2r²) − 2r²) + (r² + h_C⁴/(4r²))²]^{3/2} + ((4r²R²h_C⁴ − 8r⁴h_C⁴)/(8r⁵))[R⁴ + R²(h_C⁴/(2r²) − 2r²) + (r² + h_C⁴/(4r²))²] − (h_C⁴(4r²R² − h_C⁴ − 4r⁴)/(32r⁴))[R²(−h_C⁴/r³ − 4r) + 2(r² + h_C⁴/(4r²))(2r − h_C⁴/(2r³))] }.   (55)
According to the IEEE standard C95.1-2005, the safety radiation level of human exposure to RF electromagnetic fields from 2 GHz to 100 GHz is 10 W/m 2 (i.e., 1 mW/cm 2 ) ([8] and[19], p. 27).
Transporting information and energy simultaneously. L R Varshney, Proc. IEEE ISIT. IEEE ISITToronto, ON, CanadaL. R. Varshney, "Transporting information and energy simultaneously," in Proc. IEEE ISIT, Toronto, ON, Canada, pp. 1612-1616, Jul. 2008.
Shannon meets Tesla: wireless information and power transfer. P Grover, A Sahai, Proc. IEEE ISIT. IEEE ISITAustin, TX, USAP. Grover and A. Sahai, "Shannon meets Tesla: wireless information and power transfer," in Proc. IEEE ISIT, Austin, TX, USA, pp. 2363-2367, Jun. 2010.
MIMO broadcasting for simultaneous wireless information and power transfer. R Zhang, C K Ho, IEEE Trans. Wireless Commun. 125R. Zhang and C. K. Ho, "MIMO broadcasting for simultaneous wireless information and power transfer," IEEE Trans. Wireless Commun., vol. 12, no. 5, pp. 1989-2001, May 2013.
Wireless information and power transfer: Architecture design and rate-energy tradeoff. X Zhou, R Zhang, C K Ho, IEEE Trans. Commun. 6111X. Zhou, R. Zhang and C. K. Ho, "Wireless information and power transfer: Architecture design and rate-energy tradeoff," IEEE Trans. Commun., vol. 61, no. 11, pp. 4754-4767, Nov. 2013.
Iterative dynamic power splitting for multi-relay networks with wireless energy harvesting. C Zhang, L Hu, IEEE Signal Process. Lett. 2212C. Zhang and L. Hu, "Iterative dynamic power splitting for multi-relay networks with wireless energy harvesting," IEEE Signal Process. Lett., vol. 22, no. 12, pp. 2274-2278, Dec. 2015.
Wireless networks with RF energy harvesting: A contemporary survey. X Lu, P Wang, D Niyato, D I Kim, Z Han, IEEE Commun. Surveys Tuts. 172X. Lu, P. Wang, D. Niyato, D. I. Kim and Z. Han, "Wireless networks with RF energy harvesting: A contemporary survey," IEEE Commun. Surveys Tuts., vol. 17, no. 2, pp. 757-789, May 2015.
On the deployment of energy sources in wireless-powered cellular networks. H Tabassum, E Hossain, IEEE Trans. Wireless Commun. 639H. Tabassum, and E. Hossain, "On the deployment of energy sources in wireless-powered cellular networks, IEEE Trans. Wireless Commun., vol. 63, no. 9, pp. 3391-3404, Sep. 2015.
On the efficiency of far-field wireless power transfer. M Xia, S Aïssa, IEEE Trans. Signal Process. 6311M. Xia and S. Aïssa, "On the efficiency of far-field wireless power transfer," IEEE Trans. Signal Process., vol. 63, no. 11, pp. 2835-2847, May 2015.
Adaptively directional wireless power transfer for large-sacle sensor networks. Z Wang, L Duan, R Zhang, IEEE J. Sel. Areas Commun. 345Z. Wang, L. Duan, and R. Zhang, "Adaptively directional wireless power transfer for large-sacle sensor networks," IEEE J. Sel. Areas Commun., vol. 34, no. 5, pp. 1785-1800, May 2016.
On sum rate of multi-user distributed antenna system with circular antenna layout. J Gan, Y Li, S Zhou, J Wang, Proc. 2007 IEEE VTC -Fall. 2007 IEEE VTC -FallJ. Gan, Y. Li, S. Zhou, and J. Wang, "On sum rate of multi-user distributed antenna system with circular antenna layout," in Proc. 2007 IEEE VTC -Fall, pp. 596-600, 2007.
Distributed antenna systems with randomness. J Zhang, J G Andrews, IEEE Trans. Wireless Commun. 79J. Zhang, and J. G. Andrews, "Distributed antenna systems with random- ness," IEEE Trans. Wireless Commun., vol. 7, no. 9, pp. 3636-3646, Sep. 2008.
Performance analysis and location optimization for massive MIMO systems with circularly distributed antenna. A Yang, Y Jing, C Xing, Z Fei, IEEE Trans. Wireless Commun. 1410A. Yang, Y. Jing, C. Xing, and Z. Fei, "Performance analysis and loca- tion optimization for massive MIMO systems with circularly distributed antenna," IEEE Trans. Wireless Commun., vol. 14, no. 10, pp. 5659- 5671, Oct. 2015.
Antenna location design for generalized distributed antenna systems. X Wang, P Zhu, M Chen, IEEE Commun. Lett. 135X. Wang, P. Zhu, and M. Chen, "Antenna location design for generalized distributed antenna systems," IEEE Commun. Lett., vol. 13, no. 5, pp. 315-317, May 2009.
R L Boylestad, L Nashelsky, Electronic Devices and Circuit Theory. Boston, MA, USAPearson11th edR. L. Boylestad and L. Nashelsky, Electronic Devices and Circuit Theory, 11th ed. Boston, MA, USA: Pearson, 2013.
Wireless Communications. A Goldsmith, Cambridge Univ. PressCambridge, U.K.A. Goldsmith, Wireless Communications. Cambridge, U.K.: Cambridge Univ. Press, 2005.
Enabling wireless power transfer in cellular networks: Architecture, modeling and deployment. K Huang, V K Lau, IEEE Trans. Wireless Commun. 132K. Huang and V. K. Lau, "Enabling wireless power transfer in cellular networks: Architecture, modeling and deployment," IEEE Trans. Wire- less Commun., vol. 13, no. 2, pp. 902-912, Feb. 2014.
Waveform optimization for wireless power transfer with nonlinear energy harvester modeling. B Clerckx, E Bayguzina, D Yates, P D Mitcheson, Proc. 2015 IEEE International Symposium on Wireless Communication Systems (ISWCS). 2015 IEEE International Symposium on Wireless Communication Systems (ISWCS)Brussels, BelgiumB. Clerckx, E. Bayguzina, D. Yates, and P. D. Mitcheson, "Waveform optimization for wireless power transfer with nonlinear energy harvester modeling," in Proc. 2015 IEEE International Symposium on Wireless Communication Systems (ISWCS), Brussels, Belgium, pp. 25-28, Aug. 2015.
IEEE Recommended Practice for Measurements and Computations of Radio Frequency Electromagnetic Fields With Respect to Human Exposure to Such Fields, 100 kHz-300 GHz. IEEE Standard. 95IEEE Recommended Practice for Measurements and Computations of Radio Frequency Electromagnetic Fields With Respect to Human Exposure to Such Fields, 100 kHz-300 GHz, IEEE Standard C95.3-2002, Dec. 2002.
IEEE Standards for Safety Levels With Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz. IEEE Standard C95. IEEE Standards for Safety Levels With Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz, IEEE Standard C95.1-2005, Oct. 2005.
I S Gradshteyn, I M Ryzhik, Tabel of Integrals, Series and Products. New York, NY, USAAcademic7th edI. S. Gradshteyn and I. M. Ryzhik, Tabel of Integrals, Series and Products, 7th ed. New York, NY, USA: Academic, 2007.
A P Prudnikov, Y A Brychkov, O I Marichev, Elementary Function. New York, NY, USAGordon and Breach Sci. Publishers1A. P. Prudnikov, Y. A. Brychkov, and O. I. Marichev, Integrals and Series. Volume 1: Elementary Function. New York, NY, USA: Gordon and Breach Sci. Publishers, 1986.
J V Uspensky, Theory of Equations. New YorkMcGraw-HillJ. V. Uspensky, Theory of Equations. New York: McGraw-Hill, 1948.
Using sturm sequences to bracket real roots of polynomial equations. D G Hook, P R Mcaree, Academic Press Professional, IncSan Diego, CA, USAGraphics Gems ID. G. Hook, and P. R. McAree, "Using sturm sequences to bracket real roots of polynomial equations," Graphics Gems I, Academic Press Professional, Inc. San Diego, CA, USA, pp. 416-422, 1990.
| []
|
[
"On a problem of Sierpiński",
"On a problem of Sierpiński"
]
| [
"Jin-Hui Fang [email protected] ",
"Yong-Gao Chen [email protected] ",
"\nDepartment of Mathematics\nSchool of Mathematical Sciences and Institute of Mathematics\nNanjing University of Information Science & Technology\n210044NanjingP. R. CHINA\n",
"\nNanjing Normal University\n210046NanjingP. R. CHINA\n"
]
| [
"Department of Mathematics\nSchool of Mathematical Sciences and Institute of Mathematics\nNanjing University of Information Science & Technology\n210044NanjingP. R. CHINA",
"Nanjing Normal University\n210046NanjingP. R. CHINA"
]
| []
| For any integer s ≥ 2, let µ s be the least integer so that every integer ℓ > µ s is the sum of exactly s integers > 1 which are pairwise relatively prime. In this paper we solve an old problem of Sierpiński by determining all µ s . As a corollary, we show that p 2 + p 3 + · · · + * Corresponding author 2010 Mathematics Subject Classification: 11A41,11A67. | 10.4064/aa156-4-5 | [
"https://arxiv.org/pdf/1110.4714v2.pdf"
]
| 119,649,734 | 1110.4714 | 2eba72149b64dc3bbbc1e030301b2bccbd5878a7 |
On a problem of Sierpiński
20 May 2012
Jin-Hui Fang [email protected]
Yong-Gao Chen [email protected]
Department of Mathematics
School of Mathematical Sciences and Institute of Mathematics
Nanjing University of Information Science & Technology
210044NanjingP. R. CHINA
Nanjing Normal University
210046NanjingP. R. CHINA
On a problem of Sierpiński
20 May 2012. Key words and phrases: Sierpiński's problem, consecutive primes, pairwise relatively prime.
For any integer s ≥ 2, let µ_s be the least integer so that every integer ℓ > µ_s is the sum of exactly s integers > 1 which are pairwise relatively prime. In this paper we solve an old problem of Sierpiński by determining all µ_s. As a corollary, we show that p_2 + p_3 + · · · + p_{s+1} − 2 ≤ µ_s ≤ p_2 + p_3 + · · · + p_{s+1} + 1100 and that the set of integers s ≥ 2 with µ_s = p_2 + p_3 + · · · + p_{s+1} + 1100 has asymptotic density 1, where p_i is the i-th prime. (* Corresponding author. 2010 Mathematics Subject Classification: 11A41, 11A67.)
Introduction
Let s ≥ 2 be an integer. Denote by µ_s the least integer so that every integer ℓ > µ_s is the sum of exactly s integers > 1 which are pairwise relatively prime. In 1964, Sierpiński [5] asked for a determination of µ_s. Let p_1 = 2, p_2 = 3, . . . be the sequence of consecutive primes. In 1965, P. Erdős [3] proved that there exists an absolute constant C with µ_s ≤ p_2 + p_3 + · · · + p_{s+1} + C. It is easy to see that p_2 + p_3 + · · · + p_{s+1} − 2 is not the sum of exactly s integers > 1 which are pairwise relatively prime. So µ_s ≥ p_2 + p_3 + · · · + p_{s+1} − 2.
Let µ s = p 2 + p 3 + · · · + p s+1 + c s . Then −2 ≤ c s ≤ C. It is easy to see that
c 2 = −2.
Let U be the set of integers of the form p_2^{k_2} + p_3^{k_3} + · · · + p_{11}^{k_{11}} − p_2 − p_3 − · · · − p_{11} ≤ 1100, where k_i (2 ≤ i ≤ 11) are positive integers. U can be given explicitly by Mathematica (one may refer to the Appendix). Let V_s be the set of integers of the form p_{i_1} + · · · + p_{i_l} − p_{j_1} − · · · − p_{j_l} ≤ 1100, where 2 ≤ j_1 < · · · < j_l ≤ s + 1 < i_1 < · · · < i_l. It is clear that 0 ∈ U and 0 ∈ V_s (taking l = 0). Define U + V_s = {u + v | u ∈ U, v ∈ V_s}. Then U + V_s is finite.
In this paper the following results are proved. The main results have been announced at ICM2010.
Theorem 1. Let s ≥ 2 be any given positive integer. Then c_s = max{2n | 2n ≤ min{1100, p_{s+2}}, n ∈ Z, 2n ∉ U + V_s}.
Corollary 1. If p_{s+2} − p_{s+1} > 1100, then µ_s = Σ_{i=2}^{s+1} p_i + 1100. In particular, the set of integers s ≥ 2 with µ_s = Σ_{i=2}^{s+1} p_i + 1100 has asymptotic density 1.
We pose a problem here. Basing on the proof of Theorem 2 in Section 4, we pose the following conjecture.
Conjecture 1. For s ≥ 3, every integer l > p 2 + p 3 + · · · + p s+2 is the sum of exactly s distinct primes.
This conjecture would follow from the following statement: "Every odd integer n ≥ p_{s−1} + p_s + p_{s+1} + p_{s+2} can be written as the sum of three prime numbers q_1 < q_2 < q_3 with q_1 ≥ p_{s−1}". Since p_{s−1} < n/4, by well-known results on the odd Goldbach problem with almost equal primes, this statement is true for all sufficiently large s.
We now outline the main ideas of the proofs. (1) By direct calculation, every even number n with 1102 ≤ n ≤ 3858 can be represented as Σ_{i=2}^∞ (p_i^{t_i} − p_i). For any even number 2m > 3858, there exists a prime p_u such that p_u² − p_u ≤ 2m − 1102 < p_{u+1}² − p_{u+1}. Then we use the induction hypothesis on 2m − (p_u² − p_u). By these arguments we know that every even number n ≥ 1102 can be represented as Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. One can verify that 1100 cannot be represented as Σ_{i=2}^∞ (p_i^{t_i} − p_i),
where t i are positive integers. (2) Denote by µ ′ s the least integer, which has the same parity as s, so that every integer ℓ > µ ′ s , which has the same parity as s, can be expressed as the sum of s distinct integers > 1 which are pairwise relatively prime.
Let µ'_s = p_2 + · · · + p_{s+1} + τ'_s. Then τ'_s is even. For 2n > min{1100, p_{s+2}}: if min{1100, p_{s+2}} < 2n ≤ 1100, then s ≤ 182, and by calculation we know that Σ_{i=2}^{s+1} p_i + 2n can be expressed as the sum of s distinct odd primes. Now we assume that 2n > 1100. If 2n is "large", then we can choose a "large" prime q such that p_{s+2} + 2n − q > τ'_s. By the induction hypothesis, p_2 + · · · + p_{s+1} + (p_{s+2} + 2n − q) can be expressed as the sum of s distinct integers > 1 which are pairwise relatively prime; thus p_2 + · · · + p_{s+1} + p_{s+2} + 2n can be expressed as the sum of s + 1 distinct integers > 1 which are pairwise relatively prime. If 2n is "small", then by (1) (taking some t_i = 1) we have 2n = Σ_{i=2}^{s+2} (p_i^{t_i} − p_i), and thus p_2 + · · · + p_{s+1} + p_{s+2} + 2n = Σ_{i=2}^{s+2} p_i^{t_i}. We can easily convert the case p_2 + · · · + p_{s+1} + p_{s+2} + 2n + 1 into p_1 + p_2 + · · · + p_{s+1} + (p_{s+2} + 2n − 1) and use the induction hypothesis.
Recall that µ ′ s is the least integer, which has the same parity as s, so that every integer ℓ > µ ′ s , which has the same parity as s, can be expressed as the sum of s distinct integers > 1 which are pairwise relatively prime, and τ ′ s = µ ′ s − (p 2 + · · · + p s+1 ) is even. The following Theorem 2 is a step in the proof of Theorem 1, and not an independent result.
Theorem 2. τ'_s = max{2n | 2n ≤ min{1100, p_{s+2}}, n ∈ Z, 2n ∉ U + V_s}.
Preliminary Lemmas
In this paper, p, q i are all primes. First we introduce the following lemmas.
Lemma 1. [2, Lemma 4] For x > 24 there exists a prime in (x, (3/2)x].
Lemma 2. Every even number n ≥ 1102 can be represented as
Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. The integer 1100 cannot be represented as Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers.
Proof. The proof is by induction on even numbers n. For any sets X, Y of
integers, define X + Y = {x + y : x ∈ X, y ∈ Y}. Let U_4 = {0, 3² − 3, 3³ − 3, 3⁴ − 3, 3⁵ − 3, 3⁶ − 3, 3⁷ − 3} + {0, 5² − 5, 5³ − 5, 5⁴ − 5} + {0, 7² − 7, 7³ − 7}, and U_i = U_{i−1} ∪ (U_{i−1} + {p_i² − p_i}), i = 5, 6, · · · .
By Mathematica, we can produce each U_i and verify that [1102, 3858] ∩ 2Z ⊆ U_{12} and 1100 ∉ U_{12}.
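The Mathematica verification can be mirrored in a few lines of Python; the following added sketch (not the authors' code) constructs U_4, ..., U_12 and tests the two claims. The primes 11, ..., 37 are p_5, ..., p_12.

```python
# Sketch (illustrative only): build U_4, ..., U_12 and check the two claims
# used in the proof of Lemma 2.
def sumset(A, B):
    return {a + b for a in A for b in B}

U = sumset(sumset({0} | {3**k - 3 for k in range(2, 8)},
                  {0} | {5**k - 5 for k in range(2, 5)}),
           {0} | {7**k - 7 for k in range(2, 4)})            # U_4
for p in (11, 13, 17, 19, 23, 29, 31, 37):                   # p_5, ..., p_12
    U = U | {u + p*p - p for u in U}                         # U_5, ..., U_12

even_range = set(range(1102, 3859, 2))
print("all even numbers in [1102, 3858] lie in U_12:", even_range <= U)
print("1100 in U_12:", 1100 in U)
```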
Thus, if n is an even number with 1102 ≤ n ≤ 3858, then n can be represented as
Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. Now assume that for any even number n with 1102 ≤ n < 2m (2m > 3858), n can be represented as Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. Since 2m − 1102 > 3858 − 1102 = 53² − 53, there exists a prime p_u ≥ 53 with
p_u² − p_u ≤ 2m − 1102 < p_{u+1}² − p_{u+1}.   (1)
Then 1102 ≤ 2m − (p_u² − p_u) < 2m. By the induction hypothesis, we have 2m − (p_u² − p_u) = Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. Hence
2m = Σ_{i=2}^∞ (p_i^{t_i} − p_i) + (p_u² − p_u).   (2)
Now we prove that t_u = 1. If this is not true, then t_u ≥ 2 and 2m ≥ 2(p_u² − p_u). By (1) we have 2(p_u² − p_u) − 1102 ≤ 2m − 1102 < p_{u+1}² − p_{u+1} < p_{u+1}² − p_u, thus 2p_u² − p_u − 1102 < p_{u+1}². By p_u ≥ 53 and Lemma 1 we have p_{u+1} ∈ (p_u, (3/2)p_u]. Since (3/2)p_u ≤ 2p_u² − p_u − 1102, we have p_{u+1}² ≤ 2p_u² − p_u − 1102, a contradiction. So t_u = 1. By (2), 2m can be represented as Σ_{i=2}^∞ (p_i^{t'_i} − p_i), where t'_i are positive integers.
Suppose that 1100 can be expressed as Σ_{i=2}^∞ (p_i^{t_i} − p_i), where t_i are positive integers. Then p_i^{t_i} − p_i ≤ 1100 for all i. If t_i ≥ 2, then p_i² − p_i ≤ 1100, thus p_i < 37, so i < 12. If t_i ≥ 3, then p_i³ − p_i ≤ 1100, thus p_i ≤ 7 = p_4. By p_2^{t_2} − p_2 ≤ 1100 we have t_2 ≤ 6. By p_3^{t_3} − p_3 ≤ 1100 we have t_3 ≤ 4. By p_4^{t_4} − p_4 ≤ 1100 we have t_4 ≤ 3.
Hence 1100 ∈ U 12 , a contradiction. Therefore 1100 cannot be expressed as
p_2 + · · · + p_{s+1} + 2n = Σ_{i=1}^s m_i,
where 1 < m 1 < · · · < m s and (m i , m j ) = 1 for 1 ≤ i, j ≤ s, i = j. By comparing the parities we know that these s integers m i must all be odd. If one of these s integers has at least two distinct prime factors, then the sum of these s integers is at least 3×5+p 4 +· · ·+p s+2 = p 2 +· · ·+p s+1 +p s+2 +7.
This contradicts 2n ≤ p s+2 . This completes the proof of Lemma 3.
Proof of Theorem 2
For s ≥ 2, let
H(s) = {p_j − p_i : 2 ≤ i ≤ s + 1 < j ≤ 185} ∪ {p_u + p_v − p_s − p_{s+1} : s ≤ u ≤ 105, u < v ≤ 180}.
By Mathematica, for 2 ≤ s ≤ 182 we find that [p_{s+2}, 1100] ∩ 2Z ⊆ H(s).
Thus, for p s+2 < 2n ≤ 1100, Let h s be the largest even number 2n ≤ 1100 such that s+1 i=2 p i +2n cannot be expressed as the sum of s distinct integers > 1 which are pairwise relatively prime. Noting that p s+2 > 1100 for s ≥ 183, by the above arguments we have h s ≤ min{1100, p s+2 } for all s ≥ 2.
We will use induction on s to prove that τ ′ s = h s for all s ≥ 2. For every even number ℓ > 6, we have φ(ℓ) > 2, where φ(ℓ) is the Euler's totient function. Hence there exists an integer n with 2 ≤ n ≤ ℓ − 2 and (n, ℓ) = 1. So ℓ = n + (ℓ − n), (n, ℓ − n) = 1, n ≥ 2, ℓ − n ≥ 2.
Thus τ ′ 2 = −2 = h 2 . Suppose that τ ′ s = h s . Now we prove that τ ′ s+1 = h s+1 . Let ℓ be an integer which has the same parity as s + 1. Write
ℓ = Σ_{i=2}^{s+2} p_i + 2n.
Then 2n is an even number. By the definition of τ ′ s+1 and h s+1 , it is enough to prove that if 2n > 1100, then s+2 i=2 p i + 2n can be expressed as the sum of s + 1 distinct integers > 1 which are pairwise relatively prime.
Assume that 2n > 1100. Write 2t = 2n − τ ′ s . By τ ′ s = h s ≤ p s+2 we have p s+2 + 2t = p s+2 + 2n − τ ′ s ≥ 2n > 1100. By Lemma 1 there exists an odd prime q with 2 3 (p s+2 + 2t) < q < p s+2 + 2t. Then
ℓ − q > ℓ − p_{s+2} − 2t = Σ_{i=2}^{s+1} p_i + τ'_s. Since ℓ − q ≡ s (mod 2),
by the induction hypothesis, we have ℓ − q = n 1 + · · · + n s , where 1 < n 1 < · · · < n s and (n i , n j ) = 1 for 1 ≤ i, j ≤ s, i = j . By ℓ − q ≡ s (mod 2) and (n i , n j ) = 1 for 1 ≤ i, j ≤ s, i = j, we have 2 ∤ n i for
1 ≤ i ≤ s.
If q > n_s, we are done. Now we assume that q ≤ n_s. By ℓ − q = n_1 + · · · + n_s, we have
ℓ ≥ 2q + p_2 + · · · + p_s > (4/3)p_{s+2} + (8/3)t + p_2 + · · · + p_s.   (3)
By (3) and ℓ = Σ_{i=2}^{s+2} p_i + 2t + τ'_s, we have
(1/3)p_{s+2} − p_{s+1} + (2/3)t < τ'_s.   (4)
Noting that τ ′ s ≤ p s+2 , by (4) we have
2n = 2t + τ'_s < 4τ'_s + 3p_{s+1} − p_{s+2} < 6p_{s+2}.   (5)
Since 2n > 1100, by Lemma 2 we have
2n = Σ_{i=2}^∞ (p_i^{t_i} − p_i), t_i ≥ 1, i = 2, 3, . . . .   (6)
For i ≥ s + 3, by (5) and (6) we have p_{s+3}^{t_i} − p_{s+3} ≤ p_i^{t_i} − p_i ≤ 2n < 6p_{s+2}. Since p_{s+3} − 1 ≥ p_5 − 1 = 10, we have t_i = 1 for all i ≥ s + 3. Hence
ℓ = Σ_{i=2}^{s+2} p_i + 2n = Σ_{i=2}^{s+2} p_i + Σ_{i=2}^{s+2} (p_i^{t_i} − p_i) = Σ_{i=2}^{s+2} p_i^{t_i}.
Thus we have proved that if $\ell = \sum_{i=2}^{s+2} p_i + 2n$ cannot be expressed as the sum of $s+1$ distinct integers $> 1$ which are pairwise relatively prime, then $2n \le 1100$. By the definition of $h_{s+1}$ and $\tau'_{s+1}$, we have $\tau'_{s+1} = h_{s+1}$. Therefore, $\tau'_s = h_s$ for all $s \ge 2$.
Now we have proved that $\tau'_s = h_s$ is the largest even number $2n \le 1100$ such that $\sum_{i=2}^{s+1} p_i + 2n$ cannot be expressed as the sum of $s$ distinct integers $> 1$ which are pairwise relatively prime, and $\tau'_s = h_s \le \min\{1100, p_{s+2}\}$. In order to prove Theorem 2, it is enough to prove that $\tau'_s \notin U + V_s$ and that if $2n$ is an even number with $\tau'_s < 2n \le \min\{1100, p_{s+2}\}$, then $2n \in U + V_s$.
Let $2n$ be an even number with $\tau'_s < 2n \le \min\{1100, p_{s+2}\}$. Now we prove that $2n \in U + V_s$. By Lemma 3 and the definition of $\tau'_s$, we have
$$p_2 + \cdots + p_{s+1} + 2n = p_{l_1}^{\alpha_1} + \cdots + p_{l_s}^{\alpha_s},$$
where $2 \le l_1 < \cdots < l_s$ and $\alpha_i \ge 1$ $(1 \le i \le s)$. If $l_1 \ge s+2$, then $l_i \ge s+1+i$ $(1 \le i \le s)$. Thus $l_s \ge 2s+1 \ge 5$ and $p_{l_s} \ge p_5 = 11$. Hence
$$2n = p_{l_1}^{\alpha_1} + \cdots + p_{l_s}^{\alpha_s} - (p_2 + \cdots + p_{s+1}) \ge p_{s+2} + \cdots + p_{2s+1} - (p_2 + \cdots + p_{s+1}) \ge p_{s+2} + \cdots + p_{2s} + 11 - (p_2 + \cdots + p_{s+1}) > p_{s+2},$$
a contradiction with $2n \le \min\{1100, p_{s+2}\}$. So $l_1 \le s+1$. Let $r$ be the largest index with $l_r \le s+1$. If $r = s$, then $l_i = i+1$ $(1 \le i \le s)$. Thus
$$2n = (p_2^{\alpha_1} - p_2) + \cdots + (p_{s+1}^{\alpha_s} - p_{s+1}). \qquad (7)$$
If $r < s$, let $\{2, 3, \ldots, s+1\} = \{l_1, \ldots, l_r\} \cup \{j_1, \ldots, j_{s-r}\}$ with $j_1 < \cdots < j_{s-r}$. Hence
$$2n = (p_{l_1}^{\alpha_1} - p_{l_1}) + \cdots + (p_{l_r}^{\alpha_r} - p_{l_r}) + p_{l_{r+1}}^{\alpha_{r+1}} + \cdots + p_{l_s}^{\alpha_s} - p_{j_1} - \cdots - p_{j_{s-r}}. \qquad (8)$$
For $1 \le i \le r$, if $\alpha_i \ge 2$, then by (7) and (8) we have
$$p_{l_i}(p_{l_i} - 1) \le 2n \le 1100.$$
Thus $p_{l_i} \le 31$ and $l_i \le 11$. Hence
$$(p_{l_1}^{\alpha_1} - p_{l_1}) + \cdots + (p_{l_r}^{\alpha_r} - p_{l_r}) \in U. \qquad (9)$$
For $r < i \le s$, if $\alpha_i \ge 2$, then
$$p_{l_{r+1}}^{\alpha_{r+1}} + \cdots + p_{l_s}^{\alpha_s} - p_{j_1} - \cdots - p_{j_{s-r}} \ge p_{s+2}^2 + (s-r-1)p_{s+3} - (s-r)p_{s+1} > p_{s+2} \ge 2n,$$
a contradiction. So $\alpha_i = 1$ for all $r < i \le s$. By (8) we have
$$p_{l_{r+1}}^{\alpha_{r+1}} + \cdots + p_{l_s}^{\alpha_s} - p_{j_1} - \cdots - p_{j_{s-r}} \le 2n \le 1100.$$
Hence
$$p_{l_{r+1}}^{\alpha_{r+1}} + \cdots + p_{l_s}^{\alpha_s} - p_{j_1} - \cdots - p_{j_{s-r}} = p_{l_{r+1}} + \cdots + p_{l_s} - p_{j_1} - \cdots - p_{j_{s-r}} \in V_s. \qquad (10)$$
By (7)--(10) we have $2n \in U + V_s$.
In order to prove Theorem 2, it suffices to prove that $\tau'_s \notin U + V_s$.
Suppose that $\tau'_s \in U + V_s$. Then
$$\tau'_s = \sum_{i=2}^{11} (p_i^{\beta_i} - p_i) + p_{i_1} + \cdots + p_{i_l} - p_{w_1} - \cdots - p_{w_l},$$
where $\beta_i$ $(2 \le i \le 11)$ are positive integers and $w_1 < \cdots < w_l \le s+1 < i_1 < \cdots < i_l$. Let
$$\sum_{i=2}^{11} (p_i^{\beta_i} - p_i) = \sum_{i=1}^{m} (p_{e_i}^{d_i} - p_{e_i}),$$
where $2 \le e_1 < \cdots < e_m \le 11$ and $d_i \ge 2$ $(1 \le i \le m)$. Since
$$p_{e_m}(p_{e_m} - 1) \le p_{e_m}^{d_m} - p_{e_m} \le \tau'_s \le p_{s+2},$$
we have $e_m \le s+1$. If $w_1 \le e_m$, then
$$\tau'_s = \sum_{i=1}^{m} (p_{e_i}^{d_i} - p_{e_i}) + p_{i_1} + \cdots + p_{i_l} - p_{w_1} - \cdots - p_{w_l} \ge p_{e_m}^{d_m} - p_{e_m} - p_{w_1} + p_{s+2} \ge p_{e_m}(p_{e_m} - 2) + p_{s+2} > p_{s+2},$$
a contradiction with $\tau'_s \le \min\{1100, p_{s+2}\}$. Hence $e_m < w_1$. Thus $2 \le e_1 < \cdots < e_m < w_1 < \cdots < w_l \le s+1 < i_1 < \cdots < i_l$. Let $\{f_1, \ldots, f_{s-m-l}\} = \{2, \ldots, s+1\} \setminus \{e_1, \ldots, e_m, w_1, \ldots, w_l\}$. Then
$$p_2 + \cdots + p_{s+1} + \tau'_s = \sum_{i=1}^{m} p_{e_i}^{d_i} + p_{f_1} + \cdots + p_{f_{s-m-l}} + p_{i_1} + \cdots + p_{i_l}.$$
Since $e_1, \ldots, e_m, f_1, \ldots, f_{s-m-l}, i_1, \ldots, i_l$ are pairwise distinct, this contradicts the definition of $\tau'_s$. Hence $\tau'_s \notin U + V_s$. This completes the proof of Theorem 2.
Proofs of Theorem 1 and Corollary 1
It is easy to see that $c_2 = -2$ and $\{0, 2, 4, 6\} \subseteq V_2$. Thus, by $0 \in U$, all even numbers $2n$ with $-2 < 2n \le \min\{1100, p_{2+2}\}$ are in $U + V_2$. So Theorem 1 is true for $s = 2$.
Now we assume that $s > 2$.
In order to prove Theorem 1, by Theorem 2 it is enough to prove that for any odd number $2k+1 > \tau'_s$, $p_2 + \cdots + p_{s+1} + 2k + 1$ can be expressed as the sum of $s$ distinct integers $> 1$ which are pairwise relatively prime.
Since $\tau'_s \ge -2$, we have $k \ge -1$. If $k = -1$, then
$$p_2 + \cdots + p_{s+1} + 2k + 1 = p_1 + p_3 + p_4 + \cdots + p_{s+1}.$$
If $k = 0$, then
$$p_2 + \cdots + p_{s+1} + 2k + 1 = p_1^2 + p_3 + p_4 + \cdots + p_{s+1}.$$
Now we assume that $k \ge 1$. By Theorem 2 we have $p_{s+1} + 2k - 1 > \tau'_{s-1}$. Hence
$$p_2 + \cdots + p_s + (p_{s+1} + 2k - 1) = n_1 + \cdots + n_{s-1},$$
where $1 < n_1 < \cdots < n_{s-1}$ and $(n_i, n_j) = 1$ for $1 \le i, j \le s-1$, $i \ne j$.
By $p_2 + \cdots + p_s + (p_{s+1} + 2k - 1) \equiv s-1 \pmod 2$ and $(n_i, n_j) = 1$ for $1 \le i, j \le s-1$, $i \ne j$, we have $2 \nmid n_i$ for $1 \le i \le s-1$. Thus
$$p_2 + \cdots + p_s + (p_{s+1} + 2k + 1) = 2 + n_1 + \cdots + n_{s-1}$$
is the required form.
This completes the proof of Theorem 1.
Proof of Corollary 1. Suppose that $p_{s+2} - p_{s+1} > 1100$. Then $V_s = \{0\}$.
Since $1100 \notin U$, we have $1100 \notin U + V_s$. By Theorem 1 we have $c_s = 1100$.
This completes the proof of the first part of Corollary 1.
The second part now follows from the fact that the number of primes $p \le x$ such that $p + k$ is prime is bounded above by $c\,x/\log^2 x$, where $c$ depends only on $k$ (Brun [1], Sándor, Mitrinović and Crstici [4, p. 238], Wang [6]).
This completes the proof of the second part of Corollary 1.
Final Remarks
Let $A = ([2, 1100] \cap 2\mathbb{N}) \setminus U$ and for $t < s$, let
$$V_s(t) = \{p_{s+2+i} - p_{s+1-j} \mid 0 \le i, j \le t\} \cup \{p_{s+2+i} + p_{s+2+j} - p_{s+1-u} - p_{s+1-v} \mid 0 \le i < j \le t,\ 0 \le u < v \le t\}.$$
Let $a(s, t) = \max\left(A \setminus (U + V_s(t))\right)$.
If $a(s, t) < \min\{p_{s+2+t} - p_{s+1},\ p_{s+2} - p_{s+1-t},\ p_{s+3} + p_{s+2} - p_{s+1} - p_s\}$, then $a(s, t) = \max\left(A \setminus (U + V_s)\right)$.
Noting that $A = ([2, 1100] \cap 2\mathbb{N}) \setminus U$, by Theorem 1 we have $c_s = a(s, t)$.
Taking $t = 5$, by Mathematica we find that $c_{500} = 16$, $c_{900} = 14$, $c_{1000} = 8$, $c_{2000} = 22$, etc.
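A sketch of how this $a(s, t)$ computation can be organized follows (the paper used Mathematica; the Python below is an assumed re-implementation, and the set $U$, which is defined earlier in the paper and not reproduced here, is left as a placeholder that must be supplied before running).

```python
# Assumed sketch of c_s = a(s, t); U below is a placeholder for the set U defined
# earlier in the paper.  0 is added to V_s(t) because the paper's V_s contains 0
# (cf. the proof of Corollary 1, where V_s = {0}).
from itertools import combinations
from sympy import prime

U = set()   # placeholder: fill in with the set U from the paper

def V_s_t(s, t):
    singles = {prime(s + 2 + i) - prime(s + 1 - j)
               for i in range(t + 1) for j in range(t + 1)}
    pairs = {prime(s + 2 + i) + prime(s + 2 + j) - prime(s + 1 - u) - prime(s + 1 - v)
             for i, j in combinations(range(t + 1), 2)
             for u, v in combinations(range(t + 1), 2)}
    return singles | pairs | {0}

def a(s, t):
    A = set(range(2, 1101, 2)) - U
    sums = {u + v for u in U | {0} for v in V_s_t(s, t)}
    return max(A - sums)

# Example (once U is supplied): a(500, 5) should reproduce c_500 = 16.
```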
Problem 1. Find the least positive integer $s$ with $\mu_s = \sum_{i=2}^{s+1} p_i + 1100$.
where $t'_i$ are positive integers. Therefore, every even number $n \ge 1102$ can be expressed in the form $\sum_{i=2}^{\infty}(p_i^{t_i} - p_i)$, where $t_i$ are positive integers. This completes the proof of Lemma 2.
Lemma 3. If $2n < p_{s+2}$ and $\sum_{i=2}^{s+1} p_i + 2n$ is the sum of exactly $s$ integers $> 1$ which are pairwise relatively prime, then $\sum_{i=2}^{s+1} p_i + 2n$ can be expressed as the sum of powers of $s$ distinct primes.
$\ldots + 2n$ can be expressed as the sum of $s$ distinct odd primes.
Remark 1. As examples, by Theorem 1 we have $c_{500} = 16$, $c_{900} = 14$, $c_{1000} = 8$, $c_{2000} = 22$ (see the last section). Corollary 1. If $p_{s+2} - p_{s+1} > 1100$, then
Hence, Conjecture 1 is true for all sufficiently large $s$. Now we give a sketch proof of Theorem 1. For the details, see Section 4.
(1) Find a "long" interval [1102, 3858] such that each even number in this interval can be represented as $\sum_{i=2}^{\infty}(p_i^{t_i} - p_i)$, where $t_i$ are positive integers.
Acknowledgements
The authors are supported by the National Natural Science Foundation of China.
[Displaced fragment -- a list of even numbers from an earlier section:] 6, 20, 24, 26, 42, 44, 48, 62, 66, 68, 78, 86, 98, 110, 116, 120, 126, 130, 134, 136, 140, 144, 152, 154, 156, 158, 162, 168, 172, 176, 178, 180, 182, 186, 188, 196, 198, 200, 204, 208, 218, 222, 224, 230, 234, 236, 240, 242, 250, 254, 260, 266, 272, 276, 278, 282, 286, 290, 292, 296, 298, 300, 302, 308,
[1] V. Brun, Le crible d'Eratosthène et le théorème de Goldbach, Skr. Vid. Selsk. Kristiania I 3 (1920), 1-36.
[2] Y. G. Chen, The analogue of Erdős-Turán conjecture in Z_m, J. Number Theory 128 (2008), 2573-2581.
[3] P. Erdős, On a problem of Sierpiński, Acta Arith. 11 (1965), 189-192.
[4] J. Sándor, D. S. Mitrinović and B. Crstici, Handbook of Number Theory I, Springer, 2006.
[5] W. Sierpiński, Sur les suites d'entiers deux à deux premiers entre eux, Enseignement Math. 10 (1964), 229-235.
[6] Y. Wang, On the representation of large integer as a sum of a prime and an almost prime, Sci. Sinica 11 (1962), 1033-1054.
| []
|
[
"Perturbation theory of the mass enhancement for a polaron coupled to acoustic phonons",
"Perturbation theory of the mass enhancement for a polaron coupled to acoustic phonons"
]
| [
"Zhou Li \nDepartment of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada\n",
"Carl J Chandler \nDepartment of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada\n",
"F Marsiglio \nDepartment of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada\n"
]
| [
"Department of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada",
"Department of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada",
"Department of Physics\nUniversity of Alberta\nT6G 2G7EdmontonAlbertaCanada"
]
| []
| We use both a perturbative Green's function analysis and standard perturbative quantum mechanics to calculate the decrease in energy and the effective mass for an electron interacting with acoustic phonons. The interaction is between the difference in lattice displacements for neighbouring ions, and the hopping amplitude for an electron between those two sites. The calculations are performed in one, two, and three dimensions, and comparisons are made with results from other electron-phonon models. We also compute the spectral function and quasiparticle residue, as a function of characteristic phonon frequency. There are strong indications that this model is always polaronic on one dimension, where an unusual relation between the effective mass and the quasiparticle residue is also found. | 10.1103/physrevb.83.045104 | [
"https://arxiv.org/pdf/1011.1259v1.pdf"
]
| 119,163,543 | 1011.1259 | 75e84ef89d3adaca84039cb1d11bde44f12b6241 |
Perturbation theory of the mass enhancement for a polaron coupled to acoustic phonons
4 Nov 2010 (Dated: November 8, 2010)
Zhou Li
Department of Physics
University of Alberta
T6G 2G7EdmontonAlbertaCanada
Carl J Chandler
Department of Physics
University of Alberta
T6G 2G7EdmontonAlbertaCanada
F Marsiglio
Department of Physics
University of Alberta
T6G 2G7EdmontonAlbertaCanada
Perturbation theory of the mass enhancement for a polaron coupled to acoustic phonons
4 Nov 2010 (Dated: November 8, 2010)
We use both a perturbative Green's function analysis and standard perturbative quantum mechanics to calculate the decrease in energy and the effective mass for an electron interacting with acoustic phonons. The interaction is between the difference in lattice displacements for neighbouring ions, and the hopping amplitude for an electron between those two sites. The calculations are performed in one, two, and three dimensions, and comparisons are made with results from other electron-phonon models. We also compute the spectral function and quasiparticle residue, as a function of characteristic phonon frequency. There are strong indications that this model is always polaronic in one dimension, where an unusual relation between the effective mass and the quasiparticle residue is also found.
I. INTRODUCTION
When electrons interact strongly with phonons, the electrons acquire a polaronic character, i.e. they move around the lattice much more sluggishly than noninteracting electrons would, because a polarization cloud must accompany them as they move. A measure of the strength of the coupling between the electron and the phonons is the degree to which the ground state energy is lowered. For example, previous studies for the Holstein model 1 have indicated that the decrease in energy is proportional to the bare coupling strength (λ) in strong coupling, 2 independent of the value of the phonon frequency. On the other hand, in weak coupling, while the proportionality to λ remains, there is some dependence on phonon frequency, and in fact, the decrease in energy is greater for higher phonon frequency. 2,3 A much more indicative measure of the polaronic character of an electron is the effective mass. In the Holstein model a glimpse of polaronic tendencies, even within perturbation theory, can be attained by examining the effective mass, particularly in one dimension. Usually an increasing effective mass is accompanied by a decrease in quasiparticle residue, although this is not always the case, as described below.
The Holstein model describes electrons interacting with optical phonons; the coupling is via the electron charge density, and, in this sense, the Holstein model serves as a paradigm for electron-phonon interactions just like the celebrated Hubbard model 4 is the simplest description of electron-electron interactions. Many of the basic features of this model are now fairly well understood -see Ref. [5 and 6] along with more recent work in Ref. [3 and 7]. However, just as important is the electron interaction with acoustic phonons; typically the ionic motions couple to the electron motion, as opposed to its charge density. A very simple model to describe this kind of electron-phonon interaction within a tightbinding framework is given by
$$H = -\sum_{\langle i,j\rangle}\left(t_{ij}\,c^{\dagger}_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\right) + \sum_i\left(\frac{p_{xi}^2}{2M} + \frac{p_{yi}^2}{2M}\right) + \frac{1}{2}K\sum_{\langle i,j\rangle}\left[(u_{xi} - u_{xj})^2 + (u_{yi} - u_{yj})^2\right], \qquad (1)$$
where angular brackets denote nearest neighbours only, and
$$t_{ij} = t - \alpha(u_{xi} - u_{xj})\,\delta_{i,j\pm\hat{a}_x} - \alpha(u_{yi} - u_{yj})\,\delta_{i,j\pm\hat{a}_y}. \qquad (2)$$
This Hamiltonian has been written specifically for two dimensions, but the generalization to three dimensions (or back to one dimension) is evident from Eqs. (1) and (2). The operators and parameters are as follows: c † iσ (c iσ ) creates (annihilates) an electron at site i with spin σ. The x-components for the ion momentum and displacement are given by p xi , and displacement u xi , respectively (similarly for the y-components), and the ions have mass M and spring constant K connecting nearest neighbours only. The electron-ion coupling is linearized in the components of the displacement, and we choose to include only longitudinal coupling.
This Hamiltonian is commonly known as the Su-Schrieffer-Heeger (SSH) model, 8,9 because it was used for seminal work describing excitations in polyacetylene by these authors. However, it was also introduced and studied a decade earlier by Barisić, Labbé, and Friedel 10 to describe superconductivity in transition metals, so we will refer to it as the BLF-SSH model. Much of the work done on this model is in the adiabatic approximation, i.e. the phonons are treated classically. 8,9 This was followed by an examination of quantum fluctuations through quantum Monte Carlo and renormalization group studies, 11 and these authors focused on half-filling. They found that the lattice ordering (in one dimension) was reduced by quantum fluctuations.
Very little work has been done, however, in the quantum regime for a single electron. Capone and coworkers studied a model similar to this one, except that they utilized optical phonons instead of acoustic ones. 12,13 This leads to some significant differences, about which we will comment below. In the past decade Zoli has studied the BLF-SSH polaron using perturbation theory, and found, for example, a perturbative regime in one dimension where polaron effects are absent. 14 This result happened to agree with the conclusions of Capone et al. 12 in the perturbative regime of the CSG model. 13 In this paper we focus on 2nd order perturbation theory, and find results in disagreement with Ref. [14]. These results also disagree qualitatively with the results from the CSG model. That is, in one dimension, for example, perturbation theory breaks down as the characteristic phonon frequency decreases. In two dimensions there is a modest mass enhancement for all characteristic phonon frequencies, while in three dimensions the mass enhancement approaches unity in the adiabatic limit. We also note that the quasiparticle residue does not necessarily follow the trend of the inverse effective mass, as the characteristic phonon frequency varies. This paper is organized as follows. In the following section we outline the calculation, both using perturbation theory, and using Green function techniques. For some of our work (especially in one dimension), the calculation can be done analytically, and we derive these results where applicable. In Section III we show some numerical results and compare our results with previous work and other electron-phonon models. We close in the final section with a summary. The main conclusion is that, as far as one can tell from weak coupling perturbation theory, the BLF-SSH model has a stronger tendency to form a polaronic state than is the case with the Holstein model. In one dimension this is most evident in the effective mass, and not at all evident in the quasiparticle residue.
II. PERTURBATION THEORY
A. Hamiltonian
The Hamiltonian Eq. (1), Fourier-transformed to wavevector space, and utilizing phonon creation and annihilation operators, is written (again in 2D),
$$H = \sum_{k\sigma}\epsilon_{k\sigma}\,c^{\dagger}_{k\sigma}c_{k\sigma} + \sum_{q}\hbar\omega(q)\left(a^{\dagger}_{xq}a_{xq} + a^{\dagger}_{yq}a_{yq}\right) + \sum_{kk'\sigma}g_x(k,k')\left(a_{x,k-k'} + a^{\dagger}_{x,-(k-k')}\right)c^{\dagger}_{k\sigma}c_{k'\sigma} + \sum_{kk'\sigma}g_y(k,k')\left(a_{y,k-k'} + a^{\dagger}_{y,-(k-k')}\right)c^{\dagger}_{k\sigma}c_{k'\sigma}. \qquad (3)$$
Here,
$$\epsilon_k \equiv \epsilon(k_x, k_y) = -2t\left[\cos(k_x) + \cos(k_y)\right] \qquad (4)$$
is the dispersion relation for non-interacting electrons with nearest neighbour hopping, and
$$\omega(q) \equiv \omega_0\sqrt{\sin^2(q_x/2) + \sin^2(q_y/2)} \qquad (5)$$
is the phonon dispersion for acoustic phonons with nearest neighbour spring constants $K$, and $\omega_0 \equiv \sqrt{4K/M}$ is the characteristic phonon frequency. The phonon creation and annihilation operators are given by $a^{\dagger}_{xq}$ and $a_{xq}$, respectively, and similarly for those in the y-direction. The coupling "constants" are given by
$$g_x(k, k') \equiv i\alpha\sqrt{\frac{2}{MN\omega(k - k')}}\left[\sin(k'_x) - \sin(k_x)\right], \qquad (6)$$
with a similar expression for the y direction, and M is the mass of the ion and N is the number of lattice sites.
B. Green's function analysis
Carrying out a Green's function analysis using the free electron and phonon parts of the Hamitonian as the unperturbed part, gives, for the self energy of a single electron to lowest (2nd) order in the coupling α,
$$\Sigma(k, \omega + i\delta) = \sum_{k'}\left[|g_x(k, k')|^2 + |g_y(k, k')|^2\right]G_0\!\left(k', \omega + i\delta - \omega(k - k')\right), \qquad (7)$$
where $G_0(k, \omega + i\delta) \equiv \left[\omega + i\delta - \epsilon_k\right]^{-1}$ is the noninteracting electron retarded propagator. One way to determine the effect of interactions on the electron dispersion is to compute the renormalized energy for the ground state (here, $k_x = k_y = 0$), and the effective mass. The effective mass has long been used as the primary indicator for polaronic behaviour 5,6 , and though within 2nd order perturbation we can only get an indication of this crossover, we use it here nonetheless. The renormalized energy is given by the solution for the pole location in the interacting electron Green's function,
$$G(k, \omega + i\delta) \equiv \left[\omega + i\delta - \epsilon_k - \Sigma(k, \omega + i\delta)\right]^{-1}, \qquad E_k = \epsilon_k + \mathrm{Re}\,\Sigma(k, E_k). \qquad (8)$$
To determine the effective mass, defined by the expectation that $E_k \equiv \hbar^2k^2/(2m^*)$, we take two derivatives 15 of Eq. (8), and, using the fact that $(dE_k/dk)|_{k=0} = 0$, we obtain
$$\frac{m^*}{m} = \frac{1 - \left.\frac{\partial\Sigma(k,\omega)}{\partial\omega}\right|_{\omega=E_k}}{1 + \frac{1}{2t}\left.\frac{\partial^2\Sigma(k,\omega)}{\partial k^2}\right|_{\omega=E_k}} = 1 - \left.\frac{\partial\Sigma(k,\omega)}{\partial\omega}\right|_{\omega=E_k} - \frac{1}{2t}\left.\frac{\partial^2\Sigma(k,\omega)}{\partial k^2}\right|_{\omega=E_k}. \qquad (9)$$
Here we have used the fact that the band mass given by the electron dispersion in Eq. (4) is m = 1/(2t). Note that it is common (and advisable) to replace the substitutions for ω required in Eq. (9) with ǫ k , rather than with E k . This is due to the fact that the former substitution keeps the evaluation for every term at O(α 2 ), whereas the latter substitution includes some (inconsistently) higher order contributions. The former substitution is known as Rayleigh-Schrodinger perturbation theory while the latter is known as Brillouin-Wigner perturbation theory. 16 This means that we will use the following equation,
$$\frac{m^*}{m} = 1 - \left.\frac{\partial\Sigma(k,\omega)}{\partial\omega}\right|_{\omega=\epsilon_k} - \frac{1}{2t}\left.\frac{\partial^2\Sigma(k,\omega)}{\partial k^2}\right|_{\omega=\epsilon_k}, \qquad (10)$$
to define the effective mass.
In contrast the quasiparticle residue is defined as the weight that remains in the δ-function-like portion of the spectral weight. The spectral weight is defined as
$$A(k, \omega) \equiv -\frac{1}{\pi}\,\mathrm{Im}\,G(k, \omega + i\delta) = -\frac{1}{\pi}\,\mathrm{Im}\left[\frac{1}{\omega + i\delta - \epsilon_k - \Sigma(k, \omega + i\delta)}\right]. \qquad (11)$$
For a given momentum, as the energy of the pole given by Eq. (8) is approached, the imaginary part of the self energy tends towards zero; this produces a δ-function contribution in Eq. (11), at the pole energy, but with weight $z_k$ defined by
$$z_k = \frac{1}{1 - \left.\frac{\partial\Sigma(k,\omega)}{\partial\omega}\right|_{\omega=E_k}}. \qquad (12)$$
The relationship amongst these various quantities -effective mass in Eq. (9), effective mass in Eq. (10), and quasiparticle residue in Eq. (12) -is discussed further in the Appendix.
C. Standard perturbation theory
Eq. (9) requires a numerical evaluation of Eq. (7), and then the required derivatives can be (numerically) determined. Because the positions of the singularities in Eq. (7) are difficult to determine in advance, it is customary to introduce a small (numerical) imaginary part corresponding to the infinitesimal δ, and then the numerical integration is more stable. This trick remains problematic, as we discuss further below. Alternatively, we can simply perform a 2nd order perturbation theory expansion, as outlined in every undergraduate quantum mechanics textbook. The result is
$$E^{(2)}_k = \frac{2\alpha^2}{M}\,\frac{1}{N}\sum_{k'}\frac{\left[\sin k'_x - \sin k_x\right]^2 + \left[\sin k'_y - \sin k_y\right]^2}{\omega(k - k')\left[\epsilon_k - \epsilon_{k'} - \omega(k - k')\right]}, \qquad (13)$$
where we remember that the first order (in α) contribution is of course zero, and the superscript (2) indicates the 2nd order contribution. Comparison with Eq. (7) shows that this corresponds to Rayleigh-Schrodinger perturbation theory with the self energy, evaluated at ω = ǫ k corresponding to the 2nd order energy correction. Eq. (13) can be evaluated numerically, and then two derivatives with respect to k are required. However, the same numerical problems mentioned above will arise; fortunately, at least in one dimension, Eq. (13) can be evaluated analytically, whereas we were unable to do the same with Eq. (7).
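As an illustration of how Eq. (13) can be handled numerically, here is a minimal Python sketch (not the authors' code; the lattice size and parameter values are arbitrary) for the one-dimensional case, with $\epsilon_k = -2t\cos k$, $\omega(q) = \omega_0|\sin(q/2)|$, $\hbar = 1$, and the prefactor $2\alpha^2/M$ rewritten as $8t\lambda_{BLF}\omega_0^2$ using Eq. (15). The effective mass then follows from Eq. (10) via a finite-difference second derivative taken at grid-commensurate momenta, which keeps the removable $0/0$ term at $k' = k$ excluded consistently.

```python
import numpy as np

def E2(k, lam, w0, t=1.0, N=4096):
    """1D analogue of Eq. (13); lam is lambda_BLF = alpha^2/(4 t M w0^2)."""
    kp = 2.0 * np.pi * np.arange(N) / N           # allowed k' on an N-site ring
    wq = w0 * np.abs(np.sin((k - kp) / 2.0))      # acoustic branch in 1D
    num = (np.sin(kp) - np.sin(k)) ** 2
    den = wq * (-2*t*np.cos(k) + 2*t*np.cos(kp) - wq)
    good = wq > 1e-12                             # drop the (removable) k' = k term
    return (8.0 * t * lam * w0**2 / N) * np.sum(num[good] / den[good])

def mass_ratio(lam, w0, t=1.0, N=4096):
    """m*/m = 1 - (1/2t) d^2 E^(2)/dk^2 at k = 0, with dk one grid spacing."""
    dk = 2.0 * np.pi / N
    d2 = (E2(dk, lam, w0, t, N) - 2*E2(0.0, lam, w0, t, N) + E2(-dk, lam, w0, t, N)) / dk**2
    return 1.0 - d2 / (2.0 * t)

# The enhancement grows rapidly as w0 -> 0 for fixed (weak) coupling.
for w0 in (2.0, 1.0, 0.5, 0.1):
    print(w0, mass_ratio(0.01, w0))
```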
III. RESULTS AND DISCUSSION
A. Analytical results in 1D
The result of an analytical evaluation 17 of Eq. (13) is, in one dimension,
$$E^{(2)}(k) = -\frac{32t}{\pi}\,\lambda_{BLF}\,\tilde{\omega}_0\left[-2\cos k + \pi\tilde{\omega}_0 + C_k(\tilde{\omega}_0)\right], \qquad (14)$$
where $\tilde{\omega}_0 \equiv \omega_0/(4t)$, and a dimensionless coupling parameter $\lambda_{BLF}$ is defined, in analogy to the dimensionless coupling parameter defined in the Holstein model, as
$$\lambda_{BLF} \equiv \frac{\alpha^2}{M\omega_0^2}\,\frac{1}{W}, \qquad (15)$$
where here the bandwidth $W = 4t$ for one dimension. Note that this coupling parameter has nothing to do physically with the coupling parameter defined in the Holstein model, so we will treat them as completely independent. 18 The function $C_k(\tilde{\omega}_0)$ must be evaluated separately in the two regimes:
$$C_k(\tilde{\omega}_0) = \frac{2}{\sqrt{\tilde{\omega}_0^2 - 1}}\left[h(k) + h(-k) - 2h(\pi/2)\right], \qquad \tilde{\omega}_0 > 1, \qquad (16)$$
where
$$h(k) = \tan^{-1}\!\left[\frac{\tilde{\omega}_0\tan\frac{k}{2} + 1}{\sqrt{\tilde{\omega}_0^2 - 1}}\right], \qquad (17)$$
and
$$C_k(\tilde{\omega}_0) = \frac{1}{\sqrt{1 - \tilde{\omega}_0^2}}\left[s(k) + s(-k) - 2s(\pi/2)\right], \qquad \tilde{\omega}_0 < 1, \qquad (18)$$
where
$$s(k) = \log\left[\frac{\tilde{\omega}_0\tan\frac{k}{2} + 1 + \sqrt{1 - \tilde{\omega}_0^2}}{\tilde{\omega}_0\tan\frac{k}{2} + 1 - \sqrt{1 - \tilde{\omega}_0^2}}\right]. \qquad (19)$$
Eq. (14) is readily evaluated at $k = 0$ to determine the ground state energy. Evaluating the second derivative with respect to wave vector $k$ is equally straightforward, and determination at $k = 0$ yields the rather simple result for the effective mass,
$$\frac{m^*}{m} = 1 + \frac{32}{\pi}\,\frac{\lambda_{BLF}}{\tilde{\omega}_0}, \qquad (20)$$
valid for all values of $\tilde{\omega}_0$. 19
B. Comparison with other models
An analytical result is readily available for the Holstein model; there, the ground state energy (in 1D) was given by 2
$$E_H = -2t\left[1 + \lambda_H\sqrt{\frac{\tilde{\omega}_E}{\tilde{\omega}_E + 1}}\,\right], \qquad (21)$$
where $\tilde{\omega}_E \equiv \omega_E/(4t)$ is the Einstein phonon frequency normalized to the bandwidth, and, as explained earlier, the dimensionless coupling constant $\lambda_H$ cannot be compared directly to the corresponding quantity for the BLF-SSH model. The effective mass is given by
$$\left.\frac{m^*}{m}\right|_H = 1 + \frac{\lambda_H}{4}\,\frac{1}{\sqrt{\tilde{\omega}_E}}\,\frac{1 + 2\tilde{\omega}_E}{(1 + \tilde{\omega}_E)^{3/2}}. \qquad (22)$$
In both cases, as the characteristic phonon frequency approaches zero (adiabatic limit) the ground state energy approaches the non-interacting value; however, the effective mass diverges in this same limit. So, while the first statement would appear to justify perturbation theory in this limit, the second statement clearly indicates a breakdown in the adiabatic limit. It is known in both cases that the adiabatic approximation leads to a polaron-like solution for all coupling constants, 20,21 and clearly these two observations are consistent with one another. In fact, the divergence is stronger in the BLF-SSH model, and goes beyond the inverse square-root behaviour observed for the Holstein model and attributed to the diverging electron density of states in one dimension; 12 this indicates that the BLF-SSH model, at least in the adiabatic limit in one dimension, has a stronger tendency for polaron formation than the Holstein model. Interestingly, in the model studied by Capone et al. 12 , where optical phonons were used, the opposite behaviour was obtained; they found that the effective mass ratio approached unity as the characteristic phonon energy approached zero. 22 In the opposite limit Capone et al. 12 found an effective mass ratio that did not approach unity as the characteristic phonon frequency increased (antiadiabatic limit). In the BLF-SSH model, however, this ratio does approach unity as the phonon frequency increases beyond the electron bandwidth, in one dimension, in agreement with the Holstein result in all dimensions. As we will see below, however, in the BLF-SSH model in two and three dimensions the effective mass ratio remains above unity in the anti-adiabatic limit. This is not surprising, since here the interaction modulates the hopping, and we expect a non-zero correction in this limit. 22 In the adiabatic limit, the BLF-SSH mass ratio approaches a constant value in two dimensions, and falls to unity in three dimensions, both in agreement with the behaviour in the Holstein model.
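For reference, a small script (an assumed illustration, not taken from the paper) that evaluates the two Holstein expressions exactly as written in Eqs. (21) and (22) above makes the limits quoted in this paragraph explicit: the energy shift vanishes while the mass enhancement grows like $1/\sqrt{\tilde{\omega}_E}$ as $\tilde{\omega}_E \to 0$.

```python
import numpy as np

def holstein_energy(lam_H, wE_tilde, t=1.0):
    # Eq. (21), with wE_tilde = omega_E / (4 t)
    return -2.0 * t * (1.0 + lam_H * np.sqrt(wE_tilde / (wE_tilde + 1.0)))

def holstein_mass_ratio(lam_H, wE_tilde):
    # Eq. (22): diverges as 1/sqrt(wE_tilde) in the adiabatic limit
    return 1.0 + 0.25 * lam_H * (1.0 + 2.0 * wE_tilde) / (
        np.sqrt(wE_tilde) * (1.0 + wE_tilde) ** 1.5)

for wE in (1.0, 0.1, 0.01, 0.001):
    print(wE, holstein_energy(0.1, wE), holstein_mass_ratio(0.1, wE))
```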
Our results disagree with those of Zoli 14 for reasons that are not entirely clear. We have utilized both the straightforward perturbation theory method (analytically and numerically), and the Green's function formalism (numerically). In the latter case we required a numerically small imaginary part for the frequency significantly smaller than the value quoted in Ref. (14) (we used δ = 10^{-9} whereas he used δ = 10^{-4}). However, as is clear from our analytical result, Eq. (20), our effective mass diverges at low phonon frequency, and decreases monotonically to unity as the phonon frequency increases. The result in Ref. (14) peaks sharply near ω0 ≈ 1, and, as noted above, decreases to unity at low phonon frequency.
C. Numerical results
In Fig. 1 we plot the reduction in the ground state energy due to the second order correction (for the BLF-SSH model, this is given by Eq. (13)), normalized to λ (or λ H ). This is also written as Σ(k = 0, ω = ǫ k )/λ, where the self energy is given by the expression in Eq. (7). Also plotted for comparison are the corresponding quantities for the Holstein model. Note that both models share a few features in common: (i) they both go to zero as the characteristic phonon energy decreases to zero, regardless of the dimensionality, (ii) they all approach a non-zero negative (and finite) value as the characteristic phonon frequency grows, and (iii) they cross one another in strength as a function of dimensionality as ω 0 increases, i.e. at low phonon frequencies the self energy has the highest magnitude for one dimension, whereas for high phonon frequency the highest magnitude is achieved in both models for three dimensional systems. Also note that the BLF-SSH results are well separated from Holstein results. In particular, there appears to be more 'bang for the buck' with the BLF-SSH model, i.e. for a given value of λ and the same characteristic phonon frequency, the energy reduction is almost an order of magnitude higher for the BLF-SSH model as compared with the Holstein model. Again, we remind the reader that the value of λ in the Holstein model has nothing to do with the value of λ in the BLF-SSH model, so this comparison is unwarranted.
For this reason we will use the value for the self energy, in weak coupling, as the phonon frequency increases to infinity, as the energy scale that provides a measure of the energy lowering expected for a given model and a given dimensionality. These numbers, mostly determined analytically, are provided in Table I. In Fig. 2 we plot the effective mass ratio (minus unity), normalized to the self energy evaluated for infinite characteristic phonon frequency. This normalization is important to divide out enhancements that are solely due to definitions. Moreover, in this way, we are determining the mass enhancement for a given 'coupling strength', where this strength is now a measure of the energy lowering caused by a certain amount of coupling to phonons, regardless of the origin of that coupling. This plot now makes clear that the BLF-SSH model, within weak coupling perturbation theory, has more 'polaronic' tendency than the Holstein model. Note in particular that the divergence (in 1D) at low characteristic phonon frequency is much stronger for the BLF-SSH model, as Eq. (20) already indicated. Thus, as discussed above, we anticipate that in the adiabatic approximation, in 1D, the system will always be polaronic, regardless of the coupling strength, in agreement with the result of the Holstein model, 20 and in disagreement with the result from the hybrid model defined in Ref. 12. Otherwise, the behaviour of the effective mass in the two models is very similar, as a function of characteristic phonon frequency, for the various dimensions shown. The effective mass can be made arbitrarily close to unity, for any non-zero phonon frequency, for sufficiently weak coupling. Preliminary numerical calculations indicate a free electron-like to polaron crossover, 23 similar to what was found for the Holstein model.
D. Spectral function
It is interesting to examine the spectral function, defined by Eq. (11) (see also the discussion in the Appendix). For simplicity we show the result in one dimension, in Fig. 3, for the ground state (k = 0) as a function of frequency.
The results for two or three dimensions do not differ in any significant way from these results. The results for three different characteristic phonon frequencies are shown. In each case a quasiparticle δ-function is present (here artificially broadened so as to be visible), followed by an incoherent piece; the incoherent part has energies ranging approximately from −2t < ω < +2t + ω0. The quasiparticle residue, z0, must be determined numerically, and is given in the figure caption for each of the cases considered (see also Fig. 4). We have verified that the remaining weight (the spectral functions each have weight unity) is present in the incoherent part. The result shown is not too different from what is found in the Holstein model; the singularities from the 1D electron density of states are now smeared out in the incoherent piece, as a result of the coupling and phonon energy having some frequency dependence. We show in Fig. 4, as a function of ω0, the quasiparticle residue for both the Holstein and BLF-SSH models. The Holstein results tend to follow the result for the inverse effective mass; this is as expected. This is not the case with the BLF-SSH, but for more subtle reasons than the fact that the self energy is now momentum dependent. The more important effect, which shows up in both 1D and 2D results, is that the quasiparticle weight requires an evaluation of the frequency derivative of the self energy at the energy of the pole, whereas the effective mass in Rayleigh-Schrodinger perturbation theory requires the same derivative at the non-interacting ground state energy. Most noteworthy is that the quasiparticle residue shows a clear upturn at low characteristic phonon frequencies, while the inverse effective mass clearly approaches zero (see Fig. 2) as this characteristic frequency is taken to zero.
To see this more clearly we show in Fig. 5 a comparison of the residue (upper panel) vs. effective mass (lower panel), as a function of ω 0 , for two (weak) strengths of electron phonon coupling. At high phonon frequency, as the former decreases, the latter increases with decreasing phonon frequency, but at low phonon frequency, the two properties no longer behave in inverse fashion with respect to one another.
IV. SUMMARY
The BLF-SSH model appears to have very strong polaronic tendencies, stronger than those of, say, the Holstein model, especially in one dimension. This conclusion is based on the 2nd order perturbative calculation performed in this paper, but also has corroborative evidence from calculations in the strong coupling regime. In one dimension we have been able to obtain an analytical solution for the ground state energy and the effective mass. The conclusion concerning polaronic behaviour is an important one, as much of what we know about polarons arises from Holstein-like models. 25 In particular, for a coupling strength that leads to a fixed amount of energy lowering (in 2nd order), the effective mass can become an order of magnitude larger than the bare mass, a clear indicator that perturbation theory breaks down. This occurs in the BLF-SSH model at much weaker coupling than in the Holstein model. We have also noted that the relationship between effective mass and quasiparticle residue breaks down in one and two dimensions for the BLF-SSH model, not because of the momentum dependence in the self energy, but because the two properties involve evaluation of the frequency derivative of the self energy at different energies. Future work will address the strong coupling regime.
ACKNOWLEDGMENTS
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), by ICORE (Alberta), by Alberta Ingenuity, and by the Canadian Institute for Advanced Research (CIfAR). CC was supported by an NSERC USRA and ZL was supported by an Alberta Ingenuity Fellowship.
Appendix A: Perturbation Theory
It is sometimes stated that for a momentum-independent self energy, the quasiparticle residue is equal to the inverse of the effective mass. This follows simply by comparing Eqs. (9) and (12). On the other hand, we have argued that Eq. (10) is more appropriate for the effective mass, in which case this statement appears not to be true. A resolution of this difficulty is straightforward for the Holstein model, which we outline below, but, interestingly, not possible for the BLF-SSH model, at least in one dimension. The essential difference appears to be that in the Holstein model the (phonon) excitations are gapped, whereas they are not in the BLF-SSH model because of the low-lying acoustic modes at small momentum transfer. In this appendix we focus attention on one dimension, where some subtleties arise.
For the Holstein model the computation of the self energy in weak coupling is straightforward. 2 We obtain
$$\Sigma_H(\omega) = \frac{2t\omega_E\lambda_H\,\mathrm{sgn}(\omega - \omega_E)}{\sqrt{(\omega - \omega_E)^2 - (2t)^2}}. \qquad (A1)$$
The location of the quasiparticle pole at zero momentum (ground state) is then given by
$$\omega + 2t = -\frac{2t\omega_E\lambda_H}{\sqrt{(\omega - \omega_E)^2 - (2t)^2}}, \qquad (A2)$$
which can readily be determined numerically. Denoting the solution by writing ω ≡ −2t − E b (so E b is the 'binding' energy below the bottom of the band), we can then use this in the spectral function, Eq. (11), to determine the residue z 0 in the quasiparticle peak at ω = −2t − E b :
$$A(k = 0, \omega) = z_0\,\delta(\omega + 2t + E_b) + \text{incoherent part}. \qquad (A3)$$
Straightforward calculation gives
$$z_0 = 1\Big/\left[1 + 2\lambda_H\tilde{\omega}_E\,\frac{1 + 2\tilde{\omega}_E + 2\tilde{E}_b}{\left[(1 + 2\tilde{\omega}_E + 2\tilde{E}_b)^2 - 1\right]^{3/2}}\right], \qquad (A4)$$
which is not in agreement with the inverse of Eq. (22), except when λ H is truly very small.
Here $\tilde{E}_b \equiv E_b/(4t)$.
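A short numerical sketch of the steps just described (assumed helper names; requires scipy) solves the pole equation (A2) for $E_b$ by root finding and then evaluates $z_0$ from (A4). It also illustrates the slow approach of $z_0$ to the universal value 2/3 discussed next, which only sets in once $\omega_E \ll \lambda_H^2\, t$.

```python
import numpy as np
from scipy.optimize import brentq

def binding_energy(lam_H, wE, t=1.0):
    # omega = -2t - E_b in Eq. (A2)  =>  E_b = 2 t wE lam_H / sqrt((2t + E_b + wE)^2 - (2t)^2)
    f = lambda Eb: -Eb + 2*t*wE*lam_H / np.sqrt((2*t + Eb + wE)**2 - (2*t)**2)
    return brentq(f, 1e-12, 10.0 * t)

def z0(lam_H, wE, t=1.0):
    wEt, Ebt = wE / (4*t), binding_energy(lam_H, wE, t) / (4*t)
    x = 1.0 + 2.0*wEt + 2.0*Ebt
    return 1.0 / (1.0 + 2.0*lam_H*wEt * x / (x**2 - 1.0)**1.5)   # Eq. (A4)

for wE in (1.0, 1e-2, 1e-4, 1e-6):
    print(wE, binding_energy(0.2, wE), z0(0.2, wE))   # z0 slowly approaches 2/3
```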
In particular, for arbitrarily small λ H , ∂Σ(ω)/∂ω| ω=−2t , which is used in Eq. (22), diverges as ω E → 0, leading to a divergent effective mass (and therefore associated residue of zero). On the other hand, from Eq. (A2) one readily sees
$$\lim_{\omega_E \to 0} E_b = t\left(\lambda_H\omega_E/t\right)^{2/3}, \qquad (A5)$$
from which Eq. (A4) yields the result
$$\lim_{\omega_E \to 0} z_0 = 2/3, \qquad (A6)$$
surprisingly a universal number. The actual weight in the quasiparticle peak of the spectral function given by Eq. (11) for any given (even very small) value of λ_H actually tracks Eq. (A4), and not the inverse of Eq. (22). Interestingly, for the Holstein model, one can take a different tack towards calculating the spectral function: using perturbation theory to compute the perturbed wave function, which is then inserted into the calculation for the matrix elements required in the definition of the spectral function, 25 one obtains
$$A_{\rm pert}(k = 0, \omega) = z_0^{\rm pert}\,\delta\!\left(\omega + 2t + \frac{\lambda_H\omega_E}{\sqrt{(1 + 2\tilde{\omega}_E)^2 - 1}}\right) + \frac{1}{\pi}\,\frac{2t\omega_E\lambda_H}{(\omega + 2t)^2}\,\frac{\theta(2t - |\omega - \omega_E|)}{\sqrt{(2t)^2 - (\omega - \omega_E)^2}}. \qquad (A7)$$
Note that there is no difficulty in integrating over this function, as the divergence in the denominator (1/(ω + 2t) 2 ) is not within (or bordering) the range of frequency given by the Heaviside function restriction in the numerator. This is due to the finite phonon frequency, ω E .
From this expression fulfillment of the sum rule determines that
$$z_0^{\rm pert} = 1\Big/\left[1 + 2\lambda_H\tilde{\omega}_E\,\frac{1 + 2\tilde{\omega}_E}{\left[(1 + 2\tilde{\omega}_E)^2 - 1\right]^{3/2}}\right], \qquad (A8)$$
which is in agreement with the inverse of Eq. (22). The message is that, as long as we use the expression given by Eq. (11) for the spectral function, the area under the quasiparticle peak will correspond to Eq. (A4), which is not the inverse of the effective mass, even if the self energy is independent of momentum.
In the BLF-SSH model, the self energy is evaluated numerically through Eq. (7). An attempt to follow the procedure just outlined, which leads to Eqs. (A7) and (A8) for this model fails; this is because the minimum phonon frequency is zero, so the restriction corresponding to the Heaviside function in Eq. (A7) yields −2t < ω < 2t + ω 0 ; this in turn makes the divergence at ω = −2t non-integrable. One can only (in 1D) define the spectral function through Eq. (11), in which case the inverse of the effective mass differs from the quasiparticle pole for two reasons: the usual reason that the explicit momentum dependence now plays a role (see Eq. (10)), and, in addition, the derivative of the self energy with respect to frequency is evaluated at ω = −2t for the effective mass, whereas it is evaluated at the frequency corresponding to the pole for the quasiparticle residue.
FIG. 1. Electron self energy for the ground state (k = 0), normalized to λ (or λ_H), vs. characteristic phonon frequency ω0 (this is ω_E for the Holstein model), for both the BLF-SSH and Holstein models, in one, two, and three dimensions, as indicated. Alternatively, the ordinate is simply the second order (in g) correction to the ground state energy within Rayleigh-Schrodinger perturbation theory. In all cases the magnitude of the correction increases with increasing ω0. At low ω0 the magnitudes of the results are ordered 3D, 2D, 1D (lowest to highest) whereas at high frequency the ordering is just the opposite. All six cases have non-zero limiting values as ω0 → ∞, given in Table I.
FIG. 2. The electron effective mass, normalized to the 2nd order correction to the energy for the anti-adiabatic limit, vs. characteristic phonon frequency, ω0, for both the BLF-SSH and Holstein models, in one, two, and three dimensions, as indicated. In 1D the effective mass diverges for both models, though the divergence is stronger for the BLF-SSH model, as indicated by Eq. (20). In 2D the effective mass approaches a constant as ω0 → 0 for both models, while in 3D the effective mass ratio approaches unity in the same limit. At the opposite extreme, both 1D results give m * /m → 1 as ω0 → ∞, while in both 2D and 3D the effective mass remains above unity in this limit. Note that in all three dimensions, for a given reduction in energy as given by the 2nd order correction to the energy, the BLF-SSH model results in significantly higher effective masses.
FIG. 3. Spectral function for the BLF-SSH model, for λ = 0.2, for three different characteristic phonon frequencies, as a function of frequency. All three spectra are similar to what one would find for the Holstein model, and consist of a quasiparticle peak with weight z0 = 0.766, 0.727, 0.724, for ω0/t = 0.
FIG. 4. Quasiparticle residue, z0 vs. ω0/t for both the BLF-SSH and Holstein models in all three dimensions. Note that while the result for the Holstein model tends to be inversely proportional to the effective mass, this is not the case for the BLF-SSH model at low phonon frequency, and in 1D and 2D. In one dimension in particular, the effective mass diverges, while z0 also turns upward.
FIG. 5. Comparison of the quasiparticle residue (upper panel) with the electron effective mass (lower panel) as a function of ω0/t, for the BLF-SSH model in one dimension. The behaviour noted in Fig. 4 is clear here. Moreover, note the scales; while the effective mass ratio is very large (≈ 4) for λ = 0.01 and small values of ω0/t, the quasiparticle residue remains within 15% of unity.
TABLE I. lim_{ω0→∞} Σ(k = 0, ω = ε_k)/(λt)

Dim.   BLF-SSH   Holstein
1D     -16       -2
2D     -23.3     -4
3D     -30.2     -6
T. Holstein, Ann. Phys. (New York) 8, 325 (1959).
F. Marsiglio, Physica C 244, 21 (1995).
Zhou Li, D. Baillie, C. Blois, and F. Marsiglio, Phys. Rev. B81, 115114 (2010).
J. Hubbard, Proc. Roy. Soc. A 276, 238 (1963); ibid., 277, 237 (1964).
H. Fehske and S.A. Trugman, in Polarons in Advanced Materials, edited by A. S. Alexandrov, Springer Series in Material Sciences 103, pp. 393-461, Springer Verlag, Dordrecht (2007).
A.S. Alexandrov, in Polarons in Advanced Materials, edited by A.S. Alexandrov, Springer Series in Materials Science 103, pp. 257-310, Springer Verlag, Dordrecht (2007).
A. Alvermann, H. Fehske, and S.A. Trugman, Phys. Rev. B81, 165113 (2010).
W.P. Su, J.R. Schrieffer, and A.J. Heeger, Phys. Rev. Lett. 42, 1698 (1979).
W.P. Su, J.R. Schrieffer, and A.J. Heeger, Phys. Rev. B22, 2099 (1980).
S. Barisić, J. Labbé, and J. Friedel, Phys. Rev. Lett. 25, 919 (1970); S. Barisić, Phys. Rev. B5, 932 (1972); S. Barisić, Phys. Rev. B5, 941 (1972).
J.E. Hirsch and E. Fradkin, Phys. Rev. Lett. 49, 402 (1982); E. Fradkin and J.E. Hirsch, Phys. Rev. B27, 1680 (1983).
M. Capone, W. Stephan and M. Grilli, Phys. Rev. B56, 4484 (1997).
After this work was essentially complete, two preprints, D.J.J. Marchand, G. De Filippis, V. Cataudella, M. Berciu, N. Nagaosa, N.V. Prokof'ev, A.S. Mishchenko, and P.C.E. Stamp, arXiv:1010.3207, and M. Berciu and H. Fehske, arXiv:1010.4250, appeared on the cond-mat archive. In the first case in particular, the authors carry out a study of what they call the Su-Schrieffer-Heeger model. In fact, they study a model in which the coupling of the electrons is to optical phonons; the model resembles the BLF-SSH model insofar as the ions couple to the electronic motion (as opposed to electron charge density). Moreover, this model was studied earlier in Ref. (12), so we will refer to it as the CSG model to avoid confusion. Capone et al. 12 did not recognize that the ground state momentum in the CSG model would become non-zero for sufficiently strong coupling; in fact they refer to this region as 'non-physical'. In any event, the two new references appear to be unaware of the work of Capone et al. As we demonstrate in this paper, some very important differences result from the use of acoustic phonons.
M. Zoli, Phys. Rev. B66, 012303 (2002); Solid St. Commun. 122, 531 (2002); Physica C384, 274 (2003).
The effective mass will in general be anisotropic, so the direction in which these derivatives are to be taken would have to be specified; we nonetheless retain this simple notation and avoid some cumbersome indices.
G.D. Mahan, Many-Particle Physics, 3rd Edition (Kluwer Academic/Plenum Publishers, New York, 2000).
However, since in this model the coupling is to the motion of the electron, for a given 'strength' of coupling as given by α in this model, we expect the overall contribution here to grow as the number of nearest neighbours increases, whereas the same is not true for the Holstein model.
Within perturbation theory, and with preliminary exact calculations in the strong coupling regime, the ground state remains fixed at k = 0 over all coupling strengths, in contrast to what happens in the CSG model. 13
V.V. Kabanov and O.Y. Mashtakov, Phys. Rev. B47, 6060 (1993).
C. Chandler and F. Marsiglio, unpublished.
As explained in Ref. (12), we utilize limiting procedures for the characteristic phonon frequency while keeping the effective spring constant fixed. This means, for example, that as the phonon frequency approaches zero the ion mass increases to keep the product Mω0² constant.
Zhou Li, C. Chandler and F. Marsiglio, unpublished.
We include Frohlich models in this category, as the lattice displacement couples to the charge density, as occurs in the Holstein model; the interaction is simply long range.
| []
|
[
"Actin filaments growing against a barrier with fluctuating shape",
"Actin filaments growing against a barrier with fluctuating shape"
]
| [
"Raj Kumar Sadhu \nDepartment of Theoretical Sciences\nS. N. Bose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700106KolkataIndia\n",
"Sakuntala Chatterjee \nDepartment of Theoretical Sciences\nS. N. Bose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700106KolkataIndia\n"
]
| [
"Department of Theoretical Sciences\nS. N. Bose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700106KolkataIndia",
"Department of Theoretical Sciences\nS. N. Bose National Centre for Basic Sciences\nBlock JD, Sector III, Salt Lake700106KolkataIndia"
]
| []
| We study force generation by a set of parallel actin filaments growing against a non-rigid obstacle, in presence of an external load. The filaments polymerize by either moving the whole obstacle, with a large energy cost, or by causing local distortion in its shape which costs much less energy. The nonrigid obstacle also has local thermal fluctuations due to which its shape can change with time and we describe this using fluctuations in the height profile of a one dimensional interface with Kardar-Parisi-Zhang dynamics. We find the shape fluctuations of the barrier strongly affects the force generation mechanism. The qualitative nature of the force-velocity curve is crucially determined by the relative time-scale of filament and barrier dynamics. The height profile of the barrier also shows interesting variation with the external load. Our analytical calculations within mean-field theory show reasonable agreement with our simulation results. | 10.1103/physreve.93.062414 | [
"https://arxiv.org/pdf/1606.07553v1.pdf"
]
| 11,367,124 | 1606.07553 | 01493a27bd27d48051efc22455a118ce76fc29a8 |
Actin filaments growing against a barrier with fluctuating shape
24 Jun 2016
Raj Kumar Sadhu
Department of Theoretical Sciences
S. N. Bose National Centre for Basic Sciences
Block JD, Sector III, Salt Lake700106KolkataIndia
Sakuntala Chatterjee
Department of Theoretical Sciences
S. N. Bose National Centre for Basic Sciences
Block JD, Sector III, Salt Lake700106KolkataIndia
Actin filaments growing against a barrier with fluctuating shape
24 Jun 2016
We study force generation by a set of parallel actin filaments growing against a non-rigid obstacle, in presence of an external load. The filaments polymerize by either moving the whole obstacle, with a large energy cost, or by causing local distortion in its shape which costs much less energy. The nonrigid obstacle also has local thermal fluctuations due to which its shape can change with time and we describe this using fluctuations in the height profile of a one dimensional interface with Kardar-Parisi-Zhang dynamics. We find the shape fluctuations of the barrier strongly affects the force generation mechanism. The qualitative nature of the force-velocity curve is crucially determined by the relative time-scale of filament and barrier dynamics. The height profile of the barrier also shows interesting variation with the external load. Our analytical calculations within mean-field theory show reasonable agreement with our simulation results.
I. INTRODUCTION
Cell motility plays an important role in a wide variety of biological processes like morphogenesis, wound healing or tumor invasion [1][2][3][4]. Actins and microtubules are cytoskeletal proteins whose polymerization and depolymerization can generate significant forces, without any assistance of molecular motors, and propel the cell forward. In presence of a biological barrier, these filaments elongate and generate a pushing force against the barrier and in many in vitro studies this force has been measured explicitly by applying an external load on the barrier in the opposite direction. With increasing load, the velocity of the barrier decreases and the functional nature of dependence of velocity on the applied force is an important characteristic of the force generation mechanism. The maximum polymerization force generated by the filaments is known as 'stall force' and is measured as the minimum load required in order to stall the barrier motion completely. There has been a surge of experimental as well as theoretical research activities to determine the stall force and the force-velocity characteristic of the cytoskeletal filaments in the last few years.
Interestingly, the qualitative nature of the force-velocity curve was found to depend on the details of the experimental set-up. A convex force-velocity characteristic was reported for actin-coated polystyrene beads [5] and magnetic colloidal particles pushed by unbranched parallel actin filaments [6,7]. On the other hand, a concave force-velocity curve was obtained for branched actin network [8], where velocity remains almost constant for small load and drops rapidly at large load. An even more complex force-velocity relationship was measured for lamellipodial protrusion in a keratocyte, where velocity showed rapid decay for very small load, followed by a plateau at moderate load and another rapid decay close to stalling [9,10]. Although multiple filaments are expected to generate larger force than single filament [5,9,11], in [12] the stall force of approximately eight actin filaments was measured and found to be in the piconewton range, close to a single filament stall force [13], indicating absence of co-operation among the filaments.
To investigate the force-velocity relationship theoretically, several different models have been proposed. Force generation by a single actin filament growing against a barrier has been explained using a simple Brownian ratchet mechanism where thermal fluctuations of the barrier creates a gap between the barrier and the filament tip, making it possible for the filament to grow by adding one monomer in the gap [14]. This mechanism predicts a convex force-velocity curve. This simple model has been subsequently generalized where details of interaction between the monomers and the barrier has been considered [15] and flexibility of the filament has been included [16]. In all these cases existence of a convex force-velocity relationship has been verified. However, when the Brownian ratchet mechanism was extended for multiple filaments, the nature of the force-velocity curve was found to crucially depend on how the details of the interaction and load-sharing among the filaments were modeled [17][18][19][20]. Certain models even showed a crossover from convex to concave force-velocity curve, as some model parameters are varied [21][22][23].
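For orientation, the textbook single-filament polymerization-ratchet expression in its fast-diffusion form, v(F) = d [k_on exp(−Fd/k_BT) − k_off], already produces the convex load dependence referred to above; it is not specific to any of the multi-filament models cited here, and the parameter values in the sketch below are purely illustrative.

```python
# Illustrative (made-up parameters): convex force-velocity curve of the simple
# Brownian ratchet for one rigid filament against a rigid, freely diffusing wall.
import numpy as np

def ratchet_velocity(F, k_on=100.0, k_off=1.0, d=2.7e-9, kT=4.1e-21):
    """F in newtons, d in metres, rates in 1/s; returns velocity in m/s."""
    return d * (k_on * np.exp(-F * d / kT) - k_off)

forces = np.linspace(0.0, 7e-12, 8)            # 0 to 7 pN
print([f"{ratchet_velocity(F):.2e}" for F in forces])   # stalls near (kT/d) ln(k_on/k_off)
```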
Inside a cell, actin filaments grow against the plasma membrane which is not a rigid object but elastically deformable [24]. Even in vitro, when the filaments push against an obstacle as they polymerize, the obstacle may in general have local shape deformations. In [25] a flexible plasma membrane was explicitly modeled and it was shown that thermal fluctuation of this flexible obstacle substantially enhances the growth velocity of a filopodial protrusion. It was argued that in the case of a flexible membrane, a filament only has to overcome the local bending energy in order to polymerize (whereas for a rigid obstacle the full load must be overcome) and this gives rise to a larger velocity for a given load. Effect of a flexible plasma membrane on actin network growth was experimentally demonstrated in [26] when reconstituted actin networks in vitro were assembled onto synthetic lipid bilayers and it was found that the membrane elasticity causes formation of bundled filament protrusion from branched filament networks.
Motivated by this, we carry out a study to probe the detailed quantitative aspects of interaction between a set of growing filaments and an obstacle whose position as well as shape can fluctuate with time. To keep our description simple, we model the obstacle by a one dimensional non-rigid object whose local thermal fluctuations can alter its shape and using a lattice gas model, we describe it by a Kardar-Parisi-Zhang (KPZ) interface [27]. In presence of an external load, the obstacle tends to move in the direction opposite to that of polymerization. In order to polymerize, the filaments must push against the barrier, either causing a local change in its profile (which requires less energy) or causing a global movement of the whole barrier (which involves a large energy cost). We are interested to find out how presence of the fluctuating barrier affects the dynamics of the actin filaments, and how the presence of the filaments affects the shape of the barrier.
Our numerical simulations and analytical calculations show that there is a rich interplay between the polymerization dynamics of the filaments and the shape fluctuations of the barrier. For small and intermediate values of the external force, the barrier motion is governed by its global movement, and for large force, the local fluctuations become important. These local movements cost less energy and can continue even when the force is significantly large. As a result, the stall force in our system is much higher than that for a rigid barrier [18]. Moreover, these local movements may be caused by filament polymerization or by independent thermal fluctuations of the barrier and hence the stall force may also depend on the properties of the barrier. Indeed for a single filament, the stall force is found to increase with the size of the barrier. For N filaments stall force is independent of the barrier size and scales linearly with N . The barrier shape is also affected by the growing filaments and the scaling behavior of its height profile shows continuous variation as a function of the external load.
There are two time-scales in our system, one associated with the (de)polymerization of the filaments and the other with the thermal fluctuations of the barrier. Our results show that the choice of these time-scales may crucially determine the nature of the force-velocity curve. This is because the local movements of the barrier make an increasingly important contribution to its velocity as the thermal fluctuations become faster. Even for small or intermediate load, therefore, the barrier velocity is not governed by its global movement alone, and this changes the qualitative nature of the dependence of velocity on load. The stall force is also found to decrease for faster barrier dynamics.
This paper is organized as follows. In section II, we describe our model. Our results for the single filament and multiple filaments are presented in sections III and IV, respectively, and conclusions are in section V.
II. DESCRIPTION OF THE MODEL
Our model consists of N parallel filaments growing against a barrier with a fluctuating height profile (see Fig. 1). We model the filaments as rigid polymers, made of rod-like monomers of length d, such that a (de)polymerization event (decreases) increases the length of the filament by an amount d. The barrier is modeled as a one dimensional surface. In our lattice model, the discrete surface elements are represented as lattice bonds of length λ, which can have two possible orientations, ±π/4. We denote these two cases by the symbols / and \ and call them upslope and downslope bonds, respectively. The height at any particular lattice site i is defined as h_i = (δ/2) Σ_{j=1}^{i−1} tan θ_j, where θ_j is the orientation of the j-th bond and δ = √2 λ. The total number of such bonds is L. One / followed by a \ forms a local hill and in the reverse order, \/, they form a local valley. The local height of the surface fluctuates due to transitions between these hills and valleys. When a local hill (valley) at a given site flips to a valley (hill), the height of that particular site decreases (increases) by an amount δ. We assume δ is equal to the monomer length d. As explained below, this assumption means that a height fluctuation of the surface creates a gap which is just enough for insertion of a monomer. Towards the end of the paper, we briefly discuss the case of δ ≠ d.
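As a concrete illustration of this bookkeeping, the short Python snippet below (ours, not part of the original analysis) encodes the bond orientations as ±1, recovers the height profile h_i = (δ/2) Σ_{j<i} tan θ_j, and locates the local hills and valleys; all variable names are our own.

import numpy as np

# Minimal sketch of the barrier lattice described above (our own notation).
# Each bond is encoded as +1 for '/' (+pi/4) or -1 for '\' (-pi/4).
L, delta = 16, 2.7                                # number of bonds, step size (nm)
bonds = np.tile([+1, -1], L // 2)                 # alternating '/' and '\': a flat surface

# h_i = (delta/2) * sum_{j<i} tan(theta_j); tan(+-pi/4) = +-1
heights = (delta / 2.0) * np.concatenate(([0.0], np.cumsum(bonds)))[:L]

# a local hill is the pattern '/\'; a local valley is '\/' (periodic boundaries)
hills   = [i for i in range(L) if bonds[i - 1] == +1 and bonds[i] == -1]
valleys = [i for i in range(L) if bonds[i - 1] == -1 and bonds[i] == +1]
print(heights[:4], hills[:3], valleys[:3])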
A filament whose tip is in contact with the barrier is called a bound filament and, in the absence of any such contact, it is called a free filament. The surface site where a bound filament can form a contact is called a binding site. When a bound filament polymerizes, it creates space for insertion of another monomer by pushing the barrier up and in this process performs work against the external load (which tends to push the barrier down). When the bound filament pushes against a local valley, that valley flips to a hill and the height of the binding site increases by an amount d (Fig. 1A). However, polymerization of a bound filament which is not in contact with a local valley requires a global movement of the whole barrier, as shown in Fig. 1B, whereby the heights of all L sites are increased by an amount d. Assuming F/L is the load per site, the energy cost for the first process is just Fd/L, and for the second process it is Fd. Following the rule of local detailed balance, we assign rates U_0 exp(−βFd/L) and U_0 exp(−βFd) to these two types of polymerization processes, respectively. Here, β is the inverse temperature and U_0 is the free filament polymerization rate, which does not involve any barrier movement and hence is independent of F. We also assume the depolymerization rate is the same for both free and bound filaments and is denoted as W_0. When a bound filament depolymerizes, it loses contact with the barrier and becomes a free filament. In certain configurations, when there is only one bound filament, its depolymerization results in an unsupported barrier.
Apart from being pushed by the filaments, the barrier can also show thermal fluctuations, during which local hills can flip to valleys and vice versa. However, due to the presence of the filaments, these transitions can sometimes get blocked. For example, if a bound filament is in contact with a hill, then that particular hill cannot flip to a valley until the filament depolymerizes and a gap is created for a local downward movement of the barrier. When both forward and reverse transitions are allowed, their rates satisfy local detailed balance,
R_+/R_- = e^{-\beta F d/L},
where R_+ is the rate at which the local surface height can increase (i.e. a valley flips to a hill) and R_- is the reverse transition rate. Note that in the absence of any external load F, the transition between hills and valleys becomes symmetric at all sites other than the binding sites and the surface has a local Edwards-Wilkinson dynamics [28]. For non-zero F, hill to valley transitions are generally favored (except, possibly, at the binding site) and the barrier behaves like a KPZ surface with a downward bias. We assume periodic boundary conditions for the surface and an equal number of upslope and downslope bonds, i.e. no overall tilt. In one Monte Carlo step, we attempt to perform N filament updates (polymerization or depolymerization) and S independent (unaided by the filaments) surface updates. By changing the value of S we can tune the relative time-scale between the filament dynamics and the barrier dynamics. For smaller (larger) S, the barrier dynamics is slower (faster) than the filament dynamics. A relative time-scale between the surface and filament dynamics can also be introduced by rescaling R_+ and R_-, but we have used R_- = U_0 and R_+ = U_0 e^{-\beta F d/L} throughout and controlled the relative time-scale by S instead. We start with an initial configuration where all N filaments have unit length, containing one monomer each, and the upslope and downslope bonds are placed alternatingly (a flat surface). We let the system evolve for a long time according to the above dynamical rules. All our measurements are performed in the steady state.
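To make these update rules concrete, the following self-contained Python sketch (ours) runs a stripped-down, single-filament version of the scheme: rates are converted into per-attempt probabilities via a small time step dt, and the state is kept with our own simplified bookkeeping, namely a tip-to-binding-site gap counter and a running mean barrier height. It illustrates the Monte Carlo procedure only and is not the code used for the results reported here.

import numpy as np

rng = np.random.default_rng(0)

# Parameters loosely follow the values quoted with Fig. 2; dt is our choice.
L, d     = 64, 2.7           # number of surface bonds, monomer size (nm); delta = d
U0, W0   = 2.784, 1.4        # free polymerization / depolymerization rates (1/s)
beta_d   = 0.65              # beta*d (1/pN)
F        = 5.0               # external load (pN)
S        = L                 # surface-update attempts per Monte Carlo step
dt       = 0.01              # time step, small enough that every rate*dt is below 1

bonds = np.tile([+1, -1], L // 2)   # +1 = '/', -1 = '\'; flat initial barrier
gap, hbar, BIND = 0, 0.0, 0         # tip-to-binding-site distance (units of d), mean height, binding site

def is_valley(j): return bonds[j - 1] == -1 and bonds[j] == +1   # '\/' around site j
def is_hill(j):   return bonds[j - 1] == +1 and bonds[j] == -1   # '/\' around site j
def flip(j, up):  bonds[j - 1], bonds[j] = ((+1, -1) if up else (-1, +1))

p_loc  = U0 * np.exp(-beta_d * F / L) * dt   # local push / thermal up-flip probability
p_glob = U0 * np.exp(-beta_d * F) * dt       # global push probability

T = 50_000
for t in range(T):
    r = rng.random()
    if gap > 0:                                   # free filament
        if r < U0 * dt:          gap -= 1         # polymerize into the existing gap
        elif r < (U0 + W0) * dt: gap += 1         # depolymerize
    else:                                         # bound filament
        if is_valley(BIND) and r < p_loc:         # push a local valley up (Fig. 1A)
            flip(BIND, up=True); hbar += d / L    # tip and site rise together: gap stays 0
        elif (not is_valley(BIND)) and r < p_glob:
            hbar += d                             # global push of the whole barrier (Fig. 1B)
        elif r > 1.0 - W0 * dt:
            gap += 1                              # depolymerization breaks the contact
    for _ in range(S):                            # S independent thermal surface updates
        j, r = int(rng.integers(L)), rng.random()
        if is_valley(j) and r < p_loc:            # valley -> hill at rate R+ = U0 exp(-beta F d/L)
            flip(j, up=True); hbar += d / L
            if j == BIND: gap += 1                # binding site moved up, away from the tip
        elif is_hill(j) and r < U0 * dt:          # hill -> valley at rate R- = U0
            if j == BIND and gap == 0: continue   # blocked by the bound filament
            flip(j, up=False); hbar -= d / L
            if j == BIND: gap -= 1

print("barrier velocity estimate:", hbar / (T * dt), "nm/s")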
III. RESULTS FOR SINGLE FILAMENT
For a single filament, we first present the results for S = L and later we consider the effect of varying S. We define the velocity V of the barrier as the rate of change of the average height of the surface after the system has reached the steady state. We present the force-velocity curve in Fig. 2A. This curve has a convex shape where the velocity decays rapidly for small force, and for large force it decays slowly. In fact, for small and intermediate values of force, the velocity falls off exponentially (Fig. 2A inset) and close to stalling it shows deviation from the exponential form. We explain below that the exponential dependence originates from the global movement of the barrier (as shown in Fig. 1B), which dominates V in the small and moderate F range. In Fig. 2B we show the variation of the stall force F_s with the barrier size L. The stall force increases with L, although logarithmically slowly. Note that the stall force is often interpreted as the maximum polymerization force generated by the filament and therefore it is somewhat surprising that it depends on the size of the barrier. We show below that in our system the local fluctuations of the barrier, which depend on L, make a substantial contribution towards its net velocity and this becomes particularly significant in the stalling regime.
In our system there are two possible barrier movements: global and local. In a global movement, a bound filament polymerizes by pushing the whole barrier up, such that the average height changes by an amount d. The rate at which this process happens is U 0 exp(−βF d). Let this process contribute a velocity V 1 to the barrier in the steady state, which can be written as
V_1 = p_0\, d\, U_0 \exp(-\beta F d).   (1)
Here, p_0 is the probability that the filament is in contact with the barrier. Note that here we have ignored the possibility that the bound filament is pushing against a valley (in that case no global movement takes place, as a local flip is sufficient for polymerization). In fact, we have verified in our simulation (data presented in Fig. A-1B) that the probability of finding a valley at the binding site is indeed small. To write V_1 as a function of F we still need to calculate p_0. Define p_i as the probability that the distance between the filament tip and the binding site is i. Clearly, p_i with i = 0 corresponds to the contact probability. It is easy to see that for i > 0, the probability p_i satisfies the master equation of a biased random walker:
\frac{dp_i}{dt} = W_0\, p_{i-1} + U_0\, p_{i+1} - (W_0 + U_0)\, p_i   (2)
and for i = 0 one has
\frac{dp_0}{dt} = U_0\, p_1 - W_0\, p_0.   (3)
Here, we have ignored any change in p_i due to height fluctuations at the binding site. For fast barrier dynamics, when height fluctuations increase, this assumption breaks down. In the steady state, these equations yield the recursion relation p_i = (W_0/U_0)^i p_0 for positive i. This recursion relation, along with the normalization condition Σ_i p_i = 1, yields the expression p_0 = (1 − W_0/U_0), which is independent of F. So the final expression for V_1 becomes
V_1 = d\,(U_0 - W_0) \exp(-\beta F d).   (4)
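As a quick numerical illustration (our own script, using the parameter values quoted in the caption of Fig. 2), the contact probability and the global-push contribution V_1 of Eq. (4) can be evaluated directly:

import numpy as np

k0, C  = 11.6, 0.24            # polymerization rate constant (1/(uM s)) and monomer concentration (uM)
U0     = k0 * C                # = 2.784 s^-1, free polymerization rate
W0     = 1.4                   # s^-1, depolymerization rate
d      = 2.7                   # monomer size (nm)
beta_d = 0.65                  # beta*d (1/pN)

p0 = 1.0 - W0 / U0             # contact probability, independent of F
for F in (0.0, 2.0, 4.0):      # load in pN
    V1 = d * (U0 - W0) * np.exp(-beta_d * F)     # Eq. (4), in nm/s
    print(f"F = {F:3.1f} pN: p0 = {p0:.3f}, V1 = {V1:.3f} nm/s")

Close to stalling this grossly overestimates the velocity, since the local-fluctuation contribution derived next is not included.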
To calculate the velocity due to local height fluctuations of the barrier, we consider a local valley (hill) flipping to a hill (valley), which increases (decreases) the average height by an amount d/L. As discussed in section II, the transition rates at the binding site are different from those in the rest of the system, since a hill to valley transition may be blocked if a filament is in contact. The barrier velocity due to local height fluctuations can then be written as
V_2 = \frac{d U_0}{L}\left[\left\{(1+p_0)\,p_v(0) + \sum_{i=1}^{L-1} p_v(i)\right\} e^{-\beta F d/L} - (1-p_0)\,p_h(0) - \sum_{i=1}^{L-1} p_h(i)\right]   (5)
where p_v(i) and p_h(i) denote the probabilities to find a valley and a hill, respectively, at a distance i from the binding site. In the above equation, the first term on the right-hand side represents the situation where a valley at the binding site flips to a hill, due to thermal fluctuations or due to being pushed by the filament. The second term represents the flipping of a valley to a hill at all the other sites. The third term describes the case when there is a hill at the binding site, which can flip to a valley when no filament is in contact. The fourth term describes the flipping of a hill to a valley in the rest of the system. The probabilities p_v(i) and p_h(i) can be calculated within a mean-field approximation by considering a KPZ surface with the binding site acting as a 'defect site' (see Appendix A for details), where the transition rates are different from the rest of the system. Our calculations show that p_v(i) and p_h(i) have a rather weak dependence on F and their difference [p_v(i) − p_h(i)] is independent of i and scales as 1/L. For large L, the total velocity of the barrier V = V_1 + V_2 can be written as
V(F) = d(U_0 - W_0)\, e^{-\beta F d} + \frac{d U_0}{L}\left[ p_v(0)(1+p_0) - (1-p_0)\,p_h(0) + \sum_{i=1}^{L-1}\left\{ p_v(i)\left(1 - \frac{\beta F d}{L}\right) - p_h(i)\right\}\right]   (6)
where we have retained terms up to order 1/L and ignored higher order terms. In Fig. 2A we compare our calculation with simulation results and obtain reasonably good agreement. For small F, the first term in Eq. 6 dominates the velocity and, as F increases, local fluctuations become more important. The last term in Eq. 6, within the braces, which represents the velocity due to hill-valley fluctuations at all sites except the binding site, is the dominant term in the local movement. In the stalling region, the positive contribution from the global movement and the negative contribution from the local fluctuations cancel each other, where the first and last terms of Eq. 6 determine the major balance. The stall force F_s can be obtained by graphically solving the above transcendental equation after setting its left-hand side to zero. This gives the stall force as a function of L and we compare this variation with simulation results in Fig. 2B. We find good agreement for large L but, as expected, for small L there are deviations. Note that the stall force in our system is substantially higher than that for a rigid barrier [18]. Since the local movements cost much less energy, they can continue even when the load is high.
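Since Eq. (6) is transcendental in F, the stall force is conveniently obtained by a bracketing root search. The sketch below (ours) does this with a crude, F-independent stand-in for the hill and valley probabilities, built from the linear profile ρ_i = a + b i/L with the F = 0 values of a and b quoted in Appendix A; it only illustrates the numerical procedure and is not expected to reproduce the quantitative results of Fig. 2B.

import numpy as np

L, d          = 256, 2.7          # sites, monomer size (nm)
U0, W0        = 2.784, 1.4        # s^-1
beta_d        = 0.65              # 1/pN
a, b          = np.sqrt(2) - 1, 3 - 2 * np.sqrt(2)   # F = 0 values from Appendix A
p0            = 1 - W0 / U0

rho = a + b * np.arange(L + 1) / L            # crude stand-in for the mean-field profile
p_v = (1 - rho[:-1]) * rho[1:]                # valley probability at site i
p_h = rho[:-1] * (1 - rho[1:])                # hill probability at site i

def V(F):                                     # Eq. (6) with the stand-in probabilities
    local = (p_v[0] * (1 + p0) - (1 - p0) * p_h[0]
             + np.sum(p_v[1:L] * (1 - beta_d * F / L) - p_h[1:L]))
    return d * (U0 - W0) * np.exp(-beta_d * F) + d * U0 / L * local

lo, hi = 0.0, 50.0                            # bracket in pN with V(lo) > 0 > V(hi)
for _ in range(60):                           # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if V(mid) > 0 else (lo, mid)
print("estimated stall force ~", round(0.5 * (lo + hi), 2), "pN")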
A. Effect of faster and slower barrier dynamics
We find the nature of the force-velocity curve depends on the relative time-scale of the barrier and filament dynamics. For faster barrier dynamics, the local fluctuations of the barrier increase and as a result their contribution to the net velocity is also higher. This means that even for small force, the velocity is not dominated by the global movement (first term in Eq. 6) alone. In addition, our simple expression for the contact probability p_0 = (1 − W_0/U_0), which was derived neglecting the local fluctuations at the binding site, does not remain valid for fast barrier dynamics and p_0 increases with F in this case (see our data in Fig. B-1). As a result, the velocity does not decay exponentially for small force, but follows a slower decay. For a given value of F, in the small or intermediate range, as the barrier dynamics becomes faster, the velocity becomes higher and the convex nature of the curve is gradually lost. Moreover, since the stalling phenomenon in our system can be described as a balance between global and local velocities of the barrier (see Eq. 6), a larger contribution from local movement implies this balance is reached at a smaller value of force. Therefore, for faster barrier dynamics we have a smaller stall force. We present our data in Figs. 3A and 3B. Our data in Fig. 3B imply that in the limit of infinitely slow barrier dynamics, when the barrier can be considered as an effectively rigid object, the stall force diverges. Note that even in this limit, our model remains different from the rigid barrier case studied in [18], where at least one filament is always bound to the barrier. For N = 1 this would mean that whenever there is a depolymerization, the barrier also moves down, along with the filament tip. On the contrary, we allow an unsupported barrier in our system and, when the barrier is effectively rigid, it shows only global movement, which is always in the upward direction. The force-velocity curve is perfectly exponential in this case and zero velocity is reached only in the F → ∞ limit.
B. Variation of the shape of the barrier with load
We have seen above how the barrier fluctuations affect the growth of the filament. The barrier properties are also altered in this process. As the load increases, the height profile of the barrier shows larger variation across the system. We characterize it by measuring the scaling of the average height with distance from the binding site: h(r) − h(0) ∼ r^α, where h(r) is the height of a site at a distance r from the binding site. In Fig. 4 we plot α as a function of the external force, which shows that for small force α increases slowly, around the stalling force there is a sharp increase, and finally for very large force α saturates to unity. Note that a large value of α indicates the presence of large hills and valleys in the system. α = 1 corresponds to a phase separation of upslope and downslope bonds in the system, which gives rise to one single large hill, the highest point being the binding site. This situation is similar to the case of an elastic membrane when the membrane tension is large and the membrane is stretched.
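The exponent α can be extracted from a measured height profile by a straight-line fit in log-log coordinates; the snippet below (ours) does this for a synthetic profile with a known exponent, standing in for simulation data.

import numpy as np

r = np.arange(1, 129)                           # distance from the binding site
h = 0.4 * r ** 0.7                              # synthetic h(r) - h(0) with alpha = 0.7
h += 0.02 * np.random.default_rng(1).standard_normal(r.size)

alpha, _ = np.polyfit(np.log(r), np.log(np.clip(h, 1e-9, None)), 1)
print("fitted alpha =", round(alpha, 3))        # close to the input value 0.7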
IV. RESULTS FOR MULTIPLE FILAMENTS
For N filaments in the system, we mainly consider the regime where the ratio N/L is small. We assume the binding sites are uniformly placed on the lattice, at a distance L/N from each other. Within the segment between two successive binding sites, the same considerations as in the single-filament case apply. We assume these segments are independent and apply our single-filament results to each segment.
To start with, we consider the velocity of the barrier due to its global movement, V_1 = p_0 N d U_0 exp(−βFd). As before, p_0 is the probability to find a filament in contact with the barrier and p_0 N is the average number of bound filaments in the system. Here, we have neglected any correlation between the binding sites. To calculate p_0, we write down master equations for the average number N_i of filaments at a distance i from the corresponding binding sites. The steady state solutions of these equations can be obtained recursively for different values of i (see Appendix C for details). For N filaments we have
p_0 = \frac{1 - W_0/U_0}{1 + (N-1)\exp(-\beta F d)}.   (7)
For large F, the contact probability becomes the same as in the single-filament case. For small F, the contact probability is approximately 1/N times the single-filament value, indicating that for small F at most one filament is in contact with the barrier. For the local movement of the barrier, we need to calculate the probability to find hills and valleys. As discussed above, for each segment between two successive binding sites, we use our single-filament results for p_v(i) and p_h(i) (with the modification that i in this case varies from 0 to (L/N − 1)). The velocity due to local fluctuations then becomes
V_2 = \frac{N d U_0}{L}\left[\left\{ p_v(0)(1+p_0) + \sum_{i=1}^{L/N-1} p_v(i)\right\} e^{-\beta F d/L} - (1-p_0)\,p_h(0) - \sum_{i=1}^{L/N-1} p_h(i)\right]   (8)
The total velocity to leading order in 1/L and N/L becomes
V(F) = \frac{d(U_0 - W_0)\, N e^{-\beta F d}}{1 + (N-1)\, e^{-\beta F d}} + \frac{d U_0 N}{L}\left[\left\{ p_v(0)(1+p_0) - p_h(0)(1-p_0)\right\} + \sum_{i=1}^{L/N-1}\left\{ p_v(i)\left(1 - \frac{\beta F d}{L}\right) - p_h(i)\right\}\right]   (9)
The stall force can be obtained by solving the above transcendental equation graphically for V (F ) = 0 and we compare the analytical stall force with our simulation results in Fig. 5A inset. We find that the stall force is independent of L in this case and scales with N , which can be easily seen from Eq. 9. Since the value of the stall force is rather large in this case, one can neglect global movement of the barrier close to the stalling regime. In addition, p 0 ≈ (1 − W 0 /U 0 ) for large force, and (p v (i) − p h (i)) is of order N/L. Using these in Eq. 9 it directly follows that the stall force for N filaments is independent of L and scales as N . We also investigate the effect of the time-scale of the barrier dynamics on the force-velocity dependence (Fig. 5B) and we find qualitatively the same effect as in N = 1 case.
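The limits discussed above are easy to check numerically from Eq. (7); the short script below (ours) prints the contact probability and the mean number of bound filaments for a few values of N and F:

import numpy as np

U0, W0, beta_d = 2.784, 1.4, 0.65               # s^-1, s^-1, 1/pN
for N in (4, 16, 32):
    for F in (0.0, 2.0, 10.0):                  # load in pN
        p0 = (1 - W0 / U0) / (1 + (N - 1) * np.exp(-beta_d * F))   # Eq. (7)
        print(f"N={N:2d}  F={F:5.1f} pN  p0={p0:.3f}  bound filaments ~ {N * p0:.2f}")

For F = 0 the mean number of bound filaments stays close to one, while for large F it approaches N(1 − W_0/U_0), consistent with the two limits noted above.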
V. CONCLUSIONS
In this paper, we have studied force generation by a set of parallel filaments polymerizing against a barrier. A similar question has been addressed in many recent works where the barrier was modeled as a rigid wall, which may have a motion like a thermal ratchet [14,15,30,31], or may be a passive obstacle which can move only when pushed by the filaments [18,20,23,[32][33][34]. In this paper, we have considered a barrier with thermal fluctuations but, instead of modeling it as a rigid wall, we allow for its shape fluctuations. In [35] a similar aspect was studied where the barrier was modelled by a one dimensional Edwards-Wilkinson type membrane under tension, which was being locally pushed by a set of growing filaments. The uncorrelated drive from the filaments gives rise to a KPZ type behavior in the correlated height fluctuations of the membrane, but this is associated with a very slow crossover. Interestingly, the steady-state fluctuations of the driven membrane show a non-monotonic behavior with the driving rate, where the strongly driven and weakly driven regimes are separated by a minimum in the width of the membrane profile. Although the filaments only impart a local drive to the membrane, and no global movement of the membrane is considered in [35], the velocity still shows an exponential dependence on the membrane tension, whereas in our model the exponential dependence is caused by the global movement and the local fluctuations generate a velocity that decreases roughly linearly with the external load.
One interesting result obtained in our system is the dependence of the qualitative shape of the F -V curve on the relative time-scale between the filament polymerization and barrier fluctuation. For slow barrier dynamics, the curve has a convex shape and V shows an exponential decay for small and moderate F . But for fast barrier dynamics when the local fluctuations become more important, there is significant deviation from exponential dependence. A similar effect was reported in [21] for a hybrid mesoscopic model that combines the microscopic dynamics of semi-flexible actin filaments and the viscous retrograde flow of actin network modeled as a macroscopic gel. It was shown that the force-velocity curve can be both convex and concave, depending on the characteristic time-scale of recoil of the gel-like network. It is remarkable that our simple lattice gas model can reproduce this same effect, which underlines the importance of the relative time-scale of obstacle and filament dynamics on the force generation mechanism.
Throughout this paper, we have considered the case δ = d, when the local movements of the barrier occur in steps whose size is equal to that of a monomer. We have verified (data not shown here) that many of our qualitative conclusions remain valid for δ ≪ d. In other words, even when the shape fluctuations of the barrier occur over much smaller length scales, their effect cannot be ignored. We find that the stall force continues to show dependence on the barrier properties. The relative time-scale between the filament and barrier dynamics affects the F − V curve in the same way. However, the quantitative value of the stall force increases as smaller δ values are considered.
Finally, our simple model shows that a non-rigid obstacle can produce remarkable effects on force generation of parallel actin filaments. Our results underline the importance of the local shape distortions of an obstacle and indicate that more research with detailed modeling of this aspect is required. Many of our conclusions are generic and can be expected to remain valid in systems where different descriptions of a non-rigid obstacle are used. This also opens up the possibility of observing some of these effects in experiment. For example, the change of shape of the barrier with external load can be monitored in an experiment and our prediction that the height variation across the barrier increases with load, can be explicitly verified. The key feature of a fluctuating barrier is that one component of velocity comes from the local fluctuations and a direct measurement of this component will surely give insights into the effects of barrier fluctuations. Our model shows that for multiple filaments close to stalling regime, velocity is dominated by these local movements and we also predict the scaling behavior of this velocity with filament density and barrier size. It would be interesting to verify these predictions in experiments, which would not only shed light on the qualitative nature of the local fluctuations but would also provide insights about their quantitative behavior.
VI. ACKNOWLEDGEMENTS
The computational facility used in this work was provided through Thematic Unit of Excellence on Computational Materials Science, funded by Nanomission, Department of Science and Technology, India.
Appendix A: Calculation of p_v(i) and p_h(i) for a single filament
The shape of the barrier changes due to transitions between local hills and valleys. The probability to find a hill at a site s located at a distance i from the binding site is p_h(i) and it can be written as ρ_i (1 − ρ_{i+1}), where ρ_i is the probability that the bond preceding the site s has +π/4 orientation and (1 − ρ_{i+1}) is the probability that the bond immediately after the site s has −π/4 orientation. Here, we have used mean-field theory and neglected correlations between the bonds. The probability to find a valley at site s can similarly be written as (1 − ρ_i) ρ_{i+1}. The transition rate from a hill to a valley is R_− and the reverse process occurs with rate R_+. For i ≠ 0, R_+/R_− = exp(−βFd/L). However, when i = 0, or, in other words, when the site s is the binding site itself, then although the valley to hill transition is not affected, the reverse transition can take place only when the filament is not in contact with the binding site. We therefore make the simplifying assumption that the effect of the filament can be included by merely rescaling the hill to valley transition rate at the binding site by the probability (1 − p_0) that the filament is not in contact. In section III we calculated the contact probability p_0 = 1 − W_0/U_0 ≃ 1/2. The master equations describing the time-evolution of ρ_i can then be written as
\frac{d\rho_i}{dt} = (1-\rho_i)\left(R_-\,\rho_{i-1} + R_+\,\rho_{i+1}\right) - \rho_i\left[R_-(1-\rho_{i+1}) + R_+(1-\rho_{i-1})\right], \quad \text{for } 2 \le i \le L-1   (A-1)
and at the binding site,
\frac{d\rho_1}{dt} = (1-\rho_1)\left[R_-(1-p_0)\,\rho_L + R_+\,\rho_2\right] - \rho_1\left[R_-(1-\rho_2) + R_+(1-\rho_L)\right],   (A-2)
where we have applied periodic boundary condition, which also gives
\frac{d\rho_L}{dt} = (1-\rho_L)\left(R_-\,\rho_{L-1} + R_+\,\rho_1\right) - \rho_L\left[R_-(1-\rho_1)(1-p_0) + R_+(1-\rho_{L-1})\right].   (A-3)
We solve the above equations in the steady state, when the left-hand sides vanish. To leading order in 1/L, we find ρ_i = a + b i/L, where a and b are related via the condition Σ_{i=1}^{L} ρ_i = L/2 and b satisfies the quadratic equation
\left[\frac{\beta F d}{2L} - \frac{p_0}{4}\left(1 - \frac{2}{L}\right)\right] b^2 + \left[1 - \frac{\beta F d}{4L} - \frac{p_0}{2}\left(1 - \frac{1}{L}\right)\right] b + \frac{1}{4}\left(\frac{\beta F d}{L} - p_0\right) = 0,   (A-4)
one of whose roots can be discarded using the condition that 0 ≤ ρ_i ≤ 1 for all i. For a given F, therefore, ρ_i varies linearly with the distance from the binding site with a gradient of order 1/L. For F = 0, we have a = (√2 − 1) and b = (3 − 2√2). For 0 ≤ F ≤ F_s, the range of variation of a and b is rather small and occurs at the third or higher decimal places. Therefore, ρ_i does not change significantly with F. Our simulation data in Fig. A-1A show similar qualitative behavior, although close to the binding site there is deviation of ρ_i from linearity. The quantitative values of a and b, however, do not match the simulations. We attribute this mismatch to the mean-field theoretic assumptions used in our calculation.
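For reference, the quadratic (A-4) can be solved numerically as in the short script below (ours); the admissible root is the one that keeps ρ_i in [0, 1], and for F = 0 and large L it reduces to the value b = 3 − 2√2 quoted above.

import numpy as np

L, beta_d, p0 = 256, 0.65, 0.5                   # sites, beta*d (1/pN), contact probability

def slope_b(F):
    x  = beta_d * F
    c2 = x / (2 * L) - (p0 / 4) * (1 - 2 / L)    # coefficients of Eq. (A-4)
    c1 = 1 - x / (4 * L) - (p0 / 2) * (1 - 1 / L)
    c0 = 0.25 * (x / L - p0)
    roots = np.roots([c2, c1, c0])
    return float(np.real(roots[np.argmin(np.abs(roots))]))   # the root keeping rho in [0, 1]

b = slope_b(0.0)
a = 0.5 - b * (L + 1) / (2 * L)                  # from sum_i rho_i = L/2
print(b, 3 - 2 * np.sqrt(2), a, np.sqrt(2) - 1)  # nearly equal pairs for large L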
We calculate p_v(i) and p_h(i) from ρ_i and compare with simulation in Fig. A-1B. Notice that from our analytical expression for ρ_i, it follows immediately that (p_v(i) − p_h(i)) is independent of i and ∼ b/L. This has an important consequence for our calculation of V_2 in section III. Moreover, the probability that the filament is in contact with a valley is given by p_v(0) p_0 and our numerical results in Fig. A-1B show that this probability is rather small.
Appendix B: Variation of contact probability for a single filament with load for fast and slow barrier dynamics

Appendix C: Calculation of contact probability for multiple filaments

Let N_i be the average number of filaments at a distance i from the respective binding sites. By definition, N_0 is the average number of bound filaments and the contact probability is p_0 = N_0/N. The time-evolution equations for N_i can be written as
\frac{dN_0}{dt} = U_0 N_1 - \left\{(N_0 - 1)\, U_0 e^{-\beta F d} + W_0\right\} N_0,   (C-1)
\frac{dN_1}{dt} = \left\{(N_0 - 1)\, U_0 e^{-\beta F d} + W_0\right\} N_0 + U_0 N_2 - \left(N_0 U_0 e^{-\beta F d} + W_0 + U_0\right) N_1,   (C-2)
\frac{dN_i}{dt} = \left(N_0 U_0 e^{-\beta F d} + W_0\right) N_{i-1} + U_0 N_{i+1} - \left(N_0 U_0 e^{-\beta F d} + W_0 + U_0\right) N_i \quad \text{for } i \ge 2.   (C-3)
Here, we have assumed that the distance i between the filament tip and the binding site can change only due to the polymerization and depolymerization dynamics and the global movement of the whole barrier caused by polymerization of bound filaments. We have neglected local height fluctuations occurring at the binding sites. As we show below, this approximation works reasonably well as long as the filament density N/L is small and the time-scale of barrier fluctuation is comparable to, or slower than, the filament dynamics. For very fast motion of the barrier, the height fluctuations at the binding sites become more frequent and this assumption breaks down.
Solving Eqs. C-1, C-2 and C-3 in the steady state, we obtain the recursion relation N_1 = N_0\left(N_0 U_0 e^{-\beta F d} + W_0 - U_0 e^{-\beta F d}\right)/U_0 and N_{i+1} = \left(N_0 U_0 e^{-\beta F d} + W_0\right) N_i / U_0 for i \ge 1. Using the normalization relation \sum_i N_i = N we get
N_0 = \frac{N (U_0 - W_0)}{U_0 - U_0 e^{-\beta F d} + N U_0 e^{-\beta F d}}   (C-6)
and the contact probability has the form p_0 = \frac{U_0 - W_0}{U_0 + (N-1)\, U_0 e^{-\beta F d}}. In Fig. C-1 we compare this result with simulation and find reasonable agreement.
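A quick numerical cross-check (ours): dividing the N_0 of Eq. (C-6) by N reproduces the contact probability of Eq. (7), as it should.

import numpy as np

U0, W0, beta_d = 2.784, 1.4, 0.65
for N, F in [(8, 1.0), (32, 4.0)]:               # filaments, load in pN
    e  = np.exp(-beta_d * F)
    N0 = N * (U0 - W0) / (U0 - U0 * e + N * U0 * e)          # Eq. (C-6)
    p0 = (1 - W0 / U0) / (1 + (N - 1) * e)                   # Eq. (7)
    print(N, F, round(N0 / N, 4), round(p0, 4))              # last two columns agree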
FIG. 1: Schematic representation of our model. (A): Polymerization of a bound filament by causing a local change in barrier height with rate U_0 e^{−βFd/L}. (B): A bound filament polymerizes by causing a global movement of the whole barrier with rate U_0 e^{−βFd}. (C): A free filament polymerizes and depolymerizes with rates U_0 and W_0, respectively. Since these processes do not involve any barrier motion, these rates are independent of F. (D): Thermal fluctuation of the barrier: a local valley can flip to a hill with rate R_+ and the reverse process occurs with rate R_−. We use local detailed balance, R_+/R_− = exp(−βFd/L), except at the binding sites, where a hill to valley transition may be blocked due to the presence of a filament.
FIG. 2: Force-velocity characteristic and stall force for a single filament. (A): The force-velocity curve has a convex shape. The inset shows the exponential decay of the barrier velocity for small and intermediate F, when the global motion of the barrier dominates; close to stalling the local fluctuations become important. We have used L = 512 here. (B): Stall force increases with the barrier size L. In both panels we have used S/L = 1. The free filament depolymerization rate is W_0 = 1.4 s^{−1} [1,29] and the polymerization rate U_0 is proportional to the free monomer concentration with a proportionality constant k_0 = 11.6 µM^{−1} s^{−1} [1,29]. We have used a monomer concentration C = 0.24 µM, which gives U_0 = 2.784 s^{−1}. The monomer size is d = 2.7 nm [1,23]. At room temperature the parameter βd = 0.65 pN^{−1}. Discrete points show simulation data and continuous lines show analytical results.
FIG. 3: Force-velocity characteristic for a single filament depends on the relative time-scale between the filament and the barrier dynamics. (A): Velocity of the barrier vs scaled force for different values of S/L. For large S/L, the convex nature of the force-velocity characteristic is lost. As S/L increases, the local fluctuations of the barrier become more important and even for small F the barrier velocity is not governed by the global movement alone, and hence V does not decay exponentially anymore. Here, we have used L = 64. (B): Stall force decreases as a function of S/L. Since local movements of the barrier become more important for large S/L, the balance between global and local movements is reached at a smaller force. Note however that the x-axis is plotted in a log-scale, indicating a weak dependence of the stall force on the time-scale. Here we have used L = 256. The other parameters are the same as in Fig. 2.
FIG. 4: Variation of α as a function of external load. Close to the stalling force, α shows a sharp increase. Here, we have used S/L = 1 and L = 256 (red triangles) and 128 (blue circles). Other simulation parameters are the same as in Fig. 2.
FIG. 5: Force-velocity characteristic for multiple filaments. (A): Velocity shows a very slow decay for large F, when the global movement can be neglected and V can be assumed to be governed by local fluctuations alone. Here, we have used L = 512 and N = 32. The inset shows the stall force as a function of N for two different L values. We find the stall force scales linearly with N and remains independent of L. The continuous lines show analytical results. (B): Dependence of the force-velocity characteristic on the time-scale of the barrier dynamics. In this case we find the same qualitative effect as in the single filament case. Here, we have used N = 16 and L = 128.
FIG. A-1: Average shape of the barrier for a single filament. Discrete points show simulation results and continuous lines show analytical predictions. (A): Probability ρ_i to find an upslope bond as a function of the scaled distance i/L from the binding site. ρ_i = 1/2 for i = L/2 and for larger i we have ρ_i = 1 − ρ_{i−L/2}. The open symbols correspond to F = 0 and the closed symbols correspond to F = 4 pN. Symbols * and • are for L = 128 and × and ✷ are for L = 256. These data show that, except close to the binding site, ρ_i increases linearly with i with a gradient ∼ 1/L. We also find that ρ_i remains almost the same for these F values. The continuous lines are analytical predictions, where the green solid line is for F = 0 and the blue dashed line is for F = 4 pN. (B): Probability p_v(i) to find a valley at a distance i from the binding site. For i = 0 the probability is substantially smaller compared to the rest of the system, which means it is rather unlikely to find a valley at the binding site. The symbols * and ∆ represent F = 0 pN and 4 pN, respectively. We have used L = 512 here. (C) and (D): [p_v(i) − p_h(i)] shows a sharp jump at i = 0 and then remains constant at a value that scales as 1/L. The open symbols correspond to F = 0 and the closed symbols correspond to F = 4 pN. Symbols * and • are for L = 256 and × and ✷ are for L = 512.
FIG. B-1: Contact probability p_0 as a function of F for a single filament. Our analytical calculation yields p_0 = (1 − W_0/U_0) ≃ 0.5. For slow barrier dynamics, we find reasonable agreement, but for fast barrier dynamics our analytical prediction does not remain valid anymore and p_0 increases with F. The simulation parameters are as in Fig. 2.
FIG. C-1: Average number of bound filaments N_0 as a function of force F. For slow barrier dynamics, our analytical prediction in Eq. C-6 agrees well with numerics. But as the barrier dynamics becomes faster, deviations are observed. Here we have used L = 256, N = 32. Other simulation parameters are the same as in Fig. 2.
[1] J. Howard, Mechanics of motor proteins and the cytoskeleton, Sunderland, MA: Sinauer Associates (2001).
[2] T. D. Pollard and J. A. Cooper, Actin, a central player in cell shape and movement, Science 326, 1208 (2009).
[3] L. Blanchoin, R. B. Paterski, C. Sykes and J. Plastino, Actin dynamics, architecture and mechanics in cell motility, Physiol. Rev. 94, 235 (2014).
[4] P. Friedl and D. Gilmour, Collective cell migration in morphogenesis, regeneration and cancer, Nat. Rev. Mol. Cell Biol. 10, 445 (2009).
[5] Y. Marcy, J. Prost, M. F. Carlier and C. Sykes, Forces generated during actin-based propulsion: A direct measurement by micromanipulation, Proc. Natl. Acad. Sci. U.S.A. 101, 5992 (2004).
[6] C. Brangbour, O. du Roure, E. Helfer, D. Démoulin, A. Mazurier, M. Fermigier, M. F. Carlier, J. Bibette and J. Baudry, Force-velocity measurements of a few growing actin filaments, PLoS Biol. 9, e1000613 (2011).
[7] D. Démoulin, M. F. Carlier, J. Bibette and J. Baudry, Power transduction of actin filaments ratcheting in vitro against a load, Proc. Natl. Acad. Sci. U.S.A. 111, 17845 (2014).
[8] S. H. Parekh, O. Chaudhuri, J. A. Theriot and D. A. Fletcher, Loading history determines the velocity of actin-network growth, Nat. Cell Biol. 7, 1219 (2005).
[9] M. Prass, K. Jacobson, A. Mogilner and M. Radmacher, Direct measurement of the lamellipodial protrusive force in a migrating cell, J. Cell Biol. 174, 767 (2006).
[10] J. Zimmermann, C. Brunner, M. Enculescu, M. Goegler, A. Ehrlicher, J. Käs and M. Falcke, Actin filament elasticity and retrograde flow shape the force-velocity relation of motile cells, Biophys. J. 102, 287 (2012).
[11] P. A. Giardini, D. A. Fletcher and J. A. Theriot, Compression forces generated by actin comet tails on lipid vesicles, Proc. Natl. Acad. Sci. U.S.A. 100, 6493 (2003).
[12] M. J. Footer, J. W. J. Kerssemakers, J. A. Theriot and M. Dogterom, Direct measurement of force generation by actin filament polymerization using an optical trap, Proc. Natl. Acad. Sci. U.S.A. 104, 2181 (2007).
[13] D. R. Kovar and T. D. Pollard, Insertional assembly of actin filament barbed ends in association with formins produces piconewton forces, Proc. Natl. Acad. Sci. U.S.A. 101, 14725 (2004).
[14] C. S. Peskin, G. M. Odell and G. F. Oster, Cellular motions and thermal fluctuations: the Brownian ratchet, Biophys. J. 65, 316 (1993).
[15] A. E. Carlsson, Force-velocity relation for growing biopolymers, Phys. Rev. E 62, 7082 (2000).
[16] N. J. Burroughs and D. Marenduzzo, Growth of a semi-flexible polymer close to a fluctuating obstacle: application to cytoskeletal actin fibres and testing of ratchet models, J. Phys.: Condens. Matter 18, S357 (2006).
[17] T. E. Schaus and G. G. Borisy, Performance of a population of independent filaments in lamellipodial protrusion, Biophys. J. 95, 1393 (2008).
[18] K. Tsekouras, D. Lacoste, K. Mallick and J. F. Joanny, Condensation of actin filaments pushing against a barrier, New J. Phys. 13, 103032 (2011).
[19] J. Krawczyk and J. Kierfeld, Stall force of polymerizing microtubules and filament bundles, Europhys. Lett. 93, 28006 (2011).
[20] D. Das, D. Das and R. Padinhateeri, Collective force generated by multiple biofilaments can exceed the sum of forces due to individual ones, New J. Phys. 16, 063032 (2014).
[21] J. Zhu and A. Mogilner, Mesoscopic model of actin-based propulsion, PLoS Comp. Biol. 8, e1002764 (2012).
[22] R. Wang and A. E. Carlsson, Load sharing in the growth of bundled biopolymers, New J. Phys. 16, 113047 (2014).
[23] D. K. Hansda, S. Sen and R. Padinhateeri, Branching influences force-velocity curve and length fluctuations in actin networks, Phys. Rev. E 90, 062718 (2014).
[24] N. S. Gov and A. Gopinathan, Dynamics of membranes driven by actin polymerization, Biophys. J. 90, 454 (2006).
[25] E. Atilgan, D. Wirtz and S. X. Sun, Mechanics and dynamics of actin-driven thin membrane protrusions, Biophys. J. 90, 65 (2006).
[26] A. P. Liu, D. L. Richmond, L. Maibaum, S. Pronk, P. L. Geissler and D. A. Fletcher, Membrane-induced bundling of actin filaments, Nat. Phys. 4, 789 (2008).
[27] M. Kardar, G. Parisi and Y.-C. Zhang, Dynamic scaling of growing interfaces, Phys. Rev. Lett. 56, 889 (1986).
[28] S. F. Edwards and D. R. Wilkinson, The surface statistics of a granular aggregate, Proc. R. Soc. London 381, 17 (1982).
[29] T. D. Pollard, Rate constants for the reactions of ATP- and ADP-actin with the ends of actin filaments, J. Cell Biol. 103, 2747 (1986).
[30] A. Mogilner and G. Oster, The physics of lamellipodial protrusion, Eur. Biophys. J. 25, 47 (1996).
[31] A. Mogilner and G. Oster, Cell motility driven by actin polymerization, Biophys. J. 71, 3030 (1996).
[32] D. Das, D. Das and R. Padinhateeri, Force induced dynamical properties of multiple cytoskeletal filaments are distinct from that of single filament, PLoS One 9(12), e114014 (2014).
[33] A. Mogilner and G. Oster, The polymerizing ratchet model explains the force velocity relation for growing microtubule, Eur. Biophys. J. 28, 235 (1999).
[34] A. Mogilner and G. F. Oster, Force generation by actin polymerization II: the elastic ratchet and tethered filaments, Biophys. J. 84, 1591 (2003).
[35] S. L. Narasimhan and A. Baumgaertner, Dynamics of a driven surface, J. Chem. Phys. 133, 034702 (2010).
| []
|
[
"Deep learning based fence segmentation and removal from an image using a video sequence",
"Deep learning based fence segmentation and removal from an image using a video sequence"
]
| [
"Sankaraganesh Jonna \nDepartment of Computer Science and Engineering\n\n",
"Krishna K Nakka \nDepartment of Electrical Engineering\n\n\nIndian Institute of Technology Kharagpur\nIndia\n",
"Rajiv R Sahay \nDepartment of Electrical Engineering\n\n\nIndian Institute of Technology Kharagpur\nIndia\n"
]
| [
"Department of Computer Science and Engineering\n",
"Department of Electrical Engineering\n",
"Indian Institute of Technology Kharagpur\nIndia",
"Department of Electrical Engineering\n",
"Indian Institute of Technology Kharagpur\nIndia"
]
| []
| Conventional approaches to image de-fencing use multiple adjacent frames for segmentation of fences in the reference image and are limited to restoring images of static scenes only. In this paper, we propose a de-fencing algorithm for images of dynamic scenes using an occlusion-aware optical flow method. We divide the problem of image de-fencing into the tasks of automated fence segmentation from a single image, motion estimation under known occlusions and fusion of data from multiple frames of a captured video of the scene. Specifically, we use a pre-trained convolutional neural network to segment fence pixels from a single image. The knowledge of spatial locations of fences is used to subsequently estimate optical flow in the occluded frames of the video for the final data fusion step. We cast the fence removal problem in an optimization framework by modeling the formation of the degraded observations. The inverse problem is solved using the fast iterative shrinkage thresholding algorithm (FISTA). Experimental results show the effectiveness of the proposed algorithm. | 10.1007/978-3-319-49409-8_68 | [
"https://arxiv.org/pdf/1609.07727v2.pdf"
]
| 6,064,465 | 1609.07727 | b0d2850ed17a22dbe9745e00d286f19d2fa82cd9 |
Deep learning based fence segmentation and removal from an image using a video sequence
Sankaraganesh Jonna
Department of Computer Science and Engineering
Krishna K Nakka
Department of Electrical Engineering
Indian Institute of Technology Kharagpur
India
Rajiv R Sahay
Department of Electrical Engineering
Indian Institute of Technology Kharagpur
India
Deep learning based fence segmentation and removal from an image using a video sequence
Image inpainting · de-fencing · deep learning · convolutional neural networks · optical flow
Conventional approaches to image de-fencing use multiple adjacent frames for segmentation of fences in the reference image and are limited to restoring images of static scenes only. In this paper, we propose a de-fencing algorithm for images of dynamic scenes using an occlusion-aware optical flow method. We divide the problem of image de-fencing into the tasks of automated fence segmentation from a single image, motion estimation under known occlusions and fusion of data from multiple frames of a captured video of the scene. Specifically, we use a pre-trained convolutional neural network to segment fence pixels from a single image. The knowledge of spatial locations of fences is used to subsequently estimate optical flow in the occluded frames of the video for the final data fusion step. We cast the fence removal problem in an optimization framework by modeling the formation of the degraded observations. The inverse problem is solved using the fast iterative shrinkage thresholding algorithm (FISTA). Experimental results show the effectiveness of the proposed algorithm.
Introduction
Images containing fences/occlusions occur in several situations such as photographing statues in museums, animals in a zoo etc. Image de-fencing involves the removal of fences or occlusions in images. De-fencing a single photo is strictly an image inpainting problem which uses data in the regions neighbouring fence pixels in the frame for filling-in occlusions. The works of [1,2,3,4] addressed the image inpainting problem wherein a portion of the image which is to be inpainted is specified by a mask manually. As shown in Fig. 1 (a), in the image de-fencing problem it is difficult to manually mark all fence pixels since they are numerous and spread over the entire image. The segmented binary fence mask obtained using the proposed algorithm is shown in Fig. 1 (b). These masks are used in our work to aid in occlusion-aware optical flow computation and background image reconstruction. In Fig. 1 (c), we show the inpainted image corresponding to Fig. 1 (a) obtained using the method of [2]. The de-fenced image obtained using the proposed algorithm is shown in Fig. 1 (d). As can be seen from Fig. 1 (c), image inpainting does not yield satisfactory results when the image contains fine textured regions which have to be filled-in. However, using a video panned across a fenced scene can lead to better results due to availability of additional information in the adjacent frames.
Although, there has been significant progress in the area of lattice detection [5,6] and restoration of fenced images/videos [6,7,8,9,10], segmentation of fence or occlusion from a single image and de-fencing scenes containing dynamic elements are still challenging problems. Most of the existing works assume global motion between the frames and use images of static scene elements only [8,9,10]. Initial work related to image de-fencing has been reported by Liu et al. [7], wherein fence patterns are segmented via spatial regularity and the fence occlusions are filled-in using an inpainting algorithm [2]. Recent attempts for image de-fencing [9,10] use the parallax cue for fence pattern segmentation using multiple frames from a video. However, these works [9,10] constrain the scene elements to be static. Another drawback of [9] is that if the scene does not produce appreciable depth parallax fence segmentation is inaccurate. A very recent image de-fencing algorithm [6] exploits both color and motion cues for automatic fence segmentation from dynamic videos.
The proposed algorithm for image de-fencing uses a video captured by panning a camera relative to the scene and requires the solution of three subproblems. The first task is automatic segmentation of fence pixels in the frames of the captured video. Importantly, unlike existing works [6,7,8,9,10], we propose a machine learning algorithm to segment fences in a single image. We propose to use a pre-trained convolutional neural network (CNN) for fence texel joint detection to generate automatic scribbles which are fed to an image matting [11] technique to obtain the binary fence mask. Note that sample portions of images marked with yellow colored squares shown in Fig. 1 (a) are treated as fence texels in this work. To the best of our knowledge, we are the first to detect fence texels using a pre-trained CNN coupled with an SVM classifier. Secondly, we estimate the pixel correspondence between the reference frame and the additional frames using a modified optical flow algorithm which incorporates the knowledge of the location of occlusions in the observations. It is to be noted that existing optical flow algorithms find the relative shift only between pixels visible in two frames. Accurate registration of the observations is critical in de-fencing the reference image since erroneous pixel matching would lead to incorrect data fusion from additional frames. The basic premise of our work is that image regions occluded by fence pixels in the reference frame are rendered visible in other frames of the captured video. Therefore, we propose an occlusion-aware optical flow method using fence pixels located in the first step of our image de-fencing pipeline to accurately estimate background pixel correspondences even at occluded image regions. Finally, we fuse the information from additional frames in order to uncover the occluded pixels in the reference frame using an optimization framework. Since natural images are sparse, we use the fast iterative shrinkage thresholding algorithm (FISTA) to solve the resulting ill-posed inverse problem assuming the l1 norm of the de-fenced image as the regularization prior.
Prior Work
The problem of image de-fencing was first addressed in [7] by inpainting fence pixels of the input image. The algorithm proposed in [12] used multiple images for de-fencing, which significantly improves the performance due to the availability of occluded image data in additional frames. The work of [12] used a deformable lattice detection method proposed in [5] for fence detection. Unfortunately, the method of [5] is not a robust approach and fails for many real-world images. Khasare et al. [8] proposed an improved multi-frame de-fencing technique by using loopy belief propagation. However, there are two issues with their approach. Firstly, the work in [8] assumed that motion between the frames is global. This assumption is invalid for more complex dynamic scenes where the motion is non-global. Also, the method of [8] used an image matting technique proposed by [11] for fence segmentation which involves significant user interaction. A video de-fencing algorithm [9] proposed a soft fence segmentation method where visual parallax serves as the cue to distinguish fences from the unoccluded pixels. Recently, Xue et al. [10] jointly estimated the foreground masks and obstruction-free images using five frames taken from a video. Apart from the image based techniques, Jonna et al. [13] proposed a multimodal approach for image de-fencing wherein they have extracted the fence masks with the aid of depth maps corresponding to the color images obtained using the Kinect sensor. Very recently, our works [14,15] address the image de-fencing problem. However, the drawback of both the methods [14,15] is that they do not estimate occlusion-aware optical flow for data fusion.
The proposed algorithm for image de-fencing addresses some of the issues with the existing techniques. Firstly, we propose a machine learning algorithm using CNN-SVM for fence segmentation from a single image, unlike existing works [6,9,10], which need a few frames to obtain the fence masks. Importantly, unlike the works of [9,10], the proposed algorithm does not assume that the scene is static, but can handle scenes containing dynamic elements. For this purpose, we propose a modified optical flow algorithm for estimation of pixel correspondence between the reference frame and additional frames after segmenting occlusions.
Methodology
We relate the occluded image to the original de-fenced image using a degradation model as follows,
O_m y_m = y_m^{obs} = O_m [F_m x + n_m]   (1)
where y m are observations containing fences obtained from the captured video, O m are the binary fence masks, F m models the relative motion between frames, x is the de-fenced image and n m is Gaussian noise. As described in section 1, the problem of image de-fencing was divided into three sub-problems, which we elaborate upon in the following sub-sections.
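The observation model of Eq. (1) can be mimicked on toy data as in the NumPy sketch below (ours); here the warp F_m is reduced to a global integer shift, the fence mask is a synthetic periodic pattern, and O_m is taken to be 1 at non-fence pixels. Real frames require the dense, occlusion-aware flow of Section 3.2.

import numpy as np

rng   = np.random.default_rng(0)
x     = rng.random((64, 64))                     # latent fence-free image (toy data)
fence = np.zeros_like(x); fence[::8, :] = 1.0    # synthetic periodic fence mask (1 = fence)
O_m   = 1.0 - fence                              # keeps only the non-occluded pixels

y_m   = np.roll(x, (0, 3), axis=(0, 1)) + 0.01 * rng.standard_normal(x.shape)  # F_m x + n_m
y_obs = O_m * y_m                                # Eq. (1): fence pixels carry no background data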
Pre-trained CNN-SVM for fence texel joint detection
The important property of most outdoor fences is their symmetry about the fence texel joints. Referring to Fig. 1 (a), we observe that fence texels appear repetitively throughout the entire image. Convolutional neural nets (CNN), originally proposed by [16], can be effectively trained to recognize objects directly from images with robustness to scale, rotation, translation, noise etc. Recently, Krizhevsky et al. [17] proved the utility of CNNs for object detection and classification in the ILSVRC challenge [18]. Since real-world fence texels exhibit variations in color, shape, noise, etc., we are motivated to use CNNs for segmenting these patterns robustly. Convolutional neural networks belong to a class of deep learning techniques which operate directly on an input image extracting features using a cascade of convolutional, activation and pooling layers to finally predict the image category.
The key layer in CNN is the convolutional layer whose filter kernels are learnt automatically via backpropagation. The commonly used non-linear activation functions are sigmoid, tanh, rectified linear unit (ReLU) and maxout [19] etc. The pooling layers sub-sample the input data. Overfitting occurs in neural networks when the training data is limited. Recently, a technique called Dropout [20] has been proposed which can improve the generalization capability of CNNs by randomly dropping some of the neurons.
However, since CNNs use supervised learning they need huge labeled datasets and long training times. A possible solution to this problem is to use transfer learning [21,22], wherein pre-trained models are used to initialize the weights and fine-tune the network on a different dataset. One can also preserve the pre-trained filter kernels and re-train the classifier part only. In this work, we used a CNN pre-trained on ImageNet [18] as a feature extractor by excluding the softmax layer. The architecture of the CNN in Fig. 2 trained on ImageNet contains five convolutional layers followed by three fully-connected layers and a softmax classifier. Max-pooling layers follow the first, second and fifth convolutional layers.
In Fig. 3 (a), we show the 96 filter kernels of dimensions 11 × 11 × 3 learned by the first convolutional layer on input images. In this work, we propose to use the CNN as a generic feature extractor followed by a support vector machine classifier (CNN-SVM). A given RGB input image is resized to 224 × 224 × 3 and fed to the proposed CNN-SVM, and a feature vector of size 4096 is extracted from the seventh fully-connected layer. An SVM classifier has been trained to detect fence texels using these features of dimension 4096 extracted by the pre-trained CNN from a dataset of 20,000 fence texel joints and 40,000 non-fence texel sub-images. In Figs. 3 (b) and (c), we show samples of fence texels and non-fence texels, respectively. During the testing phase, a sliding window is used to densely scan the test image shown in Fig. 4 (a) from left to right and top to bottom with a stride of 5 pixels. The overall workflow of the proposed fence segmentation algorithm is shown in Fig. 4. Detected fence texels are joined by straight edges as shown in Fig. 4 (b). In Fig. 4 (c) we show the response obtained by the Canny edge detection [23] algorithm after dilating the preliminary fence mask shown in Fig. 4 (b), which is treated as background scribbles. The combination of both foreground and background scribbles is shown in Fig. 4 (d), wherein foreground scribbles are obtained by an erosion operation on the image in Fig. 4 (b). We fed these automatically generated scribbles to the method of [11] and obtain the alpha map in Fig. 4 (e).
Finally, the binary fence mask shown in Fig. 4 (f) is generated by thresholding the alpha map obtained from [11].
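As an illustration of the CNN-SVM idea described above, the sketch below extracts 4096-dimensional features from the penultimate fully-connected layer of a pre-trained network and trains a linear SVM on them. We use torchvision's AlexNet only as a stand-in for the pre-trained ImageNet model, and the variable names train_patches and labels are placeholders, not data shipped with this work.

import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# Pre-trained ImageNet CNN used as a fixed feature extractor (softmax removed).
cnn = models.alexnet(pretrained=True).eval()

def fc7_features(batch):
    """Return 4096-d features from the penultimate fully-connected layer.
    `batch` is an N x 3 x 224 x 224 tensor of RGB patches."""
    with torch.no_grad():
        x = cnn.features(batch)
        x = cnn.avgpool(x)
        x = torch.flatten(x, 1)
        # classifier[:-1] drops the final 1000-way layer, keeping the 4096-d output
        return cnn.classifier[:-1](x)

# train_patches: tensor of fence/non-fence sub-images; labels: 1 = fence texel, 0 = otherwise
# feats = fc7_features(train_patches).numpy()
# svm = LinearSVC().fit(feats, labels)
# At test time, a sliding window (stride 5) scans the image and each window is
# classified with svm.predict(fc7_features(window).numpy()).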
Occlusion aware optical flow
The image alignment problem becomes more complex when real-world videos contain dynamic objects. Handling motion boundaries and occlusions in videos for optical flow computation is still challenging. Internal occlusions due to layered dynamic objects and external occlusions such as fences make the problem tougher. In some practical applications of computer vision, such as view synthesis and image de-fencing, we need to compute the correspondence of all pixels between two images despite occlusions. Many algorithms for estimating optical flow have been proposed in the literature [24,25,26,27]; they are based on modifications of the basic variational framework proposed by Horn et al. [28], addressing its various shortcomings. Recently, significant progress has been made in computing dense optical flow in a robust manner [25,29,30]. State-of-the-art optical flow algorithms [24,25] integrate descriptor matching between two images in a variational framework. It is due to a robust function in the variational framework that the algorithm in [24] can handle small internal occlusions. However, it fails to tackle large external occlusions. The algorithm of [29] computes dense correspondences between images by performing sparse-dense interpolation under contour and motion boundary assumptions. An occlusion-aware optical flow algorithm is proposed by [31], wherein occlusions in images are handled using a three-step procedure. Initially, the method in [31] estimates occlusion-ignorant optical flow. Subsequently, occlusions are computed using this unreliable optical flow. Finally, the algorithm in [31] corrects the optical flow using the estimated occlusions.
The basic cue behind the proposed image de-fencing algorithm is that occluded image data in the reference frame is uncovered in additional frames of the captured video. Relative motion among observations needs to be estimated to fuse the information uncovered in the additional images for filling in occlusions in the reference frame. State-of-the-art optical flow algorithms estimate the flow of visible areas between two images. However, as described above, there are occlusions in images due to depth changes, dynamic scene elements and external hindrances such as fences/barricades. If we apply the conventional optical flow algorithms to register two images containing fence occlusions we encounter two difficulties while aligning corresponding fence and background pixels. Firstly, large motion discontinuities exist at the spatial location of fences due to abrupt depth changes which corrupt the estimated optical flow. Secondly, it is to be noted that the background pixels hidden behind the fence assume the flow of fence pixels instead of their own ground truth motion. Hence, in this work we modify the motion associated with fence pixels to that of surrounding background pixel motion in order to reveal the occluded pixel information in the warped adjacent frame.
In this paper, we re-formulate the optical flow algorithm of [32] to fit our application of image de-fencing. Akin to [32], coarse-to-fine optical flow is estimated using an incremental framework in Gaussian scale-space. Note that we have already obtained the binary fence mask O_m corresponding to the segmented fence pixels in the observation y_m. We insert this mask O_m as an occlusion operator inside the optical flow framework to deal with motion inaccuracies at fence locations. At the fence locations the data cost is assumed to be zero and only the smoothness term in Eq. (3) guides the optical flow estimation. We assume a sparse gradient prior (modeled using the l_1 norm) for both horizontal and vertical velocities. At every scale, the optimized values are up-scaled and used as the initial estimate at the next finer scale.
Let w = [u, v] be the current estimate of the horizontal and vertical flow fields, and let ỹ_r and ỹ_t be the reference and t-th adjacent images, respectively. Under the incremental framework [32,33], one needs to estimate the best increment dw = (du, dv) as follows:

E(du, dv) = arg min_dw ||F_{w+dw} ỹ_t − ỹ_r||_1 + µ ||∇(u + du)||_1 + µ ||∇(v + dv)||_1    (2)

where F_{w+dw} is the warping matrix corresponding to the flow w + dw, ∇ is the gradient operator and µ is the regularization parameter. To use gradient-based methods, we replace the l_1 norm with the differentiable approximation φ(x²) = √(x² + ε²), with ε = 0.001. To robustly estimate optical flow under the known fence occlusions we compute the combined binary mask O = F_{w+dw} O_t || O_r, obtained by a logical OR operation between the reference fence mask and the fence mask back-warped from the t-th frame using the warping matrix F_{w+dw}. To estimate the optical flow increment in the presence of occlusions we disable the data fidelity term at occluded pixels by incorporating O in Eq. (2) as

E(du, dv) = arg min_dw ||O (F_{w+dw} ỹ_t − ỹ_r)||_1 + µ ||∇(u + du)||_1 + µ ||∇(v + dv)||_1    (3)

By a first-order Taylor series expansion,

F_{w+dw} ỹ_t ≈ F_w ỹ_t + Y_x du + Y_y dv    (4)

where Y_x = diag(F_w ỹ_tx), Y_y = diag(F_w ỹ_ty), ỹ_tx = ∂ỹ_t/∂x and ỹ_ty = ∂ỹ_t/∂y. We can write Eq. (3) as

arg min_dw ||O F_w ỹ_t + O Y_x du + O Y_y dv − O ỹ_r||_1 + µ ||∇(u + du)||_1 + µ ||∇(v + dv)||_1    (5)
To estimate the best increments du, dv to the current flow u, v, we set the gradients ∂E/∂du and ∂E/∂dv to zero:
[ Y_x^T O^T W_d O Y_x + µL      Y_x^T O^T W_d O Y_y      ] [du]     [ −µLu − Y_x^T O^T W_d O F_w ỹ_t + Y_x^T O^T W_d O ỹ_r ]
[ Y_y^T O^T W_d O Y_x           Y_y^T O^T W_d O Y_y + µL ] [dv]  =  [ −µLv − Y_y^T O^T W_d O F_w ỹ_t + Y_y^T O^T W_d O ỹ_r ]

where L = D_x^T W_s D_x + D_y^T W_s D_y, W_s = diag(φ′(|∇u|²)) and W_d = diag(φ′(|O F_w ỹ_t − O ỹ_r|²)). Here D_x and D_y are discrete difference operators along the horizontal and vertical directions, respectively. We use the conjugate gradient (CG) algorithm to solve for dw within an iteratively re-weighted least squares (IRLS) framework.
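A minimal NumPy/SciPy sketch of one such IRLS step is given below. It assumes the warped adjacent frame, its spatial derivatives and the combined occlusion mask have already been computed, ignores image-boundary handling in the difference operators, and is only meant to illustrate how the masked linear system is assembled and solved with CG; it is not the full coarse-to-fine implementation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def flow_increment(It_w, Ir, Ix, Iy, O, u, v, mu=0.05, eps=1e-3):
    """One IRLS step for the occlusion-aware increment (du, dv).
    It_w: adjacent frame warped by the current flow; Ir: reference frame;
    Ix, Iy: spatial derivatives of the warped frame; O: combined binary mask
    (0 at fence pixels); u, v: current flow fields. All arrays are H x W."""
    H, W = Ir.shape
    N = H * W
    dphi = lambda s2: 0.5 / np.sqrt(s2 + eps**2)   # phi'(x^2) for phi = sqrt(x^2 + eps^2)

    # Forward-difference operators D_x, D_y (boundary rows kept simple for brevity)
    Dx = sp.diags([-np.ones(N), np.ones(N - 1)], [0, 1], format='csr')
    Dy = sp.diags([-np.ones(N), np.ones(N - W)], [0, W], format='csr')

    # Smoothness weights W_s and the operator L
    gu2 = (Dx @ u.ravel())**2 + (Dy @ u.ravel())**2
    Ws = sp.diags(dphi(gu2))
    L = Dx.T @ Ws @ Dx + Dy.T @ Ws @ Dy

    # Data weights: for a binary diagonal mask, O^T W_d O collapses to one diagonal
    diff = (It_w - Ir).ravel()
    o = O.ravel().astype(float)
    wd = o**2 * dphi((o * diff)**2)
    Yx, Yy = Ix.ravel(), Iy.ravel()

    A = sp.bmat([[sp.diags(Yx * wd * Yx) + mu * L, sp.diags(Yx * wd * Yy)],
                 [sp.diags(Yy * wd * Yx), sp.diags(Yy * wd * Yy) + mu * L]], format='csr')
    b = np.concatenate([-mu * (L @ u.ravel()) - Yx * wd * diff,
                        -mu * (L @ v.ravel()) - Yy * wd * diff])

    dw, _ = cg(A, b, maxiter=200)
    return dw[:N].reshape(H, W), dw[N:].reshape(H, W)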
FISTA Optimization Framework
Once the relative motion between the frames has been estimated we need to fill in the occluded pixels in the reference image using the corresponding uncovered pixels from the additional frames. Reconstructing the de-fenced image x from the occluded observations is an ill-posed inverse problem and therefore prior information on x has to be used to regularize the solution. Since natural images are sparse, we employ the l_1 norm of the de-fenced image as a regularization constraint in the optimization framework as follows,
x̂ = arg min_x Σ_m ||y_m^obs − O_m F_m x||² + λ ||x||_1    (6)
where λ is the regularization parameter.
Since the objective function contains the l_1 norm as a regularization term, it is difficult to solve Eq. (6) with conventional gradient-based algorithms.
Here, we employ a proximal algorithm, the FISTA [34] iterative framework, which can handle non-smooth functions, for image de-fencing. The key step in the FISTA iterative framework is the proximal operator [35], which operates on a combination of the two previous iterates.
Algorithm 1 FISTA image de-fencing
1: Input: λ, α, z_1 = x_0 ∈ R^{M×N}, t_1 = 1
2: repeat
3:    x_k = prox_α(g)(z_k − α ∇f(z_k))
4:    t_{k+1} = (1 + √(1 + 4 t_k²)) / 2
5:    z_{k+1} = x_k + ((t_k − 1)/t_{k+1}) (x_k − x_{k−1})
6:    k ← k + 1
7: until ||x_k − x_{k−1}||_2 ≤ ε
The proximal operator is defined as the solution of the following convex optimization problem [36]:

prox_α(g)(x) = arg min_y { g(y) + (1/2α) ||y − x||² }    (7)
If g(y) is the l_1 norm, then prox_α(g)(x) = max(|x| − λα, 0) sign(x). The gradient of the data-matching cost f is given as follows:
∇f(z) = Σ_m F_m^T O_m^T (O_m F_m z − y_m^obs)    (8)
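A compact NumPy sketch of this scheme is shown below. The warping operators F_m and their transposes are passed in as callables, the step size α and the value of λ are illustrative assumptions, and the loop follows Algorithm 1 with the soft-thresholding proximal operator and the gradient of Eq. (8).

import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_defence(y_obs, masks, warps, warps_T, lam=5e-4, alpha=0.1,
                  n_iter=200, tol=1e-6):
    """Minimize sum_m ||y_m - O_m F_m x||^2 + lam * ||x||_1 with FISTA.
    y_obs: observed frames (flattened arrays); masks: binary masks O_m;
    warps/warps_T: callables applying F_m and F_m^T; alpha: step size."""
    x = np.zeros_like(y_obs[0])
    x_prev, z, t = x.copy(), x.copy(), 1.0
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, O, F, FT in zip(y_obs, masks, warps, warps_T):
            grad += FT(O * (O * F(z) - y))          # gradient of the data term, Eq. (8)
        x = soft_threshold(z - alpha * grad, lam * alpha)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_next) * (x - x_prev)  # combination of two previous iterates
        if np.linalg.norm(x - x_prev) <= tol:
            break
        x_prev, t = x.copy(), t_next
    return x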
Experimental Results
Initially, we report both qualitative and quantitative results obtained using the proposed fence segmentation algorithm on various datasets. Subsequently, we show the impact of accounting for occlusions in the incremental flow framework. Finally, we report image de-fencing results obtained with the FISTA optimization framework. To demonstrate the efficacy of the proposed de-fencing system, we show comparison results with state-of-the-art fence segmentation, and defencing methods in the literature. We used only three frames from each captured video for all the image de-fencing results reported here using the proposed algorithm. For all our experiments, we fixed λ = 0.0005 in Eq. 6. We ran all our experiments on a 3.4 GHz Intel Core i7 processor with 16 GB of RAM.
Fence Segmentation
To validate the proposed fence segmentation algorithm, we have evaluated it on state-of-the-art datasets [9,10,37]. We also show segmentation results on a proposed fenced image dataset consisting of 200 real-world images captured under diverse scenarios and with complex backgrounds. We report quantitative results on the PSU NRT [37] dataset and qualitative results on the [9,10,37] datasets. As discussed in section 3.1, we have extracted features from 20,000 fence and 40,000 non-fence texel images using a pre-trained CNN to train an SVM classifier. The trained classifier is used to detect joint locations in images via a sliding window protocol. We compare the results obtained using a state-of-the-art lattice detection algorithm [5] and the proposed algorithm on all the datasets.
Initially, in Fig. 5 (a) we show a fenced image from the PSU NRT dataset [37]. Fence texels are detected using our pre-trained CNN-SVM approach and are joined by straight edges, as shown in Fig. 5 (f). Note that all fence texels are detected accurately in Fig. 5 (f). In contrast, the method of [5] failed completely to extract the fence pixels as seen in Fig. 5 (k). The output of Fig. 5 (f) is used to generate foreground and background scribbles which are fed to the image matting technique of [11]. The final binary fence mask obtained by thresholding the output of [11] is shown in Fig. 5 (p). Next, we have validated both algorithms on an image taken from a recent dataset [10] shown in Fig. 5 (b). In Fig. 5 (g), we show the fence texels detected using our pre-trained CNN-SVM approach and joined by straight edges. In contrast, the method of [5] failed completely to extract the fence pixels as seen in Fig. 5 (l). The output of Fig. 5 (g) is used to generate scribbles as outlined in section 3.1. These foreground and background scribbles are fed to the image matting technique of [11]. The final binary fence mask obtained by thresholding the output of [11] is shown in Fig. 5 (q). Finally, we perform experiments on images from the proposed fenced image dataset. Sample images taken from the dataset are shown in Figs. 5 (c)-(e). In Figs. 5 (h)-(j), we show the fence segmentations obtained using the proposed pre-trained CNN-SVM algorithm. We observe that the proposed algorithm detected all the fence texel joints accurately. The lattices detected using [5] are shown in Figs. 5 (m)-(o). We can observe that the approach of [5] partially segments the fence pixels in Fig. 5 (m). Note that in Fig. 5 (o) the algorithm of [5] completely failed to segment fence pixels. The final binary fence masks obtained by thresholding the output of [11] are shown in Figs. 5 (r)-(t).
A summary of the quantitative evaluation of the fence texel detection method of [5] and the pre-trained CNN-SVM based proposed algorithm is given in Table 1. The F-measure obtained for [5] on PSU NRT [37] dataset and proposed fenced image datasets are 0.62 and 0.41, respectively. In contrast, F-measure for the proposed method on PSU NRT dataset [37] and our fenced image datasets are 0.97 and 0.94, respectively.
Fig. 5. First column: sample images from NRT [37], [10] and proposed fenced image datasets, respectively. Second column: fence masks generated using the proposed pre-trained CNN-SVM algorithm. Third column: fence detection using [5]. Fourth column: final binary fence masks corresponding to images in the first column, obtained by generating scribbles from the fence detections in the second column, which are fed to the method of [11].
Optical Flow under Known Occlusions
To demonstrate the robustness of proposed optical flow algorithm under known occlusions, we use frames from videos of fenced scenes in [6,9,10]. We show two frames from a video sequence named "football" from [9] in the first column of Fig. 6. The video sequences named "fence1" and "fence4" are taken from the work of [10]. Two frames from each of these videos are shown in second and third columns of Fig. 6, respectively. Video sequences named "lion" and "walking" are taken from [6] and a couple of observations from each of them are depicted in fourth and fifth columns of Fig. 6, respectively. In the third row of Fig. 6, we
show the color-coded optical flows obtained using [24] between the respective images shown in each column of the first and second rows of Fig. 6. Note that the images shown in the third row of Fig. 6 contain regions of erroneously estimated optical flow due to fence occlusions. In contrast, the flows estimated using the proposed algorithm under known fence occlusions are shown in the fifth row of Fig. 6. Note that the optical flows estimated using the proposed method contain no artifacts.

Fig. 6. First and second row: frames taken from videos reported in [6,9,10]. Third row: optical flow computed between the first and second row images using [24]. Fourth row: de-fenced images obtained using the estimated flow shown in the third row. Fifth row: occlusion-aware optical flow obtained using the proposed algorithm.
Image De-fencing
To demonstrate the efficacy of the proposed image de-fencing algorithm, we conducted experiments with several real-world video sequences containing dynamic background objects. In Figs. 7 (a), (d), (g), and (j), we show the images taken from four different video sequences. The fence pixels corresponding to these observations are segmented using the proposed pre-trained CNN-SVM and the approach of [11]. In Figs. 7 (b), (e), (h), and (k), we show the inpainted images obtained using [2], which was the method used for obtaining the de-fenced image after fence segmentation in [6]. Note that we can see several artifacts in the inpainted images obtained using [2]. De-fenced images obtained using the proposed algorithm are shown in Figs. 7 (c), (f), (i), and (l), respectively. We observe that the proposed algorithm has effectively reconstructed image data even for dynamic real-world video sequences. Also, note that for all the results shown in Figs. 7 (c), (f), (i), and (l) we used only three observations from the captured video sequences.

Fig. 7. First column: one frame each taken from challenging real-world videos. Second column: inpainted images obtained using the exemplar-based image inpainting algorithm [2], which was the approach used in [6] for image de-fencing. Third column: de-fenced images obtained using the proposed algorithm corresponding to images in the first column.
Next, we compare the proposed algorithm with recent state-of-the-art methods [6,9,10]. In Fig. 8 (a), we show the de-fenced image obtained using [9]. The corresponding result obtained by the proposed algorithm is shown in Fig. 8 (e). Note that the de-fenced image obtained in [9] is blurred whereas the proposed algorithm generated a sharper image. We show a cropped region from both Figs. 8 (a) and (e) in the last row to confirm our observation. In Figs. 8 (b) and (f), we show the de-fenced results obtained by [10] and the proposed algorithm, respectively. The de-fenced image obtained using the method in [10] is distorted in some places, which is apparent in Fig. 8 (b). In contrast, the fence has been removed completely with hardly any distortion in the result shown in Fig. 8 (f), which has been obtained using our algorithm. Cropped regions from Figs. 8 (b) and (f) are shown in the last row to prove our point. The de-fenced images obtained using a very recent technique [6] are shown in Figs. 8 (c) and (d), respectively. These results contain several artifacts. However, the de-fenced images recovered using the proposed algorithm hardly contain any artifacts, as shown in Figs. 8 (g) and (h). Cropped regions from Figs. 8 (c) and (d) and Figs. 8 (g) and (h) are shown in the last row for comparison purposes. Since we use only three frames from the videos, our method is more computationally efficient than [9,10], which use 5 and 15 frames, respectively.
Conclusions
In this paper, we proposed an automatic image de-fencing system for real-world videos. We divided the problem of image de-fencing into three tasks and proposed an automatic approach for each one of them. We formulated an optimization framework and solved the inverse problem using the fast iterative shrinkage thresholding algorithm (FISTA) assuming l 1 norm of the de-fenced image as the regularization constraint. We have evaluated the proposed algorithm on various datasets and reported both qualitative and quantitative results. The obtained results show the effectiveness of proposed algorithm. As part of future work, we are investigating how to optimally choose the frames from the video for fence removal.
Fig. 1. (a) A frame taken from a video. (b) Segmented binary fence mask obtained using the proposed CNN-SVM algorithm. (c) Inpainted image corresponding to (a) using the method of [2]. (d) De-fenced image corresponding to (a) using the proposed algorithm.
Fig. 2. The architecture of the pre-trained CNN [17].
Fig. 3. (a) 96 learned filter kernels of size 11 × 11 × 3 extracted from the first convolutional layer. (b) Sample fence texel joints. (c) Examples of non-fence texel joints.
Fig. 4. Schematic of fence mask segmentation.
Fig. 8. Comparison with state-of-the-art image/video de-fencing methods [9,10,6] using video sequences from their works. (a) De-fenced image obtained by [9]. (b) Recovered background image using [10]. (c), (d) Inpainted images obtained by [2], which was the method used in [6]. (e)-(h) De-fenced images obtained by the proposed algorithm using the occlusion-aware optical flow shown in the fifth row of Fig. 6. Last row: insets from the images of the first and second rows, respectively, showing the superior reconstruction of the de-fenced image by our algorithm.
Table 1. Quantitative evaluation of fence segmentation

Method               | NRT Database [37]                 | Our Database
                     | Precision  Recall  F-measure      | Precision  Recall  F-measure
Park et al. [5]      | 0.95       0.46    0.62           | 0.94       0.26    0.41
pre-trained CNN-SVM  | 0.96       0.98    0.97           | 0.90       0.98    0.94
Image inpainting. M Bertalmio, G Sapiro, V Caselles, C Ballester, Proc. ACM SIGGRAPH. ACM SIGGRAPHBertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proc. ACM SIGGRAPH. (2000) 417-424
Region filling and object removal by exemplarbased image inpainting. A Criminisi, P Perez, K Toyama, IEEE Trans. Image Process. 139Criminisi, A., Perez, P., Toyama, K.: Region filling and object removal by exemplar- based image inpainting. IEEE Trans. Image Process. 13(9) (2004) 1200-1212
Scene completion using millions of photographs. J Hays, A A Efros, ACM Trans. Graph. 263Hays, J., Efros, A.A.: Scene completion using millions of photographs. ACM Trans. Graph. 26(3) (2007) 1-7
Combined first and second order total variation inpainting using split bregman. K Papafitsoros, C B Schonlieb, B Sengul, Image Processing On Line. 3112136Papafitsoros, K., Schonlieb, C.B., Sengul, B.: Combined first and second order total variation inpainting using split bregman. Image Processing On Line 3 (2013) 112136
Deformed lattice detection in real-world images using mean-shift belief propagation. M Park, K Brocklehurst, R Collins, Y Liu, IEEE Trans. Pattern Anal. Mach. Intell. 31Park, M., Brocklehurst, K., Collins, R., Liu, Y.: Deformed lattice detection in real-world images using mean-shift belief propagation. IEEE Trans. Pattern Anal. Mach. Intell. 31 (2009) 1804-1816
Automatic fence segmentation in videos of dynamic scenes. R Yi, J Wang, P Tan, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR. Yi, R., Wang, J., Tan, P.: Automatic fence segmentation in videos of dynamic scenes. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (June 2016)
Image de-fencing. Y Liu, T Belkina, J Hays, R Lublinerman, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. IEEE Conf. Comput. Vis. Pattern RecognitLiu, Y., Belkina, T., Hays, J., Lublinerman, R.: Image de-fencing. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (2008) 1-8
Seeing through the fence: Image de-fencing using a video sequence. V S Khasare, R R Sahay, M S Kankanhalli, Khasare, V.S., Sahay, R.R., Kankanhalli, M.S.: Seeing through the fence: Image de-fencing using a video sequence. (2013)
Video de-fencing. Y Mu, W Liu, S Yan, IEEE Trans. Circts. Sys. Vid. Tech. 247Mu, Y., Liu, W., Yan, S.: Video de-fencing. IEEE Trans. Circts. Sys. Vid. Tech. 24(7) (2014) 1111-1121
A computational approach for obstruction-free photography. T Xue, M Rubinstein, C Liu, W T Freeman, ACM Trans. Graph. 344Xue, T., Rubinstein, M., Liu, C., Freeman, W.T.: A computational approach for obstruction-free photography. ACM Trans. Graph. 34(4) (2015)
Learning based digital matting. Y Zheng, C Kambhamettu, International Conference on Computer Vision (ICCV). Zheng, Y., Kambhamettu, C.: Learning based digital matting. In: International Conference on Computer Vision (ICCV). (2009)
Image de-fencing revisited. M Park, K Brocklehurst, R T Collins, Y Liu, Park, M., Brocklehurst, K., Collins, R.T., Liu, Y.: Image de-fencing revisited. (2010)
A multimodal approach for image de-fencing and depth inpainting. S Jonna, V S Voleti, R R Sahay, M S Kankanhalli, Proc. Int. Conf. Advances in Pattern Recognition. Int. Conf. Advances in Pattern RecognitionJonna, S., Voleti, V.S., Sahay, R.R., Kankanhalli, M.S.: A multimodal approach for image de-fencing and depth inpainting. In: Proc. Int. Conf. Advances in Pattern Recognition. (2015) 1-6
My camera can see through fences: A deep learning approach for image de-fencing. S Jonna, K K Nakka, R R Sahay, 3rd IAPR Asian Conference on Pattern Recognition (ACPR). Jonna, S., Nakka, K.K., Sahay, R.R.: My camera can see through fences: A deep learning approach for image de-fencing. In: 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). (Nov 2015) 261-265
Detection and removal of fence occlusions in an image using a video of the static/dynamic scene. S Jonna, K K Nakka, V S Khasare, R R Sahay, M S Kankanhalli, J. Opt. Soc. Am. A. 3310Jonna, S., Nakka, K.K., Khasare, V.S., Sahay, R.R., Kankanhalli, M.S.: Detection and removal of fence occlusions in an image using a video of the static/dynamic scene. J. Opt. Soc. Am. A 33(10) (2016) 1917-1930
Gradient-based learning applied to document recognition. Y Lecun, L Bottou, Y Bengio, P Haffner, Proceedings of the IEEE. 8611Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11) (Nov 1998) 2278-2324
Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, Advances in Neural Information Processing Systems 25. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25. (2012) 1097-1105
ImageNet: A Large-Scale Hierarchical Image Database. J Deng, W Dong, R Socher, L J Li, K Li, L Fei-Fei, In: CVPR09.Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large- Scale Hierarchical Image Database. In: CVPR09. (2009)
I J Goodfellow, D Warde-Farley, M Mirza, A Courville, Y Bengio, Maxout networks. In: In ICML. Goodfellow, I.J., Warde-farley, D., Mirza, M., Courville, A., Bengio, Y.: Maxout networks. In: In ICML. (2013)
Dropout: A simple way to prevent neural networks from overfitting. N Srivastava, G Hinton, A Krizhevsky, I Sutskever, R Salakhutdinov, J. Mach. Learn. Res. 151Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1) (January 2014) 1929-1958
Decaf: A deep convolutional activation feature for generic visual recognition. J Donahue, Y Jia, O Vinyals, J Hoffman, N Zhang, E Tzeng, T Darrell, CoRR abs/1310.1531Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: Decaf: A deep convolutional activation feature for generic visual recognition. CoRR abs/1310.1531 (2013)
Matconvnet -convolutional neural networks for MATLAB. A Vedaldi, K Lenc, abs/1412.4564Vedaldi, A., Lenc, K.: Matconvnet -convolutional neural networks for MATLAB. CoRR abs/1412.4564 (2014)
A computational approach to edge detection. J Canny, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI. 86Canny, J.: A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8(6) (Nov 1986) 679-698
Large displacement optical flow: Descriptor matching in variational motion estimation. T Brox, J Malik, IEEE Trans. Pattern Anal. Mach. Intell. 333Brox, T., Malik, J.: Large displacement optical flow: Descriptor matching in vari- ational motion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 33(3) (2011) 500-513
Motion detail preserving optical flow estimation. L Xu, J Jia, Y Matsushita, IEEE Transactions on Pattern Analysis and Machine Intelligence. 349Xu, L., Jia, J., Matsushita, Y.: Motion detail preserving optical flow estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(9) (Sept 2012) 1744-1757
T Brox, A Bruhn, N Papenberg, J Weickert, High accuracy optical flow estimation based on a theory for warping. SpringerBrox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping, Springer (2004) 25-36
Sift flow: dense correspondence across different scenes. C Liu, J Yuen, A Torralba, J Sivic, W T Freeman, European Conference on Computer Vision. Liu, C., Yuen, J., Torralba, A., Sivic, J., Freeman, W.T.: Sift flow: dense corre- spondence across different scenes. In: European Conference on Computer Vision. (2008)
Determining optical flow. B K Horn, B G Schunck, Cambridge, MA, USATechnical reportHorn, B.K., Schunck, B.G.: Determining optical flow. Technical report, Cambridge, MA, USA (1980)
EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow. J Revaud, P Weinzaepfel, Z Harchaoui, C Schmid, Computer Vision and Pattern Recognition. Revaud, J., Weinzaepfel, P., Harchaoui, Z., Schmid, C.: EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow. In: Computer Vision and Pat- tern Recognition. (2015)
Deepflow: Large displacement optical flow with deep matching. P Weinzaepfel, J Revaud, Z Harchaoui, C Schmid, Proc. Int. Conf. Comput. Vis. Int. Conf. Comput. VisWeinzaepfel, P., Revaud, J., Harchaoui, Z., Schmid, C.: Deepflow: Large displace- ment optical flow with deep matching. In: Proc. Int. Conf. Comput. Vis. (Dec 2013) 1385-1392
Occlusion-aware optical flow estimation. S Ince, J Konrad, IEEE Transactions on Image Processing. 178Ince, S., Konrad, J.: Occlusion-aware optical flow estimation. IEEE Transactions on Image Processing 17(8) (Aug 2008) 1443-1451
Beyond pixels: Exploring new representations and applications for motion analysis. C Liu, Massachusetts Institute of TechnologyPhD thesisLiu, C.: Beyond pixels: Exploring new representations and applications for motion analysis. PhD thesis, Massachusetts Institute of Technology (2009)
On bayesian adaptive video super resolution. C Liu, D Sun, IEEE Transactions on Pattern Analysis and Machine Intelligence. 362Liu, C., Sun, D.: On bayesian adaptive video super resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(2) (Feb 2014) 346-360
A fast iterative shrinkage-thresholding algorithm for linear inverse problems. A Beck, M Teboulle, SIAM J. Imag. Sci. 21Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2(1) (2009) 183-202
Fast newton-type methods for total variation regularization. A Barbero, S Sra, ICML, OmnipressBarbero, A., Sra, S.: Fast newton-type methods for total variation regularization. In: ICML, Omnipress (2011) 313-320
Proximal algorithms. N Parikh, S Boyd, Foundations and Trends in Optimization. 13Parikh, N., Boyd, S.: Proximal algorithms. Foundations and Trends in Optimiza- tion 1(3) (2014)
. Psu Nrt Data, Set, PSU NRT data set: http://vision.cse.psu.edu/data/MSBPLattice.shtml.
| []
|
[
"Deriving the radial distances of wide coronal mass ejections from elongation measurements in the heliosphere -Application to CME-CME interaction",
"Deriving the radial distances of wide coronal mass ejections from elongation measurements in the heliosphere -Application to CME-CME interaction"
]
| [
"N Lugaz \nInstitute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHIUSA\n",
"A Vourlidas \nNaval Research Laboratory\n7663, 20375WashingtonDCUSA\n",
"I I Roussev \nInstitute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHIUSA\n"
]
| [
"Institute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHIUSA",
"Naval Research Laboratory\n7663, 20375WashingtonDCUSA",
"Institute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHIUSA"
]
| []
| We present general considerations regarding the derivation of the radial distances of coronal mass ejections (CMEs) from elongation angle measurements such as those provided by SECCHI and SMEI, focusing on measurements in the Heliospheric Imager 2 (HI-2) field of view (i.e. past 0.3 AU). This study is based on a three-dimensional (3-D) magneto-hydrodynamics (MHD) simulation of two CMEs observed by SECCHI on January 24-27, 2007. Having a 3-D simulation with synthetic HI images, we are able to compare the two basic methods used to derive CME positions from elongation angles, the so-called "Point-P" and "Fixedφ" approximations. We confirm, following similar works, that both methods, while valid in the most inner heliosphere, yield increasingly large errors in HI-2 field of view for fast and wide CMEs. Using a simple model of a CME as an expanding self-similar sphere, we derive an analytical relationship between elongation angles and radial distances for wide CMEs. This relationship is simply the harmonic mean of the "Point-P" and "Fixed-φ" approximations and it is aimed at complementing 3-D fitting of CMEs by cone models or flux rope shapes. It proves better at getting the kinematics of the simulated CME right when we compare the results of our line-of-sights to the MHD simulation. Based on this approximation, we re-analyze the J-maps (time-elongation maps) in January 26-27, 2007 and present the first observational evidence that the merging of CMEs is associated with a momentum exchange from the faster ejection to the slower one due to the propagation of the shock wave associated with the fast eruption through the slow eruption. | 10.5194/angeo-27-3479-2009 | [
"https://arxiv.org/pdf/0909.0534v1.pdf"
]
| 15,040,795 | 0909.0534 | 019785c289d8660d200a66b60be6c6e4d01878ab |
Deriving the radial distances of wide coronal mass ejections from elongation measurements in the heliosphere -Application to CME-CME interaction
Date: 2 September 2009 2 Sep 2009
N Lugaz
Institute for Astronomy
University of Hawaii
2680 Woodlawn Dr96822HonoluluHIUSA
A Vourlidas
Naval Research Laboratory
7663, 20375WashingtonDCUSA
I I Roussev
Institute for Astronomy
University of Hawaii
2680 Woodlawn Dr96822HonoluluHIUSA
Deriving the radial distances of wide coronal mass ejections from elongation measurements in the heliosphere -Application to CME-CME interaction
Date: 2 September 2009. Manuscript prepared for Ann. Geophys. with version 3.0 of the LaTeX class copernicus.cls. Correspondence to: N. Lugaz ([email protected]). Keywords: Interplanetary shocks (2139); Flares and mass ejections (7519); Instruments and techniques (7594).
We present general considerations regarding the derivation of the radial distances of coronal mass ejections (CMEs) from elongation angle measurements such as those provided by SECCHI and SMEI, focusing on measurements in the Heliospheric Imager 2 (HI-2) field of view (i.e. past 0.3 AU). This study is based on a three-dimensional (3-D) magneto-hydrodynamics (MHD) simulation of two CMEs observed by SECCHI on January 24-27, 2007. Having a 3-D simulation with synthetic HI images, we are able to compare the two basic methods used to derive CME positions from elongation angles, the so-called "Point-P" and "Fixedφ" approximations. We confirm, following similar works, that both methods, while valid in the most inner heliosphere, yield increasingly large errors in HI-2 field of view for fast and wide CMEs. Using a simple model of a CME as an expanding self-similar sphere, we derive an analytical relationship between elongation angles and radial distances for wide CMEs. This relationship is simply the harmonic mean of the "Point-P" and "Fixed-φ" approximations and it is aimed at complementing 3-D fitting of CMEs by cone models or flux rope shapes. It proves better at getting the kinematics of the simulated CME right when we compare the results of our line-of-sights to the MHD simulation. Based on this approximation, we re-analyze the J-maps (time-elongation maps) in January 26-27, 2007 and present the first observational evidence that the merging of CMEs is associated with a momentum exchange from the faster ejection to the slower one due to the propagation of the shock wave associated with the fast eruption through the slow eruption.
Motivation
With the launches of the two Solar Terrestrial Relations Observatory (STEREO) spacecraft and the Coriolis spacecraft in 2006 and 2003, respectively, coronal mass ejections (CMEs) can be, for the first time, imaged continuously from the solar surface to 1 AU with coronagraphic and heliospheric imagers. The CME on January 25, 2007 was the fastest eruption imaged by the STEREO/Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) suite to date. Although there was a 20-hr data gap in SECCHI coverage at the time of the ejection, these observations provide one of the best available tests for methods aimed at deriving CME dynamics from SECCHI observations for two main reasons.
First, in contrast to slow ejections which arrive at Earth with speeds comparable to that of the ambient solar wind, a CME with initial speed greater than 1,300 km s −1 should remain faster than the ambient solar wind in the entire HI-2 field of view (FOV). Because most of the models of CME deceleration invoke a "drag" term proportional to the difference between the ejection and the ambient solar wind speeds (Cargill, 2004;Tappin, 2006), the acceleration profile cannot be well constrained by the analysis of slow CMEs.
A second reason is the presence of a preceding ejection from the same active region. This ejection was launched 16.5 hours earlier and had a speed of about 600 km s −1 . According to previous analyses (Lugaz et al., 2009;Webb et al., 2009;Harrison et al., 2009), the two eruptions interacted in the heliosphere somewhere between 20 • and 30 • elongation from the Sun. It is expected that fast shock waves can propagate inside preceding ejections (Schmidt and Cargill, 2004;Lugaz et al., 2005) and merge with the preceding shock waves. However, the variation of the shock speed inside the preceding magnetic cloud(s) is not known precisely. Numerical simulations (Lugaz et al., 2005) have shown it can vary greatly due to the large variation in density, magnetic field and alfvénic speed inside the magnetic cloud. Therefore, a constant or near constant speed cannot be assumed for the January 25, 2007 CME; in fact, most methods tested by Webb et al. (2009) to explain the measurements, including cone models and numerical simulations, fared quite poorly past 25-30 • (see their Figure 7) for at least one of the two observed fronts, although the cone model proved quite accurate in fitting the faster front. The authors noted that "conversion techniques from distance to elongation may require more work." It is the goal of this article to continue this process in an attempt to analyze HI observations better.
SECCHI observations of the January 24-25 CMEs and numerical simulation
The two successive CMEs of January 24-25, 2007 were initially reported by Harrison et al. (2008). At the time, the two STEREO spacecraft were still in close proximity with Earth (within 0. Howard and Tappin (2008) for example. Based on the time-height profiles of the CMEs in the SOHO/LASCO FOV, and using the same position angle (PA 90) for both CMEs, the speed in the corona of the first CME was 600 km s −1 and it was 1350 km s −1 for the second one. The data gap in SECCHI coverage started after 04:53UT and 09:53UT on January 25, 2007 for STEREO-A and B, respectively and lasted until the start of January 26, 2007. Assuming no deceleration, the two ejections should have interacted during this time. After the SECCHI data gap, two or three bright fronts associated with the eruptions were tracked in HI-2 (Harrison et al., 2008;Lugaz et al., 2008Lugaz et al., , 2009Webb et al., 2009), the first front up to elongation angles of about 55 • with HI-2 and up to much larger elongation angles (∼ 90 • ) with SMEI (Webb et al., 2009). SMEI observations could not help during the SECCHI downtime, because the CMEs were inside the SMEI exclusion zone circle of 20 • around the Sun.
We performed a numerical simulation of these ejections with the Space Weather Modeling Framework (SWMF) (Tóth et al., 2005) using the solar wind model of Cohen et al. (2007). The simulation set-up and detailed results have been published in Lugaz et al. (2009). Simulations with the ENLIL model of Odstrcil et al. (2005) and the HAFv.2 model of Hakamada and Akasofu (1982) and Fry et al. (2001) have also been performed and published in Webb et al. (2009). Based on the numerical analyses, the fronts observed by HI-2 and SMEI have been associated with the two CMEs, validating the numerical models on one hand and helping the analysis of the complex observations on the other hand. The goal of the current study is to test the existing methods to derive CME radial distances from elongation angles with the help of a 3-D simulation.
3 Determining CME positions from elongation angles:
Testing the existing methods

So far, CME positions have been determined from STEREO observations via 3-D forward fitting of a cone-model or a flux-rope-shaped density enhancement (Boursier et al., 2009;Thernisien et al., 2009), via 3-D reconstruction in COR-2 FOV (i.e. within 20 R ) (Mierla et al., 2008;de Koning et al., 2009), by mass conservation principles (Colaninno and Vourlidas, 2009), or by applying one of two simple approximations giving an analytical relation between elongation angles and CME positions (Wood et al., 2009;Rouillard et al., 2009;Davis et al., 2009). These analytical relations provide a quick and easy way to estimate CME dynamics in the heliosphere. 3-D reconstruction and forward modeling are expected to be more accurate than these simple relations, especially in the COR FOV where they have been mostly used so far, but they also have some limitations. For example, the 3-D reconstruction methods require multiple viewpoints, which might become less and less frequent as the STEREO spacecraft separate; when there are multiple observations, they assume that both SECCHI instruments observe the same structure, which is not true in the HI FOV. Additionally, forward modeling attempts to fit geometrical and kinematic information at the same time. To simplify the fit, a kinematic model (often constant acceleration or constant speed) is usually assumed. As noted above, these assumptions cannot be used for complex events, such as those involving CME-CME interactions.
The "Point-P" and "Fixed-φ" approximations
Fig. 1. Left: The black and yellow circles illustrate the model of CMEs used to derive the relation described in the article and the Point-P approximation, respectively; the white circle is the Thomson sphere; the green dot and black disk are STEREO-A and the Sun, respectively (not to scale). The angle φ is set at 90 • to determine the CME distances but it is shown here as determined from the position of the active region at the start of the eruptions. Right: The model (expanding propagating sphere) proposed to derive CME position from SECCHI measurements is illustrated for a different simulation (August 24, 2002) with different models of the solar wind and CME initiation. The yellow sphere is centered at the Sun, the white translucent sphere is the model of the CME front and the actual simulated CME is shown as an isosurface of scaled density 20 cm −3 AU −2 color-coded with the speed.

The intensity of the Thomson scattering depends on the angle between the scattering electron, the Sun and the observer (Minnaert, 1930). The loci of the ensemble of points where the intensity of Thomson scattered light is maximum is referred to as the "Thomson surface" (Vourlidas and Howard, 2006). In 3-D space, this surface lies on the surface of a sphere with the Sun-observer line as the diameter, and so we refer to this as the Thomson sphere from now on. A simple plane-of-the-sky approximation cannot be used with accuracy in the HI FOV (e.g., see Vourlidas and Howard, 2006). Therefore, to know which part of a CME is imaged, one
needs to consider the complex interaction of the CME 3-D density structure with the Thomson sphere (e.g., see Lugaz et al., 2008). An additional problem is that the speed and acceleration should be calculated for the same plasma element (i.e. usually for a single radial trajectory). Even if the CME positions can be determined accurately from HI observations, further assumptions regarding the CME geometry must be made to derive kinematic information, since what is observed over time is not necessarily the same part of the CME, as shown in Lugaz et al. (2009) and Webb et al. (2009). There are two main simple approximations which have been used to replace the plane-of-sky approximation for heliospheric measurements: they are referred to as "Point-P" and "Fixed-φ" (Kahler and Webb, 2007;Howard et al., 2007;Wood et al., 2009); the geometry of the observations and the reconstruction is illustrated in the left panel of Figure 1 for a plot of the simulations of the January 24-25, 2007 CMEs. The "Point-P" (PP) approximation is the simplest possible way to relate elongation angles to CME radial distances while taking into account the Thomson sphere geometry. Assuming a spherical front centered at the Sun, the elongation angle ε and the position of the CME, R_PP, are related by:

R_PP = d_STEREO sin ε,
where d STEREO (∼ 0.97 AU) is the heliocentric distance of STEREO-A for this event. The CME front obtained from this approximation is shown with the yellow circle in the left panel of Figure 1. Obviously, this approximation is poor for narrow CMEs such as the one studied by Wood et al. (2009) and for dense streams and corotating interacting regions (CIRs) which are structures of narrow azimuthal extent at 1 AU (the typical width is less than 20 • as inferred from Jian et al. (2006) for example). Even for wide CMEs, the CME fronts are not spherically symmetric, in part due to their interaction with the structured coronal magnetic field and solar wind. This has been shown by multiple-spacecraft observations (e.g. Möstl et al., 2009) and from simulations (Riley et al., 2003;Manchester et al., 2004;Odstrcil et al., 2005). Last but not least, the reconstructed CME position is independent of the propagation angle! Webb et al. (2009) remarked that the PP approximation is not adequate far from the Sun, e.g., in HI-2 and SMEI/camera 2 FOVs.
The "Fixed-φ" (Fφ) approximation, in turn, takes the opposite philosophy and considers that a single particle, propagating on a fixed-radial trajectory, is responsible for the Thomson scattered light. The elongation angle measurement must simply be "de-projected" from the Thomson sphere onto this radial trajectory, resulting in the relation
R_Fφ = d_STEREO sin ε / sin(ε + φ),
where φ is the angle between the Sun-observer line and the trajectory of the particle. The position obtained from this approximation is denoted R_Fφ in the left panel of Figure 1. Obviously, this approximation is well adapted for CIRs (Rouillard et al., 2008) and small "blobs" (Sheeley et al., 2008, 2009; Rouillard et al., 2009). However, since it assumes that what is tracked is a single point, the method is expected to work poorly for wide CMEs. The equation can be fitted for φ (assuming no or constant acceleration), giving the origin of the transient and/or its speed. The main limitation of this method is that it completely ignores the CME geometry. It also does not take into account the angle dependency of the Thomson scattering.
Comparison with 3-D simulated data
We test the two methods with our synthetic line-of-sight procedure and compare the resulting positions to the 3-D simulation for the second CME (January 25 CME) at PA 90. This work is the continuation of section 4.3 from Lugaz et al. (2009) and its associated Figure 6. We derive the elongation angles and radial distances of the CME front as follows: for the line-of-sight images, we use elongation angles measured at the point of maximum brightness at PA 90. For the numerical simulation, we use the position of maximum density along different radial trajectories (at longitudes 90 • , 80 • and 70 • east of the Sun-Earth line) and on the Thomson sphere, all of these in the ecliptic plane (PA 90). Results are shown in the top panel of Figure 2. Below approximately 100 R , the two methods give similar results differing by less than 10%. The Fφ method gives slightly better results when compared to the nose of the CME; the PP approximation works best if one assumes it tracks the intersection of the CME front with the Thomson sphere (see middle panel of Figure 2). Above 100 R , the two methods give increasingly different results. Compared to the simulation results along all three radial trajectories presented here, the PP approximation results in a too large deceleration of the CME, whereas the Fφ results in an apparent acceleration. This acceleration appears unphysical, since CMEs faster than the ambient solar wind are expected to monotonously decelerate due to a "drag" force. Similar results have been reported, most recently by Wood et al. (2009). The middle panel of Figure 2 shows the errors between the position of the CME front at the limb and the position from each of the two methods, as well as the error between the PP position and the intersection of the CME front and the Thomson sphere. Although the errors are fairly low, they can result in large errors in the velocity and acceleration of the CME (see bottom panel of Figure 2). These methods can provide an average speed of the CME front within the first 100 R , but they cannot be relied upon to study complex physical mechanisms such as CME-CME interaction.
4 Improved method to determine CME position

Based on the relatively poor results for the PP and Fφ methods, we propose another analytical method based on simple geometric considerations and a simple model of CMEs. We construct this model on a few principles: first, it should take into account the geometry associated with the Thomson scattering as well as the CME propagation and second, it should have the lowest number of free parameters possible. To construct such a model, we start from the knowledge that CMEs are known to evolve self-similarly in the heliosphere (e.g., see Krall et al., 2006). The simplest approximation is to assume that the CME peak density maps out as a sphere connected to the center of the Sun; the center of the sphere propagates in a fixed, radial trajectory (see right panel of Figure 1). In contrast to the PP approximation, the sphere is not centered on the Sun. Consequently, this method takes into account the direction of propagation of the CME. This approximation is also the one used in Webb et al. (2009) to produce their Figure 1b.
If we assume no deflection of the CME in the corona or in the heliosphere, the angles defining the trajectory of the center of the sphere can be derived from the flare information (with an understanding of the limitations in the connection between flares and CMEs) or from forward modeling of the COR observations or mass analysis. For the January 25, 2007 CMEs, we will consider that the center propagates from the eastern limb at PA 90. There are many ways this sphere "interacts" with the Thomson sphere to produce the Thomson-scattered signal. We consider two hypotheses: the geometry associated with the Thomson scattering is dominant and the emission originates from the intersection of the sphere with the Thomson sphere or it is negligible and the emission originates from the line-of-sight tangent to the sphere (see left panel of Figure 1 for the geometry and the notation used). The first hypothesis gives d 1 = R F φ for the diameter of the circle representing the CME front at the PA where the measurement is made. This PA can differ from the latitude λ along which the center of the CME propagates. Correcting for this, the nose of the CME is at a distance of R Fφ / cos(PA − λ). This gives a new interpretation for the "Fixed-φ" approximation, namely that it gives the diameter of the circle representing the CME at each PA, assuming the emission originates from the intersection of this circle with the Thomson sphere.
The distance of the point tangent to the CME along the given PA (see the left panel of Figure 1 for the notation) is:
d_2 = d sin ε / cos α = d sin ε / cos[(φ + ε − π/2)/2].
The diameter of the circle representing the CME at this PA is simply given by:
d_HM = d_2 / cos α = 2 d sin ε / (1 + sin(ε + φ)),
which is the harmonic mean of the PP and Fφ approximations. To obtain the diameter of the sphere, this must also be corrected for the difference between the measured PA and the direction of propagation of the CME:
1/R_HM = [cos(PA − λ)/2] (1/R_Fφ + 1/R_PP).
This correction is required because all parts of a CME cannot be assumed to move radially outward with the same speed. Thus, this hypothesis is most likely true for the nose of the CME, which is where the speed must be calculated. We plot the position, error and speed derived from this approximation (referred to as the harmonic mean (HM) approximation) in the three panels of Figure 2. As can be seen, this simple model gives better results than the PP and Fφ approximations, especially for the speed of the CME at large elongation angles.
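For reference, the short sketch below evaluates the Point-P, Fixed-φ and harmonic-mean conversions for a given elongation. The observer distance of 0.97 AU and the limb geometry (φ = 90°) used in the example are the values quoted in this study; the optional PA correction follows the relation above, and the function name and defaults are only illustrative.

import numpy as np

def elongation_to_distance(eps_deg, phi_deg, d_obs=0.97, pa_deg=None, lam_deg=None):
    """Radial distance (same units as d_obs, here AU) from an elongation angle.
    eps_deg: elongation; phi_deg: angle between the Sun-observer line and the
    CME trajectory; pa_deg/lam_deg: measured PA and PA of the CME nose."""
    eps, phi = np.radians(eps_deg), np.radians(phi_deg)
    r_pp = d_obs * np.sin(eps)                                     # Point-P
    r_fphi = d_obs * np.sin(eps) / np.sin(eps + phi)               # Fixed-phi
    r_hm = 2.0 * d_obs * np.sin(eps) / (1.0 + np.sin(eps + phi))   # harmonic mean
    if pa_deg is not None and lam_deg is not None:
        r_hm /= np.cos(np.radians(pa_deg - lam_deg))               # correct to the CME nose
    return r_pp, r_fphi, r_hm

# Example: a limb event (phi = 90 deg) seen at 30 deg elongation by STEREO-A
print(elongation_to_distance(30.0, 90.0))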
5 Revisiting the January 26-27, 2007 observations: CME-CME merging
Data analysis
With this method, we re-analyze the data from the two fronts observed by HI-2 on January 26-27, 2007. We analyze the data at PA 69, where SECCHI's coverage is best for this event. There were only limited observations of the two ejections prior to the data gap. For the first ejection, all three methods agree and give an average speed between 550 and 600 km s −1 at 40 R , which is consistent with LASCO observations and also with the speed of 604 km s −1 reported by Harrison et al. (2008) for the front at PA 90 (i.e. the nose of the ejection). For the second eruption, we use LASCO data, which give a speed of approximately 1,200-1,300 km s −1 at 20 R . Next, we analyze the two fronts after the data gap in HI-2 FOV. The top panel of Figure 3 shows the derived position for the two fronts according to the three methods. First, it is worth noting that the Fφ and the HM approximations differ by less than 10% up to approximately 180 R (40 • ), but the HM approximation does not result in a large apparent acceleration at very large elongation angles. Next, we derive the speed of the two fronts according to these methods. We plot a running average over approximately 5 hours to reduce the magnitude of the error in the speed. HI-2 resolution is 4 arcmin; assuming the elongation angles are measured with a precision of 5 pixels, the error in position is of the order of 1.5% and the resulting speed has an error of about 15%.
We believe the analysis of the numerical results from section 3.2 shows that the PP method cannot be used to study the speed of limb CMEs past 100 R , which is the approximate position of the two fronts after the data gap. According to the Fφ and HM methods, the second front, which fades out at about 33 • elongation, has an average speed of 680 and 605 km s −1 , respectively, with a general decelerating
trend with an initial speed around 750-850 km s −1 around 100 R . The two methods are overall consistent with each other, and we believe this shows that the transient associated with the second front had a speed of 750-850 km s −1 around 100 R and decelerated to 500-600 km s −1 before disappearing around 140 R .
For the first front, which is tracked up to 53 • , the Fφ results show a continuous apparent acceleration at large elongation angles, which is not expected physically, since CMEs faster than the ambient solar wind should decelerate (e.g., Gopalswamy et al., 2001;Tappin, 2006). The HM method results in a speed more consistent with this fact than the Fφ method, although it shows a limited, unphysical acceleration at large elongation angles. The average speed obtained from the three methods is 490, 1340 and 845 km s −1 , respectively; the average speed of the Fφ and HM methods for observations between 28 • and 40 • is 880 km s −1 and 705 km s −1 . The analysis is more complicated than for the second front, but we believe that the observations are consistent with a transient whose average speed is about 850-900 km s −1 (the average value of the HM method, and the average value of the Fφ within 40 • ).
Consequence for the process of CME-CME interaction
The derived speeds of the fronts are summarized in Table 1. We believe there are 4 scenarios consistent with the result that the first front is faster than the second front after the data gap; we analyze these scenarios with respect to the measured speeds of the two fronts. A schematic view of the 4 possibilities is shown in Figure 4. In the first scenario, the January 25 CME could have "passed" the January 24 CME without major interaction. This scenario is possible if the two eruptions have a large angular separation, and if they do not propagate along the same direction. Then, part of the fast front could, in the projected images, "pass" the slow front when in fact there is no interaction. This scenario is described in greater detail in Webb et al. (2009). While it is plausible that only a small part of the two CMEs interacted and that the major part of the January 25 CME simply passed next to the January 24 CME without interaction, we believe this is very unlikely. First, it is hard to understand how the speed of the January 24 eruption could be faster after the data gap than before; also, the January 25 eruption shows a strong deceleration during the data gap, which tends to suggest some form of interaction. Second, the measured width of the eruptions -greater than 100 • in LASCO FOV as reported in Webb et al. (2009)-also makes a missed encounter implausible. Last, this is not supported by any MHD models, which tend to show that CMEs act as magnetic barriers. This scenario could however explain what happened if the two CMEs were associated with different active regions and, consequently, had a large angular separation. This separation could be as large as 35 • if the first CME was associated with the easternmost active region and the second CME with the westernmost active region present in January 24-25, 2007. Our arguments to associate both ejections with the same active region can be found in Lugaz et al. (2009).

Fig. 4. The four scenarios for CME-CME collisions that might explain the fact that the first front after the collision is faster than the second front. In the sketches, the ellipses, the solid arcs, and the dashed arcs correspond to the ejecta, dense sheaths and to the shock waves, respectively.
In the second and third scenario, the two CMEs collide, the collision is associated with momentum transfer between the ejections (as Farrugia and Berdichevsky (2004) considered). The observations appear to be consistent with both eruptions having the same speed after the collision, i.e. a perfectly inelastic collision. However, it is hard to understand the evolution of the speed of the two CMEs after the collision according to this scenario. If the January 25 CME pushes the January 24 CME, both fronts should have a similar speed at all times after the collision. This scenario appears more plausible if one believes the speeds derived using the PP method. However, using the PP speeds and positions, the average transit speed of the two fronts during the data gap should be 500 and 650 km s −1 respectively. This scenario would therefore be consistent with a large deceleration of the January 25 (fast) CME and almost no acceleration of the January 24 (slow) CME, which, in turn, can only happen if the January 24 CME is much more massive than the January 25 CME. Webb et al. (2009) reported the mass of the January 24 and 25 CMEs being 4.3×10 15 g and 1.6×10 16 g, respectively, making this scenario very unlikely.
In the third scenario, the collision is elastic and there is a momentum transfer from the second to the first ejection on a time-scale of 12-20 hours. The momentum transfer has an unknown cause and continues until the second eruption becomes slower than the first one. This scenario is not fundamentally different from the last (fourth) one, which, however, does not require unknown processes and can explain the disappearance of the second front.
In the fourth scenario, the unknown process is, in fact, the compression and momentum transfer associated with the shock wave from the January 25 CME. Before the CMEs collide, the shock wave driven by the January 25 CME propagates through the January 24 CME (ejecta and sheath), compressing and accelerating it, before merging with its associated shock wave. After the data gap, the first front corresponds to the sheath associated with the merged shocks. Due to its interaction with the January 24 CME and sheath, the shock wave initially associated with the January 25 CME has decelerated rapidly to a speed ∼ 850 km s −1 . There are two possibilities to explain the second front: it could be the remnant of the sheath associated with the January 25 shock wave which is "trapped" between the two CMEs and "forced" to propagate with a speed comparable to that of the January 24 CME. However, the distance between the two fronts is of the order of 20 R at PA 69. If the January 24 CME is between the two fronts, this would mean that the magnetic cloud has been compressed to less than 20 R , which does not seem reasonable. Moreover, the two fronts appear to merge along PA 90, which is inconsistent with this explanation. The other possibility is that it is associated with a transient phenomenon during the shock-CME or shock-sheath interaction, or with three-dimensional effects. For example, the core (or any part of the cloud) of the January 24 CME could have been compressed and accelerated by the shock wave before relaxing to slower speeds. Most likely, part of the sheath associated with the January 24 CME gets compressed to very high density and relaxes to the average value of the new sheath (similar to what has been discussed in Lugaz et al., 2005). However, each of these sub-scenarios involves the propagation of the January 25 shock through the January 24 CME. We note that this scenario does not require the presence of a shock wave driven by the January 24 CME, but simply a sheath of dense material (piled-up mass and/or compressed material) ahead of the CME. The only difference due to the possible absence of the first shock wave is that there is no instance of shock-shock merging. Therefore, the shock wave ahead of the merged CMEs after the interaction is simply the shock wave originally driven by the January 25 CME now propagating into an unperturbed solar wind.
Discussions and Conclusions
In the first part of this study, we have tested the two most common methods used to derive CME radial distances from elongation angle measurements, the Point-P and Fixed-φ methods. Confirming previous work by Kahler and Webb (2007), Wood et al. (2009) and Webb et al. (2009), we find that, above 35°, both methods yield poor results, especially for CME speed and acceleration. We propose an alternative analytical method to derive CME radial distances. We consider a very simple model, namely that the density peak maps out as a sphere whose center propagates radially outward from the flare location, and that the elongation angle corresponds to the angle of the line-of-sight tangent to this sphere. We find that the diameter of this sphere is given by the harmonic average of the Point-P and Fixed-φ approximations further corrected by 1/cos(PA app ) where PA app is the position angle with respect to the nose of the CME. For a limb ejection, this method gives results similar to the Fixed-φ approximation up to about 40° and more realistic results at larger elongation angles. The Point-P and Fixed-φ approximations are expected to yield a lower and an upper bound to the actual distance of a CME (e.g., see Webb et al., 2009). Any alternative method to determine radial distances from elongation angles should fall in between, as is the case here. However, we find a particular physical interpretation for the harmonic mean of these two methods. We have also found a new interpretation of the position derived from the Fixed-φ approximation, namely that it is the diameter of the sphere representing the CME if the emission is assumed to originate directly from the Thomson sphere. This might explain why this approximation works fairly well even for wide CMEs.
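For concreteness, the three elongation-to-distance conversions compared in this work can be written down in a few lines. The sketch below uses the standard geometric forms for an observer at distance d_obs from the Sun (Point-P: r = d_obs sin ε; Fixed-φ: r = d_obs sin ε / sin(ε + φ); harmonic mean: r = 2 d_obs sin ε / (1 + sin(ε + φ))), with the 1/cos(PA app ) correction applied as a simple multiplicative factor. It should be read as an illustration of the functional forms, not as a reproduction of the exact expressions used in this paper.

```python
import numpy as np

def point_p(eps, d_obs=1.0):
    """Point-P: the front is treated as a sphere centred on the Sun (distances in AU)."""
    return d_obs * np.sin(eps)

def fixed_phi(eps, phi, d_obs=1.0):
    """Fixed-phi: a point-like transient propagating at angle phi from the observer-Sun line."""
    return d_obs * np.sin(eps) / np.sin(eps + phi)

def harmonic_mean(eps, phi, pa_app=0.0, d_obs=1.0):
    """Harmonic mean of the two estimates, corrected by 1/cos(PA_app)."""
    return 2.0 * d_obs * np.sin(eps) / (1.0 + np.sin(eps + phi)) / np.cos(pa_app)

# Example: a limb event (phi = 90 deg) tracked from 10 to 50 deg elongation.
phi = np.radians(90.0)
for eps_deg in (10, 20, 30, 40, 50):
    eps = np.radians(eps_deg)
    print(f"eps = {eps_deg:2d} deg:  PP = {point_p(eps):.3f} AU, "
          f"Fphi = {fixed_phi(eps, phi):.3f} AU, HM = {harmonic_mean(eps, phi):.3f} AU")
```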
We must be aware of the limitations of this method. First, it is most appropriate for wide CMEs such as the ones observed on January 24-25, 2007, because the assumed geometry is consistent with a CME whose angular extent is 90°. This approximation, while arbitrary, is required to reduce the number of free parameters of the model to one. It also appears to be a better approximation for wide CMEs than the Point-P approximation, which is consistent with a CME whose angular extent is 360°. Secondly, this model assumes that the CME propagates on a fixed radial trajectory, ignoring heliospheric deflection. This is the same assumption made to derive the Fixed-φ approximation and is also required to reduce the number of free parameters. In future work, we shall investigate how stereoscopic observations of CMEs by the two STEREO spacecraft can help relax these two conditions. Last, the model of CMEs used to derive this approximation assumes that the CME front (part piled-up mass, part shocked material) maps out as a sphere. As noted in the introduction, CME fronts are known to be distorted and usually flattened by their interaction with the structured solar wind. In Figure 1, we have shown two simulated instances where this approximation is more or less appropriate; it is worth noting that the two simulations use different models of CME initiation and solar wind. Assuming a more complex shape (for example a "pancake") would require a fitting of the model and could not yield a direct analytical relationship such as the one derived here.
We have re-analyzed the HI-2 measurements of January 26-27, 2007 associated with two interacting CMEs with the three methods. We found that the first bright front after the interaction corresponds to a transient propagating with a speed of about 850 km s −1 , while the second front corresponds to a transient whose speed decreases from 850 km s −1 to 550 km s −1 in about 12-18 hours before ultimately disappearing. Among the 4 scenarios which could explain the acceleration of the first front relative to the second one, we found that the only scenario consistent with the observations requires that part of the shock wave driven by the (faster) January 25 CME first propagates through the (slower) January 24 CME. The propagation of this fast shock wave inside the CME and the dense material of the sheath results in its large deceleration. The most likely explanation for the origin of the second front is that it is part of the dense sheath associated with the January 24 ejection, which is compressed and accelerated by the shock and decelerates to the speed of the January 24 magnetic cloud.
Our analysis has been limited to one case of a fast, wide limb CME. It is for this particular geometry that the Fixed-φ approximation is expected to give the largest error at large elongation angles. However, we believe our new average method should provide an improvement over the existing methods, notably over the Point-P approximation for wide eruptions, and that it should be used as a complement to three-dimensional fitting methods and numerical simulations. We plan to test and validate this relation for other heliospheric observations of wide and fast CMEs, starting in the near future with the April 26, 2008 eruption. Our analysis of the January 24-27, 2007 observations is the first heliospheric observational evidence of a shock wave propagating inside a CME. More observations without data gaps are required before we have a more definite understanding of CME-CME interaction.
Fig. 1. Left: Geometry of the observations and the methods described in the article. The figure corresponds to the January 24-25 CMEs in the late phase of their merging. This illustrates the different CME positions obtained from one measurement of the elongation angle at […].
Fig. 2. Position (top), error (center) and speed (bottom) of the second CME front at PA 90 from the simulation and as derived from the synthetic SECCHI images with the different methods.
Fig. 3. Position (top) and speed (middle and bottom) of the two fronts at PA 69 according to the three methods. The errors are typically 1.5% for the position and 15% for the speed. The averages are shown with dotted lines and the second front with dashed lines.
5 • ) and STEREO-A was rolled by about 22 • from solar north. Beyond COR-2 FOV, only STEREO-A/SECCHI observed these eruptions originating from an active region behind the eastern limb. The two eruptions were first imaged by COR-1 at 14:03UT on January 24, 2007 and 06:43UT on January 25, 2007. Based on their appearance in coronagraphic images, we determined in Lugaz et al. (2009) that they were associated with active region 10940 which was about 20 • behind the eastern limb at the time of the first eruption. Due to positions of the Solar and Heliospheric Observatory (SOHO) and STEREO spacecraft in January 2007, no triangulation of the source region of the eruptions is possible, as was done for later CMEs by
Table 1. Summary of the speeds measured by SECCHI for the two fronts.

Front | Speed Before Collision | Speed After Collision
1     | 600 km s −1            | 850-900 km s −1
2     | 1200-1300 km s −1      | 800 km s −1 @ 80 R; 550 km s −1 @ 140 R
Acknowledgements. The research for this manuscript was supported by NSF grants ATM-0639335 and ATM-0819653 as well as NASA grant NNX-08AQ16G. We would like to thank the reviewers, Tim Howard and an anonymous referee for helping us to improve and to clarify this manuscript. The SECCHI data are produced by an international consortium of Naval Research Laboratory, Lockheed Martin Solar and Astrophysics Lab, and NASA Goddard Space Flight Center (USA), Rutherford Appleton Laboratory, and University of Birmingham (UK), Max-Planck-Institut für Sonnensystemforschung (Germany), Centre Spatiale de Liege (Belgium), Institut d'Optique Théorique et Appliquée, and Institut d'Astrophysique Spatiale (France). SOHO is a project of international cooperation between ESA and NASA, and the SOHO LASCO/EIT catalogs are maintained by NASA, the Catholic University of America, and the US Naval Research Laboratory (NRL).
Three-Dimensional Kinematics of Coronal Mass Ejections from STEREO/SECCHI-COR2 Observations. Y Boursier, P Lamy, A Llebaria, 10.1007/s11207-009-9358-1Solar Phys. 256Boursier, Y., Lamy, P., and Llebaria, A.: Three-Dimensional Kinematics of Coronal Mass Ejections from STEREO/SECCHI- COR2 Observations in 2007-2008, Solar Phys., 256, 131-147, doi:10.1007/s11207-009-9358-1, 2009.
Geomagnetic storms caused by coronal mass ejections (CMEs. G E Brueckner, 10.1029/98GL00704Geophys. Res. Lett. 25Brueckner, G. E. et al.: Geomagnetic storms caused by coronal mass ejections (CMEs): March 1996 through June 1997, Geo- phys. Res. Lett., 25, 3019-3022, doi:10.1029/98GL00704, 1998.
On the Aerodynamic Drag Force Acting on Interplanetary Coronal Mass Ejections. P J Cargill, 10.1023/B:SOLA.0000033366.10725.a2Solar Phys. 221Cargill, P. J.: On the Aerodynamic Drag Force Acting on Inter- planetary Coronal Mass Ejections, Solar Phys., 221, 135-149, doi:10.1023/B:SOLA.0000033366.10725.a2, 2004.
A Semiempirical Magnetohydrodynamical Model of the Solar Wind. O Cohen, 10.1086/511154Astrophys. Journ. Lett. 654Cohen, O. et al..: A Semiempirical Magnetohydrodynamical Model of the Solar Wind, Astrophys. Journ. Lett., 654, L163-L166, doi: 10.1086/511154, 2007.
First Determination of the True Mass of Coronal Mass Ejections: A Novel Approach to Using the Two STEREO Viewpoints. R C Colaninno, A Vourlidas, 10.1088/0004-637X/698/1/852Astrophys. J. 698Colaninno, R. C. and Vourlidas, A.: First Determination of the True Mass of Coronal Mass Ejections: A Novel Approach to Using the Two STEREO Viewpoints, Astrophys. J., 698, 852-858 doi: 10.1088/0004-637X/698/1/852, 2009.
Stereoscopic imaging of an Earthimpacting solar coronal mass ejection: A major milestone for the STEREO mission. C J Davis, J A Davies, M Lockwood, A P Rouillard, C J Eyles, R A Harrison, doi:10Geophys. Res. Lett. 368102Davis, C. J., Davies, J. A., Lockwood, M., Rouillard, A. P., Eyles, C. J., and Harrison, R. A.: Stereoscopic imaging of an Earth- impacting solar coronal mass ejection: A major milestone for the STEREO mission, Geophys. Res. Lett., 36, 8102-+, doi:10. 1029/2009GL038021, 2009.
Geometric Localization of CMEs in 3D Space Using STEREO Beacon Data: First Results. C A Koning, V J Pizzo, D A Biesecker, doi:10.1007/ s11207-009-9344-7Solar Phys. 256Koning, C. A., Pizzo, V. J., and Biesecker, D. A.: Geomet- ric Localization of CMEs in 3D Space Using STEREO Bea- con Data: First Results, Solar Phys., 256, 167-181, doi:10.1007/ s11207-009-9344-7, 2009.
Evolutionary signatures in complex ejecta and their driven shocks. C Farrugia, D Berdichevsky, Annales Geophysicae. 22Farrugia, C. and Berdichevsky, D.: Evolutionary signatures in com- plex ejecta and their driven shocks, Annales Geophysicae, 22, 3679-3698, 2004.
Improvements to the HAF solar wind model for space weather predictions. C D Fry, W Sun, C S Deehr, M Dryer, Z Smith, S.-I Akasofu, M Tokumaru, M Kojima, 10.1029/2000JA000220J. Geophys. Res. 1062Fry, C. D., Sun, W., Deehr, C. S., Dryer, M., Smith, Z., Akasofu, S.- I., Tokumaru, M., and Kojima, M.: Improvements to the HAF so- lar wind model for space weather predictions, J. Geophys. Res., 106, 20 985-21 002, doi:10.1029/2000JA000220, 2001.
. N Gopalswamy, A Lara, S Yashiro, M L Kaiser, R A Howard, J. Geophys. Res. 10629207Gopalswamy, N., Lara, A., Yashiro, S., Kaiser, M. L., & Howard, R. A. 2001, J. Geophys. Res., 106, 29207
Simulation of three-dimensional solar wind disturbances and resulting geomagnetic storms. K Hakamada, S.-I Akasofu, 10.1007/BF00349000Space Sci. Rev. 31Hakamada, K. and Akasofu, S.-I.: Simulation of three-dimensional solar wind disturbances and resulting geomagnetic storms, Space Sci. Rev., 31, 3-70, doi:10.1007/BF00349000, 1982.
First Imaging of Coronal Mass Ejections in the Heliosphere Viewed from Outside the Sun Earth Line. R A Harrison, 10.1007/s11207-007-9083-6Solar Phys. 247Harrison, R. A. et al.: First Imaging of Coronal Mass Ejections in the Heliosphere Viewed from Outside the Sun Earth Line, Solar Phys., 247, 171-193, doi:10.1007/s11207-007-9083-6, 2008.
Two Years of the STEREO Heliospheric Imagers. R A Harrison, doi:10.1007/ s11207-009-9352-7Invited Review, Solar Phys. 256Harrison, R. A. et al.: Two Years of the STEREO Heliospheric Im- agers. Invited Review, Solar Phys., 256, 219-237, doi:10.1007/ s11207-009-9352-7, 2009.
Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI). R A Howard, 10.1007/s11214-008-9341-4Space Sci. Rev. 136Howard, R. A. et al.: Sun Earth Connection Coronal and Helio- spheric Investigation (SECCHI), Space Sci. Rev., 136, 67-115, doi:10.1007/s11214-008-9341-4, 2008.
Three-Dimensional Reconstruction of Two Solar Coronal Mass Ejections Using the STEREO Spacecraft. T A Howard, S J Tappin, doi:10.1007/ s11207-008-9262-0Solar Phys. 252Howard, T. A. and Tappin, S. J.: Three-Dimensional Recon- struction of Two Solar Coronal Mass Ejections Using the STEREO Spacecraft, Solar Phys., 252, 373-383, doi:10.1007/ s11207-008-9262-0, 2008.
On the Evolution of Coronal Mass Ejections in the Interplanetary Medium. T A Howard, C D Fry, J C Johnston, D F Webb, 10.1086/519758Astrophys. J. 667Howard, T. A., Fry, C. D., Johnston, J. C., and Webb, D. F.: On the Evolution of Coronal Mass Ejections in the Interplane- tary Medium, Astrophys. J., 667, 610-625, doi:10.1086/519758, 2007.
Properties of Stream Interactions at One AU During. L Jian, C T Russell, J G Luhmann, R M Skoug, 10.1007/s11207-006-0132-3Solar Phys. 239Jian, L., Russell, C. T., Luhmann, J. G., and Skoug, R. M.: Proper- ties of Stream Interactions at One AU During 1995-2004 , Solar Phys., 239, 337-392, doi:10.1007/s11207-006-0132-3, 2006.
V arc interplanetary coronal mass ejections observed with the Solar Mass Ejection Imager. S W Kahler, D F Webb, 10.1029/2007JA012358J. Geophys. Res. 1129103Kahler, S. W. and Webb, D. F. : V arc interplanetary coronal mass ejections observed with the Solar Mass Ejection Imager, J. Geo- phys. Res., 112, A11, 9103, doi:10.1029/2007JA012358, 2007.
Flux Rope Model of the 2003 October 28-30 Coronal Mass Ejection and Interplanetary Coronal Mass Ejection. J Krall, V B Yurchyshyn, S Slinker, R M Skoug, Chen , J , 10.1086/500822Astrophys. J. 642Krall, J., Yurchyshyn, V. B., Slinker, S., Skoug, R. M., and Chen, J.: Flux Rope Model of the 2003 October 28-30 Coronal Mass Ejection and Interplanetary Coronal Mass Ejection, Astrophys. J., 642, 541-553, doi:10.1086/500822, 2006.
Numerical Simulation of the Interaction of Two Coronal Mass Ejections from Sun to Earth. N Lugaz, W B Manchester, T I Gombosi, Astrophys. J. 634Lugaz, N., Manchester, W. B., and Gombosi, T. I.: Numerical Sim- ulation of the Interaction of Two Coronal Mass Ejections from Sun to Earth, Astrophys. J., 634, 651-662, 2005.
The Brightness of Density Structures at Large Solar Elongation Angles: What Is Being Observed by STEREO SECCHI?. N Lugaz, A Vourlidas, I I Roussev, C Jacobs, Manchester, W B Iv, O Cohen, 10.1086/592217Astrophys. Journ. Lett. 684Lugaz, N., Vourlidas, A., Roussev, I. I., Jacobs, C., Manchester, IV, W. B., and Cohen, O.: The Brightness of Density Structures at Large Solar Elongation Angles: What Is Being Observed by STEREO SECCHI?, Astrophys. Journ. Lett., 684, L111-L114, doi:10.1086/592217, 2008.
Solar-Terrestrial Simulation in the STEREO Era: The. N Lugaz, A Vourlidas, I I Roussev, Morgan , H , doi:10.1007/ s11207-009-9339-4Eruptions, Solar Phys. 256Lugaz, N., Vourlidas, A., Roussev, I. I., and Morgan, H.: Solar- Terrestrial Simulation in the STEREO Era: The January 24- 25, 2007 Eruptions, Solar Phys., 256, 269-284, doi:10.1007/ s11207-009-9339-4, 2009.
Modeling a space weather event from the Sun to the Earth: CME generation and interplanetary propagation. W B Manchester, T I Gombosi, I Roussev, A Ridley, D L De Zeeuw, I V Sokolov, K G Powell, G Tóth, J. Geophys. Res. 1092107Manchester, W. B., Gombosi, T. I., Roussev, I., Ridley, A., De Zeeuw, D. L., Sokolov, I. V., Powell, K. G., and Tóth, G.: Model- ing a space weather event from the Sun to the Earth: CME gen- eration and interplanetary propagation, J. Geophys. Res., 109, 2107, 2004.
A Quick Method for Estimating the Propagation Direction of Coronal Mass Ejections Using STEREO-COR1 Images. M Mierla, J Davila, W Thompson, B Inhester, N Srivastava, M Kramar, St, O C Cyr, G Stenborg, R A Howard, 10.1007/s11207-008-9267-8Solar Phys. 252Mierla, M., Davila, J., Thompson, W., Inhester, B., Srivastava, N., Kramar, M., St. Cyr, O. C., Stenborg, G., and Howard, R. A.: A Quick Method for Estimating the Propagation Direction of Coro- nal Mass Ejections Using STEREO-COR1 Images, Solar Phys., 252, 385-396, doi:10.1007/s11207-008-9267-8, 2008.
On the continuous spectrum of the corona and its polarisation. M Minnaert, Zeitschrift fur Astrophysik. Minnaert, M.: On the continuous spectrum of the corona and its polarisation, Zeitschrift fur Astrophysik, 1, 209, 1930.
Optimized Grad-Shafranov Reconstruction of a Magnetic Cloud Using STEREO-Wind Observations. C Möstl, C J Farrugia, H K Biernat, M Leitner, E K J Kilpua, A B Galvin, J G Luhmann, doi:10.1007/ s11207-009-9360-7Solar Phys. 256Möstl, C., Farrugia, C. J., Biernat, H. K., Leitner, M., Kilpua, E. K. J., Galvin, A. B., and Luhmann, J. G.: Optimized Grad- Shafranov Reconstruction of a Magnetic Cloud Using STEREO- Wind Observations, Solar Phys., 256, 427-441, doi:10.1007/ s11207-009-9360-7, 2009.
Propagation of the 12 May 1997 interplanetary coronal mass ejection in evolving solar wind structures. D Odstrcil, V J Pizzo, Arge , C N , doi:10.1029/ 2004JA010745J. Geophys. Res. 1102106Odstrcil, D., Pizzo, V. J., and Arge, C. N.: Propagation of the 12 May 1997 interplanetary coronal mass ejection in evolv- ing solar wind structures, J. Geophys. Res., 110, doi:10.1029/ 2004JA010745, , A02106, 2005.
Using an MHD simulation to interpret the global context of a coronal mass ejection observed by two spacecraft. P Riley, J A Linker, Z Mikić, D Odstrcil, T H Zurbuchen, D Lario, R P Lepping, 10.1029/2002JA009760J. Geophys. Res. 1081272Riley, P., Linker, J. A., Mikić, Z., Odstrcil, D., Zurbuchen, T. H., Lario, D., and Lepping, R. P.: Using an MHD simulation to interpret the global context of a coronal mass ejection ob- served by two spacecraft, J. Geophys. Res., 108, 1272-+, doi: 10.1029/2002JA009760, 2003.
First imaging of corotating interaction regions using the STEREO spacecraft. A P Rouillard, 10.1029/2008GL033767Geophys. Res. Lett. 3510110Rouillard, A. P. et al.: First imaging of corotating interaction re- gions using the STEREO spacecraft, Geophys. Res. Lett., 35, doi:10.1029/2008GL033767, , L10110, 2008.
A Multispacecraft Analysis of a Small-Scale Transient Entrained by Solar Wind Streams. A P Rouillard, 10.1007/s11207-009-9329-6Solar Phys. 256Rouillard, A. P. et al.: A Multispacecraft Analysis of a Small-Scale Transient Entrained by Solar Wind Streams, Solar Phys., 256, 307-326, doi:10.1007/s11207-009-9329-6, 2009.
A numerical study of two interacting coronal mass ejections. J Schmidt, P Cargill, Annales Geophysicae. 22Schmidt, J. and Cargill, P.: A numerical study of two interacting coronal mass ejections, Annales Geophysicae, 22, 2245-2254, 2004.
Heliospheric 3d Structure and CME Propagation as Seen from SOHO: Recent Lessons for Space Weather Predictions. R Schwenn, 10.1016/S0273-1177(99)01025-XAdv. Space Res. 26Schwenn, R.: Heliospheric 3d Structure and CME Propagation as Seen from SOHO: Recent Lessons for Space Weather Predic- tions, Adv. Space Res., 26, 43-53, doi:10.1016/S0273-1177(99) 01025-X, 2000.
The Structure of Streamer Blobs. N R Sheeley, D D Lee, .-H Casto, K P Wang, Y.-M Rich, N B , 10.1088/0004-637X/694/2/1471Astrophys. J. 694Sheeley, N. R., Lee, D. D.-H., Casto, K. P., Wang, Y.-M., and Rich, N. B.: The Structure of Streamer Blobs, Astrophys. J., 694, 1471-1480, doi:10.1088/0004-637X/694/2/1471, 2009.
Heliospheric Images of the Solar Wind at Earth. Jr Sheeley, N R , 10.1086/526422Astrophys. J. 675Sheeley, Jr., N. R. et al.: Heliospheric Images of the Solar Wind at Earth, Astrophys. J., 675, 853-862, doi:10.1086/526422, 2008.
The Deceleration of an Interplanetary Transient from the Sun to 5 Au. S J Tappin, doi:10Solar Phys. 233Tappin, S. J.: The Deceleration of an Interplanetary Transient from the Sun to 5 Au, Solar Phys., 233, 233-248, doi:10.1007/ s11207-006-2065-2, 2006.
Forward Modeling of Coronal Mass Ejections Using STEREO/SECCHI Data. A Thernisien, A Vourlidas, R A Howard, 10.1007/s11207-009-9346-5Solar Phys. 256Thernisien, A., Vourlidas, A., and Howard, R. A.: Forward Model- ing of Coronal Mass Ejections Using STEREO/SECCHI Data, Solar Phys., 256, 111-130, doi:10.1007/s11207-009-9346-5, 2009.
Space Weather Modeling Framework: A new tool for the space science community. G Tóth, 10.1029/2005JA011126A12226J. Geophys. Res. 110Tóth, G. et al.: Space Weather Modeling Framework: A new tool for the space science community, J. Geophys. Res., 110, doi: 10.1029/2005JA011126, , A12226, 2005.
The Proper Treatment of Coronal Mass Ejection Brightness: A New Methodology and Implications for Observations. A Vourlidas, R A Howard, 10.1086/501122Astrophys. J. 642Vourlidas, A. and Howard, R. A.: The Proper Treatment of Coro- nal Mass Ejection Brightness: A New Methodology and Impli- cations for Observations, Astrophys. J., 642, 1216-1221, doi: 10.1086/501122, 2006.
Study of CME Propagation in the Inner Heliosphere: SOHO LASCO, SMEI and STEREO HI Observations of the. D F Webb, 10.1007/s11207-009-9351-8Events, Solar Phys. 256Webb, D. F. et al.: Study of CME Propagation in the Inner Helio- sphere: SOHO LASCO, SMEI and STEREO HI Observations of the January 2007 Events, Solar Phys., 256, 239-267, doi: 10.1007/s11207-009-9351-8, 2009.
Comprehensive Observations of a Solar Minimum Coronal Mass Ejection with the Solar Terrestrial Relations Observatory, Astrophys. B E Wood, R A Howard, S P Plunkett, D G Socker, 10.1088/0004-637X/694/2/707J. 694Wood, B. E., Howard, R. A., Plunkett, S. P., and Socker, D. G.: Comprehensive Observations of a Solar Minimum Coronal Mass Ejection with the Solar Terrestrial Relations Observatory, As- trophys. J., 694, 707-717, doi:10.1088/0004-637X/694/2/707, 2009.
| []
|
[
"Technologies for 3D Wafer Level Heterogeneous Integration",
"Technologies for 3D Wafer Level Heterogeneous Integration"
]
| [
"M J Wolf \nFraunhofer IZM\n\n",
"P Ramm \nFraunhofer IZM\n\n",
"A Klumpp \nFraunhofer IZM\n\n",
"H Reichl \nFraunhofer IZM\n\n"
]
| [
"Fraunhofer IZM\n",
"Fraunhofer IZM\n",
"Fraunhofer IZM\n",
"Fraunhofer IZM\n"
]
| []
| 3D integration is a fast growing field that encompasses different types of technologies. The paper addresses one of the most promising technologies, which uses Through Silicon Vias (TSV) for interconnecting stacked devices on wafer level to perform high density interconnects with a good electrical performance at the smallest form factor for 3D architectures. Fraunhofer IZM has developed a post front-end 3D integration process which allows stacking of functional and tested FE-devices e.g. sensors, ASICs on wafer level as well as a technology portfolio for passive silicon interposer with redistribution layers and TSV. | 10.1109/dtip.2008.4752966 | [
"https://arxiv.org/pdf/0805.0917v1.pdf"
]
| 12,061,530 | 0805.0917 | 973d6f703be9f03fe53f20f4c47616d15da1239d |
Technologies for 3D Wafer Level Heterogeneous Integration
9-11 April 2008
M J Wolf
Fraunhofer IZM
P Ramm
Fraunhofer IZM
A Klumpp
Fraunhofer IZM
H Reichl
Fraunhofer IZM
Technologies for 3D Wafer Level Heterogeneous Integration
9-11 April 2008*Berlin/**Munich Contact: [email protected]
3D integration is a fast growing field that encompasses different types of technologies. The paper addresses one of the most promising technologies, which uses Through Silicon Vias (TSV) for interconnecting stacked devices on wafer level to perform high density interconnects with a good electrical performance at the smallest form factor for 3D architectures. Fraunhofer IZM has developed a post front-end 3D integration process which allows stacking of functional and tested FE-devices e.g. sensors, ASICs on wafer level as well as a technology portfolio for passive silicon interposer with redistribution layers and TSV.
I. DRIVERS FOR 3D SYSTEM INTEGRATION
For several years, packaging has been driven by System in Package (SiP) solutions to meet the requirements of improved performance, miniaturization and cost reduction. This has led to a number of technologies, among which 3D system integration is one of the main potential drivers [1].
In general, the introduction of 3D integration technologies is driven by
• Form factor: Reduction of system volume, weight and footprint
• Performance: Improvement of integration density and reduction of interconnect length leading to improved transmission speed and reduced power consumption
• High volume low cost production: Reduction of processing costs for, e.g., mixed technologies
• New applications: e.g. ultra compact camera and detector systems and small wireless sensor nodes
In competition with Systems on Chip (SoC) solutions, 3D wafer level system integration enables the combination of different optimized production technologies. In addition, 3D integration is a possible solution to overcome the "wiring crisis" caused by signal propagation delay, both at board and at chip level, because it allows minimal interconnection lengths and the elimination of speed-limiting intra- and inter-chip interconnects. The introduction of very advanced microelectronic systems, such as 3D image processors, will be mainly driven by the enhancement of performance. The potential for low cost fabrication will be a further key aspect for future applications of 3D integration as well. Today, the fabrication of Systems on Chip (SoC) is based on embedding multiple technologies by monolithic integration. But there are serious disadvantages: the chip partition with the highest complexity drives the process technology, which leads to a "cost explosion" of the overall system. In contrast to this, suitable 3D integration technologies enable the combination of different optimized base technologies, e.g. MEMS, CMOS, etc., with the potential of low cost fabrication through high yield and a high degree of miniaturization.
II. ADVANCED 3D WAFER LEVEL SYSTEM INTEGRATION TECHNOLOGIES
THROUGH SILICON VIA (TSV) TECHNOLOGY
Wafer level packaging technologies, e.g. CSP with redistribution layers or flip chip mounted devices on wafer, are already introduced in high volume production. Currently, different technologies which use Through Silicon Vias (TSV) in active or passive silicon devices are in development to satisfy the need to increase performance and functionality while reducing size, power and cost of the system. Today, there are two mainstreams to realize TSVs. One is the implementation into the front-end CMOS process and the second is a post front-end (via-first/via-last) process. Both scenarios have pros and cons and the selection depends on application and infrastructure. The post front-end process allows the realization of compact 3D system architectures as a packaging task with complete, tested device wafers, independently of the device wafer manufacturer. Key process technologies enabling 3D architectures with TSV interconnects include:
• via formation with high aspect ratio,
• isolation, barrier and seed deposition,
• process integration: via-first vs. via-last,
• via filling: materials (e.g. poly Si, Cu, W, conductive polymer, metal paste) and techniques (e.g. electroplating, CVD, polymer coating),
• wafer level assembly: chip-to-chip, chip-to-wafer or wafer-to-wafer,
• bonding: soldering, direct Cu-Cu, adhesive, direct fusion.
The development of selected technical parameters for TSVs is given in Table 1. These data are provided by the ITRS and postulate volume production.
[…] industry and require a FE/BE infrastructure. That's why 3D-IC architectures are today still at the R&D stage, even in the largest IC companies, but they are in focus as a potential solution with a high priority. Many of the key technical issues and challenges for TSV interconnects are not fully resolved yet. There are also a number of alternative technologies, e.g. […].
SI INTERPOSER AND THIN CHIP INTEGRATION
Silicon interposers as carrier substrates for flip chip assembled dies are in special focus for applications which require high density wiring and interconnects, e.g. image sensor systems, memory, processors, etc.
In combination with Through Silicon Vias, these Si carriers allow the realization of double-side flip chip assembled modules with a very small form factor. In [2] an example is given for a silicon interposer with TSVs for an RF transceiver module. Thin film technology (polymer-copper) with integrated passive devices (R, L, C) was used to realize the wiring for the flip chip assembled transceiver.
The transceiver was solder bumped (SnAg) using ECD. The bottom side of the carrier has IO terminals for larger solder balls (preforms) which provide the interconnection to the printed circuit board. Fig. 1a and Fig. 1b show the wafer level assembled module and a cross-section. For the realization of the Through Silicon Vias (TSV) in silicon interposer or silicon device wafers, different approaches for the via etching (e.g. wet etching, DRIE or laser drilling) and via metallization (e.g. CVD W, CVD Cu, ECD-Cu, doped silicon or metal paste) can be performed. The selection of the technology is determined by system requirements (via size and density, aspect ratio, electrical resistivity, etc.). Metal filling of TSV using electroplating is especially suited for via sizes between 5 µm and 20 µm, which are in special focus for silicon interposers with TSV as a passive carrier substrate. After DRIE and sidewall isolation, the seed layer can be applied by CVD (e.g. Cu or W) or an adequate sputtering process, e.g. Ti/W:Cu. Fig. 2 shows a Through Silicon Via (diameter 15 µm) filled with Cu by electro deposition using a thin sputtered TiW/Cu seed layer [3,4]. Copper will also be deposited on the wafer front side during the via plating, which will be removed by a later etching step. Depending on the via sizes and depth (aspect ratio, ASR), a wafer thinning (grinding, CMP, etching) from the backside is required to get access to the metalized vias. The IO terminals on the backside are realized by standard thin film processing (polymer - Cu) followed by solder ball placement. A mechanical support of the interposer during backside processing can be provided by temporary bonding on a carrier substrate (e.g. silicon).
The vertical system integration (VSI) concept [5] is characterized by bonding and very high density vertical inter-chip wiring of stacked thinned device substrates (Si) with freely positioned Through-Silicon-Vias (TSV) by using standard silicon wafer processes (mainly back-end-of-line) (Fig. 3). The VSI-TSV [6] approach can provide the shortest and most plentiful z-axis connections. The TSV technology has various potential benefits: a) connection lengths can be as short as the thickness of a die, which has the potential to significantly reduce the average wire length of block-to-block interconnects by stacking functional blocks vertically instead of spreading them out horizontally, b) high-density, high-aspect-ratio connections are possible, which allow implementing complex, multi-chip systems entirely within silicon and c) RC delays of long, in-plane interconnects are avoided by bringing out-of-plane logic blocks electrically much closer together. The so-called "Inter-Chip-Via (ICV)-SLID concept" [7] is well suited as a chip-to-wafer stacking approach. The starting point is completely processed wafers. Known good dice of the top wafer are aligned and bonded to the known good dice of a bottom wafer after wafer-level testing, thinning and separation. This represents the only process step on chip level within the total vertical system integration sequence. The subsequent processing for vertical metallization is on wafer scale again. Basically, there is no need for additional process steps on stack level.
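As a back-of-the-envelope illustration of how the via geometry enters the requirements mentioned above (aspect ratio, electrical resistivity), the following sketch estimates the aspect ratio and the DC resistance of a fully Cu-filled via with the dimensions quoted for Fig. 2. The resistivity value and the assumption of a complete, void-free fill are placeholders and are not taken from the paper.

```python
import math

diameter_um = 18.0   # via diameter quoted for Fig. 2
depth_um = 70.0      # via depth quoted for Fig. 2
rho_cu = 1.7e-8      # ohm*m, assumed bulk copper resistivity at room temperature

aspect_ratio = depth_um / diameter_um
area_m2 = math.pi * (diameter_um * 1e-6 / 2.0) ** 2
resistance_ohm = rho_cu * (depth_um * 1e-6) / area_m2

print(f"aspect ratio      : {aspect_ratio:.1f}")
print(f"DC via resistance : {resistance_ohm * 1e3:.2f} mOhm")  # a few milliohms per via
```

A single Cu-filled TSV of this size contributes only a few milliohms, small compared with typical redistribution wiring and interconnect resistances.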
Fig. 3 VSI concept: W2W and D2W [7]
Fig. 4 Schematic ICV-SLID process
The ICV-SLID concept is based on the metal-metal bonding of top chips to a bottom wafer by very thin soldering pads (e.g. Cu/Sn) which provide both the electrical and the mechanical interconnect by solid-liquid interdiffusion (SLID). The ICV-SLID concept is a non-flip concept. The top surface of the chip to be added is the top surface after stacking it to the substrate. The Through-Si-Vias are fully processed - via formation and metallization - prior to the thinning sequence, which has the advantage that the later stacking of the separated known good dice to the bottom device wafer is the final step of the 3D integration process flow. As a fully modular concept, it allows the formation of multiple device stacks. Fig. 4 shows the schematic cross section of a vertically integrated circuit in accordance with the modular "back-to-face" concept, also indicating the stacking of a next level chip.
The first essential step of the ICV-SLID process flow is the formation of inter-chip vias. The via etch, lateral isolation and metal filling are performed on wafers with standard thickness, thus resulting in basically high-yield fabrication of inter-chip vias. The ICVs are connected to the contact wiring of the devices by standard metallization (aluminium or copper). The process sequence for the formation of the metalized inter-chip vias is as follows: The ICVs with typically 1-3 µm diameter are prepared on a fully processed and tested device wafer by dry etching (DRIE) through all passivation and multi-level dielectric layers followed by a deep silicon trench etch. For lateral via isolation, a highly conformal CVD of O3/TEOS-oxide is applied and the inter-chip vias are metalized by using MOCVD of tungsten (MOCVD-TiN as barrier layer) and etched back for metal plug formation. The lateral electrical connection of the tungsten-filled inter-chip vias with the uppermost metal level of the device is performed by standard Al metallization. The devices are now ready for wafer level test and selection.
The last process sequence performed on the top wafer with standard thickness is through-mask electroplating of Cu. The top wafer is then temporarily bonded to a handling wafer and thinned with very high uniformity using precision grinding, wet chemical spin etching and a final CMP step until the tungsten-filled vias are exposed from the rear. After deposition of dielectric layers for electrical isolation and opening to the tungsten-filled inter-chip vias, through-resist mask electroplating of Sn/Cu is applied. The surface is completely covered with the soldering metal, electrical contacts are formed by isolation trenches in the Cu/Sn layer and the remaining areas that are not used for electrical means serve as dummy areas for mechanical stabilization of the future stack. The bottom wafer is through-resist mask electroplated with Cu as the counterpart metal of the soldering metal system.
After dicing, the selected known good dice - stabilized with the handling substrates - are picked and placed at the bottom wafer by using chip-to-wafer bonding equipment with high throughput and an alignment accuracy of 10 µm. The mechanical bond and the electrical contact of the transferred chips are performed in one step by a soldering technology called Solid-Liquid Interdiffusion (SLID) [8]. During the soldering step, at a temperature of approximately 300 °C and applying pressure, the liquid Sn is interdiffused by Cu, finally forming the intermetallic compound (IMC) Cu3Sn.
This formed ε-phase is thermodynamically stable with a melting point above 600°C. Using appropriate film thicknesses, tin is consumed and the solidification is completed within a few minutes, leaving copper on both sides. Fig. 6 shows an FIB of a 3-D integrated test structure after soldering and removal of the handling substrate. The tungsten filled ICVs are interconnected by Al wiring to the metallization of the top device and CuSn metal system to the metallization of the bottom device.
IV. CONCLUSION
Besides the progress in silicon technology following Moore's law, there is an increasing demand for highly miniaturized complex system architectures. 3D integration based on wafer level approaches has a unique potential to meet the requirements of form factor, performance and cost reduction. The ICV stacking concept allows the combination of different devices (e.g. MEMS sensor, DSP, RF transceiver, power supply). The ICV-SLID technology and micro bump interconnects meet the requirements for device stacking with very high interconnection density (10^4 - 10^6 cm^-2). The combination of silicon interposer with TSV, thin chip integration and VSI opens the way to a new generation of future 3D device architectures.
V. ACKNOWLEDGEMENT
Fig. 1a WL assembled RF transceiver module with TSV-Si-interposer and integrated passive devices [2]. Fig. 1b Cross-section of RF-Si module [2]
Fig. 2 ECD-Cu filled TSV (18 µm diameter and 70 µm depth) [3]
Fig. 5 High aspect ratio W filled via
Fig. 6 Cross-section of interconnected devices with W filled TSV using SLID
Table 1: Key technical parameters for stacked architectures using TSV. **This applies for small diameter vias. The larger diameter vias will have a smaller aspect ratio.
The authors would like to thank the staff involved in the 3D and Wafer Level System Integration program at Fraunhofer IZM. Special thanks to R. Wieland, Dr. H. Oppermann and K. Zoschke
International Technical Roadmap of Semiconductors. TWG A&P. International Technical Roadmap of Semiconductors, TWG A&P, www.itrs.org
Low Cost Si Carrier -3D for high density modules", 3D Architecture for Semiconductor Integration and Packaging. F Binder, San FranciscoF. Binder; "Low Cost Si Carrier -3D for high density modules", 3D Architecture for Semiconductor Integration and Packaging; San Francisco; Oct.22-24. 2007
3D-Integration TSV-Technology. M J Wolf, P Ramm, A Klumpp, EMC 3D Technical Symposium. Munich/Eindhoven, NetherlandsM.J. Wolf, P. Ramm, A. Klumpp, "3D-Integration TSV- Technology", EMC 3D Technical Symposium, Munich/Eindhoven, Netherlands, Oct. 2007
3D-System Integration on Wafer Level" SEMI Technology Symposium. M J Wolf, P Ramm, H , International Packaging Strategy Symposium. Japan, TokyoM.J. Wolf, P. Ramm, H. Reichl, "3D-System Integration on Wafer Level" SEMI Technology Symposium 2007, International Packaging Strategy Symposium 2007 Semicon Japan, Tokyo
3D System Integration: Enabling Technologies and Applications. P Ramm, International Conference SSDM. YokohamaP. Ramm, "3D System Integration: Enabling Technologies and Applications", International Conference SSDM 2006, Yokohama (2006) 318-319
Method of making a vertically integrated circuit. P Ramm, R Buchner, US Patent. 5DEP. Ramm and R. Buchner, "Method of making a vertically integrated circuit", US Patent 5,766,984, Sep. 22, 1994 [DE]
Vertical system integration by using inter-chip vias and solid-liquid-interdiffusion bonding. P Ramm, A Klumpp, R Merkel, J Weber, R Wieland, Japanese Journal of Applied Physics. 437AP. Ramm, A. Klumpp, R. Merkel, J. Weber, R. Wieland, "Vertical system integration by using inter-chip vias and solid-liquid-interdiffusion bonding", Japanese Journal of Applied Physics Vol. 43, No. 7A (2004) 829-830
Chip to Wafer Stacking by using Through Silicon Vias and Solid Liquid Interdifusion. A Klumpp, 2nd International IEEE Workshop on 3D System integration. MunichA. Klumpp et.al. "Chip to Wafer Stacking by using Through Silicon Vias and Solid Liquid Interdifusion", 2nd International IEEE Workshop on 3D System integration, Munich(D) Oct 2007
Thermal Stress and Strain in Microelectronic Packaging. J H Lau, Van Nostrand ReinholdNew YorkJ.H. Lau. "Thermal Stress and Strain in Microelectronic Packaging" Van Nostrand Reinhold, New York, 1993.
Thermo-Mechanical Reliability of 3D-Integrated Microstructures in Stacked Silicon. B Wunderle, R Mrossko, O Wittler, E Kaulfersch, P Ramm, B Michel, H , MRS 2006 Fall Meeting. C. A. Bower, P. E. Garrou, P. Ramm, K. TakahashiBoston; Warrendale, PennsylvaniaMaterials Research Society970B. Wunderle, R. Mrossko, O. Wittler, E. Kaulfersch, P. Ramm, B. Michel, H. Reichl, "Thermo-Mechanical Reliability of 3D-Integrated Microstructures in Stacked Silicon", Mater. Res. Soc. Symp. Proc. 970, MRS 2006 Fall Meeting, Boston, edited by C. A. Bower, P. E. Garrou, P. Ramm, K. Takahashi, Materials Research Society, Warrendale, Pennsylvania (2007) 67-78.
Wafer-Level 3-D System Integration" in "3-D IC Integration: Technology and Applications. P Ramm, J M Wolf, B Wunderle, P.E. Garrou, P. Ramm and C.A. BowerWiley-VCHP. Ramm, J.M. Wolf and B. Wunderle. "Wafer-Level 3-D System Integration" in "3-D IC Integration: Technology and Applications", P.E. Garrou, P. Ramm and C.A. Bower, Editors, Wiley- VCH, 2008
| []
|
[
"The excitation operator approach to non-Markovian dynamics of quantum impurity models in the Kondo regime",
"The excitation operator approach to non-Markovian dynamics of quantum impurity models in the Kondo regime"
]
| [
"Pei Wang \nInstitute of Applied Physics\nZhejiang University of Technology\n310023HangzhouP. R. China\n"
]
| [
"Institute of Applied Physics\nZhejiang University of Technology\n310023HangzhouP. R. China"
]
| []
| We present a numerical method for studying the real time dynamics of a small interacting quantum system coupled to an infinite fermionic reservoir. By building an orthonormal basis in the operator space, we turn the Heisenberg equation of motion into a system of linear differential equations, which is then solved iteratively by constructing excitation operators. The application of our method depends on a layer structure in the operator space, which helps us to turn an infinite linear system into a series of small systems. We apply the method to investigate the decoherence dynamics of quantum impurity models in the Kondo regime with a non-Markovian reservoir. Taking full account of environmental back-actions and electron-electron interactions, we find that the coexistence of the Kondo correlation and a non-Markovian reservoir induces coherence ringings, which will be suppressed by either driving the system away from the particle-hole symmetric point or changing the reservoir into a Markovian one. | 10.1140/epjb/e2013-40702-2 | [
"https://arxiv.org/pdf/1209.3881v1.pdf"
]
| 119,183,702 | 1209.3881 | 615243454b1b63e3397438b3f0002a3ca0989fc3 |
The excitation operator approach to non-Markovian dynamics of quantum impurity models in the Kondo regime
18 Sep 2012
Pei Wang
Institute of Applied Physics
Zhejiang University of Technology
310023HangzhouP. R. China
The excitation operator approach to non-Markovian dynamics of quantum impurity models in the Kondo regime
18 Sep 2012
PACS numbers: 03.65.Yz, 02.60.Cb, 72.15.Qm, 73.23.-b
We present a numerical method for studying the real time dynamics of a small interacting quantum system coupled to an infinite fermionic reservoir. By building an orthonormal basis in the operator space, we turn the Heisenberg equation of motion into a system of linear differential equations, which is then solved iteratively by constructing excitation operators. The application of our method depends on a layer structure in the operator space, which helps us to turn an infinite linear system into a series of small systems. We apply the method to investigate the decoherence dynamics of quantum impurity models in the Kondo regime with a non-Markovian reservoir. Taking full account of environmental back-actions and electron-electron interactions, we find that the coexistence of the Kondo correlation and a non-Markovian reservoir induces coherence ringings, which will be suppressed by either driving the system away from the particle-hole symmetric point or changing the reservoir into a Markovian one.
I. INTRODUCTION
The decoherence of a small quantum system coupled to a fermionic bath has recently attracted much attention [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17], due to the fact that the fermionic bath manifests as an important source of decoherence in a wide range of electronic devices designed for solid-state quantum computers. In spite of considerable effort, a thorough understanding of the coherence dynamics of fermionic baths is still lacking in the non-Markovian regime. The non-Markovian dynamics is difficult to address theoretically, because the traditional Born-Markov approximation is invalid when the relaxation time of the environment is comparably long and then the back-action of the environment plays an important role in the dynamics of the system. To fully take into account the back-actions, the system-environment coupling must be treated in a non-perturbative way. In recent years, several non-perturbative approaches have been suggested to derive the master equation in the presence of strong back-actions [10][11][12][13][14][15][16][17], when the electron-electron interaction is absent or irrelevant to the non-Markovian dynamics.
The non-interacting models, however, fail to incorporate the physics in solid-state structures where the Coulomb interaction between electrons is greater than the electron kinetic energy. A well known paradigm is the Kondo effect, displayed in quantum dots in the Coulomb blockade regime. In the Kondo effect, the e-e interaction induces a strong correlation of electrons, which can only be understood from a many-particle point of view. It is therefore necessary to study the interplay of correlation physics and non-Markovian dynamics.
In this paper, we study the coherence dynamics of quantum dots in the Kondo regime coupled to a non-Markovian fermionic reservoir. The model is described by the Anderson impurity Hamiltonian, which can be written as
\hat{H} = \hat{H}_S + \hat{H}_B + \hat{H}_V .   (1)
* Electronic address: [email protected]
Here \hat{H}_S = \epsilon_d \sum_\sigma \hat{c}^\dagger_{0\sigma} \hat{c}_{0\sigma} + U \hat{c}^\dagger_{0\uparrow} \hat{c}_{0\uparrow} \hat{c}^\dagger_{0\downarrow} \hat{c}_{0\downarrow} is the system Hamiltonian, where \epsilon_d denotes the gate potential and U the Coulomb repulsive interaction. And \hat{H}_B = \sum_{k\sigma} \epsilon_k \hat{c}^\dagger_{k\sigma} \hat{c}_{k\sigma} is the Hamiltonian for a non-Markovian fermionic reservoir, which is set with a finite bandwidth and a sharp edge. The coupling Hamiltonian is given by
\hat{H}_V = \sum_{k\sigma} V_k ( \hat{c}^\dagger_{0\sigma} \hat{c}_{k\sigma} + h.c. ) .
To solve this problem, we develop the numerical excitation operator method on the basis of previous works by the author [18,19]. This method is designed for studying the real time dynamics of a strongly-correlated system driven out of equilibrium. As for quantum impurity models, it is distinguished from various approaches [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] devised recently by the fact that both the Coulomb interaction and the system-environment coupling are dealt with in their full extent and at the same time the reservoir is set to be infinite. These are difficult to realize simultaneously in present approaches.
The plan of the paper is the following. In Sec. II we introduce the excitation operator method. Its application in quantum impurity models is demonstrated in Sec. III. The results are discussed in Sec. IV. Especially, we will discuss the intrinsic correlation between the non-Markovian dynamics and the Kondo physics. In Sec. V, we discuss the suppression of the non-Markovian dynamics. We conclude with a summary and discussion of our method and results in Sec. VI.
II. THE EXCITATION OPERATOR METHOD
The excitation operator method is designed for solving the Heisenberg equation of motion:
\frac{d\hat{O}(t)}{dt} = i [\hat{H}, \hat{O}(t)] ,   (2)
where \hat{O} is the observable that we are interested in. We choose an orthonormal basis \{\hat{O}_i\} in the operator space which contains all the linear operators mapping the Hilbert space into itself. Any two basis operators satisfy
(\hat{O}_i, \hat{O}_j) = \delta_{i,j} ,   (3)
where the bracket denotes the inner product between two operators and is generally defined as
(\hat{O}_i, \hat{O}_j) := \frac{1}{N} \mathrm{Tr}[\hat{O}_i^\dagger \hat{O}_j] .   (4)
Here N is the normalization factor. An arbitrary observable can be decomposed into the linear combination of the basis operators. Then our target is to solve the Heisenberg equations of the basis operators. The Heisenberg equation is solved by constructing the excitation operators \hat{A}_i satisfying the eigen equations:
[\hat{H}, \hat{A}_i] = \lambda_i \hat{A}_i ,   (5)
where \hat{H} is the Hamiltonian of the system, and \lambda_i the excitation energy of \hat{A}_i. We suppose that the excitation operators are expressed by the basis operators as
\hat{A}_i = \sum_j A_{i,j} \hat{O}_j .   (6)
The coefficients matrix A needs to be determined. We then calculate the commutators between the Hamiltonian and the basis operators
[\hat{H}, \hat{O}_i] = \sum_j H_{j,i} \hat{O}_j ,   (7)
and obtain a matrix H. By substituting Eqs. 6 and 7 into Eq. 5, we find that A is in fact the unitary transformation to diagonalize the matrix H.
In principle, the elements of H can be written as
H_{i,j} = (\hat{O}_i, [\hat{H}, \hat{O}_j]) .   (8)
By using the definition of the inner product and the fact that the Hamiltonian is self-adjoint, we prove that H must be a Hermitian matrix and diagonalizable. The solution of the Heisenberg equation can be expressed by the coefficients A as
\hat{O}_i(t) = \sum_{j,i'} A^*_{j,i} e^{i \lambda_j t} A_{j,i'} \hat{O}_{i'} .   (9)
Here we use the fact that A is unitary.
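As an illustration of Eqs. (5)-(9) (not code from the paper), the following sketch carries out the construction numerically for a small toy problem: a random Hermitian Hamiltonian on a four-dimensional Hilbert space, with rescaled matrix units as the orthonormal operator basis. It builds the matrix H of Eq. (8), diagonalizes it, and checks that Eq. (9) reproduces the exact Heisenberg evolution.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D = 4                                    # Hilbert-space dimension of the toy system
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (H + H.conj().T) / 2                 # random Hermitian "Hamiltonian"

# Orthonormal operator basis under (A, B) = Tr(A^dagger B)/D: rescaled matrix units.
basis = []
for a in range(D):
    for b in range(D):
        E = np.zeros((D, D), dtype=complex)
        E[a, b] = np.sqrt(D)
        basis.append(E)
n = len(basis)

def inner(X, Y):
    return np.trace(X.conj().T @ Y) / D

# Superoperator matrix H_{i,j} = (O_i, [H, O_j]); it is Hermitian, as stated for Eq. (8).
Hsup = np.array([[inner(basis[i], H @ basis[j] - basis[j] @ H)
                  for j in range(n)] for i in range(n)])
assert np.allclose(Hsup, Hsup.conj().T)

lam, W = np.linalg.eigh(Hsup)
A = W.T          # rows of A are eigenvectors: excitation operators A_j = sum_i A[j, i] O_i

# Heisenberg evolution of O_0 from Eq. (9) versus direct matrix exponentiation.
t = 0.7
coeff = np.einsum('ji,j,jk->ik', A.conj(), np.exp(1j * lam * t), A)   # coeff[i, i']
O0_t_eq9 = sum(coeff[0, k] * basis[k] for k in range(n))
U = expm(1j * H * t)
O0_t_exact = U @ basis[0] @ U.conj().T
print("max deviation:", np.abs(O0_t_eq9 - O0_t_exact).max())
```

With a complete operator basis the deviation is at machine precision; restricting the basis, as required for large systems, is where the truncation scheme discussed below comes in.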
In practice, the dimension of H grows exponentially with the system size, so that directly diagonalizing it is impossible. However, there is a layer structure in the operator space, generated by the superoperator [Ĥ, ·]. This indicates that we could change the problem of diagonalizing H into the problem of diagonalizing a series of small matrices.
We consider the evolution of the basis operator \hat{O}_i in a small time interval τ. The solution of the Heisenberg equation, \hat{O}_i(τ), is mostly limited to a subspace of the whole operator space, generated by \hat{O}_i and [\hat{H}, \hat{O}_i]. As τ → 0, we can calculate \hat{O}_i(τ) in this subspace, the dimension of which is small. In other words, we express [\hat{H}, \hat{O}_i] as
[\hat{H}, \hat{O}_i] = \sum_j \tilde{H}_{j,i} \hat{O}_j ,   (10)
where \tilde{H}_{j,i} is non-zero only for j ≠ i. Obviously, \tilde{H} is a submatrix of H. As τ → 0, the solution can be written as
\lim_{\tau \to 0} \hat{O}_i(\tau) = \sum_{j,i'} \tilde{A}^*_{j,i} e^{i \tilde{\lambda}_j \tau} \tilde{A}_{j,i'} \hat{O}_{i'} ,   (11)
where \tilde{\lambda}_i and \tilde{A} are the eigenvalues and the unitary matrix of \tilde{H}, respectively. To calculate \hat{O}_i(t) at a finite time, we divide the time t into N small intervals of length τ = t/N, and have
\hat{O}_i(t) = e^{i\hat{H}\tau} \cdots e^{i\hat{H}\tau} \hat{O}_i e^{-i\hat{H}\tau} \cdots e^{-i\hat{H}\tau} .   (12)
In each time interval, the evolution of the basis operators is calculated according to Eq. 11. As τ → 0, the result will go to the solution of the Heisenberg equation. The point of this method is to utilize the layer structure in the operator space. That is, the whole basis can be gradually generated from a single basis operator \hat{O}_i by iteratively calculating the commutators between the Hamiltonian and the basis operators. But there is no such structure in the Hilbert space. This is why we choose to solve the Heisenberg equation, instead of the Schrödinger equation. The number of basis operators which need to be stored in calculating \hat{O}_i(Nτ) increases exponentially with N. This is a problem for numerical calculations. It can be solved by a truncation scheme: after obtaining \hat{O}_i(Nτ) one keeps only the M basis operators with the largest amplitudes. In this way, the number of the stored basis operators is fixed to be M and the computation time increases linearly with N. Suitable values for the parameter M depend upon the model. They should be decided numerically by varying M.
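A schematic of the bookkeeping behind this truncation scheme is sketched below for a toy spin chain, with basis operators represented as Pauli strings. Two simplifications relative to the method described here should be noted: the single-step evolution is approximated by a truncated Taylor (Baker-Campbell-Hausdorff) series rather than by diagonalizing the small matrix of Eq. (11), and the model (a transverse-field Ising chain) and all parameter values are placeholders. Only the layer-by-layer growth of the active basis and the retention of the M largest-amplitude operators are meant to be illustrated.

```python
import numpy as np

# Single-site Pauli multiplication table: (P, Q) -> (phase, R) with P*Q = phase * R.
_MUL = {
    ('I', 'I'): (1, 'I'), ('I', 'X'): (1, 'X'), ('I', 'Y'): (1, 'Y'), ('I', 'Z'): (1, 'Z'),
    ('X', 'I'): (1, 'X'), ('X', 'X'): (1, 'I'), ('X', 'Y'): (1j, 'Z'), ('X', 'Z'): (-1j, 'Y'),
    ('Y', 'I'): (1, 'Y'), ('Y', 'X'): (-1j, 'Z'), ('Y', 'Y'): (1, 'I'), ('Y', 'Z'): (1j, 'X'),
    ('Z', 'I'): (1, 'Z'), ('Z', 'X'): (1j, 'Y'), ('Z', 'Y'): (-1j, 'X'), ('Z', 'Z'): (1, 'I'),
}

def string_product(p, q):
    """Product of two Pauli strings (tuples of 'I','X','Y','Z'): returns (phase, string)."""
    phase, out = 1.0 + 0j, []
    for a, b in zip(p, q):
        f, r = _MUL[(a, b)]
        phase *= f
        out.append(r)
    return phase, tuple(out)

def commutator(h_terms, op):
    """[H, O] for H and O given as dicts {Pauli string: coefficient}."""
    out = {}
    for hp, hc in h_terms.items():
        for op_p, oc in op.items():
            f1, s1 = string_product(hp, op_p)
            f2, s2 = string_product(op_p, hp)
            for f, s in ((hc * oc * f1, s1), (-hc * oc * f2, s2)):
                out[s] = out.get(s, 0.0) + f
    return {s: c for s, c in out.items() if abs(c) > 1e-14}

def step(op, h_terms, tau, order=4):
    """One step O -> e^{iH tau} O e^{-iH tau} via a truncated BCH/Taylor series."""
    new, nested, fact = dict(op), dict(op), 1.0
    for k in range(1, order + 1):
        nested = commutator(h_terms, nested)
        fact *= k
        for s, c in nested.items():
            new[s] = new.get(s, 0.0) + (1j * tau) ** k / fact * c
    return new

def truncate(op, M):
    """Keep only the M basis operators with the largest amplitudes."""
    return dict(sorted(op.items(), key=lambda kv: abs(kv[1]), reverse=True)[:M])

# Toy model: 6-site transverse-field Ising chain, H = -J sum Z_i Z_{i+1} - g sum X_i.
L, J, g = 6, 1.0, 0.8
I6 = ('I',) * L
def local(site, p):
    return I6[:site] + (p,) + I6[site + 1:]
H = {}
for i in range(L - 1):
    s = list(I6); s[i] = 'Z'; s[i + 1] = 'Z'
    H[tuple(s)] = -J
for i in range(L):
    H[local(i, 'X')] = H.get(local(i, 'X'), 0.0) - g

# Evolve O = Z_0 and keep at most M strings after each step.
op, tau, M = {local(0, 'Z'): 1.0}, 0.05, 200
for _ in range(40):
    op = truncate(step(op, H, tau), M)
print("retained strings:", len(op), " largest amplitude:", max(abs(c) for c in op.values()))
```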
III. THE MODEL AND THE ORTHONORMAL BASIS OF THE OPERATOR SPACE
Our system consists of an impurity site, which is coupled, via particle-particle exchanges, to an electron reservoir. The system plus environment is described by the Hamiltonian in Eq. (1). To facilitate applying the excitation operator method, we re-express the Hamiltonian of the reservoir in real space in terms of an infinite chain:
\hat{H}_B = -g \sum_\sigma \sum_{i=1}^{\infty} ( \hat{c}^\dagger_{i\sigma} \hat{c}_{i+1,\sigma} + h.c. ) ,   (13)
where \hat{c}_{i\sigma} is the electron annihilation operator at site i. We take the size of the reservoir to be infinite. This avoids any coherence oscillation due to finite-size effects. The coupling Hamiltonian now becomes
\hat{H}_V = V \sum_\sigma ( \hat{c}^\dagger_{0\sigma} \hat{c}_{1\sigma} + h.c. ) ,   (14)
where the system is coupled only to the first site of the reservoir. The system Hamiltonian remains unchanged:
\hat{H}_S = \epsilon_d \sum_\sigma \hat{c}^\dagger_{0\sigma} \hat{c}_{0\sigma} + U \hat{n}_{0\uparrow} \hat{n}_{0\downarrow} .   (15)
We find an orthonormal basis in the operator space of this model by transforming it into a spin-3/2 chain by the Jordan-Wigner transformation.
The model contains a series of sites. The dimension of the local Hilbert space at each site is four, with the basis vectors |0⟩, |↑⟩, |↓⟩ and |↑↓⟩. Keeping in mind that the basis operators are orthogonal to each other, we choose the following sixteen 4 × 4 matrices as the local basis operators:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \begin{pmatrix} o_\alpha & 0 \\ 0 & o_\alpha \end{pmatrix}, \begin{pmatrix} o_\alpha & 0 \\ 0 & -o_\alpha \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & o_\alpha \\ o_\alpha & 0 \end{pmatrix} and \begin{pmatrix} 0 & -o_\alpha \\ o_\alpha & 0 \end{pmatrix}.
Here the 1 denotes the two-dimensional identity matrix, and o_\alpha with \alpha = x, y, z the three generators of the SU(2) algebra, which are
o_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad o_y = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad o_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} .   (16)
The sixteen matrices form a complete basis of the local operator space. A basis operator can be expressed as the tensor product of the local operators:
\hat{O} = \bigotimes_{i=0}^{\infty} \sigma_i ,   (17)
where \sigma_i denotes the local operator at site i. Now we explicitly define the inner product as
(\hat{O}_i, \hat{O}_j) := \frac{1}{4^L} \mathrm{Tr}[\hat{O}_i^\dagger \hat{O}_j] ,   (18)
where L denotes the total number of sites and is taken as L → ∞. It is easy to prove that the operators in Eq. 17 satisfy the orthogonal relations. The Hamiltonian is real, so that \tilde{H} is a real symmetric matrix. The diagonal elements of \tilde{H} are all zero: because the basis operators are either symmetric or antisymmetric, the projection of [\hat{H}, \hat{O}_i] onto \hat{O}_i is zero.
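The orthonormality of the local basis can be verified directly. The short check below assembles the sixteen 4 × 4 matrices from the 2 × 2 blocks listed above (in an ordering chosen here for convenience) and confirms that they are orthonormal under the single-site version of the inner product of Eq. (18), i.e. Tr[Ô_i† Ô_j]/4.

```python
import numpy as np

I2 = np.eye(2)
ox = np.array([[0, 1], [1, 0]])
oy = np.array([[0, -1], [1, 0]])   # note: the real form used in the text, not i*sigma_y
oz = np.array([[1, 0], [0, -1]])
Z2 = np.zeros((2, 2))

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks [[a, b], [c, d]]."""
    return np.block([[a, b], [c, d]])

basis = [block(I2, Z2, Z2, I2), block(I2, Z2, Z2, -I2)]
for o in (ox, oy, oz):
    basis += [block(o, Z2, Z2, o), block(o, Z2, Z2, -o)]
basis += [block(Z2, I2, I2, Z2), block(Z2, -I2, I2, Z2)]
for o in (ox, oy, oz):
    basis += [block(Z2, o, o, Z2), block(Z2, -o, o, Z2)]

assert len(basis) == 16
gram = np.array([[np.trace(A.conj().T @ B) / 4 for B in basis] for A in basis])
print("orthonormal:", np.allclose(gram, np.eye(16)))
```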
The Hamiltonian can be expressed in the basis operators by the Jordan-Wigner transformation, in which a phase factor is attached to each site to produce the anticommutative field operators. They are
\hat{c}^\dagger_{i\uparrow} = \prod_{j<i} \otimes \begin{pmatrix} o_z & 0 \\ 0 & -o_z \end{pmatrix}_j \otimes \left[ \frac{1}{2} \begin{pmatrix} o_x & 0 \\ 0 & o_x \end{pmatrix}_i + \frac{1}{2} \begin{pmatrix} o_y & 0 \\ 0 & o_y \end{pmatrix}_i \right] ,   (19)
and
\hat{c}^\dagger_{i\downarrow} = \prod_{j<i} \otimes \begin{pmatrix} o_z & 0 \\ 0 & -o_z \end{pmatrix}_j \otimes \left[ \frac{1}{2} \begin{pmatrix} 0 & -o_z \\ o_z & 0 \end{pmatrix}_i + \frac{1}{2} \begin{pmatrix} 0 & o_z \\ o_z & 0 \end{pmatrix}_i \right] ,   (20)
where $i, j = 0, 1, 2, \dots$ denote the sites and $\begin{pmatrix} o_z & 0 \\ 0 & -o_z \end{pmatrix}_j$ is the phase factor at site $j$. The hopping term in the Hamiltonian can then be expressed as
$$\sum_{\sigma} \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{i+1,\sigma} + \mathrm{h.c.} \right) = \frac{1}{2} \begin{pmatrix} o_x & 0 \\ 0 & -o_x \end{pmatrix}_i \otimes \begin{pmatrix} o_x & 0 \\ 0 & o_x \end{pmatrix}_{i+1} - \frac{1}{2} \begin{pmatrix} o_y & 0 \\ 0 & -o_y \end{pmatrix}_i \otimes \begin{pmatrix} o_y & 0 \\ 0 & o_y \end{pmatrix}_{i+1} + \frac{1}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}_i \otimes \begin{pmatrix} 0 & o_z \\ o_z & 0 \end{pmatrix}_{i+1} - \frac{1}{2} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}_i \otimes \begin{pmatrix} 0 & -o_z \\ o_z & 0 \end{pmatrix}_{i+1}. \tag{21}$$
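The role of the string of phase factors can be illustrated with the standard Jordan-Wigner construction. The sketch below uses the textbook convention for a few ordered spin orbitals rather than the paper's exact block-matrix convention of Eqs. (19)-(21); it only verifies that the string of σ_z factors produces operators obeying the canonical anticommutation relations.

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

# Standard Jordan-Wigner construction for n_orb ordered fermionic modes
# (e.g., the spin orbitals of a short chain). The sigma_z string plays the role
# of the phase factors in Eqs. (19)-(20), though with a different convention.
def jw_annihilation(k, n_orb):
    sz = np.diag([1., -1.])
    sm = np.array([[0., 1.], [0., 0.]])   # lowers the local occupation |1> -> |0>
    I2 = np.eye(2)
    ops = [sz] * k + [sm] + [I2] * (n_orb - k - 1)
    return kron_all(ops)

n_orb = 4   # e.g. two lattice sites times two spin species
c = [jw_annihilation(k, n_orb) for k in range(n_orb)]

# Verify {c_j, c_k^dag} = delta_jk and {c_j, c_k} = 0
for j in range(n_orb):
    for k in range(n_orb):
        acom = c[j] @ c[k].conj().T + c[k].conj().T @ c[j]
        assert np.allclose(acom, np.eye(2 ** n_orb) * (j == k))
        assert np.allclose(c[j] @ c[k] + c[k] @ c[j], 0)
print("Jordan-Wigner operators satisfy the canonical anticommutation relations")
```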
The system Hamiltonian becomes
$$\hat{H}_S = \epsilon_d + \frac{U}{4} - \frac{2\epsilon_d + U}{4} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}_0 - \frac{2\epsilon_d + U}{4} \begin{pmatrix} o_z & 0 \\ 0 & o_z \end{pmatrix}_0 + \frac{U}{4} \begin{pmatrix} o_z & 0 \\ 0 & -o_z \end{pmatrix}_0. \tag{22}$$
After solving the Heisenberg equation, we need to calculate the expectation values of the basis operators with respect to the initial state. This is done by transforming the basis operators into Majorana operators, defined as
$$\hat{\gamma}^{\sigma\pm}_i = \hat{c}^{\dagger}_{i\sigma} \pm \hat{c}_{i\sigma}. \tag{23}$$
The sixteen local operators at site $i$ are in one-to-one correspondence with the following products of Majorana operators: $1$, $\gamma^{\uparrow+}_i$, $\gamma^{\uparrow-}_i$, $\gamma^{\downarrow+}_i$, $\gamma^{\downarrow-}_i$, $\gamma^{\uparrow+}_i \gamma^{\uparrow-}_i$, $\gamma^{\uparrow+}_i \gamma^{\downarrow+}_i$, $\gamma^{\uparrow+}_i \gamma^{\downarrow-}_i$, $\gamma^{\uparrow-}_i \gamma^{\downarrow+}_i$, $\gamma^{\uparrow-}_i \gamma^{\downarrow-}_i$, $\gamma^{\downarrow+}_i \gamma^{\downarrow-}_i$, $\gamma^{\uparrow+}_i \gamma^{\uparrow-}_i \gamma^{\downarrow+}_i$, $\gamma^{\uparrow+}_i \gamma^{\uparrow-}_i \gamma^{\downarrow-}_i$, $\gamma^{\uparrow+}_i \gamma^{\downarrow+}_i \gamma^{\downarrow-}_i$, $\gamma^{\uparrow-}_i \gamma^{\downarrow+}_i \gamma^{\downarrow-}_i$, and $\gamma^{\uparrow+}_i \gamma^{\uparrow-}_i \gamma^{\downarrow+}_i \gamma^{\downarrow-}_i$.
The correspondence is not an exact identity, since the product of an odd number of Majorana operators, such as $\gamma^{\downarrow+}_i$, contains phase factors at the sites $j < i$. However, we can design an iterative algorithm that transforms a basis operator into a product of Majorana operators. The algorithm begins from the largest site at which the local operator is not the identity and sweeps the chain in descending order.
After the transformation, the expectation value is calculated using Wick's theorem. The contractions of a pair of Majorana operators at zero temperature are found to be
$$\left\langle \gamma^{\sigma+}_i \gamma^{\sigma+}_j \right\rangle = -\left\langle \gamma^{\sigma-}_i \gamma^{\sigma-}_j \right\rangle = \delta_{i,j}, \tag{24}$$
and
$$\left\langle \gamma^{\sigma+}_i \gamma^{\sigma-}_j \right\rangle = \frac{-2 \sin(|i-j|\pi/2)}{|i-j|\pi}, \tag{25}$$
for odd $|i - j|$.
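For completeness, the zero-temperature contractions of Eqs. (24)-(25) and the recursive Wick expansion can be encoded directly. The snippet below is a schematic implementation assuming a half-filled reservoir and no contractions between different spin species; the labelling scheme and the four-operator example are illustrative only.

```python
import numpy as np

# Pair contractions of the Majorana operators, Eqs. (24)-(25). Each operator is
# labelled (site, spin, s) with s = +1 for gamma^{sigma+} and s = -1 for
# gamma^{sigma-}; a half-filled reservoir at zero temperature is assumed.
def pair_contraction(a, b):
    (i, sa, pa), (j, sb, pb) = a, b
    if sa != sb:
        return 0.0                      # different spin species do not contract (assumption)
    if pa == pb:                        # <g+ g+> = delta_ij, <g- g-> = -delta_ij
        return float(i == j) * (1.0 if pa == +1 else -1.0)
    if i == j:
        return 0.0                      # <g+_i g-_i> = 1 - 2n vanishes at half filling
    sign = 1.0 if pa == +1 else -1.0    # <g-_i g+_j> = -<g+_i g-_j> for i != j
    return sign * (-2.0 * np.sin(abs(i - j) * np.pi / 2) / (abs(i - j) * np.pi))

# Wick's theorem for an even product of Majorana operators (recursive expansion)
def wick(ops):
    if not ops:
        return 1.0
    first, rest = ops[0], ops[1:]
    total = 0.0
    for k, other in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        total += (-1) ** k * pair_contraction(first, other) * wick(remaining)
    return total

# Example: a four-operator expectation value built from the contractions above
ops = [(0, "up", +1), (1, "up", -1), (2, "up", +1), (3, "up", -1)]
print(wick(ops))
```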
IV. INTERACTION-INDUCED COHERENCE RINGING IN A NON-MARKOVIAN ENVIRONMENT
We study the coherence dynamics of the system after its coupling to the reservoir is switched on at time t = 0. The reduced density matrix of the system is obtained by calculating the expectation values of the sixteen local operators and is formally written as $\sum_{i,j} \rho_{ij} |i\rangle\langle j|$, where $i, j = 1, 2, 3, 4$ and the corresponding states are $|0\rangle$, $|\uparrow\rangle$, $|\downarrow\rangle$ and $|\uparrow\downarrow\rangle$, respectively. Our method is distinguished from the master-equation approach by the fact that no approximation is made in solving the Heisenberg equation, so the environmental back-actions are fully taken into account.
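The reconstruction of the reduced density matrix from operator expectation values follows the standard relation for an orthogonal Hermitian operator basis. The sketch below uses the generic Pauli ⊗ Pauli basis for a single four-dimensional site instead of the paper's specific sixteen operators; this substitution is an assumption made only to keep the example self-contained.

```python
import numpy as np

# For any Hermitian basis {B_k} of 4x4 matrices with Tr[B_j B_k] = 4*delta_jk,
# rho = (1/4) * sum_k <B_k> B_k.  Here the Pauli (x) Pauli basis is used; the
# paper's sixteen local operators play the same role.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]
basis = [np.kron(p, q) for p in paulis for q in paulis]

rho_true = np.diag([0.1, 0.4, 0.3, 0.2]).astype(complex)    # any valid 4x4 density matrix
expvals = [np.trace(rho_true @ b) for b in basis]            # the "measured" expectation values
rho_rec = sum(v * b for v, b in zip(expvals, basis)) / 4

assert np.allclose(rho_rec, rho_true)
print("density matrix reconstructed from the sixteen expectation values")
```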
We set the reservoir at zero temperature, avoiding the thermal fluctuations that would suppress the Kondo resonance. The Fermi energy of the reservoir is set as the energy zero. We employ the level broadening at the impurity site, Γ, generally defined as Γ = V²/g [28], as the energy unit, as is usual in studies of the Anderson impurity model. The time unit is set to 1/Γ (the convention ℏ = 1 is used throughout the paper).
At the particle-hole symmetric point, i.e., ε_d = −U/2, a large U suppresses charge fluctuations on the impurity site, which is then singly occupied. The system is in the Kondo regime and can be described by a single spin. We suppose that its initial state is prepared as a superposition of the spin-up and spin-down states, i.e., α|↑⟩ + β|↓⟩. In decoherence theory with a Markovian environment, the coherence of the initial state is lost exponentially after the spin is coupled to the reservoir. However, the real environment in experiments is usually not Markovian, and the back-action from the environment on the spin cannot be neglected. Here we consider a non-Markovian reservoir by setting the bandwidth of the reservoir to be comparable with the level broadening at the impurity, i.e., g ∼ Γ.
We first set the interaction U to zero to compare our result with the exact solution, obtained by exact diagonalization of the single-particle eigenmodes. The elements of the reduced density matrix are shown in Fig. 1. The result of the excitation operator method agrees well with the exact solution up to the time at which the density matrix has relaxed to its equilibrium value. This demonstrates that our method is a powerful tool for studying the real-time dynamics of a quantum system coupled to a non-interacting reservoir. The errors can be controlled by letting τ → 0 and M → ∞. The method therefore provides a reliable way of understanding the dynamics of decoherence, especially in strongly correlated systems, where no analytical method is available.
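For the U = 0 benchmark, the non-interacting problem reduces to single-particle physics. The sketch below illustrates only one ingredient of such a benchmark: the single-particle return amplitude of the impurity coupled to a finite tight-binding chain, obtained by exact diagonalization. It does not reproduce the full density-matrix calculation of Fig. 1, and the finite chain length is an assumption made purely for illustration.

```python
import numpy as np

# Schematic U = 0 ingredient: single-particle Hamiltonian of the impurity (site 0)
# coupled to a finite tight-binding chain, as in Eqs. (13)-(15) with U = 0.
# The return amplitude G_00(t) = <0| exp(-iHt) |0> governs the decay of the
# impurity amplitude; L_bath is finite here only for illustration.
L_bath, g, V, eps_d = 200, 1.0, 1.0, 0.0
dim = L_bath + 1
H = np.zeros((dim, dim))
H[0, 0] = eps_d
H[0, 1] = H[1, 0] = V                      # impurity-reservoir coupling, Eq. (14)
for i in range(1, dim - 1):
    H[i, i + 1] = H[i + 1, i] = -g         # reservoir chain, Eq. (13)

evals, evecs = np.linalg.eigh(H)
times = np.linspace(0.0, 10.0, 101)        # in units of 1/Gamma with Gamma = V**2/g
G00 = np.array([(evecs[0, :] * np.exp(-1j * evals * t) * evecs[0, :].conj()).sum()
                for t in times])
print(abs(G00[:5]))                        # |G_00(t)| decays roughly on the scale 1/Gamma
```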
In the absence of the e-e interaction, the off-diagonal element, i.e., the coefficient of the term |↑⟩⟨↓|, decays exponentially, as predicted by decoherence theory. This is the signature of Markovian dynamics: the back-action of the environment is strongly suppressed. This is no longer the case for U ≫ Γ (see the top panel of Fig. 2). In the presence of a strong interaction, the exponential decay is replaced by oscillations, and intermediate quasi-steady regimes are observed. The decoherence time increases significantly as U increases. We then analyze the time evolution of the von Neumann entropy at different U (see the bottom panel of Fig. 2). As is well known, the equilibrium value of the entropy is 2 ln 2 [39]. For U = 0, the entropy increases monotonically from zero towards its equilibrium value, corresponding to the exponential decay of the off-diagonal elements. For U ≫ Γ, however, we find strong oscillations of the entropy, which are the signal of non-Markovian dynamics. The coherence of the initial state is lost and recovered repeatedly, similar to the spin-echo effect. However, in our model the purification of the state arises naturally from the e-e interaction, and no external driving field is needed, in contrast to spin-echo or dynamical-decoupling techniques [40]. This provides a new perspective on protecting quantum states.
The coherence ringing is an effect induced by the e-e interactions. It must be distinguished from the oscillations of coherence observed previously in non-Markovian environments [11,12], where the e-e interaction is absent. Without interactions, the electrons move independently, and the non-Markovian dynamics can be understood in the single-particle picture. In the presence of a strong interaction, however, the single-particle picture breaks down due to the correlations between electrons. For U ≫ Γ, the dissipation process is controlled by the Kondo correlation: the steady state as time goes to infinity is a spin singlet. The correlation between the spin in the system and the spins in the reservoir is built up in the course of time, accompanied by the loss of coherence in the system. The non-Markovian coherence dynamics is in fact related to the dynamics of spin correlations in a Kondo model.
The dynamics of decoherence is found to be insensitive to the choice of initial state. As an example, we compare the results for two different initial states as a function of time in Fig. 2. The results are exactly the same, which reflects the spin-flip symmetry of the Hamiltonian. The correlation between the non-Markovian dynamics and the Kondo physics is therefore universal for initial states of the form α|↑⟩ + β|↓⟩.
The period of the coherence ringing is obtained from the numerical result and plotted as a function of U in Fig. 3. In a large regime of U, the oscillation period is found to be proportional to ln(U/Γ). As the interaction strength increases, the period decreases in a logarithmic way. That the oscillation period depends on U can be understood by studying the energy levels of the system. We emphasize that the coherence ringing occurs when the system is at the particle-hole symmetric point, i.e., ε_d = −U/2. At the symmetric point, the Hamiltonian of the system reduces to
$$\hat{H}_S = \frac{U}{4} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \tag{26}$$
We see that the energy levels of the system are degenerate. The ground level is two-fold degenerate, containing the spin-up and spin-down states. It is separated by a gap of U/2 from the excited level, which is also two-fold degenerate, containing the vacuum state and the doubly occupied state. The energy gap of U/2 protects the sub-Hilbert space containing the states $|\uparrow\rangle$ and $|\downarrow\rangle$, and thereby protects the quantum coherence of the initial state. The gap is critical to the appearance of the coherence ringing, which is pronounced only when the gap is large.
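The level structure invoked in this argument can be made explicit by listing the eigenvalues of the isolated impurity Hamiltonian in the occupation basis. The snippet below is a trivial illustration of the doublet structure and of its splitting away from the symmetric point; it uses the isolated-dot Hamiltonian of Eq. (15), whose spectrum coincides with Eq. (26) up to a constant shift.

```python
import numpy as np

# Levels of the isolated impurity in the occupation basis {|0>, |up>, |down>, |up,down>}:
# H_S = eps_d * (n_up + n_down) + U * n_up * n_down.  At the particle-hole symmetric
# point eps_d = -U/2 the spectrum consists of two doublets separated by U/2.
def impurity_levels(eps_d, U):
    return np.sort(np.array([0.0, eps_d, eps_d, 2 * eps_d + U]))

U = 8.0
print(impurity_levels(-U / 2, U))   # [-4, -4, 0, 0]: degenerate doublets, gap U/2
print(impurity_levels(-3.0, U))     # away from the symmetric point the upper doublet splits
```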
V. SUPPRESSION OF THE COHERENCE RINGING
We attribute the coherence ringing to the coexistence of the Kondo correlation and the non-Markovian reservoir. It should therefore disappear if either of the two conditions is broken. This is verified by the numerical results (see Figs. 4 and 5). In Fig. 4, we plot the time evolution of the entropy at different gate potentials. As the system is driven away from the particle-hole symmetric point, the coherence ringing is suppressed. This is due to the suppression of the Kondo resonance as the system becomes depleted or doubly occupied. The suppression of the coherence ringing can also be understood from the splitting of the excited level. The coherence ringing is distinguished from a simple Rabi oscillation because there are in total four levels in the system. By driving the system away from the symmetric point, we break the degeneracy of the excited level, which splits into the vacuum level and the doubly occupied level. This is related to the disappearance of the coherence ringing. To our knowledge, this is the first time that the coherence in the ground state has been found to depend on the degeneracy of the excited level.
The e-e interaction induces a coherence ringing only if the environment is non-Markovian, i.e., g ∼ Γ. In Fig. 5, we show the entropy functions at different values of g, the bandwidth of the reservoir. In the case g = Γ, the non-Markovian dynamics is significant: the coupling between neighboring sites in the reservoir is the same as the coupling between the system and the reservoir, and the back-action is strong since the relaxation time of the system is comparable with that of the reservoir. When the interaction is much larger than the bandwidth, i.e., U ≫ g, the coherence ringing appears. If we keep the interaction fixed while increasing the bandwidth g, the coherence ringing is suppressed, and for g ∼ U ≫ Γ the oscillation is totally destroyed. In the limit of an infinite band, i.e., the Markovian limit, the entropy function recovers the behavior of the non-interacting model: it increases monotonically towards the steady value 2 ln 2. The relaxation time, however, is now controlled by the interaction U instead of the impurity level width Γ. We thus see the disappearance of the coherence ringing as the reservoir changes gradually to the Markovian limit.
VI. CONCLUSIONS
We have presented a numerical excitation operator method for the coherence dynamics of an interacting quantum system coupled to a fermionic reservoir. Compared with existing analytical approaches, it takes full account of the Coulomb interaction between electrons and of the system-environment coupling, and thereby provides new information on the interplay of electron-electron correlations and environmental back-actions. At the same time, our method treats an infinite reservoir by utilizing the layer structure of the operator space, and thereby avoids the finite-size effects that afflict existing numerical methods. We have applied this method to an interacting quantum dot coupled to a fermionic reservoir, discovering the coherence ringing induced by the e-e interaction in the Kondo regime. The coherence ringing is a many-body effect and can only be observed in the presence of both the Kondo resonance and a non-Markovian reservoir. It is suppressed when the system is tuned away from the particle-hole symmetric point or when the reservoir approaches the Markovian limit.
Although we have concentrated in this paper on the dynamics of decoherence in quantum impurity models, the method that we presented can be applied to investigate the real-time dynamics of a wide range of models describing a quantum system coupled to spin, fermionic, and bosonic reservoirs.
FIG. 1: Time evolution of the elements of the reduced density matrix at the symmetric point for U = 0. The results of the excitation operator method, represented by the black circles, are compared with the exact solution, represented by the various types of lines. The initial state is set to be (1/√2)(|↑⟩ + |↓⟩). The coupling in the reservoir is set to be g = Γ.
FIG. 2: The off-diagonal element and the von Neumann entropy of the density matrix as a function of time at different U (top: off-diagonal element, bottom: entropy). We choose the initial state of the system to be [...]; the initial state [...]|↓⟩ is also studied at U = 16Γ and is represented by the black circles for comparison.
FIG. 3: The period of the coherence ringing obtained from the numerical result, represented by black circles, as a function of U. The data are fitted to a linear function, represented by the solid line.
FIG. 4: The time evolution of the entropy at different ε_d (ε_d = −U/2, −3Γ, −2Γ, and 0). The interaction is set to be U = 8Γ; the particle-hole symmetric point is then at ε_d = −4Γ.
FIG. 5: The time evolution of the entropy at the particle-hole symmetric point for different g. The interaction is set to be U = 8Γ. The bandwidth of the reservoir g → ∞ corresponds to the Markovian limit.
[1] J. Restrepo, R. Chitra, S. Camalet, and É. Dupont, Phys. Rev. B 84, 245109 (2011).
[2] E. Paladino, L. Faoro, G. Falci, and R. Fazio, Phys. Rev. Lett. 88, 228304 (2002).
[3] A. Grishin, I. V. Yurkevich, and I. V. Lerner, Phys. Rev. B 72, 060509(R) (2005).
[4] R. de Sousa, K. B. Whaley, F. K. Wilhelm, and J. von Delft, Phys. Rev. Lett. 95, 247006 (2005).
[5] D. Segal, D. R. Reichman, and A. J. Millis, Phys. Rev. B 76, 195316 (2007).
[6] F. Marquardt, J. von Delft, R. Smith, and V. Ambegaokar, Phys. Rev. B 76, 195331 (2007).
[7] I. Neder and F. Marquardt, New Journal of Physics 9, 112 (2007).
[8] R. M. Lutchyn, L. Cywiński, C. P. Nave, and S. Das Sarma, Phys. Rev. B 78, 024508 (2008).
[9] N. Yamada, A. Sakuma, and H. Tsuchiura, J. Appl. Phys. 101, 09C110 (2007).
[10] F. Marquardt, Phys. Rev. B 74, 125319 (2006).
[11] M. W. Y. Tu and W. M. Zhang, Phys. Rev. B 78, 235311 (2008).
[12] W. M. Zhang, P. Y. Lo, H. N. Xiong, M. W. Y. Tu, and F. Nori, arXiv:1206.4490.
[13] W. Shi, X. Zhao, and T. Yu, arXiv:1203.2219.
[14] M. W. Y. Tu, W. M. Zhang, and J. Jin, Phys. Rev. B 83, 115318 (2011).
[15] C. U. Lei and W. M. Zhang, Phys. Rev. A 84, 052116 (2011).
[16] P. W. Chen, C. C. Jian, and H. S. Goan, Phys. Rev. B 83, 115439 (2011).
[17] D. Marcos, C. Emary, T. Brandes, and R. Aguado, Phys. Rev. B 83, 125426 (2011).
[18] P. Wang, AIP Advances 2, 012194 (2012).
[19] P. Wang, arXiv:1207.1861.
[20] T. L. Schmidt, P. Werner, L. Mühlbacher, and A. Komnik, Phys. Rev. B 78, 235110 (2008).
[21] M. Schiró and M. Fabrizio, Phys. Rev. B 79, 153302 (2009).
[22] P. Werner, T. Oka, and A. J. Millis, Phys. Rev. B 79, 035320 (2009).
[23] L. Mühlbacher and E. Rabani, Phys. Rev. Lett. 100, 176403 (2008).
[24] F. B. Anders and A. Schiller, Phys. Rev. Lett. 95, 196801 (2005).
[25] F. B. Anders and A. Schiller, Phys. Rev. B 74, 245113 (2006).
[26] L. G. G. V. Dias da Silva, F. Heidrich-Meisner, A. E. Feiguin, C. A. Büsser, G. B. Martins, E. V. Anda, and E. Dagotto, Phys. Rev. B 78, 195317 (2008).
[27] E. Boulat, H. Saleur, and P. Schmitteckert, Phys. Rev. Lett. 101, 140601 (2008).
[28] F. Heidrich-Meisner, A. E. Feiguin, and E. Dagotto, Phys. Rev. B 79, 235336 (2009).
[29] A. Feiguin, P. Fendley, M. P. A. Fisher, and C. Nayak, Phys. Rev. Lett. 101, 236801 (2008).
[30] H. Schoeller, Eur. Phys. J. Special Topics 168, 179 (2009).
[31] C. Karrasch, S. Andergassen, M. Pletyukhov, D. Schuricht, L. Borda, V. Meden, and H. Schoeller, EPL 90, 30003 (2010).
[32] S. Andergassen, M. Pletyukhov, D. Schuricht, H. Schoeller, and L. Borda, Phys. Rev. B 83, 205103 (2011).
[33] M. Pletyukhov, D. Schuricht, and H. Schoeller, Phys. Rev. Lett. 104, 106801 (2010).
[34] D. M. Kennes, S. G. Jakobs, C. Karrasch, and V. Meden, Phys. Rev. B 85, 085113 (2012).
[35] D. M. Kennes and V. Meden, Phys. Rev. B 85, 245101 (2012).
[36] A. Hackl and S. Kehrein, Phys. Rev. B 78, 092303 (2008).
[37] A. Hackl and S. Kehrein, J. Phys.: Condens. Matter 21, 015601 (2009).
[38] P. Wang and S. Kehrein, Phys. Rev. B 82, 125124 (2010).
[39] R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. 80, 395 (2008).
[40] L. Viola and S. Lloyd, Phys. Rev. A 58, 2733 (1998); M. Ban, J. Mod. Opt. 45, 2315 (1998); P. Zanardi, Phys. Lett. A 258, 77 (1999).
| []
|
[
"Superconductivity induced by structural reorganization in the electron-doped cuprate Nd 2−x Ce x CuO 4",
"Superconductivity induced by structural reorganization in the electron-doped cuprate Nd 2−x Ce x CuO 4"
]
| [
"Anita Guarino \nDipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n\nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n",
"Carmine Autieri \nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n\nInternational Research Centre Magtop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/4602668WarsawPoland\n",
"Pasquale Marra \nGraduate School of Mathematical Sciences\nThe University of Tokyo\n3-8-1 Komaba153-8914Meguro, TokyoJapan\n\nDepartment of Physics, and Research and Education Center for Natural Sciences\nKeio University\n4-1-1 Hiyoshi223-8521YokohamaKanagawaJapan\n",
"Antonio Leo \nDipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n\nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n\nNANO_MATES Research Center\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n",
"Gaia Grimaldi \nDipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n\nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n",
"Adolfo Avella \nDipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n\nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n\nCNISM di Salerno\nUniversità degli Studi di Salerno\n84084Fisciano (Salerno)Italy\n",
"Angela Nigro \nDipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly\n\nConsiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy\n"
]
| [
"Dipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy",
"International Research Centre Magtop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/4602668WarsawPoland",
"Graduate School of Mathematical Sciences\nThe University of Tokyo\n3-8-1 Komaba153-8914Meguro, TokyoJapan",
"Department of Physics, and Research and Education Center for Natural Sciences\nKeio University\n4-1-1 Hiyoshi223-8521YokohamaKanagawaJapan",
"Dipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy",
"NANO_MATES Research Center\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Dipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy",
"Dipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy",
"CNISM di Salerno\nUniversità degli Studi di Salerno\n84084Fisciano (Salerno)Italy",
"Dipartimento di Fisica \"E. R. Caianiello\"\nUniversità degli Studi di Salerno\n84084FiscianoSalernoItaly",
"Consiglio Nazionale delle Ricerche CNR-SPIN\nUOS Salerno\n84084Fisciano (Salerno)Italy"
]
| []
| Electron-doped and hole-doped superconducting cuprates exhibit a symmetric phase diagram as a function of doping. This symmetry is however only approximate. Indeed, electron-doped cuprates become superconductors only after a specific annealing process: This annealing affects the oxygen content by only a tiny amount, but has a dramatic impact on the electronic properties of the sample. Here we report the occurrence of superconductivity in oxygen-deficient Nd2-xCexCuO4 thin films grown in an oxygen-free environment, after annealing in pure argon flow. As verified by x-ray diffraction, annealing induces an increase of the interlayer distance between CuO2 planes in the crystal structure. Since this distance is correlated to the concentration of oxygens in apical positions, and since oxygen content cannot substantially increase during annealing, our experiments indicate that the superconducting phase transition has to be ascribed to a migration of oxygen ions to apical positions during annealing. Moreover, as we confirm via first-principles density functional theory calculations, the changes in the structural and transport properties of the films can be theoretically described by a specific redistribution of the existing oxygen ions at apical positions with respect to CuO2 planes, which remodulates the electronic band structure and suppresses the antiferromagnetic order, allowing the emergence of hole superconductivity. arXiv:2012.13399v2 [cond-mat.supr-con] | 10.1103/physrevb.105.014512 | [
"https://arxiv.org/pdf/2012.13399v2.pdf"
]
| 246,706,419 | 2012.13399 | e97e7ee714674dde3b4b38aa68c7d08d2df06d13 |
Superconductivity induced by structural reorganization in the electron-doped cuprate Nd 2−x Ce x CuO 4
Anita Guarino
Dipartimento di Fisica "E. R. Caianiello"
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
Carmine Autieri
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
International Research Centre Magtop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/4602668WarsawPoland
Pasquale Marra
Graduate School of Mathematical Sciences
The University of Tokyo
3-8-1 Komaba153-8914Meguro, TokyoJapan
Department of Physics, and Research and Education Center for Natural Sciences
Keio University
4-1-1 Hiyoshi223-8521YokohamaKanagawaJapan
Antonio Leo
Dipartimento di Fisica "E. R. Caianiello"
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
NANO_MATES Research Center
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Gaia Grimaldi
Dipartimento di Fisica "E. R. Caianiello"
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
Adolfo Avella
Dipartimento di Fisica "E. R. Caianiello"
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
CNISM di Salerno
Università degli Studi di Salerno
84084Fisciano (Salerno)Italy
Angela Nigro
Dipartimento di Fisica "E. R. Caianiello"
Università degli Studi di Salerno
84084FiscianoSalernoItaly
Consiglio Nazionale delle Ricerche CNR-SPIN
UOS Salerno
84084Fisciano (Salerno)Italy
Superconductivity induced by structural reorganization in the electron-doped cuprate Nd 2−x Ce x CuO 4
Electron-doped and hole-doped superconducting cuprates exhibit a symmetric phase diagram as a function of doping. This symmetry is however only approximate. Indeed, electron-doped cuprates become superconductors only after a specific annealing process: This annealing affects the oxygen content by only a tiny amount, but has a dramatic impact on the electronic properties of the sample. Here we report the occurrence of superconductivity in oxygen-deficient Nd2-xCexCuO4 thin films grown in an oxygen-free environment, after annealing in pure argon flow. As verified by x-ray diffraction, annealing induces an increase of the interlayer distance between CuO2 planes in the crystal structure. Since this distance is correlated to the concentration of oxygens in apical positions, and since oxygen content cannot substantially increase during annealing, our experiments indicate that the superconducting phase transition has to be ascribed to a migration of oxygen ions to apical positions during annealing. Moreover, as we confirm via first-principles density functional theory calculations, the changes in the structural and transport properties of the films can be theoretically described by a specific redistribution of the existing oxygen ions at apical positions with respect to CuO2 planes, which remodulates the electronic band structure and suppresses the antiferromagnetic order, allowing the emergence of hole superconductivity. arXiv:2012.13399v2 [cond-mat.supr-con]
I. INTRODUCTION
Since the discovery of superconductivity in LaBaCuO by Bednorz and Müller in 1986 [1], the family of high-temperature cuprate superconductors has grown to include more than hundreds of compounds [2] with critical temperatures as high as 133 K at atmospheric pressure [3]. These compounds share a similar crystal structure made up of stacked layers of copper-oxygen planes and fit into a universal phase diagram, where superconductivity emerges on doping an antiferromagnetic Mott insulator [4,5]. Indeed, by doping the stoichiometric parent compound via ionic substitution, the antiferromagnetic phase is suppressed, and superconductivity appears. Ionic substitution may result in the creation of additional holes or electrons in the CuO2 planes. Hole-doped [1-3] (e.g., La2-xSrxCuO4) and electron-doped [6-10] (e.g., Nd2-xCexCuO4) compounds share a similar temperature-doping phase diagram, which indicates a common origin of the superconducting pairing. However, the symmetry between hole- and electron-doped cuprates is only approximate. For example, superconductivity in electron-doped cuprates is much harder to achieve since the antiferromagnetic phase persists at higher doping levels [8,9,11-13].
Perhaps the most puzzling anomaly of electron-doped cuprates is the fact that doping alone does not produce superconductivity [8,9,14]. As-grown samples are antiferromagnetic Mott insulators and become superconducting only after high-temperature oxygen-reducing annealing [6,7]. Annealing reduces the oxygen content by a small fraction [15-20] (between 0.1% and 2%), which decreases the interlayer distance [21-26] and contributes additional electrons to the CuO2 layers [11,12,27-31]. This results in a dramatic change of the electronic properties [32-38], including the emergence of the superconducting transition and a reduction of the Néel temperature [11,39,40], which cannot be achieved only by doping (e.g., adding extra cerium in Nd2-xCexCuO4 [24]). Furthermore, single crystals of the undoped parent compound Nd2CuO4 are never superconducting. Conversely, Nd2CuO4 thin films exhibit superconductivity after annealing, even without doping [14,24]. In all cases, the annealing process must be carried out under rather specific conditions that drive the samples almost to the limit of decomposition [41,42]. For these reasons, it is clear that the annealing process must have additional effects. These may be the consequence of a reorganization of the crystal structure and/or a change of the distribution of dislocations and defects in the sample, such as the removal of the interstitial apical oxygens (defects) [17,43-46], the removal of intrinsic in-plane oxygens [47-49], or the migration of copper ions to repair and reduce copper vacancies [50,51]. A measurable effect of annealing is the change of the c-axis lattice parameter, which is twice the interlayer distance between CuO2 planes: The lattice parameter decreases to an optimal value cSC at which superconductivity appears [14,52]. Generally, oxygen reduction produces a decrease of the c-axis parameter associated with the removal of apical oxygen [17,18]: Hence, the value of c is considered a qualitative measure of the oxygen content [23,25,26].
In this work, we report superconductivity in oxygen-deficient Nd2-xCexCuO4 (NCCO) thin films obtained by annealing in oxygen-free atmosphere, and we provide a theoretical framework to describe the electronic properties and the structural changes before and after annealing. Our samples are grown by DC sputter deposition in oxygen-free atmosphere and exhibit a c-axis parameter shorter than the optimal value cSC, which indicates oxygen deficiency and the presence of a negligible amount of apical oxygens. Remarkably, these samples become superconducting after annealing in pure argon atmosphere, with a simultaneous increase of the c-axis parameter. This strongly indicates that the superconducting phase transition cannot be ascribed to a change of the oxygen content, but to a microscopic reorganization of the crystal structure induced by annealing. Moreover, to obtain a complete phase diagram as a function of the c-axis parameter, we have grown thin films also in oxygen/argon atmosphere. These samples exhibit a c-axis parameter longer than the optimal value cSC and, as expected, become superconducting after annealing, with a decrease of the c-axis, in agreement with previous studies [6-9]. In all samples, the superconductivity appears only when the c-axis parameter reaches the optimal value cSC = 12.08 Å. As we show using first-principles density functional theory (DFT), the evolution of the c-axis parameter and the presence of holes can be explained in terms of a microscopic structural modification, i.e., with existing oxygen ions partially migrating to apical positions with respect to the CuO2 planes. This induces a remodulation of the energy bands and the suppression of antiferromagnetic order, allowing the emergence of hole superconductivity, i.e., the pairing of hole carriers within the same electronic band [34,37,38].
II. FABRICATION AND CHARACTERIZATION
The undoped parent compound Nd2CuO4 crystallizes in a tetragonal T crystal structure, containing CuO2 layers stacked along the c-axis and sandwiched between the charge reservoir layers [see Fig. 1(a)]. Moreover, thin films of NCCO and other electron-doped cuprates typically exhibit disorder, with oxygen vacancies (in CuO2 layers or charge reservoir layers) and excess oxygen at apical sites (above and below CuO2 layers) [8,9,14] [see Fig. 1(b)]. In particular, the presence of in-plane oxygen vacancies is correlated with an increase of electrons in the conduction band [11,12,27-29,31], whereas the concentration of oxygen ions on apical sites is correlated with the elongation of the c-axis parameter [23,25,26].
Our experiment used optimized DC sputtering to grow well-oriented NCCO films without spurious phases and with a fixed cerium content x = 0.17 ± 0.01. We obtained films with thickness 100-200 nm grown respectively in pure argon (type A samples) and mixed argon/oxygen atmosphere with O2/Ar > 2% (type B samples), at 1.7 mbar total pressure and heater temperature 850 °C (see also Refs. 53 and 54). After initial in situ annealing, we performed a high-temperature ex situ annealing at 900-950 °C for 0.5-2 hours, depending on the film thickness. We deliberately fabricated samples with different growing and annealing conditions to study the interplay between structural reorganization and superconductivity, regardless of other factors (see Appendix).

FIG. 1. The NCCO parent compound (undoped Nd2CuO4) crystallizes in the tetragonal T structure, with CuO2 layers sandwiched between charge reservoir Nd2-xCexO2 layers, copper ions surrounded by square-planar arrangements of oxygen ions in the ab plane, and oxygen ions located within the CuO2 and charge reservoir layers. In doped compounds, oxygen ions may partially occupy apical sites above or below the Cu atoms, e.g., one apical oxygen for every two copper atoms, or every copper atom (T* structure). The concentration of apical oxygens is correlated with the elongation of the c-axis parameter.
We measure the lattice parameters by x-ray diffraction (XRD) before and after high-temperature annealing. We found c = 12.04-12.07 Å and a = b ≈ 3.95-3.96 Å for as-grown type A samples, while c = 12.09-12.15 Å and a = b ≈ 3.94-3.97 Å for as-grown type B samples. After annealing, type A samples grown in oxygen-free atmosphere exhibit a slight elongation of the c-axis, whereas the in-plane lattice parameter remains unchanged. Conversely, type B samples grown in oxygen atmosphere exhibit a systematic decrease of the c-axis after annealing, in agreement with Refs. 21-26, and a small change of the in-plane lattice parameter in some samples. The (00l) reflections in XRD patterns [55] give cSC = 12.080-12.088 Å for all superconducting films. As established by extensive studies on electron-doped films fabricated by molecular-beam epitaxy [56-58] and pulsed laser deposition [59-62], the c-axis parameter can be used as a measure of the oxygen content. In these studies, the c parameter is always larger than the optimal superconducting value cSC, as we also observe in type B overoxygenated samples, and decreases with the concurrent elimination of excess oxygen atoms during annealing. Hence, a value c < cSC in type A samples indicates oxygen deficiency.
III. TRANSPORT MEASUREMENTS
We measure the temperature dependence of the in-plane resistivity ρ(T) with a four-probe method in the temperature range 1.6-300 K, before and after high-temperature annealing. As-grown type A samples (fabricated in oxygen-free atmosphere) exhibit a crossover between metallic and insulating regimes identified by the resistivity minimum at temperature Tmin, and a residual resistivity ratio RRR = ρ(300 K)/ρ(4.2 K) > 1 [see Fig. 2(a)]. Furthermore, the resistivity exhibits a quadratic temperature dependence in the metallic region above Tmin. In electron-doped compounds, a quadratic resistivity dependence is usually found even above room temperature [8,10,63,64]. As-grown type B samples (fabricated in mixed argon/oxygen atmosphere) exhibit instead a weak semiconductor-like temperature dependence of the resistivity with RRR < 1 and
ρ(T ) ∝ R(T ) ∝ T −α [see Fig. 2(b)].
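The power-law behavior quoted above corresponds to a straight line on a log-log scale, so α can be extracted by a linear fit of ln R versus ln T. The snippet below illustrates such a fit on synthetic data; the numbers are placeholders, not measured values.

```python
import numpy as np

# Illustrative power-law fit R(T) ~ A * T**(-alpha) on a log-log scale, as used
# for the as-grown type B curves in Fig. 2(b). The data are synthetic and stand
# in for a measured resistance-versus-temperature curve.
T = np.linspace(5.0, 300.0, 60)                                  # K
R = 3.0 * T ** -0.12 * (1 + 0.01 * np.random.randn(T.size))      # synthetic R(T), arbitrary units

slope, intercept = np.polyfit(np.log(T), np.log(R), 1)
alpha, A = -slope, np.exp(intercept)
print(f"fitted alpha = {alpha:.3f}, prefactor A = {A:.3f}")
```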
Annealing induces a modification of the oxygen content and a structural reorganization and redistribution of crystal defects and dislocations. To disentangle these two effects, we performed high-temperature ex situ annealing in oxygen-free, pure argon flow. Despite the different environmental growth conditions and the different structural and electrical properties of type A and B samples, similar thermal treatments are needed to induce superconductivity. All samples become superconducting after high-temperature annealing, regardless of the specific annealing conditions, and with similar critical temperatures Tc ≈ 24 K. In contrast, no superconducting transition and no structural change are detected after annealing at temperatures below 900 °C and with the same environmental conditions, as reported elsewhere [65].
Figures 2(d) and 2(e) show the phase diagram of our NCCO samples as a function of the c-axis parameter, before and after high-temperature annealing, which is the main experimental result of this work. In particular, Fig. 2(d) shows the residual resistivity ratio RRR as a function of the c-axis parameter. In the region c < cSC (sample type A), samples behave as weakly disordered metals with RRR > 1 and exhibit a metal-insulator crossover with minimum resistivity at Tmin. We observe Tmin up to 250 K and RRR ≈ 1-2, with RRR increasing with decreasing Tmin. In the region c > cSC (sample type B), samples behave as disordered systems with RRR < 1, with a weakly semiconductor-like temperature dependence of the resistivity. Most importantly, Fig. 2(e) shows the superconducting critical temperature Tc as a function of the c-axis parameter. All samples achieve superconductivity after high-temperature annealing, accompanied by a structural change: The c-axis increases in type A samples and decreases in type B samples. The superconducting regime is restricted to the value cSC = 12.08 Å. Hence, high-temperature annealing induces not only superconductivity, but also a concurrent and systematic increase (in type A samples) or decrease (in type B samples) of the c-axis parameter toward the optimal value cSC. This strongly suggests that the superconducting phase transition is induced by a structural reorganization and redistribution of oxygen atoms within the CuO2 layers, charge reservoir layers, and apical positions. Moreover, the correlation between the concentration of apical oxygens and the c-axis parameter clearly points to the crucial role and impact of apical oxygens on the electronic properties after annealing.
IV. DFT CALCULATIONS
To understand the role of the structural reorganization of oxygen atoms, and the effects of the presence/absence of apical oxygens on the properties of the three types of samples, we modeled the system by DFT using the VASP [66] package with a plane-wave basis set and the projector augmented wave method [67]. In particular, as-grown type A samples are modeled by the T structure as in Fig. 1, i.e., a crystal structure with no apical oxygens, according to their fabrication in oxygen-deficient atmosphere. As-grown type B samples are instead modeled by the T* structure, i.e., a crystal structure with one apical oxygen for every copper atom, according to their fabrication in oxygen-rich atmosphere. Superconducting samples are modeled by a mixed TSC = 2T* + T structure, with two T* cells and one T cell alternating along the c-axis, i.e., a crystal structure with two apical oxygens for every three copper atoms. This is justified by the experimental evidence that the c-axis parameter and, consequently, the number of apical oxygens assume intermediate values between those measured for type A and B samples. We performed DFT calculations by first relaxing the crystal structure to obtain the lattice parameters and compare them with the experimental ones and, in particular, with their characteristic hierarchical order. We also computed the antiferromagnetic moments m to monitor the intensity of antiferromagnetic correlations, which suppress superconductivity. (We study the T and T* structures of undoped Nd2CuO4 in the Appendix.) The T* structure shows a larger bandgap than the T structure. The more correlated behavior of the T* structure is confirmed by the magnetic moments, m = 0.38 µB and m = 0.43 µB for the T and T* structures, respectively.

Figures 3(a) and 3(b) show the results for the T* structure for x = 0.125 and x = 0.25, respectively. In both cases, and reasonably for all intermediate doping (including x = 0.17), we have (i) an indirect bandgap ∆EXM between the maximum of the lower Hubbard band at X (hole pocket) and the minimum of the upper Hubbard band at M (electron pocket), (ii) strong antiferromagnetic correlations (m = 0.34 µB and 0.24 µB, respectively, for x = 0.125 and 0.25), and (iii) hole pockets away from the Fermi level. This strongly correlated scenario accounts for the insulating behavior of as-grown type B samples, in agreement with DMFT studies [68]. Figure 3(c) shows the results for the T structure for x = 1/6 ≈ 0.17. The gap ∆EXM closes, although antiferromagnetic correlations are still quite large (m = 0.27 µB). However, hole pockets are still far from the Fermi level. This scenario accounts for the (poor) metallic behavior of as-grown type A samples. Finally, Fig. 3(d) shows the results for the TSC structure for x = 1/6. The bandgap ∆EXM completely disappears, as do the antiferromagnetic correlations of the T region (m = 0.04 µB), while hole pockets are available right at the Fermi level (at the symmetry point X), coexisting with electron pockets (at the symmetry point M). However, antiferromagnetic correlations in the T* region are still relevant, being m = 0.36 µB. The presence of holes at the Fermi level and the suppression of antiferromagnetic correlations indicate the emergence of hole superconductivity [34,37,38] in all samples after annealing. Moreover, confirming the validity and accuracy of the chosen modelization and the consistency of the obtained results, the relaxed values of the a- and c-axis parameters for x = 1/6 are close to the experimental ones and, more importantly, in the same hierarchical order: For the T structure (as-grown type A samples) a = b = 3.91 Å and c = 12.01 Å, for the T* structure (as-grown type B samples) a = b = 3.83 Å and c = 12.26 Å, and for the TSC structure (superconducting samples) a = 3.85 Å and c = 12.18 Å. The variation of the c-axis parameter can be understood in terms of level repulsion between bands with dominant T and T* characters [see Fig. 3(d)], which leads to a remodulation of the band structure that weakens the antiferromagnetic correlations of the T region and allows the emergence of holes right at the Fermi level (see also the Appendix).
V. CONCLUSIONS
Concluding, the structural and transport properties of NCCO samples shed new light on the microscopic mechanism underlying the annealing process, which is responsible for the onset of superconductivity. Indeed, our experiments indicate that the removal of the excess oxygen is not sufficient to trigger superconductivity: Our oxygen-deficient samples, grown in oxygen-free atmosphere, become superconducting only after high-temperature annealing, which always occurs together with a change of the c-axis parameter. This strongly indicates that the superconducting phase transition is induced by a microscopic structural reorganization, even in almost-optimally doped samples. Our theoretical analysis supports this conclusion, indicating a clear correlation between oxygen content, c-axis parameter, and superconductivity. In particular, no apical oxygens, or too many, stabilize strong antiferromagnetic correlations and keep holes away from the Fermi level. In contrast, an intermediate number of apical oxygens induces the suppression of antiferromagnetic correlations and makes holes available right at the Fermi level. Hence, the presence of a sizable number of apical oxygens is necessary to allow the emergence of hole superconductivity. This points to the relevance of interlayer hoppings mediated by apical oxygens, analogously to the scenario evidenced in hole-doped cuprates [69,70]. Our experimental investigation and theoretical analysis provide strong evidence that the superconducting phase transition in electron-doped NCCO superconductors cannot be explained only in terms of changes of the oxygen content, but necessarily requires a structural reorganization of the oxygen atoms in apical positions, which profoundly affects the electronic properties of the compound.

Appendix A: Sample preparation

Nd2-xCexCuO4 (NCCO) films with a fixed cerium content in the range 0.16-0.18 have been grown on (100) SrTiO3 substrates by the DC sputtering technique. A single target of the stoichiometric Nd1.85Ce0.15CuO4 compound has been used as a sputtering source in an on-axis configuration with the substrate [53]. Type A and type B samples have been fabricated with a thickness varying in the range 100-200 nm at a total pressure of 1.7 mbar and heater temperature 850 °C, respectively in pure argon and in mixed oxygen/argon atmosphere with ratio O2/Ar in the range 2-14%. A first in situ annealing is performed at the same temperature. Type A films were in situ annealed at different dwell times (20, 30, and 45 minutes) in the deposition chamber in vacuum at 10⁻⁵ mbar or 0.7 mbar. Type B films were in situ annealed at different dwell times (45 and 120 minutes) in the deposition chamber either in vacuum (0.7 mbar) or in argon atmosphere (1.7 mbar). A subsequent annealing was performed ex situ in a Carbolite EHA 12/450B single-zone horizontal tube furnace with quartz/alumina tube and sealing flanges in flowing argon with 99.995% purity, at a temperature set to 900-950 °C, with a rate of 300 °C per hour for both heating and cooling ramps, and with a dwell time of 0.5-2 hours, depending on the film thickness. In all cases, the samples become superconducting after ex situ annealing. Despite different growing conditions and film thickness, the annealing temperature needed to induce superconductivity is the same for all films, while the annealing time depends on the film thickness. The heating ramp used during annealing was optimized only for one sample, which therefore exhibits the nominal critical temperature Tc ≈ 24 K.
Morphology, phase composition, and sample purity were inspected by scanning electron microscopy combined with wavelength-dispersive spectroscopy [53,71], using an Oxford Scanning Electron Microscope Leo EVO 50 equipped with a wavelength-dispersive spectrometer. Structural properties are obtained by high-resolution X-ray diffraction technique in a Philips X'Pert-MRD diffractometer equipped with a four circle cradle. The electrical transport properties were investigated in a Cryogenic Ltd. cryogen-free cryostat equipped with an integrated cryogen-free variable-temperature insert operating in the range 1.6-300 K. In this system, the sample is cooled by a continuous helium gas flow and the temperature stability is within 0.01 K. Sample temperature is measured via a LakeShore Temperature Controller model 350 connected to a LakeShore Cernox sensor. The electrical resistance measurements as a function of the temperature have been performed by a four-probe method, using a Keithley model 2430 as current source and a Keithley model 2182 as voltage meter. On selected films, in order to evaluate the resistivity, we realized microbridges with length L = 1 mm, width W = 100 µm using a standard UV photolithography and wet etching in a 1% solution of H 3 PO 4 in pure water.
Appendix B: X-ray diffraction measurements
The structural properties of DC-sputtered NCCO films have been investigated by XRD technique. Figure 4 reports the typical θ-2θ pattern of as-grown type A, as-grown type B, and superconducting samples. Besides the substrate reflections, XRD patterns of as-grown, nonsuperconducting samples show only the four (00l) diffraction peaks with l = 2, 4, 6, 8, which are characteristic of the T tetragonal crystal structure, indicating a preferential growth with the c-axis perpendicular to the film surface and the absence of spurious phases.
The analysis of the (00l) reflections allows to obtain directly the c-axis lattice parameter from the Bragg law, 2d sin θ = λ, with λ = 1.540 56 Å, θ the half of the angular peak position 2θ, and with d = c/l in this case. Insets of Fig. 4 report the quantity 2 sin θ/λ as a function of the Miller index l together with the linear best fit of the data. The fitting parameter gives a different c-axis parameter for each sample. In particular, the c-axis parameter measured in the superconducting sample is c SC = 12.079 ± 0.005 Å. The value c = 12.069 ± 0.005 Å obtained for the type A film is shorter than c SC , while the c-axis parameter c = 12.09 ± 0.01 Å of the ty pe B sample is longer. Hence, the most oxygenated type B samples behave as typical samples reported in previous studies, where the measured as-grown c-axis parameter is longer than the value measured in superconducting samples [21][22][23][24][25][26]. A value c < c SC is the peculiarity of our type A films. Figure 5 shows the angular positions 2θ of the (004), (103), and (114) peaks respectively [(a) to (c)], for one type A and one type B as-grown samples compared with the value measured after annealing. We found for the as-grown type A sample c = 12.04 ± 0.01 Å and a, b = 3.96 ± 0.01 Å, while for the as-grown type B samples c = 12.11 ± 0.01 Å, a, b = 3.94 ± 0.02 Å. Figure 5(d) shows the ω-scans around the (004) reflection for type A and B samples, as-grown and superconducting (ω is the x-ray incident angle). We observe a small difference between type A and B samples. The full width at half maximum (FWHM) is in the range 0.5-0.6 • and 0.6-0.7 • respectively for as-grown type A and B samples, indicating a more uniform c-axis orientation for type A samples (i.e., better epitaxial growth). After annealing, we observe a slight reduction of the FWHM in all samples, which indicates a slight improvement in the mosaicity.
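The linear fit of 2 sin θ/λ versus the Miller index l described above can be reproduced with a few lines of code. The peak positions used below are hypothetical stand-ins for measured (00l) reflections (chosen near c ≈ 12.08 Å); only the procedure, not the data, is taken from the text.

```python
import numpy as np

# c-axis parameter from the (00l) reflections: from the Bragg law 2 d sin(theta) = lambda
# with d = c/l, the quantity 2 sin(theta)/lambda is linear in l with slope 1/c.
lam = 1.54056                                              # Cu K-alpha wavelength, angstrom
l = np.array([2, 4, 6, 8])
two_theta_deg = np.array([14.66, 29.55, 44.99, 61.34])     # hypothetical (00l) peak positions

theta = np.radians(two_theta_deg / 2)
y = 2 * np.sin(theta) / lam
slope, _ = np.polyfit(l, y, 1)
print(f"c-axis parameter = {1 / slope:.3f} angstrom")
```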
The decomposition products in NCCO thin films include NdCuO 2 , Nd 2 O 3 , and Nd-Ce-O phases [72], which may in principle be present in samples annealed at high temperatures. In some of our samples, we observe a small fraction of the Nd 1.85 Ce 0.15 O 3 phase in X-ray diffraction measurements.
Appendix C: Computational details
We have performed first-principles DFT calculations using the VASP [66] package, based on a plane-wave basis set and the projector augmented wave (PAW) method [67]. A plane-wave energy cutoff of 450 eV has been used for the atomic relaxation and 530 eV for the other calculations. A k-point grid of 8 × 8 × 2 has been used for the atomic relaxation and 10 × 10 × 4 for the other calculations. For the treatment of exchange correlation, the Perdew-Burke-Ernzerhof generalized gradient approximation for solids [73] has been considered, since it is accurate for the structural relaxation of bulk A2BO4 oxides [74] and of other compounds with the transition metal connected to 5 oxygen atoms [75,76].
The analysis of the structural phases of compounds with 4f electrons is a nontrivial problem in DFT due to the difficulty of capturing the position of the energy levels of the f electrons [77]. Few works have studied electron-doped cuprate superconductors using ab initio techniques. Considering the 4f electrons in the core level, Bansil and coworkers were able to obtain the correct insulating ground state for the undoped cases [78-80]. We use the PAW with 3 frozen f electrons for Nd and without frozen f electrons for Ce. Using the PAW without frozen electrons for Ce, the T phase is always the ground state. Using the PAW with three frozen f electrons for Ce, we obtain the stabilization of the T* phase, but this does not allow the experimentally observed Ce4+ configuration.
We included the effects of the Hubbard U on the Cu sites. We scanned the values of UCu from 1 to 4 eV for the undoped compound, used JH = 0.15U for the Cu 3d states, and assumed the value UCu = 3.2 eV, because for this value the undoped T phase is a narrow-gap semiconductor. The Coulomb repulsion was also applied to the rare earths Nd and Ce (4 eV) and to O (6 eV), but it is much less relevant since these electrons are far from the Fermi level.
To account for the G-type antiferromagnetism in Nd2CuO4 we use a √2 × √2 × 2 supercell with 4 formula units. To investigate the structural properties as a function of doping, an additional calculation was done in the overdoped regime at x = 0.25. Using Vegard's law, we estimated the lattice constants for x = 0.17, which corresponds to almost-optimal doping (as in our samples). Once the structural properties were understood, we studied the electronic and magnetic properties of the compound at a doping value close to the experimental one. In order to do so, we used a √2 × √2 × 3 supercell with 6 formula units. One Ce atom in 6 formula units gives the concentration x = 1/6 ≈ 0.17. This supercell can host 3 cuprate layers. However, in order to reproduce the T* phase we need an even number of layers. As a consequence, the √2 × √2 × 3 supercell cannot host the T* phase, but it can host the T phase and a mixed phase with two T* cells and one T cell alternating along the c-axis. In this article, we call this phase the TSC = 2T* + T phase.
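The Vegard's-law estimate mentioned above amounts to a linear interpolation between the two relaxed compositions. A minimal sketch is given below; the x = 0 value is the relaxed c-axis of the undoped T phase quoted in Appendix D, while the x = 0.25 value is a placeholder, since it is not listed explicitly in the text.

```python
# Vegard's-law interpolation of a lattice constant between the two doping levels
# that were explicitly relaxed (x = 0 and x = 0.25).
def vegard(x, v_x0, v_x1, x0=0.0, x1=0.25):
    """Linearly interpolate a lattice constant between the doping levels x0 and x1."""
    return v_x0 + (v_x1 - v_x0) * (x - x0) / (x1 - x0)

c_x0 = 12.12      # relaxed c of the undoped T phase (Appendix D), in angstrom
c_x025 = 11.96    # hypothetical relaxed value at x = 0.25 (placeholder)
print(f"estimated c at x = 0.17: {vegard(0.17, c_x0, c_x025):.2f} angstrom")
```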
The most stable configuration of the Ce atoms is obtained when the Ce atoms are far from each other. This means that during growth the Ce atoms have a tendency to avoid each other, which points to a homogeneous distribution of these Ce atoms during the growth. In the most stable configuration of the T * phase, the Ce atoms are not in the apical oxygen layer. In the most stable configuration of the mixed T SC phase, the Ce atoms are closer to the CuO 2 layers of the T cell.
Appendix D: DFT study of undoped Nd2CuO4
In this Section, we present the results of the undoped Nd 2 CuO 4 . The T and T * phases of the Nd 2 CuO 4 have the same stoichiometry but a different atomic position of the oxygen atoms and consequently of the atomic layers. The T phase consists of 4 atomic layers CuO 2 /O/Nd 2 /O while the T * contains 3 atomic layers CuO 2 /NdO/NdO. The different atomic composition of the atomic layers has a large influence on the lattice constant c and consequently on the in-plane lattice constant too. Considering just the effect of the packaging, we would expect that the T phase with 4 atomic layers should have a larger c lattice constant, but we also need to consider the effect of the charge. In an oversimplified ionic picture, the CuO 2 layers have a total charge −2, the O layers have a charge −2, the Nd 2 layers have a charge +6 while the NdO layers have a charge +1. Therefore, the 4 layers of the T phase have charge −2/ − 2/ + 6/ − 2 while the 3 layers of the T * phase have −2/ + 1/ + 1. Due to the greater charge, the 4 layers of the T phase attract each other much more than the 3 layers of the T * phase resulting in a shorter c-axis of the T phase. Therefore, there is an interplay and competition between the charge and the volume effect; as a result, the T phase has a shorter c-axis than the T * phase. As a consequence of the shorter c, the T phase presents a larger value of the in-plane lattice constant a. This simplified picture was verified in our DFT results. We performed structural relaxation for the undoped case for the T and T * phases. We obtained a = 3.91 Å and c = 12.12 Å for the T phase: The total volume is 92.7 Å 3 per formula unit. We obtained a = 3.83 Å and c = 12.34 Å for the T * phase: The total volume is 90.3 Å 3 per formula unit.
The Cu states in T* are more ionic due to the larger number of nearest-neighbor oxygen atoms; indeed, the Cu d orbitals are more localized and therefore the T* phase is more insulating. The T phase, instead, is a semiconductor. Once we fixed the equilibrium atomic positions, we investigated the electronic properties by scanning the value of UCu. We searched for the critical value U^cr_Cu such that the T phase is insulating; we obtain U^cr_Cu = 3.2 eV for the T phase and assume this value for all the following calculations. The band structure of the semiconducting T phase is shown in Fig. 6(a). The completely unoccupied upper Hubbard band lies between 0 and +2.2 eV above the Fermi level and is due to the x²−y² orbital in the minority spin channel. The lower Hubbard band, due to the x²−y² orbital in the majority spin channel, is completely occupied and entangled with other occupied Cu d bands. The band structure shows an indirect band gap with the maximum of the valence band at the X point and the minimum of the conduction band at the M point. The gap in the DFT approach is opened by the interplay between UCu and the antiferromagnetic order: Indeed, the T phase without magnetism shows a metallic phase with robust holes at the X point, as shown in Fig. 6(b). In the nonmagnetic phase, we also have a nonsymmorphic symmetry that produces a doubly degenerate band along the MX direction and a semi-Dirac point at X. Performing the antiferromagnetic calculation for the T* phase, we obtain the band structure shown in Fig. 6(c). The band structure of the T* phase shows a larger band gap and flatter Cu d bands but otherwise has the same properties as the T phase.
At U Cu = 3.2 eV, the energy difference between the antiferromagnetic and the nonmagnetic phase is 22 meV per formula unit for the T phase and 70 meV per formula unit for the T * phase. Therefore, the T * phase has a larger gap and its antiferromagnetic ground state is more robust. Increasing the value of U , the antiferromagnetic phase will become more stable but the scenario described here does not change qualitatively.
FIG. 2. (a) Resistivity as a function of temperature for type A samples, plotted on a log-log scale and normalized to the resistivity minimum. (b) Resistance as a function of temperature for type B samples, plotted on a log-log scale. Continuous lines R(T) ∝ T^-α are the best fit to the data. (c) Normalized resistivity as a function of temperature for type A and B samples after annealing, showing the superconducting transition at Tc ≈ 24 K. (d) Residual resistivity ratio RRR for as-grown samples (both types) and (e) superconducting critical temperature Tc of samples after annealing, as a function of the c-axis parameter. The dashed gray line is the average value of cSC and the continuous smooth curve is a guide for the eye. The annealing process was optimized for the sample reaching Tc ≈ 24 K.
Figures 3(a) and 3(b) show the results for the T* structure for x = 0.125 and x = 0.25, respectively. In both cases, and reasonably for all intermediate dopings (including x = 0.17), we find (i) an indirect band gap ∆E_XM between the maximum of the lower Hubbard band at X (hole pocket) and the minimum of the upper Hubbard band at M (electron pocket), (ii) strong antiferromagnetic correlations (m = 0.34 µB and 0.24 µB for x = 0.125 and 0.25, respectively), and (iii) hole pockets away from the Fermi level. This strongly correlated scenario accounts for the insulating behavior of as-grown type B samples, in agreement with DMFT studies [68]. Figure 3(c) shows the results for the T structure for x = 1/6 ≈ 0.17. The gap ∆E_XM closes, although the antiferromagnetic correlations are still sizable (m = 0.27 µB); however, the hole pockets remain far from the Fermi level. This scenario accounts for the (poor) metallic behavior of as-grown type A samples. Finally, Fig. 3(d) shows the results for the T_SC structure for x = 1/6. The band gap ∆E_XM disappears completely, as do the antiferromagnetic correlations in the T region (m = 0.04 µB), while hole pockets are available right at the Fermi level (at the symmetry point X), coexisting with electron pockets (at the symmetry point M). The antiferromagnetic correlations in the T* region, however, remain relevant, with m = 0.36 µB. The presence of holes at the Fermi level and the suppression of antiferromagnetic correlations indicate the emergence of hole superconductivity.
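The quantity ∆E_XM used above is simply the separation between the highest occupied state at X and the lowest unoccupied state at M; a negative value signals that the indirect gap has closed. A minimal helper, with placeholder eigenvalues standing in for the DFT output:

```python
# Minimal sketch of the indirect-gap extraction discussed above. The eigenvalue
# arrays are illustrative placeholders (eV, Fermi level at zero), not paper data.
import numpy as np

def indirect_gap_XM(valence_at_X, conduction_at_M):
    """Delta_E_XM = min(conduction at M) - max(valence at X); negative means the gap is closed."""
    return np.min(conduction_at_M) - np.max(valence_at_X)

valence_X = np.array([-1.4, -0.9, -0.3])
conduction_M = np.array([0.2, 1.1])
print(f"Delta_E_XM = {indirect_gap_XM(valence_X, conduction_M):.2f} eV")
```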
FIG. 3. DFT electronic band structure of NCCO for: (a) T* structure with x = 0.125, (b) T* structure with x = 0.25, (c) T structure with x = 1/6 ≈ 0.17, and (d) T_SC = 2T* + T structure with x = 1/6, where we highlighted the dominant T and T* bands with larger contributions. The Fermi level is set to zero. The flat bands at 0.5-1.5 eV above the Fermi level are the cerium 4f bands. All other lower-energy bands are copper 3d bands.
ACKNOWLEDGMENTS

A. G., A. L., and G. G. performed the experiments. A. G., A. N., and P. M. performed the experimental data analysis. C. A. and A. A. performed the DFT calculations. P. M., A. N., C. A., and A. A. wrote the manuscript. A. A. supervised the theoretical part. A. N. supervised the experimental part and the overall project. All coauthors contributed to the scientific discussion and to the final version of the manuscript. We thank M. Wysokinski for useful discussions. C. A. is supported by the Foundation for Polish Science through the IRA Programme co-financed by the EU within SG OP. C. A. acknowledges the CINECA award under the IsC81 "DISTANCE" Grant for the availability of high-performance computing resources and support, and the support of the Interdisciplinary Centre for Mathematical and Computational Modeling (ICM), University of Warsaw, under Grants No. G73-23 and No. G75-10. P. M. is supported by the Japan Science and Technology Agency (JST) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), JST CREST Grant No. JP-MJCR19T, by the MEXT-Supported Program for the Strategic Research Foundation at Private Universities Topological Science, Grant No. S1511006, and by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Early-Career Scientists, Grant No. 20K14375. A. A. acknowledges support by MIUR under Project PRIN 2017RKWTMY.

Appendix A: Sample preparation

Nd2−xCexCuO4 (NCCO) films with a fixed cerium content in the range 0.16-0.18 have been grown on (100) SrTiO3 substrates by the DC sputtering technique. A single target of the stoichiometric Nd1.85Ce0.15
FIG. 4. X-ray diffraction patterns in units of counts per second for an as-grown type A sample, an as-grown type B sample, and a superconducting sample. The (001) and (002) peaks are due to the presence of the SrTiO3 substrate. Insets show the linear fit of 2 sin θ/λ as a function of the Miller index l. The line slopes give the c-axis lattice parameter values.
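For (0 0 l) reflections, Bragg's law 2 d sin θ = λ with d = c/l gives 2 sin θ/λ = l/c, so the c-axis parameter is the inverse of the slope of the linear fit mentioned in the caption. A minimal sketch with synthetic peak positions (a Cu Kα wavelength is assumed here; it is not stated in this excerpt):

```python
# Sketch of the c-axis extraction: for (0 0 l) peaks, 2 sin(theta)/lambda = l / c,
# so c is the inverse of the fitted slope. Peak positions below are synthetic.
import numpy as np

lam = 1.5406                                  # X-ray wavelength (angstrom), assumed Cu K-alpha
c_true = 12.10                                # illustrative c-axis value (angstrom)
l_index = np.array([2, 4, 6, 8])              # Miller indices of the (0 0 l) peaks
theta = np.arcsin(l_index * lam / (2 * c_true))   # Bragg angles (rad)

y = 2 * np.sin(theta) / lam                   # should equal l / c
slope, _ = np.polyfit(l_index, y, 1)
print(f"c = {1.0 / slope:.3f} angstrom")      # recovers ~12.10 for the synthetic peaks
```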
FIG. 5. Angular positions 2θ of the (004), (103), and (114) peaks in panels (a), (b), and (c), respectively, for a type A (red circles) and a type B (blue squares) as-grown sample. Dashed lines correspond to the 2θ value measured in both samples after the annealing process. (d) ω-scans around the (004) reflections for a type A as-grown sample (red circles), a type A superconducting sample (purple circles), a type B as-grown sample (blue squares), and a type B superconducting sample (green squares), normalized to the maximum value.
FIG. 6. Band structure of Nd2CuO4. (a) T phase with antiferromagnetic order, (b) T phase with nonmagnetic atoms, (c) T* phase with antiferromagnetic order. The value of the Coulomb repulsion is U_Cu = 3.2 eV.
Possible high Tc superconductivity in the Ba-La-Cu-O system. J G Bednorz, K A Müller, 10.1007/BF01303701Z. Physik B. 64189J. G. Bednorz and K. A. Müller, Possible high Tc superconduc- tivity in the Ba-La-Cu-O system, Z. Physik B 64, 189 (1986).
Hole-doped cuprate high temperature superconductors. C W Chu, L Z Deng, B Lv, 10.1016/j.physc.2015.02.047Physica C. 514290C. W. Chu, L. Z. Deng, and B. Lv, Hole-doped cuprate high temperature superconductors, Physica C 514, 290 (2015).
Superconductivity above 130 K in the Hg-Ba-Ca-Cu-O system. A Schilling, M Cantoni, J D Guo, H R Ott, 10.1038/363056a0Nature. 36356A. Schilling, M. Cantoni, J. D. Guo, and H. R. Ott, Supercon- ductivity above 130 K in the Hg-Ba-Ca-Cu-O system, Nature 363, 56 (1993).
Doping a Mott insulator: Physics of high-temperature superconductivity. P A Lee, N Nagaosa, X.-G Wen, 10.1103/RevModPhys.78.17Rev. Mod. Phys. 7817P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insula- tor: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17 (2006).
Recent progress in physics of high-temperature superconductors. T Tohyama, 10.1143/JJAP.51.010004Jpn. J. Appl. Phys. 5110004T. Tohyama, Recent progress in physics of high-temperature superconductors, Jpn. J. Appl. Phys 51, 010004 (2012).
Superconductivity produced by electron doping in CuO2-layered compounds. H Takagi, S Uchida, Y Tokura, 10.1103/PhysRevLett.62.1197Phys. Rev. Lett. 621197H. Takagi, S. Uchida, and Y. Tokura, Superconductivity pro- duced by electron doping in CuO2-layered compounds, Phys. Rev. Lett. 62, 1197 (1989).
Electron and hole doping in Nd-based cuprates with single-layer CuO2 sheets: Role of doped Ce ions and 30-K superconductivity. Y Tokura, A Fujimori, H Matsubara, H Watabe, H Takagi, S Uchida, M Sakai, H Ikeda, S Okuda, S Tanaka, 10.1103/PhysRevB.39.9704Phys. Rev. B. 399704Y. Tokura, A. Fujimori, H. Matsubara, H. Watabe, H. Takagi, S. Uchida, M. Sakai, H. Ikeda, S. Okuda, and S. Tanaka, Elec- tron and hole doping in Nd-based cuprates with single-layer CuO2 sheets: Role of doped Ce ions and 30-K superconduc- tivity, Phys. Rev. B 39, 9704 (1989).
Progress and perspectives on electron-doped cuprates. N P Armitage, P Fournier, R L Greene, 10.1103/RevModPhys.82.2421Rev. Mod. Phys. 822421N. P. Armitage, P. Fournier, and R. L. Greene, Progress and perspectives on electron-doped cuprates, Rev. Mod. Phys. 82, 2421 (2010).
T' and infinite-layer electron-doped cuprates. P Fournier, 10.1016/j.physc.2015.02.036Physica C. 514314P. Fournier, T' and infinite-layer electron-doped cuprates, Phys- ica C 514, 314 (2015).
The strange metal state of the electron-doped cuprates. R L Greene, P R Mandal, N R Poniatowski, T Sarkar, 10.1146/annurev-conmatphys-031119-050558Annu. Rev. Condens. Matter Phys. 11213R. L. Greene, P. R. Mandal, N. R. Poniatowski, and T. Sarkar, The strange metal state of the electron-doped cuprates, Annu. Rev. Condens. Matter Phys, 11, 213 (2020).
Suppression of the antiferromagnetic pseudogap in the electron-doped high-temperature superconductor by protect annealing. M Horio, T Adachi, Y Mori, A Takahashi, T Yoshida, H Suzuki, L C C Ambolode, K Okazaki, K Ono, H Kumigashira, H Anzai, M Arita, H Namatame, M Taniguchi, D Ootsuki, K Sawada, M Takahashi, T Mizokawa, Y Koike, A Fujimori, 10.1038/ncomms10567Nat. Commun. 710567M. Horio, T. Adachi, Y. Mori, A. Takahashi, T. Yoshida, H. Suzuki, L. C. C. Ambolode, K. Okazaki, K. Ono, H. Kumi- gashira, H. Anzai, M. Arita, H. Namatame, M. Taniguchi, D. Ootsuki, K. Sawada, M. Takahashi, T. Mizokawa, Y. Koike, and A. Fujimori, Suppression of the antiferromagnetic pseudogap in the electron-doped high-temperature superconductor by protect annealing, Nat. Commun. 7, 10567 (2016).
Electron number-based phase diagram of Pr1−xLaCexCuO 4−δ and possible absence of disparity between electron-and hole-doped cuprate phase diagrams. D Song, G Han, W Kyung, J Seo, S Cho, B S Kim, M Arita, K Shimada, H Namatame, M Taniguchi, Y Yoshida, H Eisaki, S R Park, C Kim, 10.1103/PhysRevLett.118.137001Phys. Rev. Lett. 118137001D. Song, G. Han, W. Kyung, J. Seo, S. Cho, B. S. Kim, M. Arita, K. Shimada, H. Namatame, M. Taniguchi, Y. Yoshida, H. Eisaki, S. R. Park, and C. Kim, Electron number-based phase diagram of Pr1−xLaCexCuO 4−δ and possible absence of disparity between electron-and hole-doped cuprate phase di- agrams, Phys. Rev. Lett. 118, 137001 (2017).
Correlation between Fermi surface transformations and superconductivity in the electron-doped high-Tc superconductor Nd2−xCexCuO4. T Helm, M V Kartsovnik, C Proust, B Vignolle, C Putzke, E Kampert, I Sheikin, E.-S Choi, J S Brooks, N Bittner, W Biberacher, A Erb, J Wosnitza, R Gross, 10.1103/PhysRevB.92.094501Phys. Rev. B. 9294501T. Helm, M. V. Kartsovnik, C. Proust, B. Vignolle, C. Putzke, E. Kampert, I. Sheikin, E.-S. Choi, J. S. Brooks, N. Bittner, W. Biberacher, A. Erb, J. Wosnitza, and R. Gross, Correlation be- tween Fermi surface transformations and superconductivity in the electron-doped high-Tc superconductor Nd2−xCexCuO4, Phys. Rev. B 92, 094501 (2015).
Reassessment of the electronic state, magnetism, and superconductivity in high-Tc cuprates with the Nd2CuO4 structure. M Naito, Y Krockenberger, A Ikeda, H Yamamoto, 10.1016/j.physc.2016.02.012Physica C. 52328M. Naito, Y. Krockenberger, A. Ikeda, and H. Yamamoto, Reassessment of the electronic state, magnetism, and super- conductivity in high-Tc cuprates with the Nd2CuO4 structure, Physica C 523, 28 (2016).
Extra oxygen in electron superconductors: Ce and Th doped Nd2CuO 4+δ and Gd2CuO 4+δ. E Moran, A I Nazzal, T C Huang, J B Torrance, 10.1016/0921-4534(89)90448-6Physica C. 16030E. Moran, A. I. Nazzal, T. C. Huang, and J. B. Torrance, Extra oxygen in electron superconductors: Ce and Th doped Nd2CuO 4+δ and Gd2CuO 4+δ , Physica C 160, 30 (1989).
Growth, structural, and physical properties of superconducting Nd2−xCexCuO4 crystals. J.-M Tarascon, E Wang, L H Greene, B G Bagley, G W Hull, S M D'egidio, P F Miceli, Z Z Wang, T W Jing, J Clayhold, D Brawner, N P Ong, 10.1103/PhysRevB.40.4494Phys. Rev. B. 404494J.-M. Tarascon, E. Wang, L. H. Greene, B. G. Bagley, G. W. Hull, S. M. D'Egidio, P. F. Miceli, Z. Z. Wang, T. W. Jing, J. Clayhold, D. Brawner, and N. P. Ong, Growth, structural, and physical properties of superconducting Nd2−xCexCuO4 crystals, Phys. Rev. B 40, 4494 (1989).
Evidence of apical oxygen in Nd2CuOy determined by single-crystal neutron diffraction. P G Radaelli, J D Jorgensen, A J Schultz, J L Peng, R L Greene, 10.1103/PhysRevB.49.15322Phys. Rev. B. 4915322P. G. Radaelli, J. D. Jorgensen, A. J. Schultz, J. L. Peng, and R. L. Greene, Evidence of apical oxygen in Nd2CuOy deter- mined by single-crystal neutron diffraction, Phys. Rev. B 49, 15322 (1994).
Single-crystal neutron-diffraction structures of reduced and oxygenated Nd2−xCexCuOy. A J Schultz, J D Jorgensen, J L Peng, R L Greene, 10.1103/PhysRevB.53.5157Phys. Rev. B. 535157A. J. Schultz, J. D. Jorgensen, J. L. Peng, and R. L. Greene, Single-crystal neutron-diffraction structures of reduced and oxygenated Nd2−xCexCuOy, Phys. Rev. B 53, 5157 (1996).
On the properties of Pr2CuO 4±δ and RE2−xPrxCuO 4±δ synthesized under elevated oxygen pressure conditions. P W Klamut, A Sikora, Z Bukowski, B Dabrowski, J Klamut, 10.1016/S0921-4534(97)00340-7541P. W. Klamut, A. Sikora, Z. Bukowski, B. Dabrowski, and J. Klamut, On the properties of Pr2CuO 4±δ and RE2−xPrxCuO 4±δ synthesized under elevated oxygen pressure conditions, Physica C 282-287, 541 (1997).
Oxygen content influence in the superconducting and electronic properties of Nd1.85Ce0.15Cu1.01Oy ceramics. E Navarro, D Jaque, J E Villegas, J I Martín, A Serquis, F Prado, A Caneiro, J L Vicent, 10.1016/S0925-8388(01)01198-7J. Alloys Compd. 580E. Navarro, D. Jaque, J. E. Villegas, J. I. Martín, A. Serquis, F. Prado, A. Caneiro, and J. L. Vicent, Oxygen content in- fluence in the superconducting and electronic properties of Nd1.85Ce0.15Cu1.01Oy ceramics, J. Alloys Compd. 323-324, 580 (2001).
Structure and oxygen stoichiometry for the electron-doped cuprate superconductor Nd1.85Ce0.15CuO 4−δ. G H Kwei, S.-W Cheong, Z Fisk, F H Garzon, J A Goldstone, J D Thompson, 10.1103/PhysRevB.40.9370Phys. Rev. B. 409370G. H. Kwei, S.-W. Cheong, Z. Fisk, F. H. Garzon, J. A. Goldstone, and J. D. Thompson, Structure and oxygen sto- ichiometry for the electron-doped cuprate superconductor Nd1.85Ce0.15CuO 4−δ , Phys. Rev. B 40, 9370(R) (1989).
Pulsed-laser deposition of Pr2−xCexCuO4−y thin films and the effect of hightemperature post-annealing. E Maiser, P Fournier, J.-L Peng, F Araujo-Moreira, T Venkatesan, R Greene, G Czjzek, 10.1016/S0921-4534(97)01858-3Physica C. 29715E. Maiser, P. Fournier, J.-L. Peng, F. Araujo-Moreira, T. Venkatesan, R. Greene, and G. Czjzek, Pulsed-laser deposi- tion of Pr2−xCexCuO4−y thin films and the effect of high- temperature post-annealing, Physica C 297, 15 (1998).
A. Tsukada, M. Noda, H. Yamamoto, and M. Naito, Role of impurity oxygen in superconductivity of "non-doped" T'-(La,RE)2CuO4, Physica C 426-431, 459 (2005).
O. Matsumoto, A. Utsuki, A. Tsukada, H. Yamamoto, T. Manabe, and M. Naito, Synthesis and properties of superconducting T'-R2CuO4 (R = Pr, Nd, Sm, Eu, Gd), Phys. Rev. B 79, 100508(R) (2009).
Generic phase diagram of "electron-doped" T' cuprates. O Matsumoto, A Utsuki, A Tsukada, H Yamamoto, T Manabe, M Naito, 10.1016/j.physc.2009.05.100Physica C. 469924O. Matsumoto, A. Utsuki, A. Tsukada, H. Yamamoto, T. Man- abe, and M. Naito, Generic phase diagram of "electron-doped" T' cuprates, Physica C 469, 924 (2009).
Emerging superconductivity hidden beneath charge-transfer insulators. Y Krockenberger, H Irie, O Matsumoto, K Yamagami, M Mitsuhashi, A Tsukada, M Naito, H Yamamoto, 10.1038/srep02235Sci. Rep. 32235Y. Krockenberger, H. Irie, O. Matsumoto, K. Yamagami, M. Mitsuhashi, A. Tsukada, M. Naito, and H. Yamamoto, Emerg- ing superconductivity hidden beneath charge-transfer insula- tors, Sci. Rep. 3, 2235 (2013).
Electron doping of the parent cuprate La2CuO4 without cation substitution. H I Wei, C Adamo, E A Nowadnick, E B Lochocki, S Chatterjee, J P Ruf, M R Beasley, D G Schlom, K M Shen, 10.1103/PhysRevLett.117.147002Phys. Rev. Lett. 117147002H. I. Wei, C. Adamo, E. A. Nowadnick, E. B. Lochocki, S. Chatterjee, J. P. Ruf, M. R. Beasley, D. G. Schlom, and K. M. Shen, Electron doping of the parent cuprate La2CuO4 without cation substitution, Phys. Rev. Lett. 117, 147002 (2016).
Angle-resolved photoemission spectroscopy of the low-energy electronic structure of superconducting Pr2CuO4 driven by oxygen nonstoichiometry. M Horio, Y Krockenberger, K Koshiishi, S Nakata, K Hagiwara, M Kobayashi, K Horiba, H Kumigashira, H Irie, H Yamamoto, A Fujimori, 10.1103/PhysRevB.98.020505Phys. Rev. B. 9820505M. Horio, Y. Krockenberger, K. Koshiishi, S. Nakata, K. Hagi- wara, M. Kobayashi, K. Horiba, H. Kumigashira, H. Irie, H. Ya- mamoto, and A. Fujimori, Angle-resolved photoemission spec- troscopy of the low-energy electronic structure of supercon- ducting Pr2CuO4 driven by oxygen nonstoichiometry, Phys. Rev. B 98, 020505(R) (2018).
Electronic structure of Ce-doped and -undoped Nd2CuO4 superconducting thin films studied by hard x-ray photoemission and soft x-ray absorption spectroscopy. M Horio, Y Krockenberger, K Yamamoto, Y Yokoyama, K Takubo, Y Hirata, S Sakamoto, K Koshiishi, A Yasui, E Ikenaga, S Shin, H Yamamoto, H Wadati, A Fujimori, 10.1103/PhysRevLett.120.257001Phys. Rev. Lett. 120257001M. Horio, Y. Krockenberger, K. Yamamoto, Y. Yokoyama, K. Takubo, Y. Hirata, S. Sakamoto, K. Koshiishi, A. Yasui, E. Ikenaga, S. Shin, H. Yamamoto, H. Wadati, and A. Fujimori, Electronic structure of Ce-doped and -undoped Nd2CuO4 su- perconducting thin films studied by hard x-ray photoemission and soft x-ray absorption spectroscopy, Phys. Rev. Lett. 120, 257001 (2018).
Extended superconducting dome revealed by angle-resolved photoemission spectroscopy of electron-doped cuprates prepared by the protect annealing method. C Lin, T Adachi, M Horio, T Ohgi, M A Baqiya, T Kawamata, H Sato, T Sumura, K Koshiishi, S Nakata, G Shibata, K Hagiwara, M Suzuki, K Ono, K Horiba, H Kumigashira, S Ideta, K Tanaka, Y Koike, A Fujimori, 10.1103/PhysRevResearch.3.013180Phys. Rev. Research. 313180C. Lin, T. Adachi, M. Horio, T. Ohgi, M. A. Baqiya, T. Kawa- mata, H. Sato, T. Sumura, K. Koshiishi, S. Nakata, G. Shibata, K. Hagiwara, M. Suzuki, K. Ono, K. Horiba, H. Kumigashira, S. Ideta, K. Tanaka, Y. Koike, and A. Fujimori, Extended su- perconducting dome revealed by angle-resolved photoemission spectroscopy of electron-doped cuprates prepared by the protect annealing method, Phys. Rev. Research 3, 013180 (2021).
Post-growth annealing effects on charge and spin excitations in Nd2−xCexCuO4. K Ishii, S Asano, M Ashida, M Fujita, B Yu, M Greven, J Okamoto, D.-J Huang, J Mizuki, 10.1103/PhysRevMaterials.5.024803Phys. Rev. Materials. 524803K. Ishii, S. Asano, M. Ashida, M. Fujita, B. Yu, M. Greven, J. Okamoto, D.-J. Huang, and J. Mizuki, Post-growth anneal- ing effects on charge and spin excitations in Nd2−xCexCuO4, Phys. Rev. Materials 5, 024803 (2021).
Different roles of cerium substitution and oxygen reduction in transport in Pr2−xCexCuO4 thin films. J Gauthier, S Gagné, J Renaud, M.-E Gosselin, P Fournier, P Richard, 10.1103/PhysRevB.75.024424Phys. Rev. B. 7524424J. Gauthier, S. Gagné, J. Renaud, M.-E. Gosselin, P. Fournier, and P. Richard, Different roles of cerium substitution and oxy- gen reduction in transport in Pr2−xCexCuO4 thin films, Phys. Rev. B 75, 024424 (2007).
Competition between antiferromagnetism and superconductivity in the electron-doped cuprates triggered by oxygen reduction. P Richard, M Neupane, Y.-M Xu, P Fournier, S Li, P Dai, Z Wang, H Ding, 10.1103/PhysRevLett.99.157002Phys. Rev. Lett. 99157002P. Richard, M. Neupane, Y.-M. Xu, P. Fournier, S. Li, P. Dai, Z. Wang, and H. Ding, Competition between antiferromagnetism and superconductivity in the electron-doped cuprates triggered by oxygen reduction, Phys. Rev. Lett. 99, 157002 (2007).
Hole superconductivity in the electron-doped superconductor Pr2−xCexCuO4. Y Dagan, R L Greene, 10.1103/PhysRevB.76.024506Phys. Rev. B. 7624506Y. Dagan and R. L. Greene, Hole superconductivity in the electron-doped superconductor Pr2−xCexCuO4, Phys. Rev. B 76, 024506 (2007).
Antiferromagnetic fluctuations and the Hall effect of electron-doped cuprates: Possibility of a quantum phase transition at underdoping. S Charpentier, G Roberge, S Godin-Proulx, X Béchamp-Laganière, K D Truong, P Fournier, P Rauwel, 10.1103/PhysRevB.81.104509Phys. Rev. B. 81104509S. Charpentier, G. Roberge, S. Godin-Proulx, X. Béchamp- Laganière, K. D. Truong, P. Fournier, and P. Rauwel, Antifer- romagnetic fluctuations and the Hall effect of electron-doped cuprates: Possibility of a quantum phase transition at under- doping, Phys. Rev. B 81, 104509 (2010).
Ce substitution and reduction annealing effects on electronic states in Pr2−xCexCuO4 studied by Cu K-edge x-ray absorption spectroscopy. S Asano, K Ishii, D Matsumura, T Tsuji, T Ina, K M Suzuki, M Fujita, 10.7566/JPSJ.87.094710J. Phys. Soc. Jpn. 8794710S. Asano, K. Ishii, D. Matsumura, T. Tsuji, T. Ina, K. M. Suzuki, and M. Fujita, Ce substitution and reduction anneal- ing effects on electronic states in Pr2−xCexCuO4 studied by Cu K-edge x-ray absorption spectroscopy, J. Phys. Soc. Jpn. 87, 094710 (2018).
Hole pocket-driven superconductivity and its universal features in the electron-doped cuprates. Y Li, W Tabis, Y Tang, G Yu, J Jaroszynski, N Barišić, M Greven, 10.1126/sciadv.aap7349Sci. Adv. 57349Y. Li, W. Tabis, Y. Tang, G. Yu, J. Jaroszynski, N. Barišić, and M. Greven, Hole pocket-driven superconductivity and its universal features in the electron-doped cuprates, Sci. Adv. 5, eaap7349 (2019).
Understanding electron-doped cuprate superconductors as hole superconductors. J Hirsch, F Marsiglio, 10.1016/j.physc.2019.04.013Physica C. 56429J. Hirsch and F. Marsiglio, Understanding electron-doped cuprate superconductors as hole superconductors, Physica C 564, 29 (2019).
Magnetic order, spin correlations, and superconductivity in single-crystal Nd1.85Ce0.15CuO 4+δ. M Matsuda, Y Endoh, K Yamada, H Kojima, I Tanaka, R J Birgeneau, M A Kastner, G Shirane, 10.1103/PhysRevB.45.12548Phys. Rev. B. 4512548M. Matsuda, Y. Endoh, K. Yamada, H. Kojima, I. Tanaka, R. J. Birgeneau, M. A. Kastner, and G. Shirane, Magnetic or- der, spin correlations, and superconductivity in single-crystal Nd1.85Ce0.15CuO 4+δ , Phys. Rev. B 45, 12548 (1992).
Spin correlations and magnetic order in nonsuperconducting Nd2−xCexCuO 4±δ. P K Mang, O P Vajk, A Arvanitaki, J W Lynn, M Greven, 10.1103/PhysRevLett.93.027002Phys. Rev. Lett. 9327002P. K. Mang, O. P. Vajk, A. Arvanitaki, J. W. Lynn, and M. Greven, Spin correlations and magnetic order in nonsuper- conducting Nd2−xCexCuO 4±δ , Phys. Rev. Lett. 93, 027002 (2004).
The phase stability diagrams for the systems Nd2CuO 4−δ and Nd1.85Ce0.15CuO 4−δ. J S Kim, D R Gaskell, 10.1016/0921-4534(93)90549-6Physica C. 209381J. S. Kim and D. R. Gaskell, The phase stability diagrams for the systems Nd2CuO 4−δ and Nd1.85Ce0.15CuO 4−δ , Physica C 209, 381 (1993).
P. K. Mang, S. Larochelle, A. Mehta, O. P. Vajk, A. S. Erickson, L. Lu, W. J. L. Buyers, A. F. Marshall, K. Prokes, and M. Greven, Phase decomposition and chemical inhomogeneity in Nd2−xCexCuO4±δ, Phys. Rev. B 70, 094507 (2004).
Oxygen dependence of the transport properties of Nd1.78Ce0.22CuO 4±δ. X Q Xu, S N Mao, W Jiang, J L Peng, R L Greene, 10.1103/PhysRevB.53.871Phys. Rev. B. 53871X. Q. Xu, S. N. Mao, W. Jiang, J. L. Peng, and R. L. Greene, Oxygen dependence of the transport properties of Nd1.78Ce0.22CuO 4±δ , Phys. Rev. B 53, 871 (1996).
Role of the Madelung energy in hole conductivity in copper oxides: Difference between semiconductors and high-Tc superconductors. J B Torrance, R M Metzger, 10.1103/PhysRevLett.63.1515Phys. Rev. Lett. 631515J. B. Torrance and R. M. Metzger, Role of the Madelung en- ergy in hole conductivity in copper oxides: Difference between semiconductors and high-Tc superconductors, Phys. Rev. Lett. 63, 1515 (1989).
Apex oxygen and critical temperature in copper oxide superconductors: Universal correlation with the stability of local singlets. Y Ohta, T Tohyama, S Maekawa, 10.1103/PhysRevB.43.2968Phys. Rev. B. 432968Y. Ohta, T. Tohyama, and S. Maekawa, Apex oxygen and crit- ical temperature in copper oxide superconductors: Universal correlation with the stability of local singlets, Phys. Rev. B 43, 2968 (1991).
Role of oxygen in the electron-doped superconducting cuprates. J S Higgins, Y Dagan, M C Barr, B D Weaver, R L Greene, 10.1103/PhysRevB.73.104510Phys. Rev. B. 73104510J. S. Higgins, Y. Dagan, M. C. Barr, B. D. Weaver, and R. L. Greene, Role of oxygen in the electron-doped superconducting cuprates, Phys. Rev. B 73, 104510 (2006).
Infrared transmission study of Pr2CuO4 crystal-field excitations. G Riou, S Jandl, M Poirier, V Nekvasil, M Diviš, P Fournier, R Greene, D Zhigunov, S Barilo, 10.1007/s100510170066Eur. Phys. J. B. 23179G. Riou, S. Jandl, M. Poirier, V. Nekvasil, M. Diviš, P. Fournier, R. Greene, D. Zhigunov, and S. Barilo, Infrared transmission study of Pr2CuO4 crystal-field excitations, Eur. Phys. J. B 23, 179 (2001).
Pr 3+ crystal-field excitation study of apical oxygen and reduction processes in Pr2−xCexCuO 4±δ. G Riou, P Richard, S Jandl, M Poirier, P Fournier, V Nekvasil, S N Barilo, L A Kurnevich, 10.1103/PhysRevB.69.024511Phys. Rev. B. 6924511G. Riou, P. Richard, S. Jandl, M. Poirier, P. Fournier, V. Nek- vasil, S. N. Barilo, and L. A. Kurnevich, Pr 3+ crystal-field excitation study of apical oxygen and reduction processes in Pr2−xCexCuO 4±δ , Phys. Rev. B 69, 024511 (2004).
Role of oxygen nonstoichiometry and the reduction process on the local structure of Nd2−xCexCuO 4±δ. P Richard, G Riou, I Hetel, S Jandl, M Poirier, P Fournier, 10.1103/PhysRevB.70.064513Phys. Rev. B. 7064513P. Richard, G. Riou, I. Hetel, S. Jandl, M. Poirier, and P. Fournier, Role of oxygen nonstoichiometry and the reduction process on the local structure of Nd2−xCexCuO 4±δ , Phys. Rev. B 70, 064513 (2004).
Heat treatment effects on the superconductivity and crystal structure of Nd1.85Ce0.15CuO4 studied using a single crystal. K Kurahashi, H Matsushita, M Fujita, K Yamada, 10.1143/JPSJ.71.910J. Phys. Soc. Jpn. 71910K. Kurahashi, H. Matsushita, M. Fujita, and K. Yamada, Heat treatment effects on the superconductivity and crystal structure of Nd1.85Ce0.15CuO4 studied using a single crystal, J. Phys. Soc. Jpn. 71, 910 (2002).
Microscopic annealing process and its impact on superconductivity in T'-structure electron-doped copper oxides. H J Kang, P Dai, B J Campbell, P J Chupas, S Rosenkranz, P L Lee, Q Huang, S Li, S Komiya, Y Ando, 10.1038/nmat1847Nat. Mater. 6224H. J. Kang, P. Dai, B. J. Campbell, P. J. Chupas, S. Rosenkranz, P. L. Lee, Q. Huang, S. Li, S. Komiya, and Y. Ando, Micro- scopic annealing process and its impact on superconductivity in T'-structure electron-doped copper oxides, Nat. Mater. 6, 224 (2007).
As-grown superconducting Pr2CuO4 under thermodynamic constraints. Y Krockenberger, M Horio, H Irie, A Fujimori, H Yamamoto, 10.7567/apex.8.053101Appl. Phys. Express. 853101Y. Krockenberger, M. Horio, H. Irie, A. Fujimori, and H. Ya- mamoto, As-grown superconducting Pr2CuO4 under thermo- dynamic constraints, Appl. Phys. Express 8, 053101 (2015).
Fabrication of superconducting Nd2−xCexCuO 4±δ films by automated dc sputtering technique. A Guarino, G Patimo, A Vecchione, T Di Luccio, A Nigro, 10.1016/j.physc.2013.09.010Physica C. 495146A. Guarino, G. Patimo, A. Vecchione, T. Di Luccio, and A. Ni- gro, Fabrication of superconducting Nd2−xCexCuO 4±δ films by automated dc sputtering technique, Physica C 495, 146 (2013).
Pinning mechanism in electron-doped HTS Nd1.85Ce0.15CuO 4−δ epitaxial films. A Guarino, A Leo, G Grimaldi, N Martucciello, C Dean, M N Kunchur, S Pace, A Nigro, 10.1088/0953-2048/27/12/124011Supercond. Sci. Technol. 27124011A. Guarino, A. Leo, G. Grimaldi, N. Martucciello, C. Dean, M. N. Kunchur, S. Pace, and A. Nigro, Pinning mechanism in electron-doped HTS Nd1.85Ce0.15CuO 4−δ epitaxial films, Su- percond. Sci. Technol. 27, 124011 (2014).
See Supplemental Material below for more information.
M. Naito, H. Sato, and H. Yamamoto, MBE growth of (La,Sr)2CuO4 and (Nd,Ce)2CuO4 thin films, Physica C 293, 36 (1997).
Epitaxy-stabilizedntype superconducting cuprates. M Naito, S Karimoto, A Tsukada, 10.1088/0953-2048/15/12/306Supercond. Sci. Technol. 151663M. Naito, S. Karimoto, and A. Tsukada, Epitaxy-stabilizedn- type superconducting cuprates, Supercond. Sci. Technol. 15, 1663 (2002).
Universal superconducting ground state in Nd1.85Ce0.15CuO4 and Nd2CuO4. Y Krockenberger, H Yamamoto, M Mitsuhashi, M Naito, 10.1143/jjap.51.010106Jpn. J. Appl. Phys. 5110106Y. Krockenberger, H. Yamamoto, M. Mitsuhashi, and M. Naito, Universal superconducting ground state in Nd1.85Ce0.15CuO4 and Nd2CuO4, Jpn. J. Appl. Phys. 51, 010106 (2011).
Deposition of epitaxial thin films of Nd1.85Ce0.15CuO4−y by laser ablation. A Gupta, G Koren, C C Tsuei, A Segmüller, T R Mcguire, 10.1063/1.102321Appl. Phys. Lett. 551795A. Gupta, G. Koren, C. C. Tsuei, A. Segmüller, and T. R. McGuire, Deposition of epitaxial thin films of Nd1.85Ce0.15CuO4−y by laser ablation, Appl. Phys. Lett. 55, 1795 (1989).
Deposition and reduction of Nd1.85Ce0.15CuO4−y superconducting thin films. S N Mao, X X Xi, S Bhattacharya, Q Li, T Venkatesan, J L Peng, R L Greene, J Mao, D H Wu, S M Anlage, 10.1063/1.108242Appl. Phys. Lett. 612356S. N. Mao, X. X. Xi, S. Bhattacharya, Q. Li, T. Venkatesan, J. L. Peng, R. L. Greene, J. Mao, D. H. Wu, and S. M. Anlage, Depo- sition and reduction of Nd1.85Ce0.15CuO4−y superconducting thin films, Appl. Phys. Lett. 61, 2356 (1992).
Improving the growth of electrondoped Pr2−xCexCuO 4+δ thin films made by pulsed-laser deposition using excess CuO. G Roberge, S Charpentier, S Godin-Proulx, P Rauwel, K Truong, P Fournier, 10.1016/j.jcrysgro.2009.01.017J. Cryst. Growth. 3111340G. Roberge, S. Charpentier, S. Godin-Proulx, P. Rauwel, K. Truong, and P. Fournier, Improving the growth of electron- doped Pr2−xCexCuO 4+δ thin films made by pulsed-laser de- position using excess CuO, J. Cryst. Growth 311, 1340 (2009).
Effect of high oxygen pressure annealing on superconducting Nd1.85Ce0.15CuO4 thin films by pulsed laser deposition from Cu-enriched targets. M Hoek, F Coneri, D P Leusink, P D Eerkes, X R Wang, H Hilgenkamp, 10.1088/0953-2048/27/4/044017Supercond. Sci. Technol. 2744017M. Hoek, F. Coneri, D. P. Leusink, P. D. Eerkes, X. R. Wang, and H. Hilgenkamp, Effect of high oxygen pressure anneal- ing on superconducting Nd1.85Ce0.15CuO4 thin films by pulsed laser deposition from Cu-enriched targets, Supercond. Sci. Technol. 27, 044017 (2014).
High-temperature resistivity in the iron pnictides and the electron-doped cuprates. P L Bach, S R Saha, K Kirshenbaum, J Paglione, R L Greene, 10.1103/PhysRevB.83.212506Phys. Rev. B. 83212506P. L. Bach, S. R. Saha, K. Kirshenbaum, J. Paglione, and R. L. Greene, High-temperature resistivity in the iron pnictides and the electron-doped cuprates, Phys. Rev. B 83, 212506 (2011).
Anomalous normalstate resistivity in superconducting La2−xCexCuO4: Fermi liquid or strange metal?. T Sarkar, R L Greene, S. Das Sarma, 10.1103/PhysRevB.98.224503Phys. Rev. B. 98224503T. Sarkar, R. L. Greene, and S. Das Sarma, Anomalous normal- state resistivity in superconducting La2−xCexCuO4: Fermi liquid or strange metal?, Phys. Rev. B 98, 224503 (2018).
Correlation between structural and transport properties in epitaxial films of Nd2−xCexCuO 4±δ. A Guarino, R Fittipaldi, A Romano, A Vecchione, A Nigro, 10.1016/j.tsf.2012.09.057Thin Solid Films. 524282A. Guarino, R. Fittipaldi, A. Romano, A. Vecchione, and A. Nigro, Correlation between structural and transport properties in epitaxial films of Nd2−xCexCuO 4±δ , Thin Solid Films 524, 282 (2012).
Efficiency of ab-initio total energy calculations for metals and semiconductors using a planewave basis set. G Kresse, J Furthmüller, 10.1016/0927-0256(96)00008-0Comput. Mater. Sci. 615G. Kresse and J. Furthmüller, Efficiency of ab-initio total en- ergy calculations for metals and semiconductors using a plane- wave basis set, Comput. Mater. Sci. 6, 15 (1996).
From ultrasoft pseudopotentials to the projector augmented-wave method. G Kresse, D Joubert, 10.1103/PhysRevB.59.1758Phys. Rev. B. 591758G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
Strength of correlations in electron-and hole-doped cuprates. C Weber, K Haule, G Kotliar, 10.1038/nphys1706Nature Physics. 6574C. Weber, K. Haule, and G. Kotliar, Strength of correlations in electron-and hole-doped cuprates, Nature Physics 6, 574 (2010).
Band-structure trend in hole-doped cuprates and correlation with Tcmax. E Pavarini, I Dasgupta, T Saha-Dasgupta, O Jepsen, O K Andersen, 10.1103/PhysRevLett.87.047003Phys. Rev. Lett. 8747003E. Pavarini, I. Dasgupta, T. Saha-Dasgupta, O. Jepsen, and O. K. Andersen, Band-structure trend in hole-doped cuprates and correlation with Tcmax, Phys. Rev. Lett. 87, 047003 (2001).
Apical charge fluxmodulated in-plane transport properties of cuprate superconductors. S Kim, X Chen, W Fitzhugh, X Li, 10.1103/PhysRevLett.121.157001Phys. Rev. Lett. 121157001S. Kim, X. Chen, W. Fitzhugh, and X. Li, Apical charge flux- modulated in-plane transport properties of cuprate supercon- ductors, Phys. Rev. Lett. 121, 157001 (2018).
Transport properties of over-doped epitaxial NdCeCuO films. A Guarino, C Cirillo, A Leo, S Santandrea, G Grimaldi, A Polcari, R Fittipaldi, C Attanasio, P Romano, A Romano, A Vecchione, A Nigro, 10.1007/s10948-010-0913-7J. Supercond. Nov. Magn. 24169A. Guarino, C. Cirillo, A. Leo, S. Santandrea, G. Grimaldi, A. Polcari, R. Fittipaldi, C. Attanasio, P. Romano, A. Romano, A. Vecchione, and A. Nigro, Transport properties of over-doped epitaxial NdCeCuO films, J. Supercond. Nov. Magn. 24, 169 (2011).
Parametric study of in situ growth of NdCeCuO thin films by laser ablation. W.-T Lin, G.-J Chen, 10.1557/JMR.1995.2422J. Mater. Sci. Res. 102422W.-T. Lin and G.-J. Chen, Parametric study of in situ growth of NdCeCuO thin films by laser ablation, J. Mater. Sci. Res. 10, 2422 (1995).
Restoring the density-gradient expansion for exchange in solids and surfaces. J P Perdew, A Ruzsinszky, G I Csonka, O A Vydrov, G E Scuseria, L A Constantin, X Zhou, K Burke, 10.1103/PhysRevLett.100.136406Phys. Rev. Lett. 100136406J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, Restoring the density-gradient expansion for exchange in solids and sur- faces, Phys. Rev. Lett. 100, 136406 (2008).
Structural and electronic properties of Sr2RuO4/Sr 3 Ru2O7 heterostructures. C Autieri, M Cuoco, C Noce, 10.1103/PhysRevB.89.075102Phys. Rev. B. 8975102C. Autieri, M. Cuoco, and C. Noce, Structural and electronic properties of Sr2RuO4/Sr 3 Ru2O7 heterostructures, Phys. Rev. B 89, 075102 (2014).
Interdiffusion-driven synthesis of tetragonal chromium (III) oxide on BaTiO3. M Asa, G Vinai, J L Hart, C Autieri, C Rinaldi, P Torelli, G Panaccione, M L Taheri, S Picozzi, M Cantoni, 10.1103/PhysRevMaterials.2.033401Phys. Rev. Materials. 233401M. Asa, G. Vinai, J. L. Hart, C. Autieri, C. Rinaldi, P. Torelli, G. Panaccione, M. L. Taheri, S. Picozzi, and M. Cantoni, Interdiffusion-driven synthesis of tetragonal chromium (III) ox- ide on BaTiO3, Phys. Rev. Materials 2, 033401 (2018).
Detecting antiferromagnetism in tetragonal Cr2O3 by electrical measurements. M Asa, C Autieri, C Barone, C Mauro, S Picozzi, S Pagano, M Cantoni, 10.1103/PhysRevB.100.174423Phys. Rev. B. 100174423M. Asa, C. Autieri, C. Barone, C. Mauro, S. Picozzi, S. Pagano, and M. Cantoni, Detecting antiferromagnetism in tetragonal Cr2O3 by electrical measurements, Phys. Rev. B 100, 174423 (2019).
Band structure of overdoped cuprate superconductors: Density functional theory matching experiments. K P Kramer, M Horio, S S Tsirkin, Y Sassa, K Hauser, C E Matt, D Sutter, A Chikina, N B M Schröter, J A Krieger, T Schmitt, V N Strocov, N C Plumb, M Shi, S Pyon, T Takayama, H Takagi, T Adachi, T Ohgi, T Kawamata, Y Koike, T Kondo, O J Lipscombe, S M Hayden, M Ishikado, H Eisaki, T Neupert, J Chang, 10.1103/PhysRevB.99.224509Phys. Rev. B. 99224509K. P. Kramer, M. Horio, S. S. Tsirkin, Y. Sassa, K. Hauser, C. E. Matt, D. Sutter, A. Chikina, N. B. M. Schröter, J. A. Krieger, T. Schmitt, V. N. Strocov, N. C. Plumb, M. Shi, S. Pyon, T. Takayama, H. Takagi, T. Adachi, T. Ohgi, T. Kawa- mata, Y. Koike, T. Kondo, O. J. Lipscombe, S. M. Hayden, M. Ishikado, H. Eisaki, T. Neupert, and J. Chang, Band structure of overdoped cuprate superconductors: Density functional theory matching experiments, Phys. Rev. B 99, 224509 (2019).
Intermediate coupling model of the cuprates. T Das, R Markiewicz, A Bansil, 10.1080/00018732.2014.940227Adv. Phys. 63151T. Das, R. Markiewicz, and A. Bansil, Intermediate coupling model of the cuprates, Adv. Phys. 63, 151 (2014).
Electronic band structure of optimal superconductors: From cuprates to ferropnictides and back again. A A Kordyuk, 10.1063/1.5037550Low Temp. Phys. 44477A. A. Kordyuk, Electronic band structure of optimal supercon- ductors: From cuprates to ferropnictides and back again, Low Temp. Phys. 44, 477 (2018).
An accurate firstprinciples treatment of doping-dependent electronic structure of high-temperature cuprate superconductors. J W Furness, Y Zhang, C Lane, I G Buda, B Barbiellini, R S Markiewicz, A Bansil, J Sun, 10.1038/s42005-018-0009-4Commun. Phys. 111J. W. Furness, Y. Zhang, C. Lane, I. G. Buda, B. Barbiellini, R. S. Markiewicz, A. Bansil, and J. Sun, An accurate first- principles treatment of doping-dependent electronic structure of high-temperature cuprate superconductors, Commun. Phys. 1, 11 (2018).